Did you read the study? It’s hilarious. They’re using LLMs to “grade” the number of observed “skills” based on the output of LLMs. They’re using a stochastic parrot to evaluate another stochastic parrot, and concluding that there is some kind of emergent “skill” going on. Sheeeesh. I’d assume the authors of the paper are just having a laugh. But one thing is for sure: the AI stupidity train keeps chugging along.
The correct term is Stochastic Parrot… that’s what LLMs do. It sounds even cooler than “AI”, imho.
No, it’s not. They haven’t been this way for years.
https://pli.princeton.edu/blog/2023/are-language-models-mere-stochastic-parrots-skillmix-test-says-no
There are several dozens of these studies
Doesn’t matter. There is no cognition. Just word salads mixed and matched, with no possibility of getting “I don’t know” for an answer.
Still stochastic. Even now they can’t reliably do repeated tasks.
So they no longer use probability to choose the next word? I wonder how they do it now.
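For anyone following along: yes, they still do. A minimal sketch of the standard temperature-sampling scheme (this is the textbook mechanism, not any specific model’s code; the logits and vocabulary here are made up for illustration):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index by sampling from a softmax over the logits."""
    # Scale logits by temperature, then softmax into a probability distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to those probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical logits for a tiny 4-token vocabulary.
logits = [2.0, 1.0, 0.5, -1.0]
token = sample_next_token(logits, temperature=0.8)
```

Low temperature concentrates the distribution on the highest-scoring token (nearly greedy decoding); high temperature flattens it, which is why the same prompt can produce different outputs on different runs.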
That was a remarkably uninsightful way to approach the topic. Please link to more of these “studies”; that one was way too short.
The virgin cited study vs the Chad Ad Hominem