- cross-posted to: technology@lemmy.ml
AGI can’t come from these LLMs because they are non-sensing, stationary, and fundamentally not thinking at all.
AGI might be coming down the pipe, but not from these LLM vendors. I hope a player like Numenta, or any other nonprofit, open-source initiative manages to create AGI so that it can be a positive force in the world, rather than a corporate upward wealth transfer like most tech.
It is like raising a baby.
Machine Learning is like teaching computers how to learn new things.
Large Language Models are like teaching computers how to speak.
On its own that won’t do much; it is only a small step, but a necessary one on the way to the fully formed being that is an AGI.
(these are analogies, I don’t believe in fully sentient AI yet)
Intelligence and consciousness are not related in the way you seem to think.
We’ve always known that you can have consciousness without a high level of intelligence (think of children, people with certain types of brain damage), and now for the first time, LLMs show us that you can have intelligence without consciousness.
It’s naive to think that as we continue to develop intelligent machines, one of them will suddenly become conscious once it reaches a particular level of intelligence. Did you suddenly become conscious once you hit the age of 14 or whatever, having finally developed a deep enough understanding of trigonometry or a solid enough grasp of the works of Mark Twain? No, of course not: you became conscious at a very early age, when even a basic computer program could outsmart you, and you developed intelligence quite independently.
because they are non-sensing, stationary, and fundamentally not thinking
I don’t follow, why would a machine need to be able to move or have its own sensors in order to be AGI? And can you define what you mean by “thinking”?
The argument is best made by Jeff Hawkins in his Thousand Brains book. I’ll try to be convincing and brief at the same time, but you will have to be satisfied with shooting the messenger if I fail in either respect. The basic thrust of Hawkins’ argument is that you can only build a true AGI once you have a theoretical framework that explains the activity of the brain with reference to its higher cognitive functions, and that such a framework necessarily must stem from doing the hard work of sorting out how the neocortex actually goes about its business.
We know that the neocortex is the source of our higher cognitive functions, and that it is the main area of interest for the development of AGI. A major part of Hawkins’ theory is that the neocortex is built from many small repeating units called cortical columns, that it is chiefly the number of these columns that differs between creatures of different intelligence levels, and that each column models and makes predictions about the world based on sensory data. He holds that these columns vote amongst each other in real time about what is being perceived, constantly piping up and shushing each other and revising their models as new data arrives, almost like a rowdy room full of parliamentarians trying to reach a consensus view, and that it is this ongoing internal hierarchy of models and perceptions that makes up our intelligence, as it were.
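To make the voting idea a bit more concrete, here is a toy sketch of my own in Python - it is emphatically not Numenta’s actual algorithm, and every object and number in it is invented purely for illustration. Each “column” keeps its own belief about which object is being sensed from noisy local evidence, the columns vote, and the consensus is fed back to nudge the dissenters:

```python
import random
from collections import Counter

OBJECTS = ["mug", "stapler", "apple"]

class Column:
    """A toy stand-in for a cortical column: keeps its own belief about the sensed object."""
    def __init__(self):
        # Start with a uniform belief over the possible objects.
        self.belief = {obj: 1.0 / len(OBJECTS) for obj in OBJECTS}

    def sense(self, evidence):
        # Nudge the belief toward whichever object the local evidence favours.
        for obj in self.belief:
            self.belief[obj] *= 2.0 if obj == evidence else 1.0
        self._normalise()

    def vote(self):
        # Each column votes for its current best guess.
        return max(self.belief, key=self.belief.get)

    def hear_consensus(self, consensus):
        # Dissenting columns are "shushed" toward the ensemble's consensus view.
        for obj in self.belief:
            self.belief[obj] *= 1.5 if obj == consensus else 1.0
        self._normalise()

    def _normalise(self):
        total = sum(self.belief.values())
        for obj in self.belief:
            self.belief[obj] /= total

# A rowdy room of 100 columns, each getting noisy local evidence about a mug.
columns = [Column() for _ in range(100)]
for step in range(5):
    for col in columns:
        # 70% of the local evidence is correct, the rest is random noise.
        evidence = "mug" if random.random() < 0.7 else random.choice(OBJECTS)
        col.sense(evidence)
    tally = Counter(col.vote() for col in columns)
    consensus = tally.most_common(1)[0][0]
    for col in columns:
        col.hear_consensus(consensus)
    print(f"step {step}: votes={dict(tally)}, consensus={consensus}")
```

Run it and the room converges on “mug” almost immediately, even though a fair share of the local evidence is noise - that feedback between local models and a global consensus is the flavour of thing Hawkins is describing, just vastly simplified.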
The reason I ventured to argue that sensorimotor integration is necessary for an AI to be an AGI is that I got that idea from him as well: in order to gather meaningful sensory data, you have to be able to move about your environment and make sense of your inputs. A single, static piece of sensory data makes no particular impression, and you can test this for yourself by having a friend place an unknown object against your skin without moving it, and trying to guess what it is from that one data point. Then have them move the object and see how quickly you gather enough information to make a solid prediction - and if you were wrong, your brain will hastily rewire its models to account for that finding. An AGI would similarly fail to make any useful contributions unless it had the ability to move about its environment (and that includes a virtual environment) in order to continually learn and make predictions. That is the sort of thing we cannot possibly expect from any conventional LLM, at least as far as I’ve heard so far.
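That skin test is easy to mimic in a few lines. Again, this is only my own toy illustration - the objects and features are made up - but it shows why a single static reading tells you almost nothing, while a couple of movements collapse the possibilities:

```python
# Toy illustration: one static sensory reading is ambiguous, movement disambiguates.
# The objects, locations and features here are invented purely for the example.
objects = {
    "coin":   {(0, 0): "smooth", (1, 0): "ridged",  (0, 1): "smooth"},
    "button": {(0, 0): "smooth", (1, 0): "smooth",  (0, 1): "holed"},
    "key":    {(0, 0): "smooth", (1, 0): "toothed", (0, 1): "smooth"},
}

def sense(true_object, location):
    """Return the feature the fingertip feels at this location on the object."""
    return objects[true_object][location]

true_object = "key"
candidates = set(objects)

# First touch, no movement: everything feels "smooth" here, so we learn nothing.
feature = sense(true_object, (0, 0))
candidates = {o for o in candidates if objects[o][(0, 0)] == feature}
print("after one static touch:", candidates)   # still all three candidates

# Now move the fingertip and sense again: the candidate set collapses quickly.
for location in [(1, 0), (0, 1)]:
    feature = sense(true_object, location)
    candidates = {o for o in candidates if objects[o][location] == feature}
    print("after moving to", location, ":", candidates)
```

After the first static touch all three candidates remain; one movement later only “key” survives.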
I’d better stop there and see if you care to tolerate more of this sort of blather. I hope I’ve given you something to sink your teeth into, at any rate.
Thanks for this very yummy response. I’m having to read up on the technicalities you’re touching on, so bear with me!
According to Wikipedia, the neocortex is only present in mammals, but as I’m sure you’re aware, mammals are not the only creatures to exhibit intelligence. Are you arguing that only mammals are capable of “general intelligence”? I can get on board with what you’re saying as *one way* to develop AGI - work out how brains do it and then copy that - but I don’t think it’s a given that that is the *only* way to AGI, even if we were to agree that only animals with a neocortex can have “general intelligence”. Hence the fact that a given class of machine architecture does not replicate a neocortex would not, in my mind, make that architecture incapable of ever achieving AGI.
As for your point about the importance of sensorimotor integration, I don’t see that being problematic for any kind of modern computer software - we can easily hook up any number of sensors to a computer, and likewise we can hook the computer up to electric motors, servos and so on. We could easily “install” an LLM inside a robot and allow it to control the robot’s movement based on the sensor data. Hobbyists have done this already, many times, and it would not be hard to add a sensorimotor stage to an LLM’s training.
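To be concrete about how simple the hobbyist version is, here is a rough sketch. Every function in it is a stub I’ve made up as a stand-in for whatever LLM API and robot SDK you’d actually use, so treat it as an illustration of the loop rather than working robot code:

```python
import json
import random
import time

# --- Everything below is a made-up stub, not any real robot SDK or LLM API. ---

def read_lidar():
    """Stub: distance to the nearest obstacle in metres (replace with a real sensor)."""
    return round(random.uniform(0.2, 3.0), 2)

def read_camera_caption():
    """Stub: a caption from some vision model (replace with a real one)."""
    return "a doorway ahead, slightly to the left"

def query_llm(prompt):
    """Stub: call whichever chat-completion API you actually use and return its text."""
    return '{"action": "forward", "duration_s": 1.0}'

def drive(action, duration_s):
    """Stub: send the command to the motor controller."""
    print(f"drive: {action} for {duration_s}s")

def control_loop(steps=3):
    for _ in range(steps):
        # 1. Read whatever the robot can sense right now.
        sensors = {
            "lidar_min_distance_m": read_lidar(),
            "camera_caption": read_camera_caption(),
        }
        # 2. Ask the language model what to do, constraining it to a parseable reply.
        prompt = (
            "You control a small wheeled robot. Current sensor readings as JSON: "
            f"{json.dumps(sensors)}. Reply with exactly one JSON object like "
            '{"action": "forward|backward|left|right|stop", "duration_s": 1.0} '
            "and nothing else."
        )
        reply = query_llm(prompt)
        # 3. Act on the reply, falling back to a safe stop if it can't be parsed.
        try:
            command = json.loads(reply)
            drive(command["action"], float(command["duration_s"]))
        except (json.JSONDecodeError, KeyError, ValueError):
            drive("stop", 0.5)
        time.sleep(0.5)

control_loop()
```

The JSON-only reply format is just there to make the model’s output easy to parse, and a real project would obviously add safety checks before any motor command is executed. Whether a loop like this gets you anywhere near AGI is, of course, a separate question.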
I do like what you’re saying and find it interesting and thought-provoking. It’s just that what you’ve said hasn’t convinced me that LLMs are incapable of ever achieving AGI for those reasons. I’m not of the view that LLMs *are* capable of AGI, though; it’s more that I don’t personally feel well enough informed to have a firm view. It does seem unlikely to me that we’ve currently reached the limits of what LLMs are capable of, but who knows.
For a snappy reply, all I can say is that I did qualify that a “conventional” LLM likely cannot become intelligent. I’d like to see examples of LLMs paired with sensorimotor systems, if you know of any. Although I have often been inclined to describe human intelligence as merely a bag of tricks that, taken together, give the impression of a coherent whole, we have a rather well developed bag of tricks that can’t easily be teased apart. Merely interfacing a Boston Dynamics robo-dog with the OpenAI API may have some amusing applications, but nothing could compel me to admit it as an AGI.
I think current LLMs are already intelligent. I’d also say cats, mice, fish, birds are intelligent - to varying degrees of course.
I’d like to see examples of LLMs paired with sensorimotor systems, if you know of any
If you’re referring to my comment about hobbyist projects, I was just thinking of the sorts of things you’ll find on a search of sites like YouTube, perhaps this one is a good example (but I haven’t watched it as I’m avoiding YouTube). I don’t know if anyone has tried to incorporate a “learning to walk” type of stage into LLM training, but my point is that it would be perfectly possible, if there were reason to think it would give the LLM an edge.
The matter of how intelligent humans are is another question, and it’s relevant because AFAIK when people talk about AGI now, they mean an AI that can do better on average than a typical human at any arbitrary task. It’s not a particularly high bar; we’re not talking about super-intelligence, I don’t think.
I’ve watched a couple of these. You might find FreeTube useful for getting YT content without the ugly ads and algo stuff.
There are shortcomings that keep an LLM from approaching AGI in that way. They aren’t interacting with (experiencing) the world in a multisensory or real-time way; they are still responding to textual prompts within their frame of reference in a discrete, turn-taking manner. They still require domain-specific instructions, too.
An AGI that is directly integrated with its sensorimotor apparatus in the same way we are would, for all intents and purposes, have a subjective sense of self that stems from the fact that it can move, learn, predict, and update in real time from its own fixed perspective.
Jeff Hawkins’ work still has me convinced that the fixed perspective to which we are all bound is the wellspring of subjectivity, and that any intermediary apparatus (such as an AI subsystem for recognizing pictures that feeds words about those pictures to an LLM that talks to another LLM, and so on, in order to generate a semblance of complex behaviour) renders the whole a sort of Chinese Room thought experiment, and the LLM remains a p-zombie. It may be outwardly facile at times, even enough to pass Turing tests and many other such standards of judging AI, but it would never be a true AGI because it would never have a general faculty of intelligence.
I do hope you don’t find me churlish; I hasten to admit that these chimeras are interesting and likely to raise important considerations as the technology ramifies throughout society and the economy, but I don’t find them to be AGI. It is a fundamental limitation of the LLM technology.
I’m going to repeat myself as your last paragraph seems to indicate you missed it: I’m *not* of the view that LLMs are capable of AGI, and I think it’s clear to every objective observer with an interest that no LLM has yet reached AGI. All I said is that like cats and rabbits and lizards and birds, LLMs do exhibit some degree of intelligence.
I have been enjoying talking with you, as it’s actually quite refreshing to discuss this with someone who doesn’t confuse consciousness and intelligence, as they are clearly not related. One of the things that LLMs do give us, for the first time, is a system which has intelligence - it has some kind of model of the universe, however primitive, to which it can apply logical rules, yet clearly it has zero consciousness.
You are making some big assumptions though - in particular, that an AGI would “have a subjective sense of self” as soon as it can “move, learn, predict, and update”. That’s a huge leap, and it feels a bit to me like you are close to making that schoolboy error of mixing up intelligence and consciousness.