I’m going to repeat myself, as your last paragraph suggests you missed it: I’m *not* of the view that LLMs are capable of AGI, and I think it’s clear to any objective observer that no LLM has yet reached AGI. All I said is that, like cats and rabbits and lizards and birds, LLMs do exhibit some degree of intelligence.
I have been enjoying talking with you - it’s genuinely refreshing to discuss this with someone who doesn’t confuse consciousness with intelligence, because the two are clearly distinct. One thing LLMs give us, for the first time, is a system that has intelligence - some kind of model of the world, however primitive, to which it can apply logical rules - yet clearly has zero consciousness.
You are making some big assumptions, though - in particular, that an AGI would “have a subjective sense of self” as soon as it can “move, learn, predict, and update”. That’s a huge leap, and it comes close to that schoolboy error of mixing up intelligence and consciousness.
Intelligence and consciousness are not related in the way you seem to think.
We’ve always known that you can have consciousness without a high level of intelligence (think of young children, or people with certain kinds of brain damage), and now, for the first time, LLMs show us that you can have intelligence without consciousness.
It’s naive to think that as we continue to develop intelligent machines, one of them will become conscious once it reaches a particular level of intelligence. Did you suddenly become conscious when you hit the age of 14 or whatever, having finally developed a deep enough understanding of trigonometry or a solid enough grasp of the works of Mark Twain? No, of course not: you became conscious at a very early age, when even a basic computer program could outsmart you, and you developed intelligence quite independently.