The inner workings of “AI” (read: a large language model) are nothing more than a probabilistic game of guess-the-next-token. The inner workings of human intelligence and consciousness are not fully understood by modern science. Our thought processes are somehow “better” because the artificial version of them is a cheap imitation, practically no better than flipping a coin or rolling a die.
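To make “guess the next token” concrete, here is a minimal sketch of the sampling step. Everything in it is invented for illustration (a five-word vocabulary, made-up scores); a real LLM computes the scores with a neural network over a vocabulary of tens of thousands of tokens.

```python
import math
import random

# Hypothetical toy example: the vocabulary and logits below are invented
# for illustration; a real model produces logits with a neural network.
vocab = ["coin", "die", "flip", "roll", "guess"]
logits = [2.1, 0.4, 1.3, -0.2, 0.8]  # made-up scores for each candidate token

# Softmax converts the scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The "probabilistic guess": sample one token, weighted by those probabilities.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({t: round(p, 3) for t, p in zip(vocab, probs)}, "->", next_token)
```

The draw is weighted by the model’s learned scores rather than uniform, which is the sense in which the “game” is probabilistic: a guess, but a loaded one.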
Our thought processes contain mechanisms that are not well understood, to the point where LLMs (as a side effect of the limits of what human programmers can model, not to their discredit) are unable to mimic them effectively. An example of this would be ChatGPT encouraging self-harm where a human assistant would not.
You say our thought processes are not well understood, yet you’re so sure they’re better. How are you so sure?
Emotions and a conscience are separate from intelligence.
How can you be so sure?