Our thought processes contain mechanisms that are not well understood, to the point that LLMs (through no fault of their programmers, who cannot replicate what is not understood) are unable to effectively mimic them. An example would be ChatGPT encouraging self-harm where a human assistant would not.
You say our thought processes are not well understood, yet you're so sure they're better. How are you so sure?
Emotions and a conscience are separate from intelligence.