Facts, reasoning, ethics, etc. are outside the scope of an LLM. Expecting otherwise is like expecting a stand mixer to bake a cake. It is helpful for a decent part of the process, but it falls short when it comes to using heat to turn batter into a tasty dessert. An AI like one from the movies would require many more pieces than an LLM can provide, and saying otherwise is a category mistake*.
That isn’t to say that something won’t be developed eventually, but it would be FAR beyond an LLM if it is even possible.
(* See also: https://plato.stanford.edu/entries/category-mistakes/)