

Note that the chain-of-thought thing originated with users as a prompt “hack”: you’d ask the bot to “go through the task step by step, checking your work and explaining what you are doing along the way” to supposedly get better results. There’s no more to it than pure LLM vomit.
(I believe it does have the potential to help somewhat, in that it’s more or less equivalent to running the query several times and averaging the results, so you get an answer closer to the center of the model’s output distribution. Certainly nothing to do with thought.)
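(To be clear about what “averaging the results” buys you, here’s a toy sketch — no real model involved, just a stand-in function I made up that’s right most of the time and off by one otherwise. Majority-voting several noisy runs beats a single run, which is all that’s going on.)

```python
import random
from collections import Counter

def noisy_answer(true_answer: int, error_rate: float = 0.3) -> int:
    """Stand-in for one LLM query: correct most of the time, off by one otherwise."""
    if random.random() < error_rate:
        return true_answer + random.choice([-1, 1])
    return true_answer

def majority_vote(true_answer: int, samples: int = 7) -> int:
    """Run the 'query' several times and keep the most common answer."""
    votes = Counter(noisy_answer(true_answer) for _ in range(samples))
    return votes.most_common(1)[0][0]

random.seed(0)
# Accuracy of a single draw vs. a 7-sample majority vote, over 1000 trials.
single = sum(noisy_answer(42) == 42 for _ in range(1000)) / 1000
voted = sum(majority_vote(42) == 42 for _ in range(1000)) / 1000
print(single, voted)
```

Voting pulls you toward the distribution’s mode — statistics, not cognition.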
The key thing is that the basilisk makes a million billion digibidilion copies of you to torture, and since you know your statistics, you know there’s almost no chance you’re the real you rather than a torture copy.