Imagine coming across, on a reasonably serious site, an article that starts along the lines of:

After observing the generative AI space for a while, I feel I have to ask: does ChatGPT (and other LLM-based chatbots)… actually gablergh? And if I am honest with myself, I cannot but conclude that it sure does seem so, to some extent!

(…)

Naturally, your immediate reaction would not be to make a serious thinking face and consider deeply whether or not GPT indeed “gablerghs”, and if so to what degree. Instead, you would first expect the author to define the term “gablergh” and provide some relevant criteria for establishing whether something “gablerghs”.

Yet somehow, when hype-peddlers claim that LLMs (and tools built around them, like ChatGPT) “think”, nobody demands that they clarify what they actually mean by that…