I fucking hate that everyone has just accepted calling them AI. They are not AI, they are LLMs, which are nowhere close to the sci-fi idea of AI, and no matter what Sam says, I don’t think they are the way to get to AGI.
For clarification: I understand that LLMs are a subset of AI, but it feels like calling squares rectangles: technically true, but misleading.
Not only that: calling the field “AI” has the hype built in.
-
I work in the field of intelligent machines.
-
Oh cool, so you can build intelligent machines?
-
Hell no. We just call the field that. For reasons.
Edit: my dialogue dashes became blocks. Must be an intelligent machine changing them or something.
-
I fucking knew this story was bullshit, and the scientist emailing a random plebe at Google (as if some low-level employee would know, or be allowed to be honest, about AI shitfuckery), all shocked, was a joke too. Pretty disappointed with a scientist feeding into this horseshit.
When it first ran, I posited that if they had emailed their documentation to someone with a Gmail address, it might have been up for grabs to be sucked down the maw of Google’s AI monstrosities.
Finally, even when it comes to the “right” answer, there is no way to know whether it hallucinated its way there! Which makes getting the “right” answer effectively pointless.
The AI guys really are running the exact same cheat every time, aren’t they? Thanks to Pivot to AI for continuing to shine a light on this… I hope the wider press eventually catches on, too.
Aren’t so-called AIs really just a computer version of the infinite monkey/typewriter thought experiment?
No, the typewriters are supposed to be random, while this is guided by previous work, so whole regions of output become extremely unlikely (without doing the math on it, the monkeys would still hit those regions if you ran the experiment infinitely many times, just very rarely).
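If it helps make the contrast concrete, here is a toy sketch in Python. Everything in it is made up for illustration (the alphabet, the throwaway corpus, and the names monkey, train_bigrams, and guided); real LLMs use learned neural networks over tokens, not bigram counts, but weighting the next output by what came before is the same idea in miniature.

```python
import random

# Toy contrast between uniform randomness (the monkeys) and sampling
# guided by previous text. Not how any real LLM works; bigram counts
# stand in for a learned model purely for illustration.

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def monkey(n):
    """Infinite-monkey mode: every character is uniformly random."""
    return "".join(random.choice(ALPHABET) for _ in range(n))

def train_bigrams(corpus):
    """Count which character tends to follow which in the corpus --
    a crude stand-in for being 'guided based on previous work'."""
    counts = {}
    for a, b in zip(corpus, corpus[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1
    return counts

def guided(counts, n, seed="t"):
    """Sample each next character in proportion to how often it
    followed the previous one. Sequences the corpus never produced
    become wildly improbable instead of equally likely."""
    out = [seed]
    for _ in range(n - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            out.append(random.choice(ALPHABET))  # fall back to monkey mode
            continue
        chars, weights = zip(*nxt.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the cat sat on the mat the cat ate the rat " * 50
counts = train_bigrams(corpus)
print("monkey:", monkey(40))
print("guided:", guided(counts, 40))
```

Run it a few times: the monkey line is uniform noise, while the guided line keeps falling into corpus-like patterns, and strings the corpus never produced become correspondingly improbable.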