- cross-posted to:
- techtakes@awful.systems
AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits
nothing to do with actual capabilities… just the ability to make piles and piles of money.
The same way these capitalists evaluate human beings.
Guess we’re never getting AGI then, there’s no way they end up with that much profit before this whole AI bubble collapses and their value plummets.
AI (LLM software) is not a bubble. It’s been effectively implemented as a utility framework across many platforms. Most of those platforms are using OpenAI’s models. I don’t know when or if that’ll make OpenAI 100 billion dollars, but it’s not a bubble - this is not the .COM situation.
The vast majority of those implementations are worthless, mostly ignored by their intended users and seen as a useless gimmick.
LLMs have their uses, but companies are pushing them into every area at the moment to see what sticks.
Not the person you replied to, but I think you’re both “right”. The ridiculous hype bubble (I’ll call it that for sure) put “AI” everywhere, and most of those are useless gimmicks.
But there’s also already uses that offer things I’d call novel and useful enough to have some staying power, which also means they’ll be iterated on and improved to whatever degree there is useful stuff there.
(And just to be clear, an LLM - no matter the use cases and bells and whistles - seems completely incapable of approaching any reasonable definition of AGI, to me)
I think people misunderstand what a bubble is. The .com bubble happened, but the internet was useful and stayed around. The AI bubble doesn’t mean AI isn’t useful, just that most of the chaff will disappear.
The dotcom bubble was based on technology that had already been around for ten years. The AI bubble is based on technology that doesn’t exist yet.
Yeah, so it’s a question of whether OpenAI loses too many of its investors once all the users who don’t stick around fall away.
To each his own, but I use Copilot and the ChatGPT app productively on a daily basis. The Copilot integration with our SharePoint files is extremely helpful; I’m able to curate data that would not show up in a standard search of file names and content indexing.
This is simply false.
To be fair, a bubble is more of an economic thing and not necessarily tied to product/service features.
LLMs clearly have utility, but is it enough to turn them into a profitable business line?
You’re right about the definition, and I do think LLMs will aid a product offering’s profitability, if not directly generate profits. But OP didn’t mean it economically; they meant LLMs will go the way of slap bracelets.
… before this whole AI bubble collapses and their value plummets.
Sounds like they meant economics to me.
They said “AI bubble collapses” first, then “their value plummets”: the product’s practical use stops (people stop using it) first, and that in turn causes the economic collapse for the companies.
It’s obvious that the OP is expecting LLMs to be a fad that people will soon be forgetting.
It’s a bubble. It doesn’t mean the tech does not have its uses. And it is exactly like the .com situation.
Completely wrong.
Ever heard of the internet bubble?
Yeah. That’s what I just mentioned.
That’s an Onion level of capitalism
The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it’s not a philosophical term but a business one.
Right, but that’s not interesting to anyone but themselves. So why call it AGI then? Why not just say that once the company has made over X amount of money, it splits off into a separate company? Why lie and say you’ve developed something you might not have developed?
Honestly, I agree. $100 billion in profit would be incredibly impressive and would overtake basically any other software company in the world, but alas it has nothing to do with “AGI”. For context, Apple’s net income is about $90 billion this year.
I’ve listened to enough interviews to know that all of the AI leaders want the holy-grail title of “inventor of AGI” more than anything else, so I don’t think the definition will ever be settled collectively until something so mind-blowing exists that it renders the definition moot either way.
For fun, I put this idea to ChatGPT using similar logic:
Hello, can you explain to me how a strict profit generated number would result in a specific goal, such as if I make 1000 dollars I would suddenly be capable of owning a house. I find the idea strange and backwards.
You’re correct to find this idea odd—it reflects a misunderstanding of how financial goals, like owning a house, relate to income. Here’s why this thinking is flawed:
- Profit vs. Net Worth: Making $1,000 in profit doesn’t directly equate to being able to afford a house because buying a house requires not just income but also savings, creditworthiness, and a down payment. Profit is only a small piece of the puzzle.
- Costs and Expenses: Owning a house involves more than just the purchase price. You need to consider closing costs, property taxes, maintenance, and potentially ongoing mortgage payments. $1,000 would likely not be enough to cover even a fraction of these.
- Leverage and Financing: Most people don’t buy houses outright; they use loans or mortgages. To secure a mortgage, lenders look at your total financial picture: income, savings, debt, and credit score. Simply generating $1,000 won’t improve these factors significantly.
- Specificity of the Goal: “Making $1,000” is a general financial milestone, not a specific strategy tailored to the requirements of owning a home. A more actionable goal would involve budgeting, saving for a down payment, and improving credit.
- The Backwards Logic: The idea that reaching an arbitrary profit threshold suddenly makes you eligible for something like owning a house conflates short-term gains with long-term financial stability. A better approach is to reverse-engineer the goal: start by determining the total cost of the house, then work backwards to figure out how much you need to save monthly, how much income you need to earn, and what steps are required to secure financing.
Example of Goal-Oriented Thinking:
If you wanted to own a $200,000 house:
- You might need a 10% down payment ($20,000).
- Closing costs could add $5,000–$10,000.
- Monthly income would need to support a mortgage payment (let’s say $1,200/month).
- Steps would include saving regularly, reducing debt, and improving credit, rather than focusing on an isolated profit milestone like $1,000.
Summary:
Focusing on a single, arbitrary profit number like $1,000 doesn’t align with the comprehensive planning required for significant financial goals like home ownership. Instead, success depends on a holistic view of your finances and structured goal-setting.
We taught sand to do math
And now we’re teaching it to dream
All the stupid fucks can think to do with it
Is sell more cars
Cars, and snake oil, and propaganda
Then we untaught it to do math.
https://medium.com/tomorrows-jobs-today/why-chatgpt-cant-do-basic-math-fc1a132e9773
I dunno, I don’t do math very well when I dream.
this is almost a haiku
“It’s at a human-level equivalent of intelligence when it makes enough profits” is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.
We’ve had a definition for AGI for decades: a system that can do any cognitive task as well as a human can, or better. Humans are “generally intelligent”; replicate the same thing artificially and you’ve got AGI.
So if you give a human and a system 10 tasks, and the human completes 3 correctly, 4 incorrectly, and 3 it fails to complete altogether… and then you give those 10 tasks to the software and it does 9 correctly and fails to complete 1, what does that mean? In general, I’d say the tasks need to be defined, since I can give people plenty of tasks right now that language models can solve and they can’t, but language models aren’t AGI in my opinion.
Agree. And these tasks can’t be tailored to the AI in order for it to have a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner. Or at least do something comparable to a human. Just wording emails and writing boilerplate computer-code isn’t enough in my eyes. Especially since it even struggles to do that. It’s the “general” that is missing.
It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner.
This is more about robotics than AGI. A system can be generally intelligent without having a physical body.
You’re - of course - right. Though I’m always a bit unsure about exactly that. We also don’t attribute intelligence to books. For example an encyclopedia, or Wikipedia… That has a lot of knowledge stored, yet it is not intelligent. That makes me believe being intelligent has something to do with being able to apply knowledge, and do something with it. And outputting text is just one very limited form of interacting with the world.
And since we’re using humans as a benchmark for the “general” part in AGI… Humans have several senses, they’re able to interact with their environment in lots of ways, and 90% of that isn’t drawing and communicating with words. That makes me wonder: Where exactly is the boundary between an encyclopedia and an intelligent entity… Is intelligence a useful metric if we exclude being able to do anything useful with it? And how much do we exclude by not factoring in parts of the environment/world?
And is there a difference between being book-smart and intelligent? Because LLMs certainly get all of their information second-hand and filtered in some way. They can’t really see the world itself, smell it, touch it and manipulate something and observe the consequences… They only get a textual description of what someone did and put into words in some book or text on the internet. Is that a minor or major limitation, and do we know for sure this doesn’t matter?
(Plus, I think we need to get “hallucinations” under control. That’s also not 100% “intelligence”, but it also cuts into actual use if that intelligence isn’t reliably there.)
On the same hand… “Fluently translate this email into 10 random and distinct languages” is a task that 99.999% of humans would fail but that a language model should be able to handle.
Agree. That’s a super useful thing LLMs can do. I’m still waiting for Mozilla to integrate Japanese and a few other (distant to me) languages into my browser. And it’s a huge step up from Google translate. It can do (to a degree) proverbs, nuance, tone… There are a few things AI or machine learning can do very well. And outperform any human by a decent margin.
On the other hand, we’re talking about general intelligence here. And translating is just one niche task. By definition that’s narrow intelligence. But indeed very useful to have, and I hope this will connect people and broaden their (and my) horizon.
Any cognitive task. Not “the 9 out of 10 you were able to think of right now”.
“Any” is very hard to benchmark, and it’s also not how humans are tested.
It’s a definition, but not an effective one, in the sense that it doesn’t give us a way to test for and recognize it. Can we list all the cognitive tasks a human can do? To avoid testing a probably infinite list, we should instead work out which basic cognitive abilities of humans compose all the other cognitive abilities we have, if that’s even possible. Like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms and is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also limits of human cognition.
I wonder if we’ll get something like NP Complete for AGI, as in a set of problems that humans can solve, or that common problems can be simplified down/converted to.
But we know too little about whether the limits of the Turing machine are also limits of human cognition.
Erm, no. Humans can manually step interpreters of Turing-complete languages, so we’re Turing-complete ourselves. There is no more powerful class of computation; we can compute any computable function, and our silicon computers can do it as well (given infinite time and scratch space, yada yada, theoretical wibbles).
The question isn’t “whether”, the answer to that is “yes, of course”; the question is first and foremost “what” and then “how”, as in “is it fast and efficient enough”.
No, you misread what I said. Of course humans are at least as powerful as a Turing machine, I’m not questioning that. What is unknown is whether Turing machines are as powerful as human cognition. Who says every brain operation is computable (in the classical sense)? Who is to say the brain doesn’t take advantage of some weird physical phenomenon that isn’t classically computable?
Who is to say the brain doesn’t take advantage of some weird physical phenomenon that isn’t classically computable?
Logic, from which follows the incompleteness theorem, reified in material reality as cause and effect. Instead of completeness you could throw out soundness (that is, throw out cause and effect) but now the physicists are after you because you made them fend off even more Boltzmann brains. There is theory on hypercomputation but all it really boils down to is “if incomputable inputs are allowed, then we can compute the incomputable”. It should be called reasoning modulo oracles.
Or, put bluntly: Claiming that brains are legit hypercomputers amounts to saying that humanity is supernatural, as in aphysical. Even if that were the case, what would hinder an AI from harnessing the same supernatural phenomenon? The gods?
You say an incompleteness theorem implies that brains are computable? Then you consider the possibility of them being hypercomputers? What is this?
I’m not saying brains are hypercomputers, just that we don’t know if that’s the case. If you think that would be “supernatural”, OK, I don’t mind. And I don’t object to the possibility of eventually having AI on hypercomputers. All I said is that the plain old Turing machine wouldn’t be the adequate model for human cognitive capacity in this scenario.
You say an incompleteness theorem implies that brains are computable?
No, I’m saying that incompleteness implies that either cause and effect does not exist, or there exist incomputable functions. That follows from considering the universe, or its collection of laws, as a logical system, which are all bound by the incompleteness theorem once they reach a certain expressivity.
All I said is that the plain old Turing machine wouldn’t be the adequate model for human cognitive capacity in this scenario.
Adequate in which sense? Architecturally, of course not, and neither would be lambda calculus or other common models. I’m not talking about specific abstract machines, though, but Turing-completeness, that is, the property of the set of all abstract machines that are as computationally powerful as Turing machines, and can all simulate each other. Those are a dime a gazillion.
Or, see it this way: Imagine a perfect, virtual representation of a human brain stored on an ordinary computer. That computer is powerful enough to simulate all physical laws relevant to the functioning of a human brain… it might take a million years to simulate a second of brain time, but so be it. Such a system would be AGI (for ethically dubious values of “artificial”). That is why I say the “whether” is not the question: We know it is possible. We’ve in fact done it for simpler organisms. The question is how to do it with reasonable efficiency, and that requires an understanding of how the brain does the computations it does so we can mold it directly into silicon instead of going via several steps of one machine simulating another machine, each time incurring simulation overhead from architectural mismatch.
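(To make the “machines simulating each other” point concrete: below is roughly the smallest possible example, a Turing-machine step loop in ordinary Python. The transition table and the bit-flipping machine are toy stand-ins I made up for illustration, nothing more.)

```python
# A minimal Turing-machine simulator: a transition table maps
# (state, symbol) -> (new state, symbol to write, head move).
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy machine: walk right, flipping 0 <-> 1, and halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm(flip_bits, "0110"))  # -> "1001_"
```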
No,
Ok. So nothing you said backs the claim that “logic” implies that the brain cannot be using some uncomputable physical phenomenon, and so be uncomputable.
I’m not sure what you mean by “cause and effect” existing. Does it mean that the universe follows a set of laws? If cause and effect exists, the disjunction you say is implied by the incompleteness theorem entails that there are uncomputable functions, which I take to mean that there are uncomputable oracles in the physical world. But I still find your use of incompleteness suspicious. We take the set of laws governing the universe and turn it into a formal system. How? Does the resulting formal system really meet all the conditions of the incompleteness theorem? Expressivity is just one of many conditions. And even then, the incompleteness theorem says we can’t effectively axiomatize the system… so what?
Adequate in which sense?
I don’t mean just architecturally; the Turing machine wouldn’t be adequate to model the brain in the sense that the brain, in that hypothetical scenario, would be a hypercomputer, and so by definition could not be simulated by a Turing machine. As simple as that. My statement there was almost a tautology.
As with many things, it’s hard to pinpoint the exact moment when narrow AI or pre-AGI transitions into true AGI. However, the definition is clear enough that we can confidently look at something like ChatGPT and say it’s not AGI - nor is it anywhere close. There’s likely a gray area between narrow AI and true AGI where it’s difficult to judge whether what we have qualifies, but once we truly reach AGI, I think it will be undeniable.
I doubt it will remain at “human level” for long. Even if it were no more intelligent than humans, it would still process information millions of times faster, possess near-infinite memory, and have access to all existing information. A system like this would almost certainly be so obviously superintelligent that there would be no question about whether it qualifies as AGI.
I think this is similar to the discussion about when a fetus becomes a person. It may not be possible to pinpoint a specific moment, but we can still look at an embryo and confidently say that it’s not a person, just as we can look at a newborn baby and say that it definitely is. In this analogy, the embryo is ChatGPT, and the baby is AGI.
Oh yeah!? If I’m so dang smart why am I not generating 100 billion dollars in value?
Any or every task?
It should be able to perform any cognitive task a human can. We already have AI systems that are better at individual tasks.
So then how do we define natural general intelligence? I’d argue it’s when something can do better than chance at solving a task without prior training data particular to that task. If a person plays Tetris for the first time, maybe they don’t do very well, but they probably do better than a random set of button inputs.
Likewise with AGI: say you feed an LLM text about the rules of Tetris, but no button presses or actual game data, and then hook it up to play the game. Will it do significantly better than chance? My guess is no, but it would be interesting to try.
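If anyone actually wants to try it, something like the sketch below is what I have in mind: the model only ever sees the written rules plus the current board as text, never any example games. The game environment and the ask_model call are hypothetical stand-ins; you’d have to wire up a real simplified-Tetris implementation and a real LLM client.

```python
import random

RULES_PROMPT = (
    "You are playing simplified Tetris on a 10-column board. A piece falls; "
    "you choose a drop column (0-9) and a rotation (0-3). Completed rows "
    "clear and score one point each. Reply with 'column,rotation' only."
)

def ask_model(board_text: str, piece: str) -> tuple[int, int]:
    """Hypothetical LLM call: send RULES_PROMPT plus the board state, parse 'column,rotation'."""
    raise NotImplementedError("wire up the LLM client of your choice here")

def random_policy(board_text: str, piece: str) -> tuple[int, int]:
    """The chance baseline: a uniformly random column and rotation."""
    return random.randrange(10), random.randrange(4)

def play_game(env, policy) -> int:
    """Run one game with the given policy and return its score."""
    board_text, piece = env.reset()
    done, score = False, 0
    while not done:
        column, rotation = policy(board_text, piece)
        board_text, piece, gained, done = env.step(column, rotation)
        score += gained
    return score

# Comparison, given some TinyTetris() environment:
#   llm_scores    = [play_game(TinyTetris(), ask_model) for _ in range(100)]
#   random_scores = [play_game(TinyTetris(), random_policy) for _ in range(100)]
# "Better than chance" = the LLM's mean score is significantly above the random mean.
```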
That’s kind of too broad, though. It’s too generic of a description.
The key word here is general, friend. We can’t define general any more narrowly, or it would no longer be general.
That’s the idea: humans can adapt to a broad range of tasks, and so should AGI. Proof of a lack of specialization, as it were.
This is just so they can announce at some point in the future that they’ve achieved AGI to the tune of billions in the stock market.
Except that it isn’t AGI.
But OpenAI has received more than $13 billion in funding from Microsoft over the years, and that money has come with a strange contractual agreement that OpenAI would stop allowing Microsoft to use any new technology it develops after AGI is achieved
The real motivation is to not be beholden to Microsoft
That’s not a bad way of defining it, as far as totally objective definitions go. $100 billion is more than the current net income of all of Microsoft. It’s reasonable to expect that an AI which can do that is better than a human being (in fact, better than 228,000 human beings) at everything which matters to Microsoft.
If they actually achieve AGI, I don’t understand what money would even mean anymore. It’s essentially just a mechanism for getting people to do things they don’t otherwise want to do. If the AI can do the job just as well as a human, but for free apart from the electricity costs, why the hell would you pay a human to do it?
It’s like saving up money in case of nuclear war. There are a few particular moments in history where the state of the world on the far side of the event is so different from the world on this side of it that there’s no point making any kind of plans based on today’s systems.
I see what you’re saying and I agree that if, for example, we get an AI god then money won’t be useful. However, that’s not the only possible near-future outcome and if the world as we know it doesn’t end then money can be used by AIs to get other AIs to do something they don’t otherwise want to do.
My point is, if AI takes over all of the work, there won’t be any jobs for humans, so they won’t have any money.
So who are all the AI companies going to sell their products to? The whole system doesn’t work in an AI future, and we don’t need AI gods to be able to do our jobs; after all, most humans are idiots.
Also, AI doesn’t need motivation.
Trade (facilitated by money) doesn’t require humans. It just requires multiple agents and positive-sum interactions. Imagine a company, run by an AI, which makes robots. It sells those robots to another company, also run by an AI, which mines metal (the robots do the mining). The robots are made from metal the first company buys from the second one. The first AI gets to make more robots than it otherwise would, the second AI gets to mine more metal than it otherwise would, and so both are better off.
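Here’s a toy version with made-up numbers, just to show the positive-sum part; the inventories and the per-agent prices are invented purely for illustration.

```python
def value(inventory, robot_price, metal_price):
    """Value an inventory at one agent's own internal prices."""
    return inventory["robots"] * robot_price + inventory["metal"] * metal_price

maker_before = {"robots": 10, "metal": 0}   # builds robots cheaply, has no metal
miner_before = {"robots": 0, "metal": 100}  # digs metal cheaply, has no robots

# The trade: 2 robots for 40 units of metal.
maker_after = {"robots": 8, "metal": 40}
miner_after = {"robots": 2, "metal": 60}

# The maker values a robot at 5 and a unit of metal at 2;
# the miner values a robot at 30 and a unit of metal at 1.
# Different internal prices are what make the trade positive-sum.
print(value(maker_before, 5, 2), "->", value(maker_after, 5, 2))    # 50 -> 120
print(value(miner_before, 30, 1), "->", value(miner_after, 30, 1))  # 100 -> 120
```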
They don’t care that they’re stuck in a loop, the same way humans keep creating new humans to create new humans to create new humans and so forth.
There are still jobs that require hands.
Forget AGI; even a regular “AI” (an LLM) trained on all the automotive repair books should be able to diagnose a fault, but it still needs a human to go repair the vehicle.
On-board diagnostics are smart; they can tell you the rear tail lights are an open circuit, etc. What they can’t tell you is that the back half of the car was ripped off by a train and a new set of bulbs just won’t cut it.
Hence the Worldcoin stuff: it’s not just machine to machine, it allows “AI” to perform real-world actions through human incentivization. Entirely disturbing, if you ask me.
So they don’t actually have a definition of a AGI they just have a point at which they’re going to announce it regardless of if it actually is AGI or not.
Great.
I’m gonna laugh when Skynet comes online, runs the numbers, and finds that the country’s starvation issues can be solved by feeding the rich to the poor.
It would be quite the trope inversion if people sided with the AI overlord.
Why does OpenAI “have” all of this and just sit on it, instead of writing a paper or something? They have a watermarking solution that could help make the world a better place and get rid of some of the slop out there… They have a definition of AGI… Yet they release none of it…
Some people even claim they already have a secret AGI, or that ChatGPT 5 will surely be it. I can see how that increases the company’s value, and why you’d better not tell the truth. But with all the other things, it’s just silly not to share anything.
Either they’re even more greedy than the Metas and Googles out there, or all the articles and “leaks” are just unsubstantiated hype.
Because OpenAI is anything but open. And they make money selling the idea of AI without actually having AI.
Because they don’t have all the things they claim to have, or they have them only with significant caveats. These things are publicised to fuel the hype that attracts investor money, which is pretty much the only way they can generate money, since running the business is unsustainable and the next-gen hardware did not magically solve that problem.
They don’t have AGI. AGI also won’t happen for a large number of years to come.
What they currently have is a bunch of very powerful statistical probability engines that can predict the next word or pixel. That’s it.
AGI is a completely different beast from the current crop of LLMs.
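To make the “statistical probability engine” point concrete, here’s a toy next-word predictor, a bare-bones bigram counter. Real LLMs are vastly more sophisticated, but the objective of predicting the next token is the same idea.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny "training corpus",
# then predict the statistically most frequent follower.
text = "the cat sat on the mat and the cat ate the fish"
words = text.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("the" is followed by "cat" most often here)
```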
You’re right. The current LLM approach has some severe limitations. If we ever achieve AGI, it’ll probably be something that hasn’t been invented yet. Most experts also seem to predict it’ll take some years and won’t happen overnight. I don’t really agree with the “statistical” part, though. I mean, that doesn’t rule anything out… I haven’t seen any mathematical proof that a statistical predictor can’t be AGI… That’s just something non-expert people often say. But the current LLMs have other, genuine limitations as well.
Plus, I don’t have that much use for something that does the homework assignments for me. If we’re dreaming about the future anyways: I’m waiting for an android that can load the dishwasher, dust the shelves and do the laundry for me. I think that’d be massively useful.
Does anyone have a real link to the non-stalkerware version of:
https://www.theinformation.com/articles/microsoft-and-openais-secret-agi-definition
It seems to be the only place with the reference this article claims to cite but doesn’t quote.