The majority of the “AI inside” software and solutions are like that. It’s a bubble, and everyone is throwing crap at a wall hoping it sticks.
“AI” is the new “blockchain”. It’s a solution looking for a solid problem to tackle, with some niche applications.
I mean, at least AI has SOME useful applications; blockchain was just wasting energy for some numbers.
Blockchain also has some useful applications. Most (but not all) of them were also possible with technology that existed when Bitcoin was first created, at far lower cost and with only a minor tradeoff in accuracy. On top of that, almost none of them are related to speculative markets.
It’s a way to keep distributed transaction logs in a non-refutable and independently verifiable way. That’s useful and important, but it was a solution in search of a problem. Even for the highest-security, most at-risk transactions, the existing international financial systems are “good enough” to ensure the reliability of transaction logs.
In the end, blockchain and now AI are just falling victim to con men trying to milk as much money as they can from these things before people build a working understanding of them. They’ll just keep moving on to the next big thing as it comes.
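The “non-refutable, independently verifiable transaction log” idea mentioned above can be sketched without any distributed-consensus machinery: each entry’s hash commits to the previous entry, so anyone holding a copy can recompute the whole chain and detect tampering. This is a toy illustration, not any real blockchain’s format; all names are made up.

```python
import hashlib
import json

def add_entry(chain, payload):
    # Each entry's hash commits to the previous entry's hash.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    # Anyone holding the chain can recompute every hash independently.
    prev_hash = "0" * 64
    for entry in chain:
        body = {"payload": entry["payload"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

log = []
add_entry(log, {"from": "alice", "to": "bob", "amount": 5})
add_entry(log, {"from": "bob", "to": "carol", "amount": 2})
assert verify(log)
log[0]["payload"]["amount"] = 500   # tamper with history
assert not verify(log)
```

Changing any past entry breaks every recomputed hash from that point on, which is exactly the “non-refutable” property; the distributed part of a real blockchain is about agreeing on which chain is canonical, which a bank ledger already handles by fiat.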
Mhhh
I just wish people had long enough memories to see the cycle with terms like these. Some new word comes into vogue, and companies fall over themselves trying to find ways to implement it for shareholders and consumers who have no idea what it actually represents. As that fades, a new term arises… it’s sad.
Blockchain also has problems it’s solving. I reckon the whole “it’s all bullshit” narrative was a psyop by the US government, because finances they couldn’t have absolute control over would allow the people to be free. I reckon Monero is the best at present, especially since it’s actually anonymous payments.
I am so over hearing about AI. It’s getting to the point that I can assume anyone dropping the term at work is an idiot who hasn’t actually used it.
It’s this LLM phase. It’s super cool and a big jump in AI, but it’s honestly not that good. It’s a handy tool, and one you need to heavily scrutinise beyond basic tasks. Businesses that jumped on it are now seeing the negative effects of thinking it was magic from the future that does everything. The truth is, it’s stupid, and people need to learn about it, understand it, and be trained in how to use it before it can be effective. It is a tool, not a solution, at least for now anyway.
So, like a hammer. A very expensive, environment-destroying hammer.
That’s actually a pretty good analogy.
I think it’s more like the discovery of fire or something. 90% of all the energy burnt is people worshipping it as it blazes away, never actually fulfilling any practical use except being marvellous to be around.
But once the forest is all chopped down, people are forced to understand fire and realise that a couple of small logs in a contained place were all they needed for it to be incredibly effective.
Oh, but that’s too hard. It’s magic right now. All hail the AI bonfire!
Genuinely curious, how does it destroy the environment?
Massive energy consumption. Huge datacenters and not enough green energy. Now they want to build small nuclear plants, without talking about the waste problem.
Their waste is less destructive than a coal plant’s, though. Perhaps this could be a silver lining that finally gets nuclear back in action and gets us closer to dropping coal once and for all.
So AI uses energy, and it’s how we are choosing to provide that energy that is destructive to the environment? So AI isn’t itself destructive.
Ah yeah, just choose a different energy source. Simples.
Have you seen the growth in the percentage of renewable (incl. nuclear, biofuel, and waste) electricity generation over the past 30 years? (36% in 1990, dropped to about 33% in the late 2000s, up to 38% recently.) These are global IEA figures.
There have been two years since 1990 when renewable electricity output grew faster than total electricity demand: the 2008/09 recession and 2020 COVID. The only way renewables will come close to meeting current electricity consumption is actually to start reducing those demands.
If we start transferring gas (domestic heating) and petrol (low-capacity road transportation) onto the electricity grid, then the scale and speed of renewables needs to ramp up inconceivably quickly. It has grown fast over the past decade, but it hasn’t been cheap, nor has it been fast enough to keep up with current demands.
TBH I don’t know where AI lines up next to EVs in scale of potential extra demand, probably lower but still an added demand (unless it can substitute for other stuff and improve efficiency somehow).
Electricity source is not really a choice; it is resource- and tech-constrained, and many sources are needed. The cheapest fuels will continue to be in the mix so long as demand keeps increasing this fast.
Maybe, if you ran all AI in people’s houses in cold countries in winter, it’d substitute for heating. That’d be one way it could reduce its impact. Or maybe it can get its act together and spark widespread, frequent, deep, long-lasting recessions in economic activity.
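The point about the renewable share only rising in recession years is just arithmetic: the share is renewable output divided by total output, so it only climbs in years when renewables grow proportionally faster than demand. A quick check with illustrative numbers (not real IEA data):

```python
# Illustrative numbers only, not IEA data. The renewable *share* rises
# only when renewable output grows proportionally faster than total demand.
def share(renewable, total):
    return renewable / total

r0, t0 = 360.0, 1000.0           # baseline year: 36% renewable
r1, t1 = r0 * 1.05, t0 * 1.06    # renewables +5%, total demand +6%
assert share(r1, t1) < share(r0, t0)   # share falls despite renewables growing

r2, t2 = r0 * 1.07, t0 * 1.06    # renewables +7%, total demand +6%
assert share(r2, t2) > share(r0, t0)   # share rises only when outpacing demand
```

So renewable capacity can grow every single year and the grid can still get no greener in percentage terms, which is what the near-flat 36%→33%→38% figures above reflect.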
Maybe renewables aren’t the solution to our energy needs if they can’t scale up like we thought they could. Conservation of energy is not the answer. We as a society must find new, cleaner sources of energy. Maybe AI can help us do it.
Perpetual motion machines are the mother of all snake oil. Maybe AI can turn base metals into gold too. Do these AIs even have a demonstrable understanding of thermodynamics yet? It needs to prove itself with usable output for a clear, observable application on a small scale before anyone starts chucking vast amounts of energy at it in the hope of what it can “maybe” do. I’d much rather chuck all that energy into trials of tokamaks or something like that.
To be fair, the energy issues are getting better, or at least some more efficient models are being created. Back in April there was a version of Llama that only needed 8 GB to almost match GPT-4.
There’s one good use case for me: producing a big load of trial content in no time for load testing new stuff. “Make 2000 yada yada with column x and z…”. It keeps testing fun and varied with lots of test data, and it doesn’t matter that it’s all nonsense.
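For what it’s worth, the same throwaway load-test data can be generated without an LLM in a few lines of stdlib Python. The column names below are placeholders standing in for the “column x and z” of the prompt above:

```python
import csv
import random
import string

# Generate 2000 nonsense rows for load testing.
# Column names are placeholders; the content being garbage is the point.
def random_word(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length))

with open("trial_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "column_x", "column_z"])
    for i in range(2000):
        writer.writerow([i, random_word(), random.randint(0, 10_000)])
```

Where the LLM earns its keep is when the test data needs to look plausible (realistic names, addresses, product descriptions) rather than just filling the columns.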
I’ve found that testing code or formulas with an LLM is a 50/50 now. Very often it replies “use function blabla() and such and so” with very detailed instructions, while the suggested function just doesn’t exist at all in the language asked for… it’s still something I’d try if I’m very stuck though, you never know.
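One cheap defence against the made-up-function problem, at least in Python, is to mechanically check that a suggested name actually resolves in the module it’s claimed to live in before building on it. A minimal sketch:

```python
import importlib

def function_exists(module_name, func_name):
    # Check that an LLM-suggested function actually exists and is callable.
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return callable(getattr(module, func_name, None))

assert function_exists("math", "sqrt") is True
assert function_exists("math", "blabla") is False   # hallucinated name
```

It won’t tell you the function does what the LLM claims, but it filters out the ones that flatly don’t exist before you waste time on them.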
I’ve noticed this a lot too—especially for M. But even though it makes up a function, it sometimes inspires a more optimised idea/method that can be more flexible for future datasets.
But most of the time it starts to massacre things and disregard prompted parameters, or even produces an identical suggestion immediately after being told not to, why not to, and reconfirming the original parameters of the query.
Sometimes punching in the same prompt five times for five iterations produces completely different results, but one may be on the right track and I can code the rest. It helps to set its personality first, so it’s sharing ideas it has seen out there rather than trying to please.
At the least, it’s a big time saver. Gone are the days when I’d get a few days spare to work on solving a complex problem through trial and discovery, so it’s an excellent tool for reducing testing time and speeding up the route to an optimised method.
Notice how all of those crypto features were quietly removed from platforms after people realised they were paying millions for some numbers. I think that will happen with AI.
I just got a notification on my phone telling me that I can chat with my PDF documents. Why the fuck would I want to do that? Do these companies realize that literally no one is asking for this shit? I also saw an ad for a computer mouse that had AI inside it. Whatever that means.
I just got a notification on my phone telling me that I can chat with my PDF documents
I believe you got that notification, but I honestly have no idea what it even means.
It’s from the Adobe Acrobat app. Basically you can ask it to give you a summary of whatever document you’re reading.
Don’t knock it too quickly. I thought like you, but one evening I was a little tipsy and started chatting with a PDF document. Let’s just say things got a little heated and now we’re engaged.
Oddly enough, that’s one of the few functions I’ve found the LLMs useful for. When looking through big PDFs for specific information, a lot of the time “ctrl+f” doesn’t do the trick because the exact term I’m looking for doesn’t appear. Worse, sometimes it’s a phrase that could be in there under many synonyms. Using the LLM to find the actual info is pretty nice; it just isn’t “AI”.
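The “ctrl+f but for synonyms” part of that can be approximated without a model at all, if you already have a list of the phrasings you care about. A toy sketch over extracted PDF text (the synonym list is something you’d supply yourself, or ask the LLM to suggest):

```python
import re

def find_mentions(text, synonyms):
    # Search for ANY of the supplied phrasings, case-insensitively,
    # instead of the single exact term ctrl+f would need.
    pattern = re.compile("|".join(re.escape(s) for s in synonyms), re.IGNORECASE)
    return [(m.start(), m.group()) for m in pattern.finditer(text)]

doc = "The contract may be terminated early. Cancellation requires notice."
hits = find_mentions(doc, ["terminate", "cancellation", "rescind"])
assert [h[1].lower() for h in hits] == ["terminate", "cancellation"]
```

What the LLM adds on top is the case where you can’t enumerate the synonyms in advance, or the idea is expressed across a whole sentence rather than in any single word.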
My research was literally on AI back in college. Most “AI” solutions are just basic algorithms and don’t use real AI. There’s a huge difference.
It’s even better than that. A lot of companies are taking NVIDIA’s pre-built workflows, running their data through them and selling the results as their own AI. “We build proprietary RAG AI!”
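For context on how thin that “proprietary RAG AI” layer can be: retrieval-augmented generation is just retrieve-relevant-documents, then stuff them into the prompt. A toy version, with a naive word-overlap retriever and a stub standing in for whatever model API would actually be called (all names here are illustrative):

```python
def retrieve(query, documents, k=2):
    # Toy retriever: rank documents by word overlap with the query.
    # Real systems use embeddings, but the shape of the pipeline is the same.
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(query, documents, llm_call):
    # RAG: retrieve, then generate with the retrieved text in the prompt.
    context = "\n".join(retrieve(query, documents))
    prompt = f"Using only this context:\n{context}\n\nAnswer: {query}"
    return llm_call(prompt)

docs = [
    "The warranty covers parts for two years.",
    "Shipping takes five business days.",
    "Returns are accepted within thirty days.",
]
# Stub in place of a real model API; echoes the top retrieved line.
fake_llm = lambda prompt: prompt.splitlines()[1]
print(answer("how long does the warranty cover parts", docs, fake_llm))
```

The retriever, the prompt template, and the data are the whole pipeline; if the workflow itself is pre-built, what’s left to be “proprietary” is mostly the data.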
I can’t wait to get a Smart AI refrigerator that tells me I have a bunch of food that isn’t really in there even when I didn’t ask it to.
Watched a bit of a video of a guy who went to Computex and asked every vendor with AI plastered somewhere what they were doing with it. Most spouted some meaningless word salad and a few literally shrugged.