Seems to me it’d be pretty easy to tell. If the footage was AI generated, fingers would be appearing and disappearing.
It’s a solved problem now. Most good AI models generate correct fingers these days.
Well, no, actually: AI image models still generate bad fingers constantly; it’s just become easier to fix via a secondary step (e.g. img2img), or you just generate 50 images and pick the ones that don’t have messed-up fingers 🤷
Cries in Artbreeder credits.
I mean, probably not in 3-5 years let alone 10.
And by that time the neural nets will have figured out anatomically correct hands anyway making this product doubly moot
It’s scary how fast the AI sector improves: people are still talking about something that stopped being a problem a month after launch, at least if the person sharing the picture spent a bit more time than just writing a prompt and tapping “GENERATE”. It wasn’t really a problem even back then; you could already choose the character’s pose and all sorts of other parameters. For several months now you’ve been able to do the bare minimum and still get the correct number of fingers.
People are underestimating AI improvement rate by a lot and big tech’s gonna abuse it.
Big tech proved in 48 hours with the OpenAI fiasco that, as with every other industry, ethics are gone and money wins in today’s hyper-capitalist system. Whatever promise AI ever held for being used for good is now vastly overshadowed by the likelihood that it will be used to increase quarterly profits for the highest bidder, along with whatever side effects that entails.
Luckily, AI is a public technology. That’s why they’re already trying their hand at regulatory capture, and they might just get it, just like they’re trying to destroy encryption. Support open-source development; it’s our only chance. Their AI will never work for us. John Carmack put it best.
AM and the other AIs from the short story “I Have No Mouth, and I Must Scream” could become a reality. The deep hatred AM has for humans is never explained and could be an alignment problem; those AIs were AGIs built to wage wars, after all.
I really recommend Robert Miles’ videos. He’s been uploading videos about AI safety research for 6 years, since the most powerful AIs had only millions of parameters and were vastly undertrained.
Here is an alternative Piped link(s):
https://piped.video/bJLcIBixGj8
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
big tech’s gonna abuse it.
Actually, it’s everyone that’s going to abuse it. Big tech wants to be the exclusive “AI provider” for everyday people’s AI needs and desires, but the reality is that the tech isn’t that easy to keep secret/proprietary, because most of the innovations pushing AI forward come from individuals fooling around with the technology and from academia, not from big tech R&D (which lately seems to be spent entirely on improving business processes).
Big tech is spending billions on hardware and entire data centers just to do AI stuff, with the expectation that it’ll give them a competitive advantage. But the truth is that it’ll be the small companies and individuals that end up taking advantage of AI in ways that actually improve things for everyday people and/or make real money.
My guess is that they’re betting on acquisitions of companies using their AI processing power 🤷. Either that or it’s just wishful thinking.
AI scams are already rampant where they pretend to be a loved one asking for help (read: “I’m in a bad situation right now, can you send me as much money as you can?”), and unsurprisingly it’s unreasonably effective, especially on older people.
Just a reminder that the tech companies absolutely do not see the above as an issue, by the way. In fact, all they seem to do is tacitly endorse it by advertising that you can use their service to clone people and “bring them to life” virtually. They’re still making money when you use the AI (not to mention they collect and retain the training data you give them, with or without the subject’s consent), and it’s not like it’s easy for investigators to tell which AI was responsible for a particular scam campaign, so there’s really no risk to their reputation at all.
Modern problems require modern solutions
Dystopian problems require dystopian solutions
The fingers wouldn’t work as a disguise: real prosthetic fingers move consistently, rather than blending in and out of existence the way AI-generated footage does.
This reminds me of the product image of a gun that disguises itself as a cell phone. It was never a real product, but US law enforcement uses it to justify shooting people brandishing a cell phone.
IANAL but this seems stupid to try in court assuming the footage is under good chain of custody.
criminals got to crimi
Edward Penishands got it way before we did.
Saw it years ago. Surprisingly boring.
It takes talent to turn one stupid joke into something special and compelling, and they lacked it. At least they were dedicated enough to make material for meme cuts. And I feel they themselves had fun filming it (:
True, but it wasn’t even good porn. And that doesn’t take a huge amount of talent.
You mean politicians.
truth is dead