what is even the use case? like for a normal user, instead of like clicking on a program, they tell the ai to open the program?
like a shell that is just … worse?
or is it supposed to help you fix a problem with the operating system? which would be ironic
like the only use case i can imagine is old people who just can not handle their pc. But the result will just be them sending all their money to a scammer
Yeah, so they can feel like they’re in Star Trek. Except it’s shitty and nowhere near utopia.
You know, I don’t remember them trying to talk to the Enterprise’s computer in any of the TOS films. They downplayed that a lot, I think for enhanced realism. It stayed canon that you could talk to computers, but they only did so to set the self destruct sequence.
Either way, by TNG they were back at it, and they got in the habit of “Computer: create a holodeck program simulating the appearance and personality of the engineer who designed the Enterprise. Run it in Holodeck 2.” We’re getting pretty close to that capability now, minus the room full of VR.
The problem? LLMs will tell you to add Elmer’s glue to pizza sauce to hold the cheese in place, because someone joked about that on Reddit once. They have no sense of reality.
In fact, it’s kind of amazing how well the writers of TNG did with that episode and its follow-up. The simulated Leah started showing romantic behavior, and Geordi formed a parasocial relationship with Leah Brahms…a married woman who didn’t know Geordi personally, so there was this moment where Geordi was uncomfortably familiar with a complete stranger. The Enterprise’s computer hallucinated a Leah Brahms who was single and ready to mingle.
They’re working on making that problem into a consumer product.
It’s for people who are functionally illiterate. They think this opens up a whole new market for them to sell to.
The end game is probably something like “Write me a report from x and y about z and send it to my phone” or something. And it does that in Microsoft-approved software and sends the data to MS too.
Just so you can imagine yourself having a personal Jarvis instead of doing the thing yourself and getting better at it.
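That “write me a report from x and y about z and send it to my phone” request breaks down into a fetch-compose-deliver pipeline. Here’s a toy sketch of what that shape looks like; every function name is invented for illustration, and a real agent would call actual apps or cloud APIs at each step instead of returning placeholder strings:

```python
def gather(source: str, topic: str) -> str:
    """Stand-in for pulling notes about a topic from one source (x, y, ...)."""
    return f"notes on {topic} from {source}"

def compose_report(topic: str, notes: list[str]) -> str:
    """Stand-in for the LLM step that turns gathered notes into a report."""
    body = "\n".join(f"- {note}" for note in notes)
    return f"Report on {topic}:\n{body}"

def send_to_phone(report: str) -> bool:
    """Stand-in for whatever sync/notification service delivers the result."""
    print(report)
    return True

def handle_request(sources: list[str], topic: str) -> bool:
    """End-to-end: gather from each source, compose, deliver."""
    notes = [gather(source, topic) for source in sources]
    return send_to_phone(compose_report(topic, notes))

handle_request(["x", "y"], "z")
```

The privacy point in the comment above lands on the `send_to_phone` step: in this sketch the report only gets printed, but in a hosted product that delivery step is exactly where the data would leave your machine.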
There’s a bunch of use cases.
For example, you can just tell Copilot to do a thing instead of spending however long it would take to google a solution. Will it gimp learning? Of course it will. But that’s technology advancement in a nutshell - making things more accessible by making them easier, thus making some baseline levels of knowledge obsolete. We no longer learn Assembly to interact with our computers, and future generations won’t know where Settings are, because an AI Agent will be doing it all for them.
Another example is people suffering from disabilities or even just temporary injuries. Imagine being able to fire up your PC and just tell it to play your favourite show, instead of having to use your feet or, I don’t know, a pencil in your mouth, to click through the windows.
The idea is excellent. The implementation is some 5 years too early. The AI systems available, in their current form, are just not there yet for this to work well. Microsoft even pulled an ad they made recently, because people noticed that Copilot made an error in it. Multiple errors, even.
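The “just tell it to do a thing” use cases above all reduce to mapping a plain-English utterance to an action. A toy sketch of that mapping, assuming nothing about how Copilot actually works - the intents and patterns here are made up, and a real agent would use an LLM for intent parsing rather than regexes:

```python
import re

# Hypothetical intent table: each intent maps to a pattern that extracts a
# target from the utterance. A real agent's intent set would be far larger.
INTENTS = {
    "open": re.compile(r"^(?:please )?open (?P<target>.+)$"),
    "play": re.compile(r"^(?:please )?play (?P<target>.+)$"),
}

def parse_command(utterance: str):
    """Map a plain-English utterance to an (intent, target) pair, or None."""
    text = utterance.strip().lower()
    for intent, pattern in INTENTS.items():
        match = pattern.match(text)
        if match:
            return intent, match.group("target")
    return None  # unrecognized -> fall back to clicking through the GUI

print(parse_command("Please open Notepad"))
print(parse_command("play my favourite show"))
```

The `None` fallback is the part the thread keeps circling back to: a voice/text front-end only works as an accessibility layer if the manual GUI path still exists underneath it.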
I think you have a point, but I also think it would cause a hemorrhage of skilled users. Especially because it would become a very easy tool for increased corpo control over your device. Suddenly your ripped TNG DVDs, which are indistinguishable from pirated ones, are no longer accessible by you just going to your well-labeled file location. Now you have a really convenient OS that you can tell in plain English that you’d like it to pull up some Star Trek, and it says ok and tries to download a streaming service app. You tell it that you’d like to watch your DVD rips instead, and it tells you that those files have been locked for possible copyright violations.
That’s just the scenario that came to mind immediately.
I agree. However, at the same time, I recognise that when the first OS with GUI was released, people were saying the exact same thing.
In its current form, the agent will run locally (requiring a device with an NPU), so corpo won’t have any access to or control over your device.
An AI agent would have no way to distinguish a ripped DVD from a home-made recording, so that scenario isn’t possible.
Also - it’s not replacing the GUI. You can still do everything manually if you prefer that.
Imagine being able to fire up your PC and just tell it to play your favourite show, instead of having to use your feet or, I don’t know, a pencil in your mouth,

[…]

In its current form, the agent will run locally (requiring a device with an NPU), so corpo won’t have any access to or control over your device.

I’m imagining this so hard right now.