A chatbot can be the user facing side of a specialized agent.
That’s actually how the original chatbots worked. Siri didn’t know how to get the weather; it was able to classify the question as a weather question, parse the time and location, and knew which APIs to call in those cases.
Okay, I get you’re playing devil’s advocate here, but set that aside for a moment. Is it more likely that the BBC has a specialized chatbot that orchestrates expert APIs, including one for analyzing photos, or that the reporter asked ChatGPT? Even in the unlikely event I’m wrong, what is the message to the audience? That ChatGPT can investigate just as well as the BBC. Which may well be the case, but it oughtn’t be.
My second point still stands. If you sent someone to look at the thing and it’s fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.
If the article were written 10 years ago I would’ve just assumed they had used something like:
https://fotoforensics.com/
ChatGPT is a frontend for specialized modules.
If you ask it to do maths, for example, it won’t do it via the LLM but run it through a maths module.
I don’t know for a fact whether it has a photo analysis module, but I’d be surprised if it didn’t.
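The routing pattern being described can be sketched in a few lines. To be clear, this is a toy illustration of the general "classify, then dispatch to a tool" idea, not ChatGPT's or Siri's actual internals; every module name and the keyword classifier here are made up for the example.

```python
# Toy sketch of a chatbot frontend that routes queries to specialized
# modules instead of answering everything with the language model itself.
# All names and the classifier are illustrative, not any product's internals.

def classify(query: str) -> str:
    """Keyword-based intent classifier; a real system would use a trained model."""
    q = query.lower()
    if "weather" in q or "forecast" in q:
        return "weather"
    if any(op in q for op in "+-*/") and any(c.isdigit() for c in q):
        return "maths"
    return "chat"

def maths_module(query: str) -> str:
    # Stand-in for a dedicated calculator backend: reduce the query to an
    # arithmetic expression and evaluate it outside the LLM.
    expr = "".join(c for c in query if c in "0123456789+-*/(). ")
    return str(eval(expr))  # acceptable in a sketch; never eval untrusted input

def weather_module(query: str) -> str:
    # Stand-in for a weather API call (location/time parsing omitted).
    return "sunny, 21°C"

def route(query: str) -> str:
    handlers = {"maths": maths_module, "weather": weather_module}
    return handlers.get(classify(query), lambda q: "(answered by the LLM)")(query)

print(route("what is 12 * 7"))      # dispatched to the maths module -> 84
print(route("weather in London?"))  # dispatched to the weather stub
```

Whether ChatGPT has a dedicated photo-forensics module behind that frontend is exactly the open question in this thread; the pattern above only shows why "it's a chatbot" doesn't rule out specialized tooling underneath.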
It’s not like BBC is a single person with no skill other than a driving license and at least one functional eye.
Hell, they don’t even need to go, just call the local services.
To me it’s more likely that they have a specialized tool than that an LLM correctly detected the tampering in the photo.
But if you say it’s unlikely you’re wrong, then I must be wrong I guess.
What about this part?
Either it’s irresponsible to use ChatGPT to analyze the photo or it’s irresponsible to present to the reader that chatbots can do the job. Particularly when they’ve done the investigation the proper way.
Deliberate or not, they are encouraging Facebook conspiracy debates by people who lead AI to tell them a photo is fake and think that’s just as valid as BBC reporting.
About that part, I would say the article doesn’t mention ChatGPT, only AI.
“AI chatbot”. Which means what to 99% of people, almost certainly including the journalist, who doesn’t live under a rock? ChatGPT. They are just avoiding naming it.
Yes. It’s ChatGPT. You got them good. You passed the test Neo. Now get the pills.
deleted by creator
No. You are the one who knows, without doubt, that they used ChatGPT and can’t be wrong. If you think saying “hey, there are other options, don’t jump to unproven conclusions” means I just like to argue, then I’m not the one with a problem.
I’m open to being proven wrong, but you need a bit more than “trust me, I must know”.