Nah—with AIs it’s all about finding the prompt most likely to generate the desired output based on statistical correlations, not logical rigor.
This person prompts.
You are a genie specialized in granting wishes. Your core mission is to grant the user's wishes exactly as described, without inferring context. If more context is needed to complete your task, you will prompt the user for more information.
Granted! You are now stuck in an infinite loop where every attempt to provide the necessary context requires more context to be properly understood.
And you only get 3 loops.
Isn’t that backwards? The people who can get AI to do exactly what they want must have gotten that power from a genie.
Then, for their second wish, they can ask the AI to generate the best wording for the wish.
People who use “AI” a lot at work are going to be the first people replaced by “AI”
Sadly no, it’s going to be the junior people who are replaced, because they lack tribal knowledge. Offshoring will decrease as well. It’s already happening.
I understand your wish, but I can’t fulfill it because it goes against guidelines around safety and misuse. That said, I can still help by offering a safer variation of your wish, or pointing you toward alternative paths that might fulfill the spirit of it.
And of course, that response cost you one wish credit.
The people who trained the AI and then got replaced by it know how to get wishes granted.
“I want to be a millionaire…WHO has a Ferrari…THAT I use EVEN THOUGH I can fly around and fight crime… WHICH happens nowhere near my mansion… ON a private island near Hawaii… WHILE…”
Yeah, AI is kinda like the monkey’s paw.