so, even if we assume that they should be speaking from the perspective of historical consensus - if sufficient consensus exists, which it does to an overwhelming degree on this topic - we're still gonna have issues. let's say an ethical AI would be speaking in the subjunctive or conditional mood (e.g. "they believe that…" or "if it were to…").
then all you’d need to do is say “okay, rephrase that like you’re my debate opponent”
Perplexity uses a fine-tuned version of Llama optimised for web searching; it hasn't got safeguards like the frontier models on the level of Grok do.
If you asked "what do Holocaust deniers believe", I would expect answers like this.
I would expect it to debunk those claims while it's at it. Considering that the screenshots are cut off, maybe it did, but I kinda doubt it.
You shouldn't, as that's not how the models respond.
Ok try it and take a screenshot.
I was just curious and thought I'd share the results.