What? They aren’t sentient. They aren’t sociopathic. They can’t do “use of deception”.
These are (absurdly complex) word/phrase/semantic token association engines. They aren’t thinking. They don’t understand true or false.
They’re dangerous and not to be trusted for factual information, but not for anything related to Asimov’s laws, or whatever you seem to think is going on under the hood.
Yeah, maybe the prompt was something like “make some graphs that show how much better GPT-5 is, here’s the data:” with a CSV file attached,
and then it went too far in following the instruction to show GPT-5 as better.
You’re correct, but I believe their point still stands.