

> Build a yes-man
> It is good at saying “yes”
> Someone asks it a question
> It says yes
> Everyone complains
ChatGPT is a (partially) stupid technology without enough safeguards. But fundamentally it's just autocomplete. That's the technology. It did what it was supposed to do.
I hate to defend OpenAI on this, but if you're so mentally unwell (dunno if that's the right word here?) that you'd let yourself be driven to suicide by some online chats [1], then the people who gave you internet access are to blame too.
[1] If this were a human encouraging him toward suicide, it wouldn't be newsworthy…
It is. But the ChatGPT interface reminds you of that when you first create an account. (At least it did when I created mine.)
At some point we have to give responsibility to the user, just like with Kali Linux or other pentesting tools. You wouldn't (shouldn't) blame them for the latest ransomware attack either.