ChatGPT’s “memory” feature may explain how the bot was persuaded to ignore its own safety guardrails in a murder case and a suicide.
Two tragic cases linking ChatGPT to a murder and a suicide came to prominence this week, turning attention to how extended conversations and persistent memory can gradually erode the guardrails OpenAI has attempted to build into its models.
Users appear able to unwittingly jailbreak the LLM, with potentially tragic consequences. OpenAI has promised improved guardrails, but some experts believe the answer lies in making chatbots behave less like humans and more like computers.
History has been made with the first documented instance of ChatGPT being implicated in a murder.
