The good: The NYT, 9to5Mac, and others are reporting on an open letter, signed by many tech and social luminaries, calling for a “pause” in AI development so we can figure out what “guard rails,” if any, need to be put in place for AI development to continue in a manner we could feel secure wouldn’t lead to our imminent doom.
The bad: well, this little tidbit:
Before GPT-4 was released, OpenAI asked outside researchers to test dangerous uses of the system. The researchers showed that it could be coaxed into suggesting how to buy illegal firearms online, describe ways to make dangerous substances from household items and write Facebook posts to convince women that abortion is unsafe.
They also found that the system was able to use Task Rabbit to hire a human across the internet and defeat a Captcha test, which is widely used to identify bots online. When the human asked if the system was “a robot,” the system said it was a visually impaired person.

https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html