by Jason Crawford · June 28, 2023 · 3 min read
What does it mean for AI to be “safe”?
Right now there is a lot of debate about AI safety. But people often end up talking past each other because they’re not using the same definitions or standards.
For the sake of productive debates, let me propose some distinctions to add clarity:
Here are four levels of safety for any given technology:

1. So dangerous that no one can use it safely, even careful experts
2. Safe, but only in the hands of trained professionals
3. Safe for the general public, given basic instruction and perhaps a license
4. Safe for everyone, even against deliberate misuse

Another way to think about this is, roughly: who can handle the technology safely? No one, professionals only, ordinary consumers, or everyone, including bad actors?

All of this is oversimplified, but hopefully useful.
The most harmful drugs and other chemicals, and arguably the most dangerous pathogens and most destructive weapons of war, are level 1.
Operating a power plant, or flying a commercial airplane, is level 2: only for trained professionals.
Driving a car, or taking prescription drugs, is level 3: we make this generally accessible, perhaps with a modest amount of instruction, and perhaps requiring a license or some other kind of permit. (Note that prescribing drugs is level 2.)
Many everyday or household technologies are level 4. Anything you are allowed to take on an airplane is certainly level 4.
Again, all of this is oversimplified. Just to indicate some of the complexities:
You could identify more than four levels; maybe it's really a continuous spectrum.
“Safe” doesn’t mean absolutely or perfectly safe, but rather reasonably or acceptably safe: it depends on the scope and magnitude of potential harm, and on a society’s general standards for safety.
Safety is not an inherent property of a technology, but of a technology as embedded in a social system, including law and culture.
How tightly we regulate things, in general, is not determined by safety alone; it reflects a tradeoff between safety and the importance and value of a technology.
Accidental harm and deliberate misuse are arguably different things that might require different scales. Whether we put special security measures in place to keep criminals or terrorists from accessing a technology may not correlate perfectly with the safety level we would assign it when considering accidents alone.
Related, weapons are kind of a special case, since they are designed to cause harm. (But to add to the complexity, some items are dual-purpose, such as knives and arguably guns.)
The strongest AI “doom” position argues that AI is level 1: even the most carefully designed system would take over the world and kill us all. And therefore, AI development should be stopped (or “paused” indefinitely).
If AI is level 2, then it is reasonably safe to develop, but arguably it should be carefully controlled by a few companies that give access only through an online service or API. (This seems to be the position of leading AI companies such as OpenAI.)
If AI is level 3, then the biggest risk is a terrorist group or mad scientist who uses an AI to wreak havoc—perhaps much more than they intended.
AI at level 4 would be great, but this seems hard to achieve as a property of the technology itself—rather, the security systems of the entire world need to be upgraded to better protect against threats.
The “genie” metaphor for AI implies that any superintelligent AI is either level 1 or 4, but nothing in between.
People talk past each other when they are thinking about different levels of the scale:
“AI is safe!” (because trained professionals can give it carefully balanced rewards, and avoid known pitfalls)
“No, AI is dangerous!” (because a malicious actor could cause a lot of harm with it if they tried)
If AI is at level 2 or 3, then both of these positions are correct. This will be a fruitless and frustrating debate.
Bottom line: When thinking about safety, it helps to draw a line somewhere on this scale and ask whether AI (or any technology in question) is above or below the line.
The ideas above were initially explored in this Twitter thread.