AI Safety

What is AI Safety?

AI safety is an interdisciplinary field concerned with preventing accidents, misuse, or other harmful consequences that could result from artificial intelligence systems. It includes technical research on how to make AI systems more robust and aligned with human values, as well as policy and standards work to ensure responsible development and deployment.

Where did the term "AI Safety" come from?

The term emerged from a growing field of research and public concern about the potential risks of increasingly capable AI systems, and it is now used to describe both the research area and the broader effort to manage those risks.

How is "AI Safety" used today?

Today, AI safety is widely regarded as a critical part of the development of advanced AI systems, shaping both technical research agendas and policy and standards work on responsible development and deployment.

Related Terms