AI safety is an interdisciplinary field concerned with preventing accidents, misuse, or other harmful consequences that could result from artificial intelligence systems. It includes technical research on how to make AI systems more robust and aligned with human values, as well as policy and standards work to ensure responsible development and deployment.
As AI capabilities have advanced, AI safety has grown into an active field of research and a subject of public concern, and it is widely regarded as a critical part of the development of advanced AI systems.