Alignment is the field of AI safety focused on ensuring that AI systems' goals and behaviors match human intent and values.
The field grew alongside capabilities research to address potential risks from increasingly capable systems.
Alignment is critical for deploying AI systems that are safe and trustworthy.