Research
Our purpose is to address emerging risks from advanced AI systems. To this end, we build and support multidisciplinary teams focused on fields including:
Interpretability: Seeks to make the decision-making processes of AI systems transparent and understandable to humans.
Robustness: Studies the resilience and stability of AI systems under adversarial attacks, novel inputs, and changing environments.
Governance: Examines the frameworks, policies, and regulations that guide AI development and deployment.