We host several research projects investigating open problems in AI safety.

Focus Areas

Our broad purpose is to address emergent risks from advanced AI systems, and we welcome a variety of research interests in this area.

Current Projects

Supervised Program for Alignment Research

Organized by groups at UC Berkeley, Georgia Tech, and Stanford, the Supervised Program for Alignment Research (SPAR) is an intercollegiate, project-based research program running this fall for students interested in AI safety. SPAR matches students around the world with advisors for guided projects in AI safety.

Learn more »

Past Projects

We have several papers under review and will share them here soon!