We host various programs and events exploring the fields of AI alignment and governance.
Fall 2025 applications are open for the technical and policy tracks!
Overview
As AI systems become more powerful, our ability to understand, control, and regulate them remains limited. Participants join weekly discussion groups on current challenges and proposed solutions for ensuring AI is developed for the benefit of our future. By the end of the semester, fellows will have the opportunity to join an ML upskilling group or join AISI as a researcher, working on projects ranging from a technical workshop paper to a comparative policy analysis.
Tracks
Technical: Exploring how to robustly align AI systems with human intentions and prevent catastrophic risks.
Fellows learn to answer questions such as:
How can we scientifically evaluate dangerous tendencies in models?
How can we efficiently implement human oversight of models?
How can we ensure models are robust to adversarial inputs?
How can we develop mechanistic understandings of model behavior?
APPLY TO THE TECHNICAL TRACK HERE
Policy: Understanding the impacts of transformative AI and ensuring that systems are developed, deployed, and regulated responsibly.
Fellows learn to answer questions such as:
How should we regulate frontier AI systems?
How can policymakers predict AI progress?
What impacts will transformative AI have on national security and geopolitics?
How do we ensure the benefits of AI are distributed democratically?
APPLY TO THE POLICY TRACK HERE
Applications due: September 5th.
Technically or policy-oriented people with less than an hour of prior exposure to AI safety who are interested in exploring the challenges and risks of AI systems as a way to kickstart a career in the field.
The AI Safety Fellowship is open to anyone interested in learning more about AI safety.
A facilitation and accountability group for upskilling in alignment research, based on the AI Alignment Research Engineer Accelerator, a rigorous bootcamp covering mechanistic interpretability and evaluation frameworks. Cohort graduates begin full-scale research projects, often in collaboration with academic and industry partners.
People who have some experience with the fundamentals of AI safety (past AI Safety Fellowship participants or equivalent) and a solid background in programming.
By invitation after completing the AI Safety Fellowship or a BlueDot equivalent. Invitations for this round have been sent out; contact board@aisi.dev if you are interested.
Start Date: March 1st
Join impactful projects alongside AISI researchers, aimed at a workshop or arXiv publication. This opportunity is offered to select AI Safety Fellowship graduates and by application. Feel free to reach out to board@aisi.dev to inquire; include your resume/CV and a short description of your interest in AI safety research.
We are coordinating with College of Computing and ML@GT faculty to mentor promising AISI research groups alongside the Research Option (RO) for undergraduates at GT. Details forthcoming!
People who have past research experience, a strong background in programming, or a compelling demonstrated interest in AI safety.
Keep updated »
Special topics reading groups happen by request, depending on interest from the larger community. Contact board@aisi.dev if you'd like to start a group for a topic you're interested in.
People from any background, preferably with some research experience, who are interested in diving deep into the technical details of a niche field within AI safety research.
Join us for engaging presentations by leading researchers and industry professionals who share their insights on AI safety and showcase their latest research. These events, open to both members and the public, offer a unique opportunity to explore emerging topics in the field, engage directly with speakers, and connect with others who share your interest in ensuring AI development benefits humanity. To get involved and hear about upcoming speaker events, please join our Discord server.
People from any background who are keen to engage with experts in the field of AI safety and learn more about the latest developments.
Build connections within our AI safety community at these welcoming gatherings. Whether we're sharing pizza over member-led presentations, engaging in lively trivia games, diving into journal discussions, or walking through key concepts, these meetings create space for meaningful interaction. Often scheduled alongside speaker events, these casual meetups let you explore AI safety topics while getting to know others who share your passion for responsible AI development. These meetings are open to everyone. To get involved and hear about upcoming general body meetings, please join our Discord server.
Anyone curious about AI safety who wants to connect with other like-minded individuals in a welcoming, low-pressure environment, regardless of their background.