AI Safety Fellowship applications are now open; apply by February 6th!
We host various programs and events exploring the fields of AI alignment and governance.
Overview
As AI systems become more powerful, our ability to understand, control, and regulate these models remains limited. Participants join weekly discussion groups on current challenges and solutions for ensuring AI is developed to the benefit of our future. By the end of the semester, fellows will have the opportunity to join an ML upskilling group or join AISI as researchers, working on projects ranging from a technical workshop paper to a comparative policy analysis.
People with less than one hour of prior experience in AI safety who are technically or policy-oriented and interested in exploring the challenges and risks of AI systems as a way to kickstart a career in the field.
The AI Safety Fellowship is open to anyone interested in learning more about AI safety.
Exploring solutions for robustly aligning AI systems with human intentions and preventing catastrophic risks.
Fellows learn to answer questions such as:
How can we scientifically evaluate risks from models?
How can we efficiently implement oversight of models?
How can we ensure models are robust to adversarial inputs?
How can we develop mechanistic understandings of model behavior?
Apply here for the technical track by February 6th, 2026, 11:59 PM AoE. Check out the syllabus here.
Understanding the impacts of transformative AI and ensuring that systems are developed, deployed, and regulated responsibly.
Fellows learn to answer questions such as:
How should we regulate frontier AI systems?
How can policymakers predict AI progress?
What impacts will transformative AI have on national security and geopolitics?
How do we ensure the benefits of AI are distributed democratically?
Apply here for the policy track by February 6th, 2026, 11:59 PM EST.
Check out the syllabus here.
Join impactful projects alongside AISI researchers, aimed at a workshop or arXiv publication. This opportunity is offered to select AI Safety Fellowship graduates and is also available by application. Feel free to reach out to board@aisi.dev if you'd like to inquire; include your resume/CV and a short description of your interest in AI safety research.
We are coordinating with College of Computing and ML@GT faculty to mentor promising AISI research groups alongside the Research Option (RO) for undergraduates at GT. Details forthcoming!
People who have past research experience, a strong background in programming, or a compelling demonstrated interest in AI safety.
Special topics reading groups happen by request, depending on interest from the larger community. Contact board@aisi.dev if you'd like to start a group for a topic you're interested in.
People from any background, preferably with some research experience, who are interested in diving deep into the technical details of a niche field within AI safety research.
Join us for engaging presentations by leading researchers and industry professionals who share their insights on AI safety and showcase their latest research. These events, open to both members and the public, offer a unique opportunity to explore emerging topics in the field, engage directly with speakers, and connect with others who share your interest in ensuring AI development benefits humanity. To get involved and hear about upcoming speaker events, please join our Discord server.
People from any background who are keen to engage with experts in the field of AI safety and learn more about the latest developments.
Build connections within our AI safety community at these welcoming gatherings. Whether we're sharing pizza over member-led presentations, engaging in lively trivia games, diving into journal discussions, or walking through key concepts, these meetings create space for meaningful interaction. Often scheduled alongside speaker events, these casual meetups let you explore AI safety topics while getting to know others who share your passion for responsible AI development. These meetings are open to everyone. To get involved and hear about upcoming general body meetings, please join our Discord server.
Anyone curious about AI safety who wants to connect with other like-minded individuals in a welcoming, low-pressure environment, regardless of their background.