Faculty Advisors
Bo Dai
Dr. Dai is an Assistant Professor in the School of Computational Science & Engineering and a Staff Research Scientist at Google DeepMind. His research focuses on agent AI built upon generative models, representation learning, and reinforcement learning, creating agents with decision-making and planning abilities through modeling the world.
Kartik Goyal
Dr. Goyal is an Assistant Professor in the School of Interactive Computing. He completed his Ph.D. at Carnegie Mellon and was a research assistant professor at the Toyota Technological Institute at Chicago. His work includes statistical NLP applications to historical texts and ancient languages, using ML to reveal linguistic connections across time periods.
Arthur Delarue
Dr. Delarue is an Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering. He completed his Ph.D. at MIT's Operations Research Center in 2021 and was a postdoctoral fellow at Lyft Rideshare Labs. His research focuses on marketplace and public sector operations, leveraging data, optimization, and ML to solve societal problems.
Leadership
Yixiong Hao is a second-year computer science undergraduate and co-director of AISI. He oversees the group's funding and strategy. He's broadly interested in projects that mitigate risks of coordination failure and misuse. His technical research spans representation engineering, interpretability, and preference modeling. Beyond AISI, he plays golf, table tennis, and likes building robots.
Ayush Panda is a final-year BS/MS computer science student. He serves as a co-director of AISI and helps manage its research program. Having worked with EleutherAI, his research spans NLP failure modes and mechanistic interpretability, including work on grokking and linear mode connectivity. Outside of AISI, he likes to read, travel, and waddle in beginner's mind.
Parv Mahajan
Collaborative Initiatives Lead
Parv is a counter-WMD researcher at Georgia Tech Research Institute's Advanced Concepts Laboratory, focusing on cyberbiosecurity and LLM evaluations. He's pursuing an integrated MS/BS in Computer Science at GT with a specialization in machine intelligence. His research spans AI safety, biorisk, interpretability, and governance, particularly RL interpretability and biosecurity applications. He leads the governance group at AISI and manages all external collaborations and media engagement.
Technical Research Lead
Stepan Shabalin is a second-year undergraduate studying computer science. Stepan is an intern at EleutherAI and was previously mentored by Neel Nanda during MATS. His research focuses on developing and improving methods for extracting human-interpretable features from deep neural networks, with particular emphasis on sparse autoencoders and related techniques. He serves as the in-house research expert for AISI.
Andrew Wei
Advocacy Lead
Andrew is an undergraduate pursuing degrees in computer science and public policy. As a member of the Policy Advocacy Network, he works with AI safety organizations to educate the public and policymakers about emerging AI risks. His technical research interests include developing more meaningful AI model evaluations and disentangling neural networks. Outside of academics, Andrew composes music, reads (on a good day, not research papers!), and generally works on stretching his experiential distribution.
Fellowship Lead
Alec Harris is an undergraduate computer science student with threads in theory and intelligence. He organizes the AI Safety Fellowship program for AISI@GT. Academically, his interests include adversarial robustness and mathematical foundations of ML. Other interests include philosophy, finance, meditation, and nutrition. He aims to contribute to AI alignment research through rigorous theoretical approaches.
Eyas Ayesh
Operations Lead
Eyas Ayesh is an incoming PhD student in the Georgia Tech School of Psychology. He is the community lead for AISI, organizing general body meetings and socials. Eyas is broadly interested in using methods from psychology and neuroscience to understand AI and elucidate the computational mechanisms of the brain. His current research projects span encoding models, interpretability, and activation monitoring. Outside of AISI, he likes to practice jiu jitsu, paint, and work on his side projects.
Alumni
Michael Chen
AISI Alum
Michael works on AI policy at METR and is a PhD student at the University of Oxford. His work focuses on frontier AI safety frameworks for evaluating AI capabilities and establishing safety, security, and alignment commitments. He advises foundation model developers on their safety frameworks and provides input on related policy, such as the EU AI Act. His research has been covered in Time, The Guardian, and other media outlets.
Lisa Thiergart
Past AI Safety Fellow
Lisa leads MIRI's Technical Governance Team, which she founded in February 2024 after joining MIRI as a research manager. Lisa focuses on concrete proposals for AI regulation, AI security, and verification mechanisms. She is a computer scientist from TUM and Georgia Tech, with a background in academic and industry ML research. Previously, she was a researcher at MATS and founded Philosophia Munich. She is currently a Foresight Fellow in Intelligent Cooperation.
Gaurav Sett
Co-founder at AISI
Gaurav is a Ph.D. student at the RAND School of Public Policy and assistant policy researcher at RAND. His research focuses on critical capabilities such as autonomous R&D, and he previously served as a fellow at RAND's Technology and Security Policy Center. He has a BS and an MS in Computer Science from Georgia Tech.
Abhay Sheshadri
AISI Alum
Abhay is currently an AI Safety Fellow at Anthropic, on leave from Georgia Tech, working on AI safety and LLMs. He has conducted research at the Center for Human-Compatible AI and participated in the ML Alignment & Theory Scholars (MATS) program. He has co-authored papers published in ACL and NeurIPS, spanning topics including automated red-teaming, interpretability, and adversarial robustness.
Rae She
Past AI Safety Fellow
Rae provides direct operational support to the executive office at METR. She has a background in software engineering and a degree in computer science from Georgia Tech. Her role involves supporting METR's organizational operations as the AI safety evaluation organization develops assessments for dangerous capabilities in frontier AI systems.