Faculty Advisors
Bo Dai
Dr. Dai is an Assistant Professor in the School of Computational Science & Engineering and a Staff Research Scientist at Google DeepMind. His research focuses on agent AI built upon generative models, representation learning, and reinforcement learning, and on creating agents with decision-making and planning abilities through modeling the world.
Kartik Goyal
Dr. Goyal is an Assistant Professor in the School of Interactive Computing. He completed his Ph.D. at Carnegie Mellon and was a research assistant professor at the Toyota Technological Institute at Chicago. His work includes statistical NLP applications to historical texts and ancient languages, using machine learning to reveal linguistic connections across time periods.
Arthur Delarue
Dr. Delarue is an Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering. He completed his Ph.D. at MIT's Operations Research Center in 2021 and was a postdoctoral fellow at Lyft Rideshare Labs. His research focuses on marketplace and public sector operations, leveraging data, optimization, and ML to solve societal problems.
Leadership
Yixiong Hao is a second-year computer science undergraduate and co-director of AISI. He oversees the group's funding and strategy. He's broadly interested in projects that mitigate risks of coordination failure and misuse. His technical research spans representation engineering, interpretability, and preference modeling. Beyond AISI, he plays golf and table tennis, and likes building robots.
Ayush Panda is a final-year BS/MS computer science student. He serves as a co-director of AISI and helps manage its research program. Having worked with EleutherAI, his research spans NLP failure modes and mechanistic interpretability, including work on grokking and linear mode connectivity. Outside of AISI, he likes to read, travel, and waddle in beginner's mind.
Parv Mahajan
Collaborative Initiatives Lead
Parv is a counter-WMD researcher at Georgia Tech Research Institute's Advanced Concepts Laboratory, focusing on cyberbiosecurity and LLM evaluations. He's pursuing an integrated MS/BS in Computer Science at GT with a specialization in machine intelligence. His research spans AI safety, biorisk, interpretability, and governance, particularly RL interpretability and biosecurity applications. He leads the governance group at AISI and manages all external collaborations and media engagement.
Technical Research Lead
Stepan Shabalin is a second-year undergraduate studying computer science. Stepan is an intern at EleutherAI and was previously mentored by Neel Nanda during MATS. His research focuses on developing and improving methods for extracting human-interpretable features from deep neural networks, with particular emphasis on sparse autoencoders and related techniques. He serves as the in-house research expert for AISI.
Andrew Wei
Advocacy Lead
Andrew is an undergraduate pursuing degrees in computer science and public policy. As a member of the Policy Advocacy Network, he works with AI safety organizations to educate the public and policymakers about emerging AI risks. His technical research interests include developing more meaningful AI model evaluations and disentangling neural networks. Outside of academics, Andrew composes music, reads (on a good day, not research papers!), and generally works on stretching his experiential distribution.
Fellowship Lead
Alec Harris is an undergraduate computer science student with threads in Theory and Intelligence. He organizes the AI Safety Fellowship program for AISI@GT. Academically, his interests include adversarial robustness and the mathematical foundations of ML. His other interests include philosophy, finance, meditation, and nutrition. He aims to contribute to AI alignment research through rigorous theoretical approaches.
Eyas Ayesh
Community Lead
Eyas Ayesh is an incoming PhD student in the Georgia Tech School of Psychology. He is the community lead for AISI, organizing general body meetings and socials. Eyas is broadly interested in using methods from psychology and neuroscience to understand AI and elucidate the computational mechanisms of the brain. His current research projects span encoding models, interpretability, and activation monitoring. Outside of AISI, he likes to practice jiu jitsu, paint, and work on his side projects.
Communications & Media Lead
Rocio oversees AISI's outreach and public-facing outputs. She is also an AISI researcher currently conducting deepfake policy research under Prof. Margaret Kosal. As a computer science undergraduate pursuing a minor in Law, Science & Technology, Rocio is interested in pathways to technically informed policy that protects individuals without hindering innovation. Outside of AISI, she codes robots that play soccer, tries her hand at music production, and enjoys playing sports with friends.
Harshit Singal
Operations Co-Lead
Harshit is an undergraduate computer engineering major. He leads operations for the club and organizes the Rapid Upskilling Cohorts program. Alongside organizing, his research interests include interpretability and multi-agent systems within AI safety, and sustainable robotics beyond it. Harshit also enjoys music production, performs both standup and improv comedy, and plays football.
Denver Walton
Operations Co-Lead
Denver is a second-year undergraduate student studying computer science, working on AISI's internal logistics. She is a Helen Fellow working in the Play and Learn lab at Georgia Tech's School of Interactive Computing, as well as a software team member in SynapseX, Georgia Tech's BCI device research organization. Denver's research interests span the intersection of neuroscience and theoretical computer science, and she hopes to contribute to the emerging frontier of AI safety through interpretability research. Outside of AISI, she enjoys gaming, digital art, and Stephen King novels.
Bratee Podder
Operations Team
Bratee Podder is a BS/MS computer science student at Georgia Tech, with threads in Info Networks and Media. With a particular interest in AI policy, she is currently researching under AISI's policy governance group and on a deepfake policy research team under Dr. Kosal. Outside of the policy sphere, she works in the Law, Data & Design lab, using LLMs to solve different issues in law, primarily by developing automated data-analysis pipelines for courtroom efficiency. Her aim is to contribute to and develop robust AI policies for our future.
Vishnesh Jayanthi Ramanathan
Operations Team
Vishnesh Jayanthi Ramanathan is a first-year Master's student pursuing a degree in computer science with a specialization in machine learning. His technical interests lie at the intersection of AI, linguistics, and psychology. In the past, he has worked on NLP research in stylistics and dialects, while his current focus is on applying mechanistic interpretability methods to reinforcement learning. Outside of AISI, you can find him building at the Georgia Tech HackerHouse or prototyping at the Hive Makerspace. Feel free to contact him at vishnesh@gatech.edu.
Alumni
Michael Chen
AISI Alum
Michael works on AI policy at METR and is a PhD student at the University of Oxford. His work focuses on frontier AI safety frameworks for evaluating AI capabilities and establishing safety, security, and alignment commitments. He advises foundation model developers on their safety frameworks and provides input on related policy, such as the EU AI Act. His research has been covered in Time, The Guardian, and other media outlets.
Lisa Thiergart
Past AI Safety Fellow
Lisa leads MIRI's Technical Governance Team, which she founded in February 2024 after joining MIRI as a research manager. Lisa focuses on concrete proposals for AI regulation, AI security, and verification mechanisms. She is a computer scientist from TUM and Georgia Tech, with a background in academic and industry ML research. Previously, she was a researcher at MATS and founded Philosophia Munich. She is currently a Foresight Fellow in Intelligent Cooperation.
Gaurav Sett
Co-founder at AISI
Gaurav is a Ph.D. student at the RAND School of Public Policy and assistant policy researcher at RAND. His research focuses on critical capabilities such as autonomous R&D, and he previously served as a fellow at RAND's Technology and Security Policy Center. He has a BS and an MS in Computer Science from Georgia Tech.
Abhay Sheshadri
AISI Alum
Abhay is currently an AI Safety Fellow at Anthropic, on leave from Georgia Tech, working on AI safety and LLMs. He has conducted research at the Center for Human-Compatible AI and participated in the ML Alignment & Theory Scholars (MATS) program. He has co-authored papers published at ACL and NeurIPS, spanning topics including automated red-teaming, interpretability, and adversarial robustness.
Rae She
Past AI Safety Fellow
Rae provides direct operational support to the executive office at METR. She has a background in software engineering and a degree in computer science from Georgia Tech. Her role supports METR's organizational operations as the AI safety evaluation organization develops assessments of dangerous capabilities in frontier AI systems.