Faculty Advisors
Bo Dai
Dr. Dai is an Assistant Professor in the School of Computational Science & Engineering at Georgia Tech and a Staff Research Scientist at Google DeepMind (formerly Google Brain). He completed his Ph.D. at Georgia Tech's School of Computational Science & Engineering. His principal research interest is Agent AI built on generative models and representation learning, aiming to create agents with decision-making and planning abilities through modeling the world. His work spans reinforcement learning, machine learning, and optimization, with contributions including provable reinforcement learning algorithms for stochastic nonlinear dynamics through the representation lens. He has published extensively at top venues including NeurIPS, ICML, and ICLR, and received the AISTATS Best Paper Award in 2016. He serves as an Action Editor for Transactions on Machine Learning Research and as a Senior Area Chair for major conferences including NeurIPS, ICML, and ICLR.
Kartik Goyal
Dr. Goyal is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He completed his Ph.D. at Carnegie Mellon University and spent two years as a research assistant professor at the Toyota Technological Institute at Chicago before joining Georgia Tech. His research interests lie at the intersection of natural language processing and machine learning, focusing on developing probabilistic models of latent structure in naturally occurring data. His work includes statistical NLP applications to historical texts and ancient languages, using machine learning to reveal insights about linguistic connections across different time periods. He develops methods for detecting subtle differences in text while maintaining robustness to various sources of noise.
Arthur Delarue
Dr. Delarue is an Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. He completed his Ph.D. at the MIT Operations Research Center in 2021, advised by Dimitris Bertsimas, and was previously a postdoctoral fellow at Lyft Rideshare Labs. His research focuses on marketplace and public sector operations, leveraging data, optimization, and machine learning to solve practical societal problems. His work includes applications of mixed-integer optimization in transportation, educational operations, and public policy. Notable projects include redesigning school bus routes in Boston, public school schedules in San Francisco, and course timetables at MIT. His research interests span data-driven decision-making, experimentation, pricing, and optimization problems at the intersection of operations research and policy.
Leadership
Andrew Wei
Andrew is an undergraduate pursuing degrees in computer science and public policy. As a member of the Policy Advocacy Network, he works with AI safety organizations to educate the public and policymakers about emerging AI risks. His technical research interests include developing more meaningful AI model evaluations and disentangling neural networks. Outside of academics, Andrew composes music, reads (on a good day, not research papers!), and generally works on stretching his experiential distribution. Feel free to reach out for collaboration opportunities or just to chat!
Alec Harris
Alec Harris is an undergraduate computer science student with threads in Theory and Intelligence. He organizes the AI Safety Fellowship program for AISI@GT. Academically, his interests include adversarial robustness and the mathematical foundations of ML. Other interests include philosophy, finance, meditation, and nutrition.
Eyas Ayesh
Eyas Ayesh is an incoming Ph.D. student in the Georgia Tech School of Psychology. He is the community lead for AISI, organizing general body meetings and socials. Eyas is broadly interested in using methods from psychology and neuroscience to understand AI, and in using our understanding of AI to elucidate the computational mechanisms of the brain. His current research projects span encoding models, interpretability, and activation monitoring. Outside of AISI, he likes to practice jiu jitsu, paint, and work on his side projects.
Parv Mahajan
Parv is a counter-WMD and counterproliferation researcher at the Georgia Tech Research Institute's Advanced Concepts Laboratory in the Advanced Warfighting Technical Division, where he focuses on cyberbiosecurity and DNA-based cyber threats. He is pursuing an integrated MS/BS in Computer Science at Georgia Tech with a specialization in machine intelligence and system modeling. His research interests span AI safety, biorisk, interpretability, and AI governance, with particular focus on reinforcement learning interpretability and biosecurity applications. Parv leads the governance group at AISI and serves as the Collaborative Initiatives Lead. Beyond his technical and policy work, he is an award-winning classical composer and avid reader.
Stepan Shabalin
Stepan Shabalin is a second-year undergraduate studying computer science. He is the research lead of AISI, organizing its supervised research program. Stepan is an intern on the interpretability team at EleutherAI, and was previously mentored by Neel Nanda through the MATS program. His current research focuses on sparse autoencoders. He is also interested in virtual reality development and brain-computer interfaces.
Yixiong Hao
Yixiong Hao is a second-year undergraduate studying computer science and Co-Director of AISI. He oversees the group's funding and strategy. He's broadly interested in technical and entrepreneurial projects that mitigate risks of coordination failure and misuse. His technical research spans representation engineering, interpretability, and preference modeling. Beyond AISI, he plays golf and table tennis and likes building robots.
Alumni
Michael Chen
Founder and Former Director of AISI
Michael works on AI policy at METR, with a focus on frontier AI safety frameworks for evaluating AI capabilities and establishing safety, security, and alignment commitments. He advises foundation model developers on their safety frameworks and provides input on related policy, such as the EU AI Act. Michael previously worked on engineering at Stripe and collaborated with the Center for AI Safety on projects in AI deception and LLM benchmarking. His research has been covered in Time, The Guardian, and other media outlets. At Georgia Tech, Michael enjoyed facilitating the AI Safety Fundamentals reading group.
Lisa Thiergart
Past AI Safety Fellowship Participant
Lisa leads MIRI's Technical Governance Team, which she founded in February 2024 after joining MIRI as a research manager in summer 2023. Lisa focuses on concrete proposals for AI regulation, AI security, and verification mechanisms. She is a computer scientist from TUM and Georgia Tech, with a background in academic and industry ML research and training in technology management and entrepreneurship from CDTM and Entrepreneur First. Previously, she was an independent alignment researcher at MATS and founded Philosophia Munich, a philosophical discussion group focused on topics like epistemology and artificial consciousness. She is currently a Foresight Fellow in Intelligent Cooperation.
Gaurav Sett
Former Director of AISI
Gaurav is a Ph.D. student at the RAND School of Public Policy and assistant policy researcher at RAND. His research focuses on governance for advanced AI systems, and he previously served as a fellow at RAND's Technology and Security Policy Center. He has a BS and an MS in Computer Science from Georgia Tech. His recent work includes research on how AI can automate AI research and development, examining the implications for policy and governance.
Abhay Sheshadri
Organizing Team at AISI
Abhay is a fourth-year undergraduate at the Georgia Institute of Technology working on AI safety and large language models. He has conducted research at the Center for Human-Compatible AI (CHAI) and participated in the ML Alignment & Theory Scholars (MATS) program, focusing on mechanistic interpretability and adversarial training. He has co-authored papers published at ACL and NeurIPS, spanning topics including automated red-teaming, mechanistic interpretability, and adversarial training of language models.
Rae She
Past AI Safety Fellowship Participant
Rae provides direct operational support to the executive office at METR. She has a background in software engineering and a degree in computer science from Georgia Tech. Her role involves supporting METR's organizational operations as it develops evaluations of dangerous capabilities in frontier AI systems.