Yixiong Hao
Co-Director
Yixiong oversees the group's funding and strategy. His technical research spans representation engineering, applied interpretability, and agentic misalignment. Beyond AISI, he plays golf and table tennis and likes building robots.
Rocio PV
Co-Director
Rocio is a CS undergraduate minoring in Law, Science, and Technology. Her research focuses on interpretability and technically informed policy that protects individuals without hindering innovation. She also enjoys going to the gym and music production.
Eyas Ayesh
Community Lead
Eyas is a PhD student in the School of Psychology. He manages AISI's meetings and socials. He uses neuroscientific methods to understand AI and elucidate the computational mechanisms of the brain. He practices jiu jitsu and paints.
Alec is a CS undergraduate. He manages the AI Safety Fellowship. His interests include adversarial robustness and theoretical approaches to AI safety. He also enjoys philosophy, finance, and meditation.
Ayush Panda
Advisor
Ayush is a MATS Scholar and final-year BS/MS Computer Science student. He served as co-director of AISI and helps manage its research program. His research spans AI control and mechanistic interpretability. He likes to read, travel, and practice beginner's mind.
Parv Mahajan
Collaboration Lead
Parv is an ASTRA fellow at Constellation and previously a counter-WMD researcher at GTRI. He focuses on cyber/biosecurity, LLM evaluation, and governance. He manages AISI's external collaborations and media engagement.
Andrew Wei
Advocacy Lead
Andrew studies CS and public policy and is currently a research fellow at ERA. His research includes developing more meaningful AI model evaluations and disentangling neural networks. Andrew composes music, reads on a good day, and generally works on stretching his experiential distribution.
Stepan is a CS undergraduate and currently an Anthropic Fellow. Previously, he was an intern at EleutherAI and was mentored by Neel Nanda during MATS. His research focuses on extracting human-interpretable features from neural networks, with a specialization in sparse autoencoders.
Harshit Singal
Operations Co-Lead
Harshit is an undergrad Computer Engineering major with research interests in interpretability, multi-agent systems, and sustainable robotics. Outside AISI, he enjoys building robots, producing music, performing standup and improv comedy, and playing football.
Bratee Podder
Operations Co-Lead
Bratee is a first-year MSCS student with interests in AI policy and advocacy. She also conducted research under AISI's technical governance research group. Outside AISI, she works as an accessibility implementor for OMSCS under Dr. Joyner and enjoys playing the piano.
Meru Gopalan
Operations Team
Meru is a researcher at GT and CS undergrad with a concentration in Systems/Architecture. He is interested in developing software and protocols to handle novel AI attack vectors and wants to win the 2026 SAT Competition. Outside AISI, he enjoys exploring religion and language.
Rohan Kansal
Operations Team
Rohan is a third-year undergrad CS and Math student. He is interested broadly in language models and model steering. Outside AISI, he runs operations for GTMUN, plays clarinet, solves NYT crosswords, and drinks tons of coffee while exploring musical composition.
Ishan Khire
Operations Team
Ishan is an undergrad CS student. He explores economics, biosecurity, and AI's impact on animal welfare; he conducts research on AlphaFold interpretability under Dr. Krishnan and co-authors a systematic review of LLM evaluations. Outside AISI, he enjoys animal welfare advocacy and rock climbing.
Harshul Basava
Operations Team
Harshul is a first-year CS undergrad with theory and intelligence threads. His interests include interpretability and control techniques, specifically chain-of-thought interpretability methods. Outside AISI, he enjoys game dev, rock climbing, and sleeping in.
Jaisen Soundar
Operations Team
Jaisen is a third-year undergrad CS student with threads in intelligence and theory. He is interested in technical AI safety research, particularly in mechanistic interpretability. Outside of AI Safety, he also enjoys climbing, reading, and playing racquetball with friends.
Vishnesh Ramanathan
Operations Team
Vishnesh is a first-year MSCS student with specialization in ML. His interests lie at the intersection of AI, linguistics, and psychology, with NLP research in stylistics. He now focuses on mechanistic interpretability in RL. Outside AISI, he builds at HackerHouse and prototypes new projects.
Bo Dai
Dr. Dai is an Assistant Professor in the School of Computational Science & Engineering and a Staff Research Scientist at Google DeepMind. His research focuses on agentic AI built on generative models, representation learning, and reinforcement learning, creating agents with decision-making and planning abilities through modeling the world.
Kartik Goyal
Dr. Goyal is an Assistant Professor in the School of Interactive Computing. He completed his Ph.D. at Carnegie Mellon and was a research assistant professor at the Toyota Technological Institute at Chicago. His work includes statistical NLP applications to historical texts and ancient languages, using ML to reveal linguistic connections across time periods.
Arthur Delarue
Dr. Delarue is an Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering. He completed his Ph.D. at MIT's Operations Research Center in 2021 and was a postdoctoral fellow at Lyft Rideshare Labs. His research focuses on marketplace and public sector operations, leveraging data, optimization, and ML to solve societal problems.
Michael Chen
AISI Alum
Michael works on AI policy at METR and is a PhD student at the University of Oxford. His work focuses on frontier AI safety frameworks for evaluating AI capabilities and establishing safety, security, and alignment commitments. He advises foundation model developers on their safety frameworks and provides input on related policy, such as the EU AI Act. His research has been covered in Time, The Guardian, and other media outlets.
Lisa Thiergart
AISI Fellow
Lisa is the co-founder of the SL5 Task Force. She used to lead MIRI's Technical Governance Team, which she founded in 2024. Lisa focuses on concrete proposals for AI regulation, AI security, and verification mechanisms. She is a computer scientist from TUM and Georgia Tech, with a background in academic and industry ML research. Previously, she was a researcher at MATS and founded Philosophia Munich.
Gaurav Sett
AISI Co-founder
Gaurav is a Ph.D. student at the RAND School of Public Policy and assistant policy researcher at RAND. His research focuses on critical capabilities such as autonomous R&D, and he previously served as a fellow at RAND's Technology and Security Policy Center. He has a BS and an MS in Computer Science from Georgia Tech.
Abhay Sheshadri
AISI Alum & Fellow
Abhay is currently an AI Safety Fellow at Anthropic, on leave from Georgia Tech, working on AI safety and LLMs. He has conducted research at the Center for Human-Compatible AI and participated in the ML Alignment & Theory Scholars (MATS) program. He has co-authored papers published in ACL and NeurIPS, spanning topics including automated red-teaming, interpretability, and adversarial robustness.
Rae She
Past AI Safety Fellow
Rae provides direct operational support to the executive office at METR. She has a background in software engineering and a degree in computer science from Georgia Tech. Her role involves supporting METR's organizational operations as the AI safety evaluation organization develops assessments of dangerous capabilities in frontier AI systems.