A statement on our purpose and some resources to learn more about AI safety.


During the Cold War, the race to nuclear supremacy laid the technical groundwork for a corollary race to space. On the one hand, humanity developed enough nuclear weapons to destroy civilization and came very close to doing so. On the other hand, we developed a vision for bringing humanity to the stars and took a giant leap forward in landing on the moon.

Yet, we need far more progress to achieve an intergalactic future. Many technologists believe there is a panacea on the horizon: general purpose AI systems. Leading AI labs have set their sights on creating a universal problem solver, pouring significant resources into accelerating AI development.

AI safety researchers raise the concern that such endeavors will introduce significant risks to humanity. Even if these AI developers are racing to the stars, the corollary races may trigger disastrous outcomes. In a race to capture profit, industry aims to automate labor at large. As AI systems take on more and more decision-making responsibilities, our societies will be driven toward machine goals that we may not understand or desire. In a race to capture power, governments aim to weaponize AI models. Not only will such a competition bring us closer to the brink of conflict, but the inevitable diffusion of the technology to malicious and irresponsible actors will also multiply the likelihood of destruction.

Over the past decade, advances in deep learning have illustrated that progress in AI capabilities often comes independently of advances in our understanding of AI behavior and our capacity to control AI outcomes. While AI will certainly help us solve some of our most important problems, much more research is needed to develop procedures and policies for safe deployment. If AI is to propel humanity to the stars, we must be sure this rocket does not crash, especially if we all intend on boarding the same one.

In recognition of these challenges, the AI Safety Initiative at Georgia Tech advances research and education in AI alignment and governance. We welcome students, researchers, and Atlanta professionals to contribute to this mission.