Kaarel Hänni to give a talk on “AGI Safety” on January 7

This Wednesday, January 7, at 14:00, Kaarel Hänni will give a talk entitled “AGI Safety” in room 1018.

The talk will be held in English.

Kaarel Hänni is an AI Safety Research Scientist at Mila – Quebec Artificial Intelligence Institute, focusing on the development of safe AI for the benefit of humanity.

Abstract (in English):

This talk is an introduction to AGI (artificial general intelligence) safety. In the first half of the talk, I will argue for the following three background claims:

  • “AI soon”: If AI progress continues, then within 50 years there will very likely be AIs that are more capable than humans in basically every way. I will discuss certain quantitative empirical trends that suggest this will happen before 2035 — possibly even in the next few years.
  • “AI fast”: Once there are AIs autonomously doing AI research at the level of top human researchers, by default there will soon be AIs that are vastly smarter than humans (much as humans are vastly smarter than ants).
  • “AI big”: This would radically transform the world. AGI is a much bigger deal than cars, the internet, or the industrial revolution — the advent of AGI belongs in the same reference class as “evolution starts on Earth” and “human language and culture get started”.

In the second half of the talk, titled “AI bad?”, I will discuss the following questions:

  • Why might going down the AGI path be risky? Could it lead to human disempowerment or even extinction?
  • What are the main plans and hopes for how to avoid bad outcomes?
  • What technical research questions can one work on to mitigate the risks?