Future Shock

The following article is part of the SFG publication “Big Questions of Our time: The World Speaks”.

---

Existential risk – the threat of human extinction or the permanent curtailment of humanity’s potential – is of paramount moral importance, since it threatens not just current lives but the vast potential of humanity’s future. While in the past humanity was threatened mainly by natural risks such as supervolcanoes, pandemics and climate instability, today anthropogenic risks such as nuclear war and biowarfare dominate.

In the near future, biotechnology, artificial intelligence and geoengineering may pose existential risks. As our technology grows more powerful, the potential for misuse increases even as that technology improves the human condition on average. Improving our insight, coordination, capabilities and protective technologies is therefore a clearly urgent and rational aim.

The largest human-caused disasters so far have been wars and democides. While individuals and small groups empowered by new technology may wreak havoc, the greater power and coordination abilities of states or state-like actors likely represent a larger threat to humanity as a whole. The rapid growth of surveillance and automation can empower totalitarian states to an unprecedented degree, while other technological innovations increase the destructive potential of conflicts. Hence, finding ways of ensuring good governance, open societies (societies that allow citizens to point out and correct flaws) and tolerance in the face of an increasingly transparent, globalized and multicultural world is critical for reducing existential risk.