Biothreats are an Existential Risk
I have a friend who ponders human existential risks (x-risks) for a living. He proposed that biothreats are challenging to model as an x-risk.
A virus that kills off all of humanity is highly implausible, since you can literally put people in a box or on an island. But a virus that kills or maims the vast majority of humanity is plausible. And a biological threat can certainly push humanity to a lower level of civilizational complexity.
The consequences of lower civilizational complexity are as follows:
- We remain a primitive civilization forever and thus never reach the technological singularity. That removes a key x-risk. But humanity never becomes an interstellar species, and it will obviously still end if any planet-wide disaster occurs.
- We delay civilizational progress. Think of this as if we have to repeat the industrial age, but instead of repeating it in the 1800s, we repeat it with 3x more carbon in the atmosphere. This makes the threat of climate change more serious.
- We exist in an impoverished civilizational state where effective altruism sits too high on Maslow's hierarchy of needs for people to care. This increases the x-risks from all sources. An impoverished civilization would delay the creation of a technological singularity. However, it is not clear that we need the level of capital allocation we currently apply to artificial intelligence. Progress in AI is often made in small breakthroughs; the AlexNet architecture, for example, was built in Alex Krizhevsky's dorm room. There is some circumstantial evidence that general intelligence is not complicated. Perhaps any civilization looking for it, at any level of development, will eventually stumble onto the singularity.
All of these outcomes increase overall x-risk. Thus biothreats are a contributor to x-risk.