Artificial Intelligence Existential Risk Dilemmas

A few weeks back I wrote a series of blog posts about the risks from progress in Artificial Intelligence (AI). I specifically argued that we are facing not just structural risks, such as algorithmic bias, but also existential ones. There are three dilemmas in pushing the existential risk point at this moment.

First, there is the potential for a “boy who cried wolf” effect. The harder we push right now, the more thoroughly existential risk from artificial intelligence will be dismissed for years to come if (hopefully) nothing terrible happens. This, of course, has been the fate of the climate community going back to the 1980s. With most of the heat to-date from global warming having been absorbed by the oceans, it has felt like nothing much is happening, which has made it easier to disregard subsequent attempts to warn of the ongoing climate crisis.

Second, the discussion of existential risk is seen by some as a distraction from focusing on structural risks, such as algorithmic bias and increasing inequality. Existential risk should be the high-order bit, since we want to have the opportunity to take care of structural risk at all. But if you believe that existential risk doesn’t exist or can be ignored, then you will see any mention of it as a potentially intentional distraction from the issues you care about. This unfortunately has the effect that some AI experts who should be natural allies on existential risk wind up vigorously dismissing that threat.

Third, there is a legitimate concern that some of the leading companies, such as OpenAI, may be attempting to use existential risk in a classic “pulling up the ladder” move. How better to protect your perceived commercial advantage than to get governments to slow down potential competitors through regulation? This is of course a well-rehearsed strategy in tech. For example, Facebook famously didn’t object to much of the privacy regulation because they realized that compliance would be much harder and more costly for smaller companies.

What is one to do in light of these dilemmas? We cannot simply be silent about existential risk. It is far too important for that. Being cognizant of the dilemmas should, however, inform our approach. We need to be measured, so that we can be steadfast, more like a marathon runner than a sprinter. This requires proactively acknowledging other risks and being mindful of anti-competitive moves. In this context I believe it is good to have some people, such as Eliezer Yudkowsky, take a vocally uncompromising position, because that helps stretch the Overton window to where it needs to be for addressing existential AI risk to be seen as sensible.

Posted: 26th June 2023
Tags:  ai artificial intelligence risk
