AI Ethics Are Hard

Google formed and then, within a week, dissolved an AI Ethics board, in the corporate equivalent of many a celebrity wedding: ill-conceived, but quickly ended. The calls for ethics in AI have been strong and understandable. AI is powerful technology that can go terribly wrong, and already has, as in the advertising and recommendation algorithms used by Facebook and YouTube.

Now some of this stuff is obvious, and there has been no lack of people pointing out the problems. If you train an algorithm on engagement, for example, it will surface more content that confirms users’ existing beliefs and skew towards emotional content that appeals to our instincts (rather than demanding the effort of engaging our rationality). Because it is obvious, you don’t need an ethics board to point it out; more importantly, pointing it out wouldn’t make a difference unless the companies in question were willing to fundamentally change their business model (or, if sticking with the same model, to take a huge revenue hit).
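To make the dynamic concrete, here is a minimal, purely illustrative sketch in Python. The features, weights, and items are made up and not drawn from any real system; the point is only that when the ranking objective is a single predicted-engagement score, whatever correlates with that score (emotional intensity, belief confirmation) rises to the top.

```python
# Illustrative sketch only: a feed ranked purely by predicted engagement.
# All features and weights are hypothetical, chosen to show how a
# single-metric objective amplifies whatever drives that metric.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    emotional_intensity: float   # hypothetical feature, 0..1
    confirms_user_belief: float  # hypothetical feature, 0..1
    informational_value: float   # hypothetical feature, 0..1

def predicted_engagement(item: Item) -> float:
    # Toy model: emotion and belief confirmation drive clicks far more
    # than informational value does.
    return (0.5 * item.emotional_intensity
            + 0.4 * item.confirms_user_belief
            + 0.1 * item.informational_value)

def rank_feed(items: list[Item]) -> list[Item]:
    # Nothing but engagement enters the objective, so nothing else
    # influences the ranking.
    return sorted(items, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Item("Calm, factual explainer", 0.2, 0.3, 0.9),
    Item("Outrage piece that agrees with you", 0.9, 0.9, 0.2),
])
print([item.title for item in feed])
# ['Outrage piece that agrees with you', 'Calm, factual explainer']
```

Nothing in this toy example is malicious; the skew is simply what optimizing a single engagement metric produces.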

And then some things are incredibly hard, such as face and object recognition. There are tons of amazing positive applications for such technology. And yet it could also be used to bring about a dystopian future of autonomous killer weapons chasing citizens in the streets. Does that mean we should not develop these capabilities? Should we restrict who has access to them? Is it OK for corporations to have them but not the military? What about the police? What about citizens themselves? These are hard questions, and I submit that anyone who thinks they have obvious answers hasn’t thought about them long enough.

So what is to be done? A good start is personal responsibility. Think about what you are working on, and don’t work on it if you are not comfortable with it (the same goes for funding it or leading it). Different people will wind up being comfortable with different things, because the questions above do not have a single obviously right answer. Do not assume that someone working on autonomous weapons is evil and has not thought about ethics (conversely, if at this point you are still working on a recommendation algorithm that puts engagement above everything else, you should revisit that).

A second crucial element is protecting and improving democracy and its associated institutions, such as an independent judiciary. This is our only real bulwark against the abuse of any and all technology. We have nothing else nearly as powerful, and certainly not more technology (which is where most technologists go first). For instance, on the related topic of surveillance, it is a good reminder that East Germany constructed a low-tech surveillance state by turning one half of the population into informants on the other half. No technology required.

As with all prior technologies, going back to fire, humanity’s earliest, we will need to figure out how to use AI for good. That will be a long process. Let’s make sure we engage in that process vigorously and with the necessary intellectual honesty.

Posted: 5th April 2019
Tags: ai ethics
