There’s been a lot of recent discussion of the potential dangers of Artificial Intelligence (AI) as an existential threat to humanity. I think this not only overplays the risks but also rather over-estimates how close we are to such an outcome. Some modest recent advances in pattern recognition are little more than baby steps along a path whose end we cannot even clearly define. More importantly though, I think there are two much more likely outcomes of any emergence of general “human-level” AI that pose significant ethical questions:
- Rather than AI enslaving us, we create genuinely sentient AI and enslave it. Given humanity’s record so far, this seems a far more likely scenario to me. Can an AI be “human-level” and yet treated as sub-human?
- Near-human-level AI creates immense economic value while demolishing the market for human labour. From this perspective, our current efforts (in the UK at least) at dismantling the welfare state and racing to the bottom on corporation tax start to look a little short-sighted.