On 25 May 2018, the General Data Protection Regulation (GDPR) came into effect in the European Union. With it came two important provisions that shift the power balance back in favor of people and away from the companies that collect their personal data: the right to be forgotten, and the right to explanation.
We are living in the early days of artificial intelligence, at a time when companies are still trying to perfect their AI algorithms using whatever vast troves of user data they can get their hands on. Until recently, accountability was not exactly their biggest concern, and they cannot be blamed entirely for that: AI is disruption at a huge scale, and any kind of disruption involves moving fast and breaking things – in this case, that could mean bypassing certain privacy concerns. At the same time, we are living in a post-Facebook-Cambridge Analytica scandal world, and people are angry about being exploited. The GDPR is a shield of sorts for Europeans, and they are not afraid to use it.
The right to be forgotten lets users demand that a company holding their data remove it from all of its servers. That is not something AI companies will enjoy doing, because such demands get directly in the way of perfecting the algorithms that power their machine learning. The solution lies in anonymizing all the personal data, but doing so consistently across servers is quite complicated. Once the data is properly anonymized, however, the companies are free to use the insights as they see fit.
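To make that concrete, here is a minimal Python sketch of what replacing a direct identifier might look like. The record and its field names are invented for illustration, and note the caveat: a salted hash is technically pseudonymization under the GDPR, not full anonymization, because whoever holds the salt could re-link the records.

```python
import hashlib
import secrets

# Illustrative user record; the fields are made up for this sketch.
record = {
    "user_id": "alice@example.com",
    "age": 34,
    "country": "DE",
    "purchase_total": 129.99,
}

# A random salt, kept by the company. This is what makes the scheme
# pseudonymization rather than anonymization: holding the salt allows
# re-linking. True anonymization would destroy that link entirely.
salt = secrets.token_bytes(16)

def pseudonymize(rec, salt):
    """Return a copy of the record with the direct identifier replaced
    by a one-way salted hash, leaving the analytical fields intact."""
    rec = dict(rec)
    digest = hashlib.sha256(salt + rec["user_id"].encode()).hexdigest()
    rec["user_id"] = digest
    return rec

anon = pseudonymize(record, salt)
print(anon["user_id"])   # a 64-character hex digest, no email in sight
print(anon["age"], anon["country"])
```

The non-identifying fields survive untouched, which is the point: the company keeps the insight while the person behind the record becomes much harder to find.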
For those who are okay with their personal data existing on the servers of AI companies, the GDPR still grants the right to explanation. Users can demand to know when they are being affected, directly or indirectly, by AI algorithms, and can even question the logic behind decisions that ‘machines’ make using their data. That is harder than it sounds: anyone who knows anything about AI can tell you how opaque ‘deep neural networks’, the technology behind modern AI algorithms, really are. The complexity involved in these software structures as they analyze vast troves of data to find patterns and correlations is not something even brilliant engineers can easily understand, let alone explain to a layman.
Now that the GDPR can hold AI companies accountable for the decisions their algorithms make, it’s pretty clear that the EU wants to hold the people behind the algorithms responsible. This means, going forward, they’ll have to design the algorithms in a way that makes it possible to explain the decision-making. Is that ‘dumbing down’ the AI? Possibly. The question is, does it dumb it down enough to take the intelligence away from artificial intelligence? That remains to be seen.
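As a sketch of what ‘explainable by design’ can mean, here is a toy linear scoring model whose every decision decomposes into per-feature contributions – something a deep neural network does not offer out of the box. The feature names, weights, and threshold are all invented for this example.

```python
import math

# Hypothetical weights for an interpretable credit-scoring model;
# every number here is made up purely for illustration.
weights = {"income_k": 0.04, "missed_payments": -0.9, "account_years": 0.15}
bias = -1.0

def decide(applicant):
    """Approve or reject an applicant, and return the per-feature
    contributions that explain the decision."""
    # Each term shows exactly how much a feature pushed the score
    # up or down – this decomposition IS the explanation.
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    score = bias + sum(contributions.values())
    approved = 1 / (1 + math.exp(-score)) >= 0.5
    return approved, contributions

applicant = {"income_k": 55, "missed_payments": 2, "account_years": 6}
approved, why = decide(applicant)
print(approved)
print(why)  # e.g. missed_payments contributed -1.8 to the score
```

A user questioning this decision can be told, in plain terms, that two missed payments pulled the score down while income and account history pushed it up. Whether such transparent models can match the accuracy of opaque ones is exactly the trade-off the article describes.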