Social media and search engine algorithms have largely been created to generate income for Big Tech companies, a purpose that often clashes with that of data protection acts. Whenever this leads to a public scandal, we see the head of one of these companies, all suited up in a court of law, turning a highly technical challenge concerning data security and consumer privacy into a public spectacle of remorse and redemption. If a resolution comes, it usually takes the shape of consent forms full of incomprehensible fine print and terms and conditions. The greatest danger is that the public will be blinded to what truly matters.
Where the social networking sites, search engines and big online retailers have truly succeeded so far is in defining the “personal data” that lawmakers say requires protection: mostly identifiable records such as credit card numbers, travel histories, religious affiliations, search histories, biometric data and IP addresses. But when targeting consumers, such personal data, though useful, is not paramount: an algorithm that exploits behavioural correlations is often more effective than one that relies on a demographic profile. And the all-knowing algorithms that power everything from Facebook’s news feed and Google’s search results to Netflix’s recommendations remain opaque and unchallenged. They even enjoy their own protections in the form of intellectual property rights, making them trade secrets much like the Coca-Cola recipe.
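The point about behavioural correlation can be made concrete with a toy sketch. The following Python fragment is a minimal, hypothetical item-to-item recommender: it suggests products purely from co-occurrence in purchase histories ("people who bought X also bought Y"), never touching a single field that a consent form would list. All user names and items are invented for illustration; no real system's code is shown here.

```python
# Hypothetical sketch: targeting from behaviour alone, with no demographic fields.
# All data below is invented for illustration.
from collections import defaultdict

# Purchase histories: user -> set of items bought (pure behavioural data)
histories = {
    "u1": {"yoga mat", "protein bar", "running shoes"},
    "u2": {"yoga mat", "protein bar", "water bottle"},
    "u3": {"protein bar", "running shoes"},
    "u4": {"mystery novel", "reading lamp"},
}

def co_occurrence(histories):
    """Count how often each ordered pair of items appears in the same history."""
    counts = defaultdict(int)
    for items in histories.values():
        for a in items:
            for b in items:
                if a != b:
                    counts[(a, b)] += 1
    return counts

def recommend(user, histories, top=1):
    """Suggest the items most often bought alongside the user's own items."""
    counts = co_occurrence(histories)
    owned = histories[user]
    scores = defaultdict(int)
    for a in owned:
        for (x, y), c in counts.items():
            if x == a and y not in owned:
                scores[y] += c
    return sorted(scores, key=scores.get, reverse=True)[:top]

print(recommend("u3", histories))  # → ['yoga mat']
```

The sketch knows nothing about u3's age, religion or location, yet it infers a likely purchase from correlation alone, which is precisely the kind of processing that falls outside most "personal data" checklists.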
The committee should create a framework for holding multinational Big Tech companies accountable for their use of data and for its effects on their users. Additionally, the committee could formulate guidelines for regulating intellectual property rights over algorithms.