It is urgent to verify the legitimacy of Meta’s use of private user data to train artificial intelligence

Many of you will have received the notification in which Meta announces the launch of a new development program for its artificial intelligence. In short, Meta will use more or less all the information its users have deposited over the years to train its algorithms. Imagine: personal chats, expressions you are not proud of or no longer recognize as your own, more or less ridiculous comments, likes on photos or posts of dubious taste. All of this will be used to train the AI, which, in turn, will be able to “communicate” with third parties.

(In the photo: Stefano Rossetti, a lawyer at noyb – European Center for Digital Rights)

What all this means remains, frankly, a mystery. In theory one can object, but only in theory: by Meta’s explicit choice, exercising this right is in practice rather difficult (see Annalisa Godi’s excellent article on the subject). In any case, even when a user succeeds, there is no real way to verify whether the request is honored in practice. The Irish authority, territorially competent to investigate Meta, is not exactly known for the diligence with which it exercises its supervisory functions.

For these reasons, noyb has filed a series of complaints with different data protection authorities, underlining the need for concerted and urgent action to verify the legitimacy of such operations.

Underlying all of this is a deliberate, intentional ambiguity about data processing. When it all started, a couple of decades ago, no one imagined that the data in their personal accounts could be used for just about any purpose. First it was direct marketing. Then political targeting (anyone remember a certain Cambridge Analytica?). And now artificial intelligence, with the risks this entails, especially if the algorithm’s training is based on this kind of information (think of private chats).

A practical case. A few days ago, in an effort to improve my English, I asked an AI assistant to help me make my writing sound more natural. The bot asked me a question, “What do you like?”, and I thought, “Reading”. I replied: «I am passionate about books, I love having them around, reading them, learning from great minds, women and men». The AI’s fix: «I am passionate about books. I love surrounding myself with them, delving into their pages, and absorbing the wisdom of great minds, both men and women». In order to provide a more polished version, the AI reversed the order of those two words, placing “men” before “women”, as it has always been. It did this because the algorithm was trained on a dataset in which that sequence is statistically more common. In other words, it reproduced gender discrimination. And this kind of dynamic can repeat itself in many other sectors.

To return to Cambridge Analytica, there is a clear risk that AI could influence democratic debate, proposing articles that confirm a prejudice rather than providing further elements, which are essential for forming a thinking, voting mind (we want to be optimistic). All this can fuel systemic risks such as hate speech, discrimination of all kinds, and the limitation of freedom of expression online. For example, if a post does not reflect an accepted position, and therefore receives negative comments, the AI may brand it as “illegitimate” and remove both the content and its author from the platform. (This is not science fiction. All of this already happens, and it is the subject of a new European regulation, the Digital Services Act.) Challenging an algorithmic decision is theoretically possible, as is obtaining human review. To do so effectively, however, you would need to know the AI’s decision-making logic. The platforms, though, are not particularly keen on providing these explanations, both because doing so exposes their industrial property to risk and because human moderation costs money (unless the function is outsourced to developing countries, without labor law protections, and we have heard that one before). And this is where the role of the supervisory authorities comes into focus.

The cooperation system envisaged by the European privacy regulation provides that competence to supervise this type of processing lies with the data protection authorities of the countries where these giants have their main establishment: essentially, Ireland and Luxembourg. There is only one “small” problem. For some inscrutable reason – let’s say “inscrutable” so as not to offend anyone’s sensibilities – these countries do not have clear procedural rules on the handling of complaints, with the consequence that, in the vast majority of cases, no investigation is carried out, and intervention is rare. In other words, the system fails precisely for the processing of personal data for which it should work best.

By failing to promptly regulate the activities of their supervisory authorities, these countries in fact allow the creation of genuine privacy havens. This is a violation of European law. The European Commission could intervene with an infringement procedure; so far, however, it has not done much. Perhaps the negotiations for the new Commission will take this state of affairs into consideration. The time to act constructively has truly arrived. In fact, we are already very late.

by Stefano Rossetti (Source: Domani)

 