Artificial Intelligence as a support in the drafting of 231 models: what risks and what advantages?

The recent explosion of services based on Artificial Intelligence (hereinafter also “AI”) technology and, specifically, on generative AI has involved not only the world of art but also that of so-called “white collar” workers. The ability of generative AI such as ChatGPT to produce voluminous texts in a few minutes, if not a few seconds, could lead to such AI being seen as a substitute in many large-scale document production activities. Beyond its use as a support in judicial activity, lawyers and in-house counsel are also turning to AI to respond to requests coming from the world of compliance and, in particular, to the increasingly frequent requests relating to the drafting of Organizational and Management Models (hereinafter “MOG”) pursuant to Legislative Decree 231/01.

In light of this premise, it is natural to ask: what support can generative AI provide in the drafting of MOGs? And, in particular, could AI even replace the professional altogether?

The preparation of a MOG includes, in its Special Part, an analysis of the risk of commission of the predicate crimes indicated by Legislative Decree 231/01.

However, since this risk assessment differs according to the specific circumstances of each entity and its exposure to each predicate crime, one can assume – at first glance – that AI can only partially account for all the variables at play, as it lacks the sensitivity of a real professional, who adds to the merely mathematical analysis his or her own experience gained in both judicial and extrajudicial settings.

The gap just described could perhaps be narrowed if one imagined a generative AI “trained” and “acculturated” through the analysis of all the MOGs published online, via a massive data scraping operation. But would this be enough?

Without going into the question of whether the processing of personal data involved in such an operation would be lawful or unlawful, in our opinion such an AI could not return, at a qualitative level, a MOG faithful to the reality of the individual client: one that reflects and takes into account the technological innovations that have since occurred, analyses the risk of commission of new predicate crimes not covered by the MOGs drafted and published up to that point, and also keeps pace – which is rather unlikely – with the continuous updating imposed by case law, both on the merits and at the level of legitimacy.

Indeed, the organization to which such a MOG belongs will certainly not be able to argue in court that “the AI did not have enough time to update itself on recent legislation”, thereby undermining the duty to keep the model updated, which is one of the elements useful as evidence to avoid the so-called “organizational fault”.

(In the photo: the lawyer Matteo Alessandro Pagani, speaker at the Privacy Day Forum 2024)

However, even if we went so far as to imagine an AI capable of analysing, understanding and applying new predicate crimes, or the effects of new technologies on risk assessment, to the updating of the MOG within the organisation’s ordinary activity, could this AI really distinguish the “shades” that make up the real world? Would it, for example, be able to tell the difference between a simple gift and attempted corruption?

And again, in the field of health and safety at work, is it realistic to imagine an AI whose cold calculation contemplates, for example, “abnormal and unforeseeable” conduct on the part of the worker?

And finally, beyond the two examples briefly mentioned above, what are and what will be the privacy implications of the processing of large amounts of personal data carried out by AI? From the information notice to the data protection impact assessment, from the appointment of the company supplying the AI as data processor to rights such as the right not to be subject to decisions based solely on automated processing, privacy compliance for this type of activity is extensive and requires considerable effort and professionalism.

In conclusion, it is clear that AI will play, at least in the context of drafting documents relating to 231 compliance, an ancillary role to the human professional, and that any attempt to replace the professional will mean bearing significant risks, as mentioned above, among which it is worth highlighting above all the one relating to a fundamental right: the protection of personal data.

 