Ethical and reliable AI, an increasingly urgent issue

The approval of the AI Act on March 16, 2024 constitutes a fundamental moment in the recent history of artificial intelligence from a legislative standpoint. It is the first regulation aimed at governing the development and adoption of artificial intelligence, reducing the risks and the most critical implications linked to its impact on privacy, security and the wide variety of subjects already covered in past articles.

The passage of the AI Act and ethical issues

The effort to regulate the development of a powerful, versatile, pervasive and, in some ways, unpredictable technology in its applications and consequences is not new: in April 2019 the European Commission's High-Level Expert Group on AI listed a set of principles to guide an ethical approach to artificial intelligence in the Ethics Guidelines for Trustworthy Artificial Intelligence, where the key terms are precisely ethical on the one hand and trustworthy on the other.

By trustworthy AI we mean applications that are at once respectful of laws, shared values and ethical principles, and robust, both technically and socially, with respect to their expected impacts.

The topic of laws is very complex and varies greatly across jurisdictions. Europe, already at the forefront of regulating the processing of personal data with the GDPR, leads the way on regulation in this field too with the AI Act. Many other countries will follow, convinced as technologists and legislators are that technology, the market and global geopolitical competition cannot alone be left to direct the development and use of such a powerful technology.

Ethical and trustworthy AI: a huge area of research, development and debate

The topic of ethics is particularly complex, also due to the co-presence and proliferation of definitions, guidelines and recommendations, with overlapping aspects and sometimes small contradictions between the different statements. Many such frameworks could be mentioned in this regard.

The list of models and guidelines goes on, with the effect of increasing entropy more than clarity.

The ethics of artificial intelligence according to Luciano Floridi


Precisely in an attempt to bring order to this matter, Luciano Floridi has written Ethics of Artificial Intelligence, published by Raffaello Cortina, a volume that offers a coherent theoretical framework for the topic and practical, concrete guidelines to assist those in universities, research centers, businesses, public administrations and regulatory bodies who must define policies and limits on the ethical development and application of artificial intelligence.

Without claiming to convey the complexity and richness of the book, which we recommend reading, we report here some fundamental concepts, useful also for guiding the application of regulations such as the AI Act.


Two preliminary concepts

At the beginning of the book, Floridi gives two definitions useful for understanding the entire system:

  • a definition of artificial intelligence, aimed at immediately understanding its opportunities and limits;
  • the definition of the infosphere and the relationships between the digital realm, the environment and artificial intelligence.

The definition of artificial intelligence that Floridi uses in the text carries various consequences. It is the one given by John McCarthy, Marvin L. Minsky, Nathaniel Rochester and Claude E. Shannon in 1955 in “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”, that is:

“For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.”

The definition implies that machines are neither intelligent nor conscious, allowing us to skip theoretical disquisitions and futuristic scenarios and adopt an engineering approach, oriented towards building machines capable of reproducing intelligent behaviors. It is a pragmatic vision that scales back the expectations and concerns linked to the advent of so-called artificial general intelligence (which we have written about previously) or even forms of super-intelligence. The choice is pragmatic precisely because these nominalistic debates do not help us act concretely in the development and application of the machine learning and deep learning models available today, here and now.

The second preliminary concept is that of the infosphere: a world immersed in a digital context made up of data and platforms, a digital ecosystem in which we are immersed and which creates an environment (made of zeros and ones) well suited to artificial intelligence, and to machine learning in particular, which is fed precisely by data.

Ethical principles for artificial intelligence

Given these premises, it is possible to consider and describe the ethical principles for the development and governance of artificial intelligence, namely the four fundamental principles shared with bioethics:

  1. beneficence
  2. non-maleficence
  3. autonomy
  4. justice

to which a dimension fundamental for artificial intelligence is added: explicability, understood both as an explanation of how a system works and as an attribution of responsibility. It is worth remembering that explainable AI represents one of the responses to the uncertainty, the risk and, in some ways, the fear surrounding systems seen as inscrutable black boxes.
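As a simple illustration of the kind of technique that falls under explainable AI (an example of ours, not drawn from the book), the sketch below computes permutation importance on a toy model treated as a black box: if shuffling one feature's values degrades the predictions, that feature mattered. The model, data and function names are all hypothetical.

```python
import random

def predict(x):
    """Toy 'black box': a linear model whose inner weights the caller cannot see."""
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def mse(model, X, y):
    """Mean squared error of the model on dataset (X, y)."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, seed=0):
    """Error increase when one feature column is shuffled: a proxy for importance."""
    rng = random.Random(seed)
    column = [x[feature] for x in X]
    rng.shuffle(column)
    X_shuffled = [list(x) for x in X]
    for row, value in zip(X_shuffled, column):
        row[feature] = value
    return mse(model, X_shuffled, y) - mse(model, X, y)

# Synthetic data generated by the model itself, so the baseline error is zero.
data_rng = random.Random(42)
X = [[data_rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [predict(x) for x in X]

for f in range(3):
    print(f"feature {f}: importance {permutation_importance(predict, X, y, f):.3f}")
```

Feature 0, with the largest weight, shows the highest importance; feature 2, which the model ignores, shows an importance of zero. Real explainability tools apply the same idea, alongside others, to far more opaque models.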

For this purpose, model cards have been created: documents that collect key information on the purposes, operation, context of use and training of a machine learning model. Accompanying an application with a detailed description also makes possible biases, risks and limitations of use more transparent and easier to understand.

To delve deeper into the topic of model cards, you can refer to academic research (including Model Cards for Model Reporting by Mitchell et al., 2018), to the templates used by tech companies such as Google (Google Model Cards, etc.) and, finally, to open source platforms such as Hugging Face Model Cards.
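To make the idea concrete, here is a minimal sketch of a model card as a small structured document rendered to Markdown. The field names are a simplified assumption loosely inspired by the sections in Mitchell et al.'s proposal, not the official schema of any platform; the example model and its figures are invented.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal model card: purpose, training data, metrics, known limitations."""
    model_name: str
    intended_use: str
    training_data: str
    metrics: dict = field(default_factory=dict)
    limitations: list = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render the card as a Markdown document suitable for publishing."""
        lines = [
            f"# Model Card: {self.model_name}",
            f"## Intended use\n{self.intended_use}",
            f"## Training data\n{self.training_data}",
            "## Metrics",
            *[f"- {name}: {value}" for name, value in self.metrics.items()],
            "## Limitations and known risks",
            *[f"- {item}" for item in self.limitations],
        ]
        return "\n\n".join(lines)

card = ModelCard(
    model_name="toy-sentiment-classifier",
    intended_use="Sentiment analysis of product reviews in English.",
    training_data="50k reviews, 2015-2020; no demographic balancing applied.",
    metrics={"accuracy": 0.91, "F1": 0.89},
    limitations=[
        "Not evaluated on non-English text",
        "May reflect biases present in the review corpus",
    ],
)
print(card.to_markdown())
```

Even a card this simple forces the documentation of the limitations section, which is precisely where biases and risks of misuse become visible to users and auditors.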

It should be noted that respect for the ethical principles set out above goes beyond regulatory compliance, in the awareness that legislation alone can never guarantee adherence to generally valid ethical principles, and that the timescales of politics and jurisprudence are necessarily not those of a rapidly evolving technology. Operators must therefore safeguard the ethical development of AI and, in a certain sense, also self-regulate.

From ethical principles to practices

The distance between stated principles and practical applications is where the true intentions of the various actors are measured and where the main risks of failure are encountered, reducing ethical development to a series of recommendations adhered to only formally. In this regard, Floridi identifies five ways in which we witness this divorce between theory and practice, behaviors that can also be observed in other areas, such as environmental and social sustainability:

  • ethics shopping: the practice of searching, in the multiplicity and heterogeneity of guidelines and recommendations on offer, for those principles that can justify pre-existing behaviors a posteriori;
  • ethics bluewashing: similarly to “greenwashing” in the environmental field, the practice of communicating ethical practices that are not applied in reality, or applied only partially and marginally;
  • digital ethics lobbying: the practice of exploiting the ethical argument to oppose and delay coherent, binding legislation;
  • ethics dumping: the practice of developing models in contexts with little or no regulation, then applying them in more regulated, restrictive contexts;
  • ethics shirking: the practice of avoiding compliance with regulations and ethical principles in contexts where it is considered less relevant and binding.

AI for Social Good

Various other themes are explored in depth in the book, including: the different regulations and governance systems; the ethics of algorithms and the ways in which algorithms and data can lead to erroneous results and unjustified risks, or fuel and reinforce biases; bad practices; and the application of artificial intelligence for social good (AI for Social Good, AI4SG).

The chapter dedicated to applications of artificial intelligence for social good is particularly relevant because, on the one hand, it describes concrete cases of ethical and positive applications of AI, showing that we should be guided by the hope of building a better world, with more intelligence, less effort and less injustice; on the other, it offers a framework to direct AI, strategically and practically, toward the pursuit of social good.

A chapter that calls everyone to their responsibilities: if AI leads to negative or even catastrophic outcomes, it will not be due to the rebellion of super-intelligent machines, but to the practical objectives that we decide to pursue.

Conclusions

Already today we can use artificial intelligence on the one hand to make the lives of blind people more autonomous and fuller, as Be My Eyes does, and on the other to build weapons of destruction and death. What the machines will do depends only on us. They, the machines, have (at least for now) neither consciousness nor will.
