AI Act: what impact on the education technology sector?

Education Technology (EdTech) encompasses the technological solutions placed at the service of education and training to improve the quality of the learning experience. It is a steadily growing sector (boosted in particular by the acceleration that occurred during the Covid-19 pandemic) that involves schools, universities, businesses and start-ups engaged in the digital transformation of education and digital learning.


The global EdTech market is worth $325 billion

The global EdTech market is estimated to be worth $325 billion, of which approximately $70 billion is generated in Europe. According to the latest data published by the EdTech Observatory of the School of Management of the Politecnico di Milano, the Italian EdTech market also continues to grow (with a value of approximately €2.8 billion in 2022), standing out as one of the few markets in which venture capital investment has increased. The sector's potential is considerable when one considers that the global education market reached a value of $6.5 trillion in 2022 and is expected to reach $10 trillion by 2030.

In this context, the new European legislation on artificial intelligence (the AI Act) is set to give further impetus to technological innovation in education, training and digital learning.

The new European rules on artificial intelligence

The new Regulation on artificial intelligence, approved by the European Parliament on 13 March 2024, is the first law in the world adopted to regulate the development and use of artificial intelligence, securing the European Union a strategic role in this sector.

With the AI Act, the European Union has established a uniform legal framework for the development, marketing and use of artificial intelligence systems, promoting technological innovation and the uptake of human-centric and trustworthy AI while, at the same time, aiming to ensure a high level of protection of health, safety and the fundamental rights enshrined in the EU Charter. The new Regulation will apply to all providers of AI systems, defined as natural or legal persons who develop an AI system, or have one developed, and place it on the EU market under their own name or trademark, regardless of whether they are established or located in the EU or in a third country (and therefore also to non-EU companies).

Following approval by the European Parliament, the new law is now subject to a final check by lawyer-linguists and to formal approval by the Council. The AI Act will enter into force twenty days after publication in the Official Journal of the EU and will apply 24 months later, with some exceptions: the ban on AI systems posing unacceptable risks will apply after 6 months, while the obligations for certain high-risk AI systems will become applicable 36 months after the entry into force of the AI Act.

Prohibited practices and high-risk AI systems

In particular, the new Regulation prohibits certain AI practices that may pose an unacceptable risk to the safety and rights of citizens. These include AI systems capable of manipulating human behaviour; social scoring systems that classify or evaluate people on the basis of their social behaviour or personal characteristics; systems for the untargeted scraping of facial images from the internet or CCTV footage to create or expand databases; and biometric categorization systems based on sensitive data.

Alongside the outright prohibited applications, the AI Act classifies as high-risk those AI systems whose use or malfunction could pose a significant risk to the health, safety or fundamental rights of natural persons and which, therefore, may only be placed on the EU market (or used in the EU) if they meet certain mandatory requirements laid down by the new legislation.

This category includes AI systems intended for use in the management and operation of critical digital infrastructure and road traffic; in employment, to assess workers or to optimize workforce management and access to self-employment (for example, to publish targeted job advertisements, analyze or filter applications, evaluate candidates, or take decisions on the promotion or termination of employment relationships); and in determining access to essential public and private services and benefits (such as healthcare or creditworthiness assessment).

AI systems in education and training

The AI Act highlights the importance of the uptake of AI systems in education and training in order to promote a high-quality digital educational and training experience, allowing users to acquire and share skills and abilities that are increasingly essential in the current and future educational and working landscape (including media literacy and critical thinking) and that can contribute to active participation in the economy, society and democratic processes.

There are, however, risks in the education and training sector that the legislator has taken into account. In particular, the new Regulation includes among the AI practices prohibited for unacceptable risk the emotion recognition systems used in workplaces and educational institutions, except where they are used for medical or safety reasons.

The AI Act also lists a number of high-risk AI systems designed for use in certain areas of education and training. These include systems intended to: determine access or admission to education and vocational training institutions; evaluate learning outcomes; assess the level of education an individual will be able to access; and monitor and detect prohibited student behaviour during exams.

Since these AI systems can influence a person's educational and professional path and affect their future earning capacity, they can be particularly intrusive and may violate the right to education and training and the right to non-discrimination if designed or used improperly.

The AI Act introduces a number of specific obligations and responsibilities for providers, importers, distributors and, in certain cases, the so-called "deployers" of high-risk AI systems, including in the field of education and training. In particular, providers of high-risk AI systems will have to assess and ensure the compliance of their products with the requirements set out in the new Regulation, implement an ongoing risk management system, supply recipients of the AI system with complete, clear and comprehensible instructions for use, fulfil registration obligations, monitor the functioning of the AI system after it has been placed on the market, and implement human oversight measures.

Opportunities and challenges for EdTech companies

In the coming years, artificial intelligence will play a key role in improving education and learning processes in schools, universities and companies. Companies operating in the EdTech sector, as well as investors interested in it, are therefore called upon to engage with the new regulatory framework outlined by the AI Act and the numerous obligations and responsibilities arising from the development of AI systems intended for use in education or professional training.

On the one hand, the new European Regulation offers fertile ground for developers of AI systems, thanks to the introduction of precise rules for operating even in high-risk sectors, promoting technological innovation and encouraging investment by companies that operate (or decide to operate) in Italy and Europe (consider, in particular, the measures to support innovation and experimentation envisaged by the AI Act to facilitate and accelerate access to the EU market for SMEs and start-ups).

On the other hand, the legal risks associated with developing new AI systems, or with bringing applications already on the market into compliance, must not be overlooked: companies will have to ensure compliance with the numerous requirements set by the new European legislation and protect the rights and interests of the natural persons to whom these systems are addressed (such as students, teaching staff, company trainers and employees).

Many opportunities, but also risks

Indeed, the AI Act provides that any person may lodge complaints about violations of the new European provisions. Furthermore, Member States will be required to introduce specific penalties for non-compliance with the AI Act, which may include administrative fines of up to €35 million or 7% of total worldwide annual turnover for the preceding financial year for violations of the prohibited AI practices, and up to €15 million or 3% of total worldwide annual turnover for the preceding financial year for failure to comply with other requirements and obligations under the AI Act.

The recommendation, therefore, is to act promptly to seize the many opportunities offered by the world's first law on artificial intelligence and, at the same time, to avoid or reduce the risks associated with the development of AI systems, particularly high-risk ones.

 