
Observatory of Ethics in Artificial Intelligence

4th OEIAC Seminar Cycle

Ethics, social impact and legal compliance

Ethical considerations often shape laws and, at the same time, legal rules can influence ethical considerations, generally with the aim of promoting social welfare and protecting people's rights. However, situations can arise in which legal standards and ethical considerations come into conflict and, in such cases, an analysis of stakeholders' needs is required, since ethical principles and social impact assessments often go beyond legal requirements to address the needs and expectations of society. A clear example of this is the European Union's recently approved Artificial Intelligence Act (AI Act). This regulation classifies AI systems according to their level of risk, from minimal to unacceptable, and establishes compliance requirements for each level, grounded in both ethical imperatives and practical compliance considerations. Yet to become a solid framework for protection, it must reconcile tensions between ethics, social impact and legal compliance, ensuring that AI regulation is not only enforceable, but also safeguards individual rights, community welfare, social values, democracy and environmental protection.

At present, the AI Act is not regarded as having a sufficiently broad or people-centred vision, as it does not provide enough protection for fundamental rights such as privacy, equality and non-discrimination. For example, the regulation has significant gaps when it comes to principles such as transparency for certain applications of AI, especially regarding those supplying and deploying high-risk systems in areas such as law enforcement, migration, asylum and border control. In these cases, only a very limited amount of information is required, and it will be held in a non-public database, severely limiting supervision and scrutiny. Moreover, although the regulation includes fundamental rights impact assessments, doubts remain over the real scope of these assessments, including whether those deploying AI systems will only have to specify which measures will be adopted once the risks materialise.

On the other hand, there is no obligation to involve stakeholders such as civil society and those affected by AI in the assessment process. This means that civil society organisations will have no direct, legally binding way to contribute to impact assessments. It should also be underlined that, while the AI Act provides for certain remedies when complaints are made, there is no clear recognition of the status of affected persons. Given that there should be a right to an explanation of individual decision-making processes, especially for AI systems classed as high risk, this raises questions about access to information and the ability to obtain explanations from those deploying AI systems. This is particularly relevant given the absence of considerations such as the right of natural persons to representation, or the capacity of public-interest organisations to lodge complaints with the supervisory authorities.

Similarly, it is worth noting that the AI Act treats national security as a space largely free of rights. Although there may be justified grounds for exceptions to the AI Act in this area, these generally have to be assessed case by case, in accordance with the EU Charter of Fundamental Rights. The text adopted, however, does not go in this direction and, in practical terms, this may mean that governments can invoke national security to introduce AI systems such as biometric mass surveillance without applying any of the safeguards provided for in the AI Act: without conducting a fundamental rights impact assessment, and without guaranteeing that the AI system does not discriminate against certain groups.

Another important point is that the AI Act takes only a first step in addressing the environmental impact of AI, despite growing concern that the exponential use of AI systems may have severe environmental consequences. While, for example, the regulation will require suppliers of general-purpose AI models (GPAI) to document the energy consumed when training them on large amounts of data, there is still no suitable methodology for measuring this in a transparent, comparable and verifiable way. Procedures will therefore need to be created to guarantee the efficient use of resources by AI systems and, at the same time, to help reduce the energy and other resources they consume throughout their life cycle. But this is only a starting point, as more comprehensive approaches are needed that address all the environmental harm throughout the AI production process, including the consumption of water and minerals.

Finally, another aspect worth highlighting about the ethical and legal considerations of the AI Act is that it allows a double standard when it comes to the human rights of non-EU nationals. Indeed, the AI Act does not meet civil society's demand that EU-based AI suppliers whose systems affect people outside the EU be subject to the same requirements as those operating within it. In other words, the AI Act does not stop EU-based companies from exporting AI systems that are prohibited inside the EU, creating a serious risk that the rights of people outside the EU will be violated by technology manufactured within the EU that is fundamentally incompatible with human rights.

In order to offer some answers and, of course, to open up further questions about the complex interaction between ethics, legislation and technology, the OEIAC is launching this new cycle of knowledge-transfer seminars, with guests from far and wide who are working to promote the ethical and responsible use of AI.

The OEIAC Seminar Cycle starts on Wednesday 29 May and will run until 15 November, the day of its closing ceremony. The cycle is aimed at anyone interested in AI in general, and in its ethical and responsible use in particular, from both the public and private sectors.

If you would like to receive a link to connect and listen to the talks, all you need to do is fill in this registration form.

Anybody who would like further information about the cycle can also write to us at suport.oeiac@udg.edu.

All seminars will be free.

