Technology has long played a key role in the evolution of society and the economy, facilitating access to knowledge and, in particular, strengthening relations between places and people. But as with other technologies before it, the emergence of AI not only opens up unparalleled possibilities but also calls for the acquisition of social and technical skills and for critical thinking, both to make the most of its potential and to ensure that AI becomes part of our everyday life in a fair and equitable way, for the collective good, with public policies and jointly responsible tech developers pursuing the goal of so-called responsible AI.
This new OEIAC series looks at how sociotechnical skills acquisition and critical thinking are key factors for responsible AI in all areas of our daily life. On the one hand, skills building goes beyond simple technical training: it must include broad digital literacy that allows society to understand both how AI works and its social implications and impacts. From developers to implementers and, of course, end users, everybody should have some basic sociotechnical knowledge to use and interact with AI technologies responsibly, whether to broaden their use and increase our capacities in the process, or to limit it, when necessary, if the application of AI systems is unethical and entails anything from a violation of basic rights to an uncontrolled propagation of errors or biases.
On the other hand, critical thinking is an essential complement, a motor for making informed decisions and preserving people’s autonomy, as it is based on maintaining and even broadening both an analytical and an evaluative perspective on everything to do with AI. In this sense, it is not enough to know how AI works: we need to continuously question its results and possible consequences, especially in terms of whether they are acceptable and, if so, to what degree they can be improved. This implies analysing and assessing the structure and consistency of reasoning, particularly of opinions or claims that people accept as true in the context of AI. This evaluation can be based on observation, experience, reasoning or the scientific method. It is therefore essential to go beyond particular impressions and opinions, which implies applying clarity, accuracy, precision, evidence and equity throughout the process.
For these two factors, sociotechnical skills acquisition and critical thinking, to be developed in a truly transformative way, two specific approaches are needed: a relational one and an ethical one. The first puts the accent on the dynamics of interaction among the actors of the quadruple helix: industry, academia, government and civil society. The incorporation of AI can substantially alter these relations, in cooperation and in flows of power alike. It is therefore essential to design collaborative environments that promote inclusion, transparency and mutual trust, ensuring that technology acts as a bridge, not a barrier, among the different stakeholders. In this context, public institutions and companies have a key role in facilitating spaces for dialogue and co-creation, where the quality of fundamental human interaction is preserved for responsible innovation. AI must serve to promote, but never replace, these networks of shared value that are the core of the innovation ecosystem.
The second approach, the ethical one, establishes the principles that must govern the use of AI by the quadruple helix. It is well known that the ethical implementation of AI has clear standards and performance protocols, many of which have now also been set out in data and AI regulations in the European context. Taking this situation into account, and the fact that the OEIAC has rolled out its PIO Model on Duties and Rights, it is essential to build greater literacy about its contents and implementation among all the stakeholders involved, from citizens to researchers and those holding public positions of responsibility. This means more introductory training and skills acquisition built on this solid ethical base, ensuring that AI favours responsible technological development.
In fact, both the relational and the ethical approach serve as a reminder that technology, in this case AI, must help strengthen collaborative dynamics and respect for basic democratic principles. This requires governance mechanisms that involve all stakeholders equitably, ensuring that advances in AI do not weaken, but rather strengthen, the essential ties that give meaning to cooperation within the quadruple helix and, ultimately, to the common good.
To provide some answers and, of course, to raise more questions about the complex interplay between ethics, law and technology, the OEIAC is launching a new series of knowledge transfer seminars with guests from near and far who are working on and promoting the ethical and responsible use of AI.
The OEIAC Seminar Cycle starts on Wednesday, 29 May, and will run until 15 November, the day of its closing ceremony. The cycle is aimed at anybody interested in AI in general and in its ethical and responsible uses in particular, whether from the public or the private sector.