
Artificial Intelligence opportunities and challenges

6 September 2022
Marta López

Artificial intelligence is increasingly present in our daily lives: voice assistants, movie or shopping recommendations, loan granting or fraud detection are some examples of the numerous applications that use machine learning algorithms.

AI is transforming the economy, work, personal relationships and society across the globe. Almost every day we hear about new developments in AI: AlphaFold, a neural network system, has helped solve one of the biggest problems in biology, predicting the structure of proteins, far more quickly; advances in the early detection of Parkinson's disease are progressing faster thanks to AI; and natural language processing in Spanish is playing a key role, with projects such as MarIA. Moreover, according to recent reports from consultancies such as PwC and Gartner, AI could increase global GDP by up to 14% by 2030. The positive effects are countless, in most cases benefits such as automation or the improvement of certain processes, but AI also presents challenges, such as the need for subject-matter experts and for ethics.

If we focus on Spain, at the beginning of this year Cinco Días pointed out that the industry will need 90,000 data experts over the next three years, and that roles such as data engineer, machine learning engineer, data scientist, data analyst and data governance specialist, among others, are among the most in demand. Hence the need to learn the basics of AI and the high demand for skills in this area, which has driven exponential growth in recent years in the training on offer, both in specialised centres and in companies. At Immune, the number of students in this area has grown markedly, the courses available for learning AI have gone from one to three, and training for companies has also increased.

Undoubtedly, one of the challenges of AI is ethics. Ethics is an unavoidable dimension of any professional activity and, in the case of AI, presents additional challenges compared with other technologies. Since AI began to be applied at scale around 2010, thanks to Big Data and the increase in processing capacity, a multitude of documents have emerged from companies, organisations, officials, governments and other institutions seeking to establish ethical principles for AI. These principles aim to help preserve people's rights and freedoms without slowing down technological innovation. The principles mentioned across these documents are numerous, but if we follow the classification made by AI Ethics Lab, they can broadly be grouped into four key categories: respecting human autonomy, doing no harm, creating benefits, and justice. Within these categories we find principles such as fairness, explainability, accountability and privacy, which are already generating a great deal of debate in today's society.

If we focus on Europe, it is one of the regions that has made most progress in this area, with a proposed AI regulation published in April 2021 and expected to come into force in 2023. The proposal includes principles such as privacy, building as one would expect on the GDPR (General Data Protection Regulation), but it also covers fairness, explainability and the preservation of human autonomy. In its AI regulation, the European Union has decided to adopt a risk-based approach, with the risk analysis carried out by designated bodies in Europe and in each EU country. First, it defines prohibited, unacceptable-risk applications such as social scoring, in which people are rated according to their behaviour on social networks, similar to what is currently done in China; these are banned in Europe. Second, it details high-risk applications, such as those related to recruitment or medicine, which must obtain the conformity of these bodies before being put into production. Third, it identifies medium-risk applications, which are required to explain their systems, detailing how the AI reached its decisions. Finally, non-risk or minimal-risk applications are allowed. There is still some way to go before the regulation is published in 2023, but it is undoubtedly a model to follow. Spain has sought to lead the way in this area by creating a Spanish sandbox on 27 June this year, intended to serve as a prototype for the regulation and to provide feedback on what its application may entail.
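The four-tier structure described above can be sketched as a simple lookup. Note that the tier labels, obligations and example use cases below are my own illustrative paraphrases of the proposal, not the regulation's exact terminology:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers, paraphrasing the EU AI Act proposal."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment required before production"
    MEDIUM = "must explain how the AI reached its decisions"
    MINIMAL = "allowed without extra requirements"

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "medical decision support": RiskTier.HIGH,
    "customer service chatbot": RiskTier.MEDIUM,
    "film recommendation": RiskTier.MINIMAL,
}

def required_action(use_case: str) -> str:
    """Return the obligation attached to a use case's (assumed) risk tier."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"
```

For example, `required_action("social scoring")` reports the UNACCEPTABLE tier, while a film recommender falls under MINIMAL, echoing the point below that recommending a film and making a medical decision carry very different risks.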

Ethics is sometimes seen as a brake that slows down innovation, but the opposite is true. If we think about the function of brakes in a car, they give us the ability to travel at higher speeds with the confidence that we can react to unforeseen events. It is not a matter of forgoing the benefits of this technology but of anticipating and minimising the potential problems of its use, such as those mentioned above. Risk assessment is essential: using machine learning's automated decision-making to recommend a film is not the same as using it to make a medical decision or to hire a person. The latest Global AI Index report, which analyses developments in AI each year, reflects that industry publications on ethical AI have grown by 71% and that AI regulation has continued to expand, growing roughly sevenfold over the last six years; among the eight priorities the report identifies, ethical AI and AI regulation are two.

For all these reasons, there is a growing need to advance the understanding of AI and ethics, taking into account their multidisciplinary nature. This need is what led Javier Camacho and me to write the Manual of ethics applied to artificial intelligence, which we published in May this year. The manual addresses the fundamental issues that any profile, technical or not, involved with artificial intelligence applications should be aware of. If you are interested in ethical AI, don't miss our round table on 5 October, where we will discuss ethical AI with other market experts.

A little more about the authors of the book.

Javier Camacho Ibáñez holds a PhD in Economics and Business from the Universidad Pontificia Comillas, a Master's degree in Research in Economics from the same university, an Executive MBA from IESE, and a degree in Telecommunications Engineering from the Universidad Politécnica de Madrid. He has developed his professional career in the ICT and telecommunications sector and has provided strategic and business consulting services for companies in different countries. He is currently director of Ethical Sustainability, and a lecturer and researcher in the fields of business ethics, AI ethics, engineering and cybersecurity.

Mónica Villas Olmeda is an industrial engineer from ICAI, holds an MBA from the Universidad Autónoma de Madrid and has developed her professional career at IBM. With more than 25 years of experience in the IT sector, she is passionate about teaching and technology, especially cloud, artificial intelligence and analytics. She is currently a consultant, teacher and director of AI and exponential-technologies programmes at different institutions and companies, including Deusto, UNIR, ESIC, Immune Technology and Analyticae. She is developing her doctoral thesis on artificial intelligence at UNED and is the training director of ODISEIA (Observatory of the Social and Ethical Impact of Artificial Intelligence).
