IA+Igual workshop at the University of Navarra: the real challenge of AI in talent management is building ethical machine learning models.
IA+Igual and its academic partner, the University of Navarra (UNAV), organized the event 'Let's play with AI', in which six experts in people analytics, machine learning, innovation, human resources, communication and algorithmic auditing shared with a group of HR managers how to use AI in talent management processes.
The session 'Let's play with AI' was a practical exercise of "iteration" with artificial intelligence: a small group of people-management professionals were able to see a tool, in this case a generative AI tool, in operation, understand its "reasoning" logic and grasp why this technology needs them as much as it needs the technicians who develop it.
Organized at the University of Navarra's campus in Madrid, with the participation of its Institute of Data Science and Artificial Intelligence (DATAI), academic partner of IA+Igual, the meeting sought to convey to attendees not only the potential of this technology to make talent management processes more efficient, but also how to give it greater explainability so that it becomes a reliable tool for decision making. A very simple example illustrates the background of the meeting: "We can use a hammer to drive a nail or to hit someone. The tool is the same, but the use is different. AI is also a tool. The challenge lies in strategically defining what use we are going to make of it."
Alberto García Galindo, Fair Learning expert at DATAI, opened the workshop with an introduction to machine learning. This allowed him to explain how, through algorithmic programming, data are transformed into patterns, patterns into models and models, in turn, into predictions. In this regard, he warned against the temptation to anthropomorphize AI because "although it is a technology that tries to replicate human thinking, it does not make the decisions; it serves as a support for those who have to make them". For this reason, he concluded by drawing attention to the real challenge of applying AI specifically in the field of talent management, which is none other than building ethical machine learning models based on four pillars: robustness, fairness, transparency and reliability. "This translates into validating the data that feed the algorithm so that they do not contain biases that lead to discrimination by gender, race, etc., and into making it possible to audit the algorithm to know how decisions are made," he pointed out.
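To make that pipeline more concrete, the following is a minimal, purely illustrative sketch in Python of how data become a model and then predictions, together with a simple group-level fairness check of the kind García Galindo alluded to. The synthetic dataset, column names and parity metric are assumptions made for illustration, not material from the workshop.

```python
# Illustrative sketch only: data -> patterns -> model -> predictions,
# plus a simple group-level fairness check on a synthetic "hiring" dataset.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "years_experience": rng.normal(5, 2, n).clip(0),
    "test_score": rng.normal(70, 10, n),
    "gender": rng.choice(["F", "M"], n),  # protected attribute, kept out of the features
})
# Synthetic label; in real HR data this hiring history may already carry bias.
df["hired"] = ((0.3 * df["years_experience"] + 0.05 * df["test_score"]
                + rng.normal(0, 1, n)) > 5).astype(int)

X = df[["years_experience", "test_score"]]
y = df["hired"]
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, df["gender"], test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)  # data -> model
preds = model.predict(X_test)                       # model -> predictions

# Fairness check: compare positive-prediction rates per group (demographic parity).
rates = pd.Series(preds, index=X_test.index).groupby(g_test).mean()
print(rates)
print("Demographic parity gap:", abs(rates.max() - rates.min()))
```

An audit along the lines described in the session would then ask whether any gap found this way is justified by legitimate job-related factors or reflects a bias inherited from the training data.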
Next, Ana Valera, data analytics expert and member of the IA+Igual committee, gave a practical demonstration of iteration with generative AI, in this case with ChatGPT, to show how human intervention is necessary to put its output into context and to identify possible biases in its proposals.
"The use of data can help make better, fairer and more objective decisions. But it is not Exempt biased," saysValera. Companies should make sure that their employees cannot enter confidential data into AI tools and ensure that they are giving them the necessary training so that they are able to discern the advantages and limitations of using AI," stresses the expert at data analysis.
In line with Valera's example, Ambrosio Nguema, LLM expert and member of the advisory committee of IA+Igual, developed a Python algorithm for the session that demonstrated not only how AI can facilitate the automation of the process but, above all, where HR adds value in the programming and training of the models.
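The article does not reproduce Nguema's code, but a much-simplified, hypothetical sketch of this kind of automation could look like the following: the script automates the clear-cut decisions and explicitly routes borderline cases back to an HR reviewer, which is where human judgement adds value. All names, fields and thresholds are assumptions for illustration, not the actual algorithm shown in the session.

```python
# Hypothetical sketch (not the session's actual algorithm): automate candidate screening
# but route uncertain cases to an HR reviewer, where human context matters most.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    years_experience: float
    skills_match: float  # 0..1, fraction of required skills covered

def score(c: Candidate) -> float:
    """Toy scoring rule; in practice this would be a trained, audited model."""
    return 0.6 * c.skills_match + 0.4 * min(c.years_experience / 10, 1.0)

def screen(candidates, auto_accept=0.75, auto_reject=0.35):
    """Automate the clear-cut decisions; send everything in between to HR."""
    shortlist, rejected, human_review = [], [], []
    for c in candidates:
        s = score(c)
        if s >= auto_accept:
            shortlist.append((c.name, s))
        elif s <= auto_reject:
            rejected.append((c.name, s))
        else:
            human_review.append((c.name, s))  # HR provides context the model lacks
    return shortlist, rejected, human_review

candidates = [Candidate("A", 8, 0.9), Candidate("B", 2, 0.5), Candidate("C", 1, 0.2)]
print(screen(candidates))
```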
Ambrosio Nguema and Ana Valera
An innovation project to bring reliability to AI in HR
Félix Villar, partner at IN2 and expert in algorithmic auditing, clarified that, although these models can help improve productivity, companies face a dilemma: "It is difficult for an organization to build its own development, and the usual thing is to hire a provider that has perhaps trained the model's algorithm with 200,000 data points from a specific geographical location, even though it is used by companies all over the world," explained Villar, who is also one of the promoters of IA+Igual.
Marisa Cruzado, leader of the IA+Igual project and expert in social innovation, pointed out that HR faces the challenge of understanding artificial intelligence. "To lead the transformation process that is already under way, HR managers must turn AI tools into a necessary co-pilot that provides them with the information they need for decision making in a fast, efficient and reliable way. At IA+Igual we are testing an algorithmic audit model that will lay the groundwork for a future certification model. We work from the present, with our eyes set on a future that is diverse and in which technology will support us in achieving the objectives of equity and equal opportunities in the workplace, in an ethical, transparent and reliable way."
And finally, Maite Sáenz, an expert in HR management, warned that "when we say that algorithms hallucinate, we actually have to convince ourselves that we are the ones who are hallucinating. The unconscious biases that compromise the principles of robustness, fairness, transparency and reliability referred to by Alberto García are reflexive acts of human reasoning that we transfer to artificial intelligence with the same unconsciousness with which we experience them in our daily lives. Precisely, the goal of IA+Igual is to audit algorithms to guarantee fairness in diversity."
The event was brought to a close by Iván Cordón, head of the team at DATAI and expert in technological innovation, who highlighted the need for "multidisciplinary and multidiverse teams to put a stop to the biases that we all have and that AI reproduces". He concluded by giving a further twist to the thread of the event, stating that "technology is a tool and will not replace the HR professional who learns to use it, but it will replace those who do not know how to do so".
Iván Cordón, Maite Sáenz, Félix Villar, Ana Valera, Marisa Cruzado, Ambrosio Nguema and Alberto García.