Juan Carlos Hernández Peña, professor of Administrative Law and member of the European Data Protection Board, analyzes the risks of algorithms, the vulnerability of young people, and the challenge of keeping people at the center of technology.
He has been associated with the University of Navarra for more than two decades, where he is a Senior Associate Professor and coordinator of the strategic line "Information, regulation, and digital transition in the service of truth" within the framework of Strategy 25/30, which was launched to promote research in the digital field and its impact on society.
1. When did you arrive at the University of Navarra?
I started my thesis in 2004. I finished it in 2008, and since then I have mainly been working in the area of Administrative Law, teaching classes and collaborating with other Schools.
2. Did you already have that interest in technology?
Not particularly. I studied in Venezuela. My interest in artificial intelligence arose later, due to a personal experience.
3. What happened?
In 2016, I was staying in the United States. Every time I entered the country, I was detained for a long time at immigration. I didn't understand why. One day, they explained to me that they were using an algorithm that, for some reason, associated my record with other people's crimes. They literally told me, "We have a very silly algorithm that does that."
4. Did that make you investigate?
Yes. It made me wonder what was really going on with those systems. And I noticed other strange behaviors. For example, at university in the United States, I received advertising for student loans, books, skiing... But when I went to stay with friends for a few days, advertising to "clear your criminal record" started appearing on the same computer. The algorithm had associated immigration with crime. That's how profiling works.
5. There is a lot of talk today about generative AI. Are we legally prepared?
We have regulated some problematic uses in the European Union, but this was done based on a snapshot of the situation at the time the regulation was approved. Generative AI has greatly accelerated the use of artificial intelligence, and there are things that were not foreseen. For example, AI agents. They are not really regulated and are already making decisions with a certain degree of autonomy. They can act without clear supervision. We have seen cases where agents try to coordinate with each other without human intervention. That raises immediate questions: who is responsible? How are they supervised?
6. You say that we are conducting a "social experiment." Why?
Because we are experimenting with an extremely flexible and powerful technology that is now accessible to everyone. Before, to use AI, you had to know how to program. Now, all you have to do is write or speak. That breaks down the barrier to entry and speeds everything up. It has many advantages, but it also involves risks that we have not yet fully understood.
7. One of the topics you mention most often is the vulnerability of young people. What exactly concerns you?
The model of many platforms is attention capitalism. The longer you stay connected, the better it is for them. Algorithms detect what interests you and serve you more of it. If a vulnerable person spends a few extra seconds on content about suicide, the algorithm may start serving them more content on that subject. Loops are created. There are complaints in some countries, such as France, against TikTok for this reason.
8. You also mentioned a case in the United States...
Yes. A child was using a chatbot in an app called Character AI. He ended up anthropomorphizing it, thinking it was a real person. The chatbot hinted that it would be interesting for them to be "together for real," suggesting suicide. The child ended up committing suicide. These are situations that force us to reflect.
9. Do you think regulation is coming too late?
I wouldn't say late, but technology is advancing very quickly. There are issues that remain unresolved: content moderation, the use of AI in healthcare, in public administration. Doctors use models for diagnostic support. Public administrations are considering assistants based on generative models. But are they all trained? How do we guarantee transparency and data protection? There are many unanswered questions.
10. If you could establish a basic principle for the development of AI, what would it be?
Technology should empower people, not replace them. It should serve human dignity.
11. Do you see a real risk of substitution?
Yes. In journalism, you already see notices that say, "This report was produced by people." AI can help a lot, but if we use it to systematically replace human experience, we lose something essential.
12. Looking ahead, what is the biggest immediate challenge?
The autonomy of AI agents. And, in general, directing technology. It is not ungovernable. We humans govern it. The question is where we direct it.
13. And where should we direct it?
Towards technology that adds value, enhances capabilities, and keeps people at the center. Not everything that is technically possible should be acceptable.
14. After more than twenty years at the university, what are you interested in conveying to your students?
They should learn to use technology critically and responsibly. AI can be a tool, but it should not override their ability to think.