
Why Artificial Intelligence will only be useful if used ethically

25/09/2023

Published in

The Conversation

Javier Sánchez Cañizares

Researcher at the Institute for Culture and Society and the 'Science, Reason and Faith' group, University of Navarra.

In recent days, several news items related to artificial intelligence (AI) have caused more than one shock. It is no longer simply a matter of content that spreads disinformation on social networks, but of real attacks on people's dignity, as in the case of the minors who circulated AI-generated fake nudes of several girls and young women from a town in Spain (Almendralejo). The discussion about the limits that should be placed on AI is back on the table.

A few years ago I took part in a conference on AI and communication. I was struck by the fact that the journalists present asked for an explicit label on every news item produced by an AI. Ahead of the curve, they knew that AI-generated content would become indistinguishable from content produced by humans.

The journalists' request was not simply a defense of their jobs. AI, which can free workers from heavy and repetitive tasks, has also become a threat because of its ability to generate fake content: faces that have never existed, speeches never delivered, bodies never displayed.

The emergence of new technologies entails new risks due to the possible misuse of their new capabilities. In the case of AI, with its omnipresence in our social fabric, the risks become even greater, especially because of the possibility of unprecedented destructive synergies arising when AI meets a malign human purpose.

We are not only talking about the "alignment problem": due to the complexity of AI algorithms and optimization strategies, their products may contain biases and pursue goals unintended by their creators.

That problem has led to the formation of interdisciplinary groups that debate how to make AI safer. The problem I am referring to is conceptually simpler, but potentially more dangerous: what to do when a very powerful tool is within reach of any human being, who is capable of harming themselves or others?

Much-needed legal frameworks

As a society, we seek ways to protect ourselves against the misuse of technologies. We protect our personal data, we fight piracy with copyright law, and we put filters on the Internet to prevent access to harmful content.

The unstoppable development of AI demands new legal frameworks, on which many institutions have been working for some time. There is a growing sensitivity to this issue in practically all sectors of society, and steps are being taken in the right direction.

However, establishing updated legal frameworks to address the potential risks of AI, while necessary and essential, should not make us lose sight of what is at stake. However well-intentioned it may be, legality alone cannot prevent every malicious use of AI.

The law always comes after life, and here we are confronted with the misuse of AI in human life: a life pervaded by AI, in which new possibilities continually arise.

The empowering effect of AI on human activity, whose positive effects very few doubt, makes the ethical dimension of our actions even more important. When we talk about ethics in AI, we are not simply considering how to implement certain ethical rules in machines. In its deepest sense, the ethical dimension of AI refers to how we recognize and treat ourselves as persons when using this powerful tool.

Ethics always has to do with life and personal conduct, which is why it is also present in this field. Paradoxically, AI challenges us to understand ourselves better as persons. Its impressive capabilities make us realize what it means that every human being can use it for good or evil. As the philosopher Charles Taylor explains, it is impossible to be "selves" without a reference to good and evil. AI does not have that reference, but we do.

The need for ethical education

At the beginning of the century, Benedict XVI prophetically warned of the imbalance between technological growth and the ethical maturity of our society. The challenge that lies ahead of us, from which AI leaves us no possible escape, is ethical education. And I am not referring only to teaching ethics to our children, but to the ethical education of each one of us, which can in no way be delegated.

AI expands the range of possibilities for action to unsuspected limits. Implicit in each of them is the question of what it means to be a person and to do what is good here and now. The conversation between scientists, philosophers and jurists is very necessary for a safe use of AI, but personal education is even more so: the kind that in the end cannot be imposed, only inspired. Educating is a perennial task: it means bringing out the best in each person. Can we rely on AI for this?

This article was originally published in The Conversation. Read the original.
