
ChatGPT helps us to think more and better.




Jaime Nubiola

Full Professor of Philosophy

In the last few weeks I have been paying particular attention to ChatGPT, the prodigious artificial intelligence tool made available, for now free of charge, to Internet users since last November. The speed and quality of its answers, in perfect English or in perfect Spanish, to the questions it is asked are very impressive. I have asked the system about everything from what freedom or friendship is to how to write an essay for Professor Nubiola's course "Philosophy of Language". I have always found its answers insightful and illuminating, since they help one to understand an issue or to undertake a task such as writing an essay.

I first heard in the press the alarm of several American professors warning that this software was capable of writing the academic essays assigned to their students. For my part, I have been able to verify that it is rather a formidable aid for teaching, because, as if it were a YouTube tutorial, it gives students instructions on how to write the essay or assignment required in each case.

I have also seen people concerned about the underlying ideology, basically materialistic like the dominant culture, of this "language model", as the system defines itself, but it seems to me that its answers are basically respectful of religious convictions, at least those of Christians. On disputed issues it presents the different opinions and tends to support the most "politically correct" answer. As an anecdote, take the question I asked it a few days ago: "Don't you ever lie?" I copy the answer because it is also illustrative of the style of its prose:

- As a language model, I do not have the ability to lie because I do not have the ability to have intentions or motivations. I simply process language and generate responses based on the patterns I have learned through my training.

However, it is important to keep in mind that, as a language model, my accuracy depends on the quality of the data with which I have been trained and the way in which I am asked the question. If a question is phrased in a misleading or confusing way, my answer could be inaccurate or incomplete without there being an intention to lie. Therefore, it is always important to critically evaluate information obtained from any source, including a language model, and verify it if necessary before making important decisions based on that information.

- Thank you very much for the clarification.

- You are welcome! If you have any other questions or need further clarification, don't hesitate to ask. I'm here to help you with whatever you need.


Impressive, isn't it? The alarm over a system as efficient as this one reminds me of those who in my youth would not allow calculators into the classroom, or those who distrust Wikipedia or Google's translation service. A few decades ago, calculating or playing chess were considered good examples of intelligent activities, but now that machines do these tasks better, we realize that what is really intelligent is designing and programming those machines.

It seems to me that these new tools can help us to be "smarter", to make fewer mistakes: for example, in computer-aided diagnosis, court rulings, or so many other activities whose result is a text. Just as GPS, with its failures and errors, helps us to reach our destination, GPT ["Generative Pre-trained Transformer"] can also help us, as Beatriz Ariza wrote to me, "to get out of autopilot and compete on our strengths vs. AI: empathy, creativity, etc.". On the other hand, these "language models" show us that the most properly human activity is not writing or making speeches: how many times have we seen charlatans (even politicians) who talk, but leave us with the impression that they do not think! Both activities, writing and speaking, are techniques that can be reproduced by computer programs.

The machines that answer our questions can help us a great deal to think, to weigh the different opinions, to evaluate the available data, to calibrate the pros and cons, to make more informed decisions. In recent days several colleagues have warned me of a blunder of ChatGPT that disqualifies it, for now, for the academic world: it invents many bibliographic references! That is, it creates false references. If I ask it "What has Jaime Nubiola written?", it provides a list of books that I have not written. The same happens if I ask it for quotations of mine: it answers by inventing six beautiful sentences that are not mine either.

It is already clear that the ChatGPT system still has a lot to learn, but I am convinced that it will learn sooner rather than later. We should not be afraid of ChatGPT; on the contrary, it seems to me an invitation to think more and to think better, so that we can live more creatively.