Verbal aggression versus freedom of speech in social networks
Every time we speak, we take responsibility for what we say: we may be reproached for being imprudent or wrong, and we may even face legal consequences if, for example, someone believes we have violated their right to honour or privacy.
In social networks, perceived anonymity and disinhibition facilitate verbal aggression, and many users are not fully aware of the far-reaching potential effects of their words. Social networks now offer a new arena for defamation, hate speech and incitement to terrorism.
However, the boundaries between such offences and freedom of speech remain blurred and trigger recurrent social debates, particularly in the wake of controversial lawsuits. Drawing these boundaries is even more intricate in social networks, owing to their complex discursive dynamics.
Yet Forensic Linguistics has not yet developed a systematic and consistent apparatus for analysing defamation, hate speech and related crimes in traditional contexts, let alone in social networks.
The aim of this project is to identify, from an interdisciplinary perspective, the relevant criteria (e.g. linguistic, discursive, pragmatic, legal) for analysing potentially defamatory texts published in social networks, and thus help draw the line between the legitimate exercise of freedom of speech and abusive online defamatory practices.
Our advances and results will not only be disseminated in academia but also communicated to a range of experts and social stakeholders (lawyers, judges, journalists, social network users, etc.).
What types of criteria (legal, linguistic, discursive, sociological, media-related, etc.) are most frequently invoked in court decisions on defamation, hate speech and incitement to terrorism in social networks?
What legal criteria are relevant to identify defamation, hate speech and incitement to terrorism in social networks?
What (socio-)pragmatic and media criteria are relevant to identify defamation, hate speech and incitement to terrorism in social networks?
What linguistic and discursive criteria should be used to identify defamation, hate speech and incitement to terrorism in social networks?
How should all these criteria be harmonised to build a comprehensive analytical model to examine potentially defamatory texts in social networks? How might this model be transferred to Forensic Linguistics and applied to real lawsuits?
Grant IJC2020-046166-I funded by MCIN/AEI/ 10.13039/501100011033 and by the "European Union NextGenerationEU/PRTR".