Hate is not the default way of communication on the internet. Fighting toxic language does not mean violating freedom of speech, explains linguist Dr. Paweł Trzaskowski. The researcher describes the mechanisms of hate speech and effective ways to limit this phenomenon.
Dr. Paweł Trzaskowski, head of the language section of Polish Radio, analyzed the phenomenon of hate speech in online comments as part of his doctoral thesis at the University of Warsaw. His work received an award from the Prime Minister’s Office.
“I assumed that hate speech is a way of talking about someone on the Internet that makes the recipient of the message think worse of the person or group being described than before receiving the message,” summarizes Dr. Trzaskowski in an interview with PAP. He adds that judging whether a particular post is hate speech is inherently subjective. “If someone feels worse after receiving a particular message, or we see that someone’s reputation has been harmed, then we can say that we have encountered hate speech,” he explains.
According to the researcher, unethical language constitutes a decidedly small minority of content on the internet, but we remember it better than other messages: “it is loud, bright, aggressive, and tries to harm.”
According to the linguist, there are two main reasons for using hate speech. The first is financial: some people are paid to discredit others, such as political trolls, though it is difficult to determine how many of them there are. The second is emotional. “Someone uses hate speech to vent frustration – by lowering someone else’s status, they feel more important, better, and may believe that they have an influence on something,” says Dr. Trzaskowski.
In his opinion, hate speech is a language of violence. “Violence is always associated with someone stronger usurping power over someone weaker, and harm is inflicted for someone’s gain. And this is the case with hate speech. In the comments section, an attack is launched on the victim. Even if the person wanted to, they are unable to defend themselves because they will be shouted down. There is a power imbalance, harm is inflicted on someone. And the hater benefits from it,” explains the researcher.
The researcher also describes how harm is inflicted in comments. “First of all, the hater finds a point of attack: they grab onto something to discredit the other person,” he says. This can be the target’s sexuality or appearance, an accusation of hypocrisy or cynicism, or pointing out their foreignness, dependencies, or lack of credibility.
The techniques of discrediting used in hate speech include: ridicule (making fun of the person, mocking their traits), depreciation (insults, abuse), labeling (assigning the person to a negatively associated category), diversion (referring to a completely different topic than the issue at hand), and provocation (such as attacking the person’s sensitive points – their intimacy, their family).
Dr. Trzaskowski also distinguishes several types of haters: “jokers” (trying to evoke laughter), “screamers” (using short, often vulgar statements), “outraged” (expressing their dissatisfaction), and “informers” (sharing knowledge with others).
While the language of hate is diverse and commenters use every linguistic means available to discredit their opponents, the techniques of hate are repetitive and easy to list. Once they are known, it is easy to predict what the comment section under an article will look like, the PAP interviewee observes.
Dr. Trzaskowski studied, among other things, publicly available news portals in his work. “They do not help in creating a community. There is no sense of community there that would motivate those commenting to behave well. And thus, such portals become the modern equivalent of a medieval pillory, a place where lynching takes place,” he compares.
The PAP interviewee points out that when you are a member of a crowd, you feel like you can get away with more. “But if you pick someone out of the crowd, for example for a street survey, that person suddenly becomes calmer and speaks in more coherent sentences, because then they are responsible for themselves. They expose themselves to judgment. The same goes for hate. In the comments section, where people feel anonymous, part of their responsibility is taken away. If we somehow create a sense of responsibility, such an obligated user will behave better than an anonymous screamer,” he says.
“When I started working on this research, I thought that comment sections were like the Augean stables – there was no hope for them. However, it turns out that there are ways to effectively counter hate speech in comments. The language of internet comments can be organized and debates can be directed towards the proper channels. It just requires effort,” he says.
Dr. Trzaskowski refers to analyses indicating that hateful comments lower readers’ opinions of the original text. Publishers should therefore care about improving the quality of comments.
He points out that there are large Polish portals, such as Onet and Interia, that have already stopped publishing comments from their readers. Closing the possibility of commenting may be the easiest but not the only way to get rid of hate speech.
There are places on the internet where dialogues take place, such as specialized or thematic forums. The members of these groups form a community – they gather around a selected topic to exchange information. Hate speech is an exception in such places. Publishers of portals should, therefore, care about creating such cohesive communities. However, this is not an easy task.
“To achieve this, among other things, people are needed who will be responsible for fair and transparent moderation. And this is time-consuming and costly,” says Dr. Trzaskowski.
According to the researcher, moderators should use a fair and transparent system of rewards and penalties for users. Commentators should know why they were punished (e.g., by having their posts deleted or being banned) and should be able to appeal the punishment. If the banning system is not transparent, haters return and become even more aggressive – Dr. Trzaskowski believes.
Apart from punishments, it is also important to offer rewards to commenters in order to encourage valuable contributions. For example, as the researcher reports, users of The New York Times website can choose which comments they want to see: all comments, those chosen by the editorial staff, or those rated highly by other readers. This division into sections motivates commenters to write thoughtful and careful posts.
Similarly, in the online game League of Legends (which became famous for effectively reducing the amount of “toxic” language used between players), a reward system was developed for users who were not reported by others for using aggressive language.
“However, this can only work if the person writing online identifies with their avatar. They don’t necessarily have to use their real name, but they cannot be completely anonymous either. It is important to create a sense of responsibility for how others perceive them,” the linguist comments.
There are also techniques that discourage commenters from posting comments under the influence of sudden negative emotions. Therefore, in the fight against hate speech, it is helpful to: hide the comment section (it must be manually expanded), require users to log in before adding comments, and use various verification questions. Delayed publication of comments after approval by a moderator is also very effective in combating hate speech. Some websites also refrain from providing a comment section and instead encourage readers to send letters to the editorial staff, which can later be published, reminiscent of the best years of print media.
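The delayed-publication mechanism mentioned above, combined with the transparent penalty system Dr. Trzaskowski recommends, can be illustrated with a minimal sketch. The class and method names below are hypothetical, invented purely for illustration; the article does not describe any specific software.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Comment:
    author: str
    text: str
    status: str = "pending"        # pending -> published / rejected
    reason: Optional[str] = None   # stated reason, shown to the commenter

class ModerationQueue:
    """Delayed publication: comments appear only after moderator approval."""

    def __init__(self) -> None:
        self.pending: List[Comment] = []
        self.published: List[Comment] = []

    def submit(self, author: str, text: str) -> Comment:
        # New comments are held back rather than published immediately,
        # discouraging posts written in a burst of negative emotion.
        comment = Comment(author, text)
        self.pending.append(comment)
        return comment

    def approve(self, comment: Comment) -> None:
        comment.status = "published"
        self.pending.remove(comment)
        self.published.append(comment)

    def reject(self, comment: Comment, reason: str) -> None:
        # Transparency: every rejection carries an explicit reason that
        # can be shown to the commenter and, in principle, appealed.
        comment.status = "rejected"
        comment.reason = reason
        self.pending.remove(comment)
```

The design choice worth noting is that rejection requires a reason: as the researcher argues, an opaque banning system makes haters return more aggressive, while a transparent one creates the sense of accountability that curbs hate.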
Counterspeech, which involves presenting a different narrative under an aggressive comment, is also effective in the fight against hate speech. However, to be effective, counterspeech cannot be aggressive; it must be factual and empathetic.
“Getting rid of some comments doesn’t necessarily mean violating freedom of speech. In public life, there are often unwritten rules that suggest not everything can be said everywhere. And some places online create an illusion of complete freedom, offering a space where verbally anything goes,” comments the researcher.
“The problem is global. Not long ago, we hoped that citizen journalism would emerge thanks to the development of the Internet. Readers then gained a certain hope for agency. But then it turned out that nothing came of it. Ordinary users still go unheard, still unread,” says Dr. Trzaskowski. He adds that this causes frustration among some of them. Some commenters therefore find a semblance of agency in verbal violence – hurting others with words. However, this does not relieve frustration but fuels it further, according to the principle that aggression begets aggression. (PAP)
Nauka w Polsce, Ludwika Tomala