Why Is It So Difficult To Eradicate Cyberbullying On Social Media?

Following the Euro 2020 final, Twitter took steps to combat racist abuse directed against Black England players, but is this enough in the face of overwhelming calls for social media giants to act?

The abuse was also posted on Facebook, following a social media boycott by players and clubs in April in response to a growing wave of bigotry directed towards footballers.

Here are the steps that social media platforms are taking to address the problem, and the challenges that stand in the way of further progress.

Video: England manager Gareth Southgate says racial abuse hurled at some of his players is inexcusable and “not what we stand for.”

There are two primary requests being made of the social media networks.

The first is that “if messages or posts contain racist or discriminatory information, they should be filtered and blocked before being received or posted.” The second is that “all users should be subject to an upgraded verification process that allows for reliable identification of the person behind the account”, with that identity disclosed only if requested by law enforcement.

What are the drawbacks of filtering?

The problem with the first request – censoring content before it is received or posted – is that it requires technology that can automatically detect whether a message contains racist or discriminatory content, and that technology does not currently exist.

The filtering cannot be based on a simple list of phrases, because people can coin new epithets or substitute characters, and because existing racist terms can appear in contexts that do not promote hate – for example, a victim quoting an abusive message they received while seeking support.
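
To illustrate why a phrase list falls short, here is a minimal sketch in Python of the kind of block-list filter the request implies. It is not any platform’s actual system; the blocked term and the example messages are hypothetical placeholders, with “slurword” standing in for a real epithet.

    # Minimal illustrative sketch - not any platform's real filter.
    # BLOCKED_TERMS and the example messages are hypothetical placeholders.
    import re

    BLOCKED_TERMS = {"slurword"}  # stand-in for a real block-list of epithets

    def naive_filter(message: str) -> bool:
        """Block a message if any word matches the block-list exactly."""
        words = re.findall(r"[a-z]+", message.lower())
        return any(word in BLOCKED_TERMS for word in words)

    print(naive_filter("you are a slurword"))   # True: exact match is caught
    print(naive_filter("you are a s1urw0rd"))   # False: character substitution evades the list
    print(naive_filter('they sent me "slurword", please report it'))  # True: a victim quoting abuse is wrongly blocked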

What methods do they use to filter out other information?

Although social media companies have been successful in filtering and removing terrorist content and photographs of child sexual exploitation, these are technologically distinct problems.

Fortunately, the number of photographs of abuse in circulation is limited.

Unfortunately, that number is rising, but because the vast majority of this material has been uploaded before, it has already been fingerprinted, making it easier to find and remove in future.

Understanding the meaning of a message written in English and fingerprinting an image are two completely distinct technological challenges.
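
As a rough illustration of the fingerprinting side, an image can be reduced to a short perceptual hash and compared to known hashes by counting differing bits, so that re-uploads that have been resized or recompressed still match. The sketch below uses a basic “average hash” in Python with the Pillow library; it is a simplification for illustration only, not PhotoDNA or any platform’s production system, and the file names and match threshold are hypothetical.

    # Simplified perceptual "average hash" - illustrative only, not a real
    # platform's fingerprinting system. File paths are hypothetical.
    from PIL import Image

    def average_hash(path: str) -> int:
        """Shrink to 8x8 greyscale; set one bit per pixel brighter than the mean."""
        pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Number of bits that differ between two hashes."""
        return bin(a ^ b).count("1")

    # Hypothetical usage: compare a new upload against known fingerprints.
    # known_hashes = {average_hash("known_abusive_image.jpg")}
    # if any(hamming(average_hash("new_upload.jpg"), h) <= 5 for h in known_hashes):
    #     print("Matches a known fingerprint - remove")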

Even the most advanced natural language processing (NLP) software can struggle to take account of context that a human would intuitively understand, despite the claims of numerous firms that their software does so successfully.

Video: Everton defender Ben Godfrey says racial abuse directed at England players after their Euro 2020 final loss to Italy was “always coming.”

What do the businesses have to say?

Instead of filtering content before it appears, both Twitter and Facebook say that abusive posts were removed quickly after they were posted.

“Using a combination of machine learning-based automation and human review, we’ve rapidly removed over 1,000 Tweets and permanently suspended a handful of accounts for breaking our policies,” Twitter said in a statement.

“We immediately banned comments and accounts targeting hate at England’s footballers last night, and we’ll continue to take action against those who break our rules,” a Facebook representative said, adding, “In addition to our work to remove this content, we encourage all players to turn on Hidden Words, a mechanism that guarantees no one has to see abuse in their comments or DMs.”

What are the problems with requiring verified IDs?

Mike Dean, Marcus Rashford, and Lauren James have all recently been victims of social media harassment.

BCS, The Chartered Institute for IT, has echoed the request for social media users to identify themselves to platforms – though not necessarily to the general public.

Anonymity online is crucial, as Culture Secretary Oliver Dowden acknowledged in a parliamentary debate, stating that “it is quite important for some people – for example, victims fleeing domestic violence and children who have sexuality questions that they do not want their families to know about.

“There are numerous reasons to maintain anonymity.”

Proposals for an ID escrow system, in which the company knows the user’s identity but other social media users do not, raise concerns about how far the platform’s personnel can be trusted by “groups such as human rights campaigners [and] whistleblowers”, whom the government has classified as deserving of online anonymity.

Video: Kick It Out Chief Executive Tony Burnett outlines what can be done to stop online racial abuse after Marcus Rashford, Jadon Sancho, and Bukayo Saka were racially harassed on social media following England’s Euro 2020 final defeat to Italy.

Yet if the companies were holding users’ real names in escrow, those identities could be exposed to law enforcement – a serious risk, given that a number of autocratic states have been known to persecute dissidents who openly criticize their governments on social media.

It’s also unclear what procedures the social media networks would use to verify these individuals’ identities.

According to Heather Burns, policy manager of the Open Rights Group, “online abuse is not anonymous.”

“Virtually all of the current wave of abuse can be traced back to the individuals who posted it, and social media platforms can send over information to law enforcement.”

“Calls for social media platforms to remove content miss the point and absolve criminals,” Ms Burns continued.

Nevertheless, according to Twitter’s transparency statistics, the company responds to fewer than half of all requests for information from UK law enforcement relating to accounts on its platform.

What will the government do about the abuse?

Image: Social media firms are addressing the abuse retroactively.

“I share the outrage at horrible racist insults of our gallant players,” Oliver Dowden remarked.

“Our new Online Safety Bill would hold social media corporations accountable, with fines of up to 10% of global revenue if they fail to address it.”

The Online Safety Bill – a draft of which was published in May – places a statutory duty on social media platforms to remedy harm, but it doesn’t define what that harm is.

Instead, that decision will be made by the regulator Ofcom, which has the authority to fine a business up to 10% of its worldwide revenue if it fails to comply with these obligations.

When it comes to prosecuting people based on tweets, the most important difficulty for British cops is Twitter’s extraordinarily low compliance rate with information demands.

According to the company’s transparency report, just about half of the requests are answered!

Source: https://t.co/S2uIBze4Uq pic.twitter.com/C3wiMegV2I — Alexander Martin (@AlexMartin) July 12, 2021

Incidentally, the Information Commissioner’s Office has a comparable power to deal with data protection breaches, and no major platform has yet received the maximum sanction.

In circumstances such as racist abuse the content will almost certainly be illegal, but the language describing the duty itself is ambiguous.

As written, platforms will be expected to “minimise the presence” of racist abuse on their platforms, as well as the length of time it remains online.

It’s possible that the regulator, Ofcom, believes they’re already doing this.

What do others think?

Culture Secretary Oliver Dowden claimed the Online Safety Bill would address the abuse.

The primary problem is deciding who is responsible for dealing with this content.

“The awful racial abuse of England players is a direct outcome of Big Tech’s collective failure to manage hate speech over a period of years,” said Imran Ahmed, the director of the Center for Countering Digital Hate (CCDH).

“This culture of impunity persists because these companies refuse to take significant action and hold people who spread hatred on their platforms accountable.”

“Racists who insult public figures should be removed from social media sites as soon as possible.”

“Nothing will change unless Big Tech decides to make a significant shift in its approach to this problem.”

“Political leaders have only delivered words so far, with no concrete action.”

“Illegal racist abuse directed at England’s footballers must be prosecuted under current laws,” Ms Burns responded. “If social media firms fail to wake up to the situation, the government will have to step in to safeguard people.”

“Rather than abdicating their responsibilities by blaming it on social media platforms, the government should ensure that police and the justice system follow existing criminal law.”

“Courts and jails are not run by social media platforms,” she continued.

Video: Sir Geoff Hurst says his 16-year-old grandson showed him the abuse directed at the three England players who missed penalties in the Euro 2020 final shootout.

Is there anything further that can be done?

According to Sky News, Graham Smith of Bird & Bird, a respected cyberlaw specialist, believes the government and police could use existing “online ASBO” powers to target the most egregious antisocial behavior online.

In an interview with the Information Law and Policy Centre, he stated that the potential for employing ASBOs (anti-social behavior orders – also known as IPNAs, or injunctions to prevent nuisance or annoyance) “has been largely overlooked”.

Mr Smith stated that while IPNAs “have problematic characteristics,” they “at least have the benefit of being targeted against criminals and subject to prior due process in court,” and that “consideration could be given to extending their availability to some voluntary organizations concerned with victims of internet misbehavior.”
