Social Media Platforms Have Finally Banned These In 2019

Dhir Acharya - Jul 22, 2019



If you follow the news regularly, you will surely get the impression that social media platforms continuously make excuses to avoid taking responsibility for the problems arising solely from their existence.

Social media is raising more concerns than ever

However, under public fire over a range of concerns, social media companies have been forced to make changes to improve their platforms, though the changes remain minimal and usually come with the familiar “we’re trying” line.

So here are the changes social media platforms have made so far this year, changes they should have made years ago or applied from day one.

Facebook

Suspending hate figures: Like Instagram, Facebook finally made this change in May.

Pivoting to “privacy”: The company’s CEO announced that Facebook will focus on privacy in 2019, which seems to run counter to how the platform has worked for years.

Fixing the busted comments algorithm: Last month, the social network announced an update to the ranking system for public comments, emphasizing safe and authentic comments and using integrity signals to curb misleading info and improve comment quality.

Facebook has finally suspended hate figures on its platform

Banning explicit white nationalist and white separatist content: This change finally came just this year.

Working to prevent involvement in genocide: Also last month, the social giant said it was working to prevent real-world violence incited through its platform.

Demoting fake health info: Recently, Facebook announced a bold move to minimize the spread of misleading health info in its News Feed without banning it completely.

Instagram

Suspending hate figures: Along with its parent company, Instagram finally permanently suspended a number of political wingnuts and conspiracy theorists. In May, a Facebook spokesperson said the company has always banned organizations and individuals promoting or engaging in hate and violence, regardless of their ideology, and that the extensive evaluation process for potential violators led to the decision to delete the accounts.

Fighting bullying: Earlier this month, the platform released a new effort to address bullying, made up of two parts: flagging potentially offensive or hurtful comments before users post them, and letting users “Restrict” others to hide their comments on their posts. Restricted users won’t be able to send DMs or see the account’s activity status.

Instagram will ask users before they post a potentially hurtful comment

Moderating self-harm content: In February, the platform cracked down on this type of content, banning self-harm images and no longer showing non-graphic self-harm content in the Explore or Search tabs.

Testing removing likes: This seems like a minor change, but it’s for the sake of users. It came in April, when Instagram announced it would begin testing hiding like counts on photos from everyone except the person who posted them. Instagram head Adam Mosseri expressed hope that this change would make users spend more time connecting with the people they care about.

Doing whatever it can to tackle fake health info: In May, Facebook said it was tackling the spread of misinformation about vaccination by limiting its reach.

Twitter

Twitter is also making changes to improve its platform

Flagging misleading health info: Also in May, the micro-blogging site added a feature that surfaces legitimate public health information to people searching for vaccine-related keywords.

Researching to find out if hate belongs on its network: Recently, the social platform decided to research whether or not to allow white supremacists on its platform. In late May, company executive Vijaya Gadde said Twitter believes that conversation and counter-speech are a force for good and can lay the groundwork for de-radicalization, which has occurred on several platforms.

Labeling problematic content: To protect the health of public conversations, the company said there would be a special warning for tweets that may violate its rules but are kept on the platform due to “the public’s interest.”

Twitter will now label problematic content

YouTube

Stop helping pedophiles: The Google-owned company has been under fire for its handling of child exploitation, to the point that even advertisers stopped using its platform. In response, YouTube announced a handful of measures to protect children, including restricting live streams featuring minors, disabling comments on minors’ videos, and limiting recommendations of such videos.

(Kind of) managing its hate speech issue: Last month, the company announced it would more strictly police the spread of hate speech on its platform. It said it is determined to upgrade its policies and to keep setting a higher standard for creators and for the firm itself. In addition, YouTube banned explicitly pro-Nazi clips as well as videos claiming one group is superior to another.
