
Trash dove: how a purple bird took over Facebook

If you spend any time in the comments section of big Facebook pages, you may have seen a purple bird headbanging in the comments.

The headbanging Trash Dove, as seen on Facebook. Photograph: Syd Weiler

Meet the Trash Dove, a little cartoon bird that has begun taking over Facebook. It comes from a sticker set made by the artist Syd Weiler, an Adobe Creative Resident. The set was initially created for the iOS 10 sticker store and was released on Facebook at the end of January.

Syd Weiler’s Trash Dove Facebook messenger stickers. Photograph: Facebook

There’s a Trash Dove for every situation: detective Trash Dove, bread enthusiast Trash Dove, business Trash Dove ...

People on Facebook use stickers the same way they use emojis, either on Messenger or in comments on posts. As noted by meme database Know Your Meme, Trash Dove exploded in popularity after it was featured alongside a dancing cat on a Thai Facebook page with millions of followers:

The bird was all over Thailand’s internet, with its rise to memedom covered by a number of news outlets. It featured in more videos:

And merged with other memes, such as #Saltbae:

All of Thailand currently. @SydWeiler #trashdoves pic.twitter.com/N5m8NT6BlN

— Shauna Lynn (@shaunaparmesan) February 11, 2017

Now, the hype is catching on elsewhere. Big Facebook sites are full of Trash Doves:

Comments on news outlets, meme pages or any big Facebook page are absolutely full of them.

Screenshot of a Trash Dove in the wild. Photograph: Facebook

And it’s even seeping into real life, with some quick-turnaround Trash Dove cosplay:

น้องสู้เพื่อพี่มากค่ะ #JAPANEXPOTHAILAND2017 #trashdoves pic.twitter.com/chG0qVJkmC

— หลุงหลิงรุ่งริ่งเอง (@ApichLhing) February 12, 2017

Weiler, who regularly draws live for her audience on her Twitch channel, designed the doves while sketching some pigeons on her summer travels. She told the Guardian: “Pigeons are such strange birds, they have very beautiful mottled, shimmery feathers, but they waddle around and bob their heads and beg for crumbs. They’re like beautiful doves, except they eat trash.”

The pack has been available on the Apple app store since September. But on Thursday, she woke up to a slew of notifications, plus her face plastered on the websites of various news outlets in Thailand.

“I didn’t really understand it, but it seemed like a lot of people were really loving my work, and that’s a great feeling as an artist.”

Internet fame can often be a double-edged sword – viral attention seems like a great thing to have, but comes with pitfalls. While Weiler acknowledged the good parts, she also noted the darker side of the attention.

“I’m a quiet homebody – I like to sit at my desk and draw, and play video games. Overnight, I was flooded with attention, and that has only sped up for five days now,” she said. “The fan art and nice comments have been the highlight for me, but I’m amazed at how mean people can be to someone they’ve never met, because of something silly online. I didn’t ask for or sign up for any of this, but many people are blaming me, and I’ve even received some threats.”

As a result, Weiler said she has learned a lot about internet security and the importance of two-step verification on her account. “Everyone always talks about ‘going viral’ as a great thing, but I don’t think I’d wish it on another artist. It’s better to spend time building a dedicated viewer base that will support you for you.”

On a more positive note, Weiler is looking ahead to future projects, specifically one involving raccoons:

started working on a new sticker pack because I don't have enough other stuff to do! 'Trash Pandas' or 'Trash Bandits'? (they're raccoons!) pic.twitter.com/I4SONTVO4l

— syd weiler 🙃 (@SydWeiler) February 11, 2017

So, there you have it. If you see a purple bird headbanging in the comments of a Facebook post – now you know why.

pic.twitter.com/8LPcyejaFG

— syd weiler 🙃 (@SydWeiler) February 9, 2017


What Is Trash Dove? The Purple Facebook-Comment Sticker Bird


A wild Trash Dove approaches.

If you made the mistake of reading the comments anywhere on Facebook this week, you might have been pleasantly surprised by what you found. It all depends on how you feel about a certain purple bird with a very pliable neck better known as Trash Dove. If you’re pro-Trash Dove, then Facebook was probably a pretty good hang for you this week. If you’re not, too bad, because there is no escaping the head-banging viral monster that is Trash Dove.

To fully understand how this weird GIF of a cartoon bird became the perfect response for anything anyone posts on Facebook, you need to understand Trash Dove’s origins. The bird is part of a set of Facebook stickers designed by Syd Weiler, an illustrator and Adobe creative resident. According to an interview with Weiler from PopDust, she actually created Trash Dove while streaming live on Twitch in September, after Apple announced stickers for messaging in iOS. “Facebook asked to put them on the site last month. But I didn’t expect anything like this,” she said. “I woke up on Thursday … I think, and everyone had tagged me in the joke videos. It was surreal.”

YOU GUYS

TRASH DOVES ARE NOW ON FACEBOOK! THEY'RE FREE!!!

GO SPAM YOUR FRIENDS AND ENEMIES https://t.co/RcIVWvdMFg pic.twitter.com/28qgJqOczF

— 🦎🌱🐉 (@SydWeiler) January 31, 2017

The “joke videos” she’s talking about hail from Thailand, where Trash Dove became an overnight sensation last week. One video in particular — an edit of Trash Dove and a dancing kitten with an overlay of pop music — gained over 3.5 million views in just a few days, and helped launch the stickers to their meme status, Know Your Meme reports.

แมวเด้ง + นกเด้า fusion

Posted by สัตว์โลกอมตีน on Tuesday, February 7, 2017

“It all started in Thailand, actually — they have a joke about birds there, and it tied in there,” Weiler said. The joke, as explained by the Daily Dot, involves the Thai word for bird, nok, which has an alternate translation meaning “someone hopelessly single or suffering from unrequited love.” And honestly, just look at that doofy bird flapping its neck around. Makes perfect sense.

Facebook today... @SydWeiler I love this sticker so much. 😆🐦 #trashdoves pic.twitter.com/9beJTwFdvY

— น.ส.หน้าไทย (@nicemarez) February 10, 2017

Today, Trash Dove’s popularity has only continued to grow worldwide. There is Trash Dove fan art.

ขนาดนกยังไม่นกเลย~
#drawing #linedraw #painting #ibispaint #ibispaintx #アイビスメイキング #trashdove #trashdoves pic.twitter.com/fcvSF7IRAr

— On Sunika (@On_Sunika) February 12, 2017

#trashdoves that accidentally looks like Jave Y - Y pic.twitter.com/7Iw6eOAMmp

— MonochromeZ (@MonochromeZ_) February 12, 2017

Meta-Trash-Dove memes.

All of Thailand currently. @SydWeiler #trashdoves pic.twitter.com/N5m8NT6BlN

— Shauna Lynn (@shaunaparmesan) February 11, 2017

Trash Dove makeup tutorials.

@SydWeiler your trash bird is all Over my FB dash + I love it. This eye makeup 😍 (not my eye) https://t.co/crJPsQpqSA pic.twitter.com/bktc7pH5om

— BRUISES (@bruisesxo) February 12, 2017

And … whatever the heck this guy is doing in the name of Trash Dove.

น้องสู้เพื่อพี่มากค่ะ #JAPANEXPOTHAILAND2017 #trashdoves pic.twitter.com/chG0qVJkmC

— หลุงหลิงรุ่งริ่งเอง (@ApichLhing) February 12, 2017

Of course, since it’s been about a week since Trash Dove rose to internet fame, the backlash cycle has already begun. People are starting to get irritated by the endless stream of reply comments consisting solely of Trash Doves, and some are even proposing a ban. Which likely won’t happen — unless Mark Zuckerberg really hates these birds of peace for reasons unknown — but at least the fan art is still good.

Ask Mark to ban trash doves.

Posted by Parthiben Jayagobe on Wednesday, February 15, 2017

How to get verified and get a tick on Instagram

Many influencers, athletes, musicians, media personalities, and brands try to get a tick on Instagram, but not everyone succeeds, and often not on the first try. In this article, we explain what the blue tick on Instagram means, who is eligible, and what you need to do to get one. We also share tips on how to improve your chances of verification.

  • What does the blue tick on Instagram mean?
  • What are the benefits of having a check mark on Instagram?
  • Who can get a tick on Instagram
  • Why they give a tick on Instagram: account requirements
  • Is it possible to buy a blue tick on Instagram
  • How to get a blue tick on Instagram
  • How to increase your chances of getting ticked on Instagram

What does the blue tick mean on Instagram

Some Instagram profiles are marked with a badge with a tick. This means that the account belongs to a famous person or brand, and the owner has confirmed its authenticity.

An example of a verified account

What are the benefits of having a check mark on Instagram

It is believed that verification on Instagram increases reach and reduces the likelihood of a ban. There is no evidence for this; however, a tick in the profile still provides some real advantages.

People who achieve a certain popularity sooner or later attract fans and imitators. When you try to find a famous person's account through the search bar, many fake pages turn up. If the profile has a tick, the target audience immediately knows which one to follow. The same goes for brands.

A verified account is immediately highlighted.

Another plus is that the tick reduces reputational risks. As your fame grows, you may encounter a common problem: scammers create fake pages and deceive gullible people on your behalf. For example, they offer to sell merch or an information product, invite investment in a "reliable project", and so on.

It is much more difficult to do this with a verified profile. When searching, users will first of all pay attention to the account with a check mark. In your bio, you can state that this is your only genuine page and all others are fakes. And if you sell anything, having the badge will increase the confidence of potential buyers.

Who can get a tick on Instagram

An Instagram profile can be verified by public figures who have achieved a certain level of media presence: actors, athletes, models, politicians, musicians, artists, and so on. There are verified accounts in all sorts of fields, from makeup artists and lawyers to authors of information products.

Brands that have gained some recognition and fame, at least locally, can also get a tick on Instagram.

Why they give a tick on Instagram: account requirements

There are a number of criteria that a profile must meet in order to receive a badge:

  • authenticity - the account belongs to a real person or an officially registered company;
  • uniqueness - only one account per person or brand, although accounts of the same brand in different languages are allowed;
  • completeness - the bio is filled in, and there is an avatar and at least one publication;
  • publicity - a community has formed around the person or brand, as evidenced by media mentions.

Of course, the account must comply with Instagram's terms and conditions of use.
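As a rough illustration only (this is a hypothetical pre-submission checklist, not anything Instagram exposes programmatically; applications are reviewed manually), the four criteria above could be sketched in Python:

```python
from dataclasses import dataclass


@dataclass
class Profile:
    """Hypothetical snapshot of an account, for illustration only."""
    belongs_to_real_entity: bool  # authenticity: real person or registered company
    is_unique_presence: bool      # uniqueness: one account per person/brand
    has_bio: bool                 # completeness: bio filled in
    has_avatar: bool              # completeness: avatar set
    post_count: int               # completeness: at least one publication
    media_mentions: int           # publicity: press coverage of the person/brand


def failed_criteria(profile: Profile) -> list:
    """Return the names of the verification criteria the profile does not meet."""
    failures = []
    if not profile.belongs_to_real_entity:
        failures.append("authenticity")
    if not profile.is_unique_presence:
        failures.append("uniqueness")
    if not (profile.has_bio and profile.has_avatar and profile.post_count >= 1):
        failures.append("completeness")
    if profile.media_mentions < 1:
        failures.append("publicity")
    return failures
```

Running such a checklist before applying makes it obvious which gaps (say, an empty bio or zero press mentions) are worth closing before you submit the request.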

Is it possible to buy a blue tick on Instagram

The official position of Instagram management is that accounts meeting all the requirements are verified for free, and any attempt to obtain a tick by circumventing the rules is considered fraud. If such an attempt is discovered, the page will be blocked.

However, there are people who offer to sell verification, and demand for the service is quite high. It is safer to work with agencies that operate under a contract, guarantee against removal of the badge, and take payment only after verification. Even so, you proceed at your own risk and may lose your profile if something goes wrong. The cost can also reach several thousand dollars, so weigh the pros and cons.

How to get a blue checkmark on Instagram

Go to settings, click "Account" and select "Request Verification".

Select the desired item in the settings

An application form for verifying an Instagram account will open. First step: you need to add an official document proving your identity or company registration. Options:

  • passport;
  • state-issued identity card;
  • driver's license;
  • tax documents;
  • utility bill;
  • founding documents of the company.

Your nickname will be filled in automatically in the "Username" line; below it, specify your full name.

How to get a blue tick on Instagram: fill out the verification application

Second step: confirm that the person or brand you represent is of public interest. Choose a category that characterizes your profile: media, sports, music, digital content creator, and so on. Indicate the country or region in which you are most popular.

How to get a tick on Instagram: provide as much information as possible

You can also describe your audience: who these people are, what their interests are and why they follow you. Next, list the names or titles by which the person or brand is known. Both items are optional, but we recommend that you complete them to increase your chances of getting a check mark.

Add links to publications in the media, accounts on other social networks, and other sources that prove you are interesting to people and carry a certain media weight. All information is carefully analyzed, so paid-for articles and other sponsored content can harm your reputation.

Check the specified data and send a request for verification. Within 30 days you will receive a response with approval or refusal. In the latter case, read the reasons and try again after a month.

How to increase your chances of getting a check mark on Instagram

Even if your account is 100% compliant with the requirements above, there is no guarantee that you will receive a verification badge. It is useful to take a number of actions that will increase the chances of success.

Grow your audience on Instagram

Accounts with more followers are more likely to get the tick. This is not a formal requirement, but it is a common observation. We have articles on how to increase reach and gain an audience on Instagram.

A giveaway is one of the effective ways to promote an account

Promote on other platforms

Develop accounts on YouTube, Telegram, TikTok, and other social networks, and gain followers so that Instagram staff can see cross-platform interest in you. It is even better if you can get verified on other platforms, as this contributes to a positive decision on Instagram.

Increase media exposure

Try to increase the number of mentions of yourself in the media. Participate in events, offer expert opinions on the topics you specialize in, write articles, and agree to interviews; you can even pitch yourself to journalists.

We advise you to read:

  • "What is a personal brand, how to create and promote it";
  • "Brand promotion on the Internet - how to make the company name recognizable."

Regularly create and publish unique content

A well-maintained account creates a positive impression of its owner. It is advisable to post at least a couple of times a week, and uniqueness matters: at this stage, it is better not to duplicate photos and videos from other social networks on Instagram, especially those with a TikTok watermark.

Recommended reading:

  • "How to create a content plan for Instagram. Rules, tips, examples";
  • "Ideas for posts. 50 ideas for any public and business";
  • "Educational content: how to create and use it in mailing lists";
  • "How to write an engaging Instagram post";
  • "Instagram ideas to spice up your business account";
  • "How to create a high-quality visual on Instagram."

Do not invite your audience to other social networks

All social networks compete with each other, so you should not invite subscribers to YouTube or TikTok in posts, bios, and pinned stories, at least until you get the tick. Show that you are primarily interested in growing your Instagram account. Alternatively, you can set up a multilink page and put all your links there.

SendPulse has a block constructor for quickly creating websites and multilinks. You can add buttons for messengers and social networks, buttons leading to your site and chatbots, a gallery with pictures and videos, and multi-channel subscription forms, and you can accept payments through popular payment systems. Internal statistics and Google Analytics integration are available, and you can place a Facebook pixel and add custom code.

Creating a multilink in the SendPulse constructor

A couple more tips

If you need to confirm your identity, it is better to photograph your documents next to your face so that Instagram employees have no doubts about their authenticity.

It is also desirable that your usernames on Instagram and on the other social networks you link to be identical, or as similar as possible.
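As a quick, purely illustrative check (a hypothetical helper, not an Instagram feature), Python's standard difflib can estimate how close two handles are:

```python
from difflib import SequenceMatcher


def handle_similarity(a: str, b: str) -> float:
    """Return a similarity ratio in [0, 1] for two usernames, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Identical handles (ignoring case) score 1.0; a one-character
# difference in a 14-character handle still scores above 0.9.
```

If your handles score well below 1.0 across platforms, consider renaming them to match before applying.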

Monitor the internet for fake accounts using your name and content. If you find such pages, file a complaint immediately.

Conclusion

Now you know what the Instagram tick means, who can get it, and how to apply.

Use the SendPulse platform for marketing and online sales. Our tools include a website and multilink builder; chatbots for Instagram, Telegram, WhatsApp, and Facebook; email, SMS, and Viber mailing services; a free CRM system; browser notifications; and a universal platform for creating courses.

    Facebook versus two billion people — Bird In Flight

Today, Facebook has 7,500 moderators, but even this army is not enough to keep track of the millions of posts that appear on the social network every day in a hundred languages. After interviewing two dozen employees and poring over hundreds of pages of Facebook's internal guidelines, Motherboard has published a gigantic piece on how the platform is trying to meet this challenge.

    The article is abbreviated. Read the original on the Motherboard website.

    This spring, Facebook approached leading social media experts with a proposal: Would they like to have dinner with Mark Zuckerberg? According to people who attended these informal meetings at Zuckerberg's Palo Alto home, the conversations almost always revolved around the same issue: content moderation.

Lately everyone seems to be dissatisfied with the platform. Conservatives accuse Facebook of favoring liberals, liberals of allowing white nationalism and Holocaust denial on the platform; governments and corporations are outraged by fake news and disinformation, and human rights organizations by the flourishing of harassment and live-streamed suicides. Recently it came to accusations of complicity in genocide.

    Facebook hopes that complex content moderation will help reduce tension, but this idea is incredibly difficult to implement. And most of the experts who attended Zuckerberg's dinners are sure that it is completely impossible.

At the dawn of the internet, many believed in a utopian vision: a decentralized network of hundreds of millions of sites and communities, each choosing the rules and norms comfortable for it. But corporations soon took over; personal sites and forums were replaced by Facebook, YouTube, Instagram, Reddit, and Twitter. As these companies grew, they began to hire moderators. At first the moderators simply removed illegal content (primarily child pornography), then they turned to posts that could alienate users or harm the company's reputation. “The main reason for content moderation is to protect the brand and to remove responsibility from the platform for the actions of its members,” says Associate Professor Sarah T. Roberts, who studies moderation. To understand why Facebook and Twitter need moderation, it is enough to remember that when they began to tighten the rules, many alternatives promising “complete freedom of speech” appeared. Most of these resources soon turned into cesspools full of rudeness, hatred, and Nazi slogans.

Facebook now has hundreds of rules, each with many subsections, exceptions, and gray areas. There are detailed internal guidelines on hate speech, bullying, pornography, spam, fake news, and copyright. But perhaps the hardest question for the social network is how to moderate users who are not engaged in fraud and are not undermining democracy; their communication style simply offends others.

Facebook's head of cybersecurity policy, Nathaniel Gleicher, enters a "war room" the company has set up to fight US election interference, October 17, 2018. Photo: Noah Berger / AFP / East News

A whole team works on creating the rules: it includes lawyers and experts in PR and crisis management. The rules are enforced by the 7,500 moderators. Primary content monitoring, however, is carried out by artificial intelligence: it is now very good at catching porn and spam, but it still cannot reliably recognize hateful comments. A whole staff of employees supports the army of moderators: one division writes new software for them and trains the artificial intelligence, another sets up moderation outside the United States, a third handles controversial cases, and a fourth coordinates the work of the first three ...

User content moderation is one of the most confusing and labor-intensive tasks facing the platform. Two billion users write billions of posts every day in a hundred languages. Every week, Facebook employees review more than 10 million potentially violating posts. No wonder moderators sometimes get it wrong. But even when they do everything according to the rules, users are still unhappy: they don't like the rules themselves. The problem, experts say, is that the Facebook audience is now so large and diverse that it is no longer possible to completely control its behavior online.

    Even when moderators do everything according to the rules, users are still unhappy: they don't like the rules themselves.

    For several months we studied the work of every Facebook department that deals with user-generated content: we talked with the people who draft the rules and bans, read thousands of pages of moderator instructions, spoke with the moderators themselves and met with top managers - and got a look inside processes that are usually invisible to users.

    In April of this year, Facebook made its internal content-removal standards public for the first time. All these rules grew out of years of dealing with controversial situations and are meant to make it easier for moderators to decide similar cases. If a case is complex and not covered by the rules, moderators pass it to a department created specifically for atypical cases.

    This happened, for example, with the #MeToo movement, when women began sharing personal stories of sexual violence and harassment en masse on social networks. Technically, many of those posts broke the rules, but the Facebook team reasoned that it was more important to give people a chance to speak out. So the rules constantly have to be repealed, rewritten, edited and adapted to changing circumstances.

    Disputes arise daily. When a serious problem is brewing, something like a state of emergency is declared: every department sets aside its current business to focus together on what matters most. Sometimes these periods last for weeks, but, according to one employee, they make it possible to act before a crisis erupts in full force. When the launch of Facebook Live was followed by a wave of suicide and self-harm videos, the brainstorming lasted three months. In the end the team developed tools to quickly detect disturbing content, even on live streams.

    When the launch of Facebook Live was followed by a wave of suicide and self-harm videos, the brainstorming session lasted three months.

    To respond to issues even faster, Facebook now holds forums on user-content standards twice a month. At these meetings new user-policy rules are proposed, refined and adopted, existing ones are changed and edited, and emerging problems and ways to solve them are discussed. In June we attended one of these meetings (the very fact that we were admitted is remarkable: Facebook is usually not very open about such matters). We promised not to publish the agenda, but we can at least describe how the meetings go.

    Representatives of 11 offices from around the world take part in the forum: some attend in person, others join by video chat. A working group of Facebook user-policy experts proposes a new rule or a change. Over the following weeks the rule is discussed, refined and tested (usually with outside consultants: members of non-profit organizations, academics, human rights activists) and then added to the user policy.

    A protester from the Furious Grannies group demonstrates outside the Facebook headquarters on April 5, 2018. The group demands better protection of users and their personal data. Photo: Justin Sullivan / Getty Images / AFP / East News

    Account owners constantly mock Facebook's rules and restrictions, but none of them is accidental; each has a reason behind it. Take the recent story in which Facebook invited users to upload their own “nude” pictures: a special program blocked the photos, making it impossible for anyone else to publish them. Everyone laughed at the idea, but the Facebook employees fighting revenge porn are sure that blocking a picture before it is published is far more effective than deleting it after the fact. And security experts believe the program the company built is thoroughly reliable.
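    The mechanism described above can be sketched in a few lines. This is a deliberately simplified illustration using an exact-match SHA-256 digest; the real systems used for this purpose rely on perceptual hashes (PhotoDNA-style) that survive resizing and re-encoding, and all names here are invented for the example.

```python
import hashlib

# Digests of reported images; the images themselves are never stored.
blocklist: set[str] = set()

def register_private_image(image_bytes: bytes) -> None:
    """Store only the digest of a reported image, not the image itself."""
    blocklist.add(hashlib.sha256(image_bytes).hexdigest())

def allow_upload(image_bytes: bytes) -> bool:
    """Reject an upload whose digest matches a registered image."""
    return hashlib.sha256(image_bytes).hexdigest() not in blocklist

# A user registers a private photo; later uploads of the same bytes are blocked.
register_private_image(b"\x89PNG...private-photo-bytes")
assert not allow_upload(b"\x89PNG...private-photo-bytes")  # blocked
assert allow_upload(b"\x89PNG...someone-elses-photo")      # unrelated photo passes
```

    The point of the design is visible even in this toy version: the check happens before publication, and the service only ever holds a fingerprint of the image.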

    Facebook claims that artificial intelligence now recognizes almost 100% of spam, 99.5% of terrorism-related content, 98.5% of fake accounts and 96% of "sexual content". Even among graphic images of violence, 86% are caught by AI rather than by the moderator team. According to internal statistics, for every moderation error, more than a hundred posts with prohibited content are removed before anyone sees them.

    But while AI has all but defeated porn and spam, the nuances of human vocabulary are harder for it. Besides, there are still no generally accepted standards for what counts as an expression of hatred and what does not. As a result, artificial intelligence currently recognizes only 38% of offensive posts (and even then only when they are written in English or Portuguese).

    Now artificial intelligence recognizes only 38% of offensive posts (and even then when they are written in English or Portuguese).

    …Facebook strives to make decisions about removing user content as objectively as possible, free of the human factor. The task is impossible - the rules would have to cover every potential human utterance, past and future - but the company's employees do not give up. They keep producing new handbooks of ready-made solutions for moderators. Seeing a post about migrants, for example, a moderator consults the corresponding chart, which already lists acceptable and unacceptable wording. This drive toward objectivity, incidentally, appeared along with social media: in the pre-Facebook era, moderators usually deleted content guided by their own sense of what was appropriate.

    In last year's open letter, Zuckerberg explained where the company would like to go with its user policy. The technology is not ready yet, but in the future AI will be able to infer from people's activity on the network what kinds of content they enjoy. They will then see on Facebook only what does not offend, irritate or hurt them. “Each user will choose their own content policy,” Zuckerberg wrote. “What do you find unacceptable: nudity? Violence? Blasphemy? These will be your personal settings.” But so far that remains a distant prospect.

    Dave Willner, who wrote the first standards for Facebook moderators, remembers how it all began: “When Facebook was still a college social network, there was only one moderator, and his duties were few: delete photos of members, spam and phishing. When I joined, there was already a team of moderators, but their entire instructions fit on one line: ‘Delete everything that makes you uncomfortable.’”

    In 2009, Facebook had 120 million users - and only 12 moderators. But as the social network reached countries its creators knew little about, it became necessary to define clearly what could and could not be done on the site. This became obvious in June 2009, when militants in Iran killed an opposition activist and posted the video on Facebook. The more international the platform became, the more often conflicts and difficult situations arose. “It was no longer possible to leave decisions to the moderators: moderators are all very different, and we needed a single policy,” says Willner.

    In 2009, Facebook had 120 million users - and only 12 moderators. Now there are 7,500 moderators.

    …The internal moderator instructions we obtained are rather curious. One of them prescribes banning "racism" and "white supremacy", yet for some reason "white nationalism" and "white separatism" are not banned. “White nationalism and calls for a white state are not a violation of our policy,” one of the slides says. (A Facebook spokesperson later explained that the rule was added in an attempt to protect separatist movements in general, not just "white" ones; moreover, nationalism and separatism, unlike racism, do not necessarily imply the superiority of one people over others.)

    Despite declared ambitions of a “world without borders” with the same rules for everyone, the slides show that the platform still has to take local specifics into account. At a minimum, it must comply with local laws so as not to be forced out of a market. One employee said Facebook sometimes issues country-specific manuals to give moderators "local context". In India and Pakistan, moderators have been instructed to watch for potentially illegal content (such as images of Muhammad or insults to Allah).

    According to another document that reached us, when Zuckerberg's refusal to delete Holocaust-denial posts sparked outrage in countries where such denial is illegal, Facebook blocked users with IP addresses from those countries from viewing such posts. Facebook respects local laws "when the governments of these countries actively enforce them," the document says.

    What is and is not allowed on the platform is spelled out in the instructions with incredible pedantry: what counts as hate speech and what does not, whose faces may be photoshopped with anuses and whose may not. (If the figure is public, it is usually allowed. The manual gives examples: “Photo of Taylor Swift with anuses instead of eyes: YES. Photo of Donald Trump with an anus instead of a mouth: YES. Photo of Kim Jong-un with an anus instead of a mouth and anal beads sticking out of it: NO. The exception: close-ups of the anus or the entire buttocks are not allowed - that falls under our ban on sexual content.”)

    Facebook respects local laws "when the governments of these countries actively enforce them."

    “These detailed instructions are foolproofing,” says another company employee. Moderators are rarely experts in many fields, so Facebook gives them ready-made formulas. But, of course, it is hard to squeeze all the diversity of human communication into a training manual.

    The wall of the building that houses the German office of Facebook on December 13, 2015. Photo: Bodo Marks / DPA / AFP / East News

    Sometimes Facebook's very effort to classify prohibited content leads to errors: posts that clearly violate the site's policy fall outside the classification. For example, one moderator guideline placed an anti-Semitic cartoon in the “acceptable” section - extremely offensive, hinting at Jewish involvement in the destruction of the Twin Towers, but containing not a single “stop word” from the xenophobia section. (Next to the cartoon is a note: “Ignore, because it's a conspiracy theory.” Facebook representatives have previously stated that “if a person is wrong and believes in conspiracy theories, that is not a reason to remove his posts.”)

    ...One moderator admitted in an interview that although he and his colleagues spend all day watching the latest content, he sometimes feels completely cut off from the real world. “You mechanically analyze the context without going into the content. It's nothing like watching the news.” On receiving a complaint, the moderator checks the content for "stop words" and decides whether to delete it or leave it. When deleting, he must choose a reason: does this photo violate the ban on “sexual content” or on “revenge porn”? If a post falls under several violations at once (as often happens), the moderator follows a hierarchy and deletes the post for the reason that ranks higher. Sometimes, staff say, this noticeably slows the process down: instead of simply getting rid of a post, moderators puzzle over the choice of reason.

    Instead of simply getting rid of a post, moderators puzzle over the choice of reason.
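    The hierarchy described above is easy to picture as a ranked list that is scanned top to bottom. The categories and their ordering here are illustrative assumptions for the sketch, not Facebook's actual internal list.

```python
from typing import Optional

# Hypothetical ordering: more severe reasons come first and win ties.
REASON_HIERARCHY = [
    "child_safety",
    "terrorism",
    "revenge_porn",
    "sexual_content",
    "hate_speech",
    "spam",
]

def removal_reason(violations: set) -> Optional[str]:
    """Return the single reason to log: the highest-ranked one matched."""
    for reason in REASON_HIERARCHY:
        if reason in violations:
            return reason
    return None

# A photo violating two rules at once is logged under the higher-ranked one.
assert removal_reason({"sexual_content", "revenge_porn"}) == "revenge_porn"
assert removal_reason(set()) is None  # no violation found: nothing to log
```

    Even in this toy form, the cost the moderators complain about is visible: classifying the violation takes extra work beyond simply deciding to delete.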

    To learn the rules, new moderators spend their first days working "in simulation mode": the content they delete does not actually disappear. Newcomers arrive often; few want to stay in this job for more than a year.

    ...In October 2016, a 22-year-old Turkish man turned on a live broadcast on Facebook. “No one believed me when I talked about suicide,” he said. “So look!” The man pulled out a gun and shot himself. Facebook had released Facebook Live just six months earlier - and had not anticipated the load that would fall on the moderators. Video turned out to be much harder to moderate than text and photos, especially live video.

    More recently, Facebook has been accused of complicity in the genocide in Myanmar for allowing calls to violence to spread on its platform. It turned out that the AI program responsible for detecting such calls did not recognize Burmese at all: because Myanmar was isolated for so long, its language was commonly encoded not in the Unicode standard used by the rest of the world but in a local encoding (Zawgyi) that is incompatible with Unicode.
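    The failure mode is easy to reproduce in miniature. This toy uses made-up code-point sequences, not real Burmese or Zawgyi data: the point is only that when the same visible word is stored as different code points under two encodings, a keyword filter trained on one encoding silently misses the other.

```python
# Hypothetical "banned word" as a Unicode-ordered code-point sequence.
BANNED_UNICODE = {"\u1000\u102d\u1010"}

def contains_banned(text: str) -> bool:
    """Naive stop-word filter: plain substring match against the list."""
    return any(word in text for word in BANNED_UNICODE)

# The same word posted by a Unicode user...
unicode_post = "... \u1000\u102d\u1010 ..."
# ...and by a user on the legacy encoding, which (as Zawgyi really does)
# stores visually identical text with a different code-point ordering.
legacy_post = "... \u102d\u1000\u1010 ..."

assert contains_banned(unicode_post)      # caught
assert not contains_banned(legacy_post)   # silently missed
```

    This is why a classifier trained only on Unicode-encoded Burmese could fail wholesale on content from a population that typed in the incompatible local encoding.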
