Unlike CIB, which is typically designed to mislead people about who is behind an operation in order to manipulate public debate for a strategic goal, IB primarily centers on amplifying content and increasing its distribution, and is often financially motivated. This isn’t new — people have been using inauthentic techniques to make money since long before the internet. Recently, we’ve seen malicious actors around the world use US political and social issues to drive people to off-platform domains filled with ads or offering merchandise for sale. We’ve also seen them use similar tactics to exploit prominent topics in other regions around the world. At first glance, these can be mistaken for politically motivated influence operations, when in fact they come from malicious actors who simply use political themes as spam or clickbait lures. Today’s report shares examples of this behavior we’ve found in recent months and how we handled them.
Because some of the content shared by IB actors isn’t itself violating, we take action based on the deceptive behavior we see on our platform. In this report, we share the range of enforcement levers we use – from warnings, to reducing the distribution of content, to removing IB actors from our platform. The report also describes how our policies have evolved over time to stay ahead of changing deceptive behaviors. We expect these tactics to continue to adapt, and so will we.
See the detailed Inauthentic Behavior Report for more information.