Facebook’s entire approach to AI is unsafe for advertisers
Brands don’t want to sponsor hate speech and disinformation. Facebook says its automated filtering is almost good enough to weed out toxic content, as it already does with nudity. Then everyone can feel good about advertising on Facebook again.
That’s the narrative Facebook wants. It’s not a true story.
The content moderator is actually one of two Facebook robots advertisers should worry about. The other decides whose ad appears where, when, and at what cost. Though this second AI is the one that drives Facebook’s business, we hear much less about it. And the more you think about the ad-placing robot, the less likely it seems that a smarter machine will make Facebook safe for brands. In fact, the whole idea of a monolithic ad-placement machine should be suspect.
To automate every placement decision, every campaign from every Facebook advertiser has to be optimized to the same small set of user actions. That’s what makes the business frictionless: every choice of what advertisers sponsor is made for them through the same process. It’s this, more than content moderation, that makes Facebook a risky place to advertise. Conflicts of interest between advertisers and Facebook’s decisioning will persist after content moderation improves. Here are three reasons why:
1. Facebook drives excessive advertising to existing customers. Its learning is subordinate to a simple attribution model that severely underweights persuasion and trust-building. The placement robot ignores how brands grow. Instead, it greets customers at the door of your shop and claims credit for bringing them there. There is no Facebook option to use its AI to steal market share from your rivals. Facebook can’t customize what it does.
2. Automatic optimization to action creates an incentive to provoke, which poisons the media environment. The Covid-19 crisis makes this clearer than ever. Even a perfect disinformation filter could not have elevated accurate information above the noise on Facebook as the virus spread. Moderation will never offset the entire effect of rewarding content producers for upsetting people.
3. The ad machine devours every brand’s data assets. Facebook predicts user action by pooling interaction data across competitors. Such extreme sharing kills the incentive to innovate for new-customer acquisition: Any inventions would become the property of Facebook. So, instead, all the surplus created by predictive ad-placement accrues to Facebook, and advertisers feel locked in. More rebellion against this state of affairs is inevitable.
Zuckerberg thinks he can weather this storm. But brands’ questions are evolving from “When will your AI be able to detect Nazis?” to “When will your AI be able to distinguish between corrosive and productive attention to my brand?” The boycott shows brands just starting to realize that Facebook is counting on never having to make distinctions like that at all.