I was at Facebook in 2012, during the previous presidential race. The fact that Facebook could easily throw the election by selectively showing a Get Out the Vote reminder in certain counties of a swing state, for example, was a running joke.
Converting Facebook data into money is harder than it sounds, mostly because the vast bulk of your user data is worthless. Turns out your blotto-drunk party pics and flirty co-worker messages have no commercial value whatsoever.
But occasionally, if used very cleverly, with lots of machine-learning iteration and systematic trial and error, the canny marketer can find just the right admixture of age, geography, time of day, and music or film tastes that demarcates a demographic winner of an audience. The “clickthrough rate”, to use the advertiser’s parlance, doesn’t lie.
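The trial-and-error loop described above amounts to comparing clickthrough rates across candidate audience segments and keeping the winner. A minimal sketch, with entirely made-up segments and numbers (none of this reflects Facebook's actual systems):

```python
# Hypothetical sketch: try audience segments, measure clickthrough
# rate (CTR = clicks / impressions), keep the best combination.
# All segments and counts below are illustrative, not real data.

# (clicks, impressions) observed per (age band, region, time of day)
observed = {
    ("18-24", "OH", "evening"): (120, 4000),
    ("18-24", "FL", "morning"): (60, 5000),
    ("25-34", "OH", "morning"): (75, 3000),
    ("25-34", "FL", "evening"): (150, 6000),
}

def ctr(clicks, impressions):
    """Clickthrough rate; zero if the segment got no impressions."""
    return clicks / impressions if impressions else 0.0

# The "demographic winner": the segment with the highest CTR.
best = max(observed, key=lambda seg: ctr(*observed[seg]))
print(best, round(ctr(*observed[best]), 4))  # → ('18-24', 'OH', 'evening') 0.03
```

In practice the search space is vastly larger and the selection is automated, but the objective is the same: the clickthrough rate decides which segment wins.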
Without seeing the leaked documents, which were reportedly based around a pitch Facebook made to a bank, it is impossible to know precisely what the platform was offering advertisers. There’s nothing in the trade I know of that targets ads at emotions. But Facebook has and does offer “psychometric”-type targeting, where the goal is to define a subset of the marketing audience that an advertiser thinks is particularly susceptible to their message.
And knowing the Facebook sales playbook, I cannot imagine the company would have concocted such a pitch about teenage emotions without the final hook: “and this is how you execute this on the Facebook ads platform”. Why else would they be making the pitch?
The question is not whether this can be done. It is whether Facebook should apply a moral filter to these decisions. Let’s assume Facebook does target ads at depressed teens. My reaction? So what. Sometimes data behaves unethically.