The Wednesday Stack: Truth in News Implies Editors, Not Robots

Stack logo by Hilary Allison

One thing the Facebook data fiasco has highlighted is just how clueless Senators and Congressmen — and many consumers too — are about Facebook's business model, and about the whole process of aggregating, processing, and activating consumer data. Some journalists seem to struggle too. Here's Andrew Marantz of the esteemed New Yorker in the current issue:

Zuckerberg should make some clear commitments: to protect Facebook's users from microtargeted propaganda; to use his algorithms to promote truth over reckless sensationalism; to prevent bad actors from using his tools to sow discord and bigotry.

Sure, and he should stop kicking puppies, I guess. But let's break that down. The third component is surely desirable, and it's hard to believe that Facebook doesn't have — or can't acquire — sentiment analysis tools which might at least take a first cut at identifying bigotry and hate speech. That would be a good move, if it were followed up by serious consequences for the guilty accounts, and not just the weak temporary suspensions Google/YouTube thinks appropriate for grossly offensive behavior.
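For the curious, here's roughly what a "first cut" plus real consequences might look like. This is a toy Python sketch, not anything Facebook actually runs: the flag list, thresholds, and escalation policy are all invented for illustration, and a production system would lean on a trained model rather than keyword matching.

```python
# A minimal sketch of a "first cut" moderation pipeline: an automated
# screen flags likely hate speech, and flagged accounts face escalating
# consequences rather than token suspensions. The flag terms, thresholds,
# and policy below are illustrative assumptions, not any platform's
# actual system.

from dataclasses import dataclass

FLAG_TERMS = {"slur_a", "slur_b"}  # placeholder tokens, not a real lexicon


@dataclass
class Account:
    name: str
    strikes: int = 0
    banned: bool = False


def first_cut_flag(post: str) -> bool:
    """Crude lexical screen; a real system would use a trained classifier."""
    tokens = post.lower().split()
    return any(t in FLAG_TERMS for t in tokens)


def enforce(account: Account, post: str) -> str:
    """Escalating consequences: warn, suspend, then ban outright."""
    if account.banned:
        return "already banned"
    if not first_cut_flag(post):
        return "ok"
    account.strikes += 1
    if account.strikes == 1:
        return "warning"
    if account.strikes == 2:
        return "temporary suspension"
    account.banned = True
    return "permanent ban"


if __name__ == "__main__":
    troll = Account("troll42")
    for post in ["hello world", "slur_a rant", "more slur_b", "slur_a again"]:
        print(post, "->", enforce(troll, post))
```

The point of the sketch is the second half: the classifier is the easy part, and the policy attached to it is where platforms keep going soft.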

The first suggestion is perhaps a little unclear. What Facebook is supremely good at is micro-targeting users — specifically with advertising (and it does so, of course, using its own internal data sets, which are not — contrary to widespread assumptions — available for purchase by brands). But Marantz presumably means micro-targeting by others (like Cambridge Analytica). Fine, but the challenge is to distinguish between "propaganda" and content which is welcome and relevant to the users it's aimed at (which might well include what you, I, and Marantz would identify as "propaganda").

But it's that second suggestion which is the real head-scratcher. Using "algorithms" to promote (and, first of all, I guess, identify) "truth." Good luck with that. As I argued recently, one thing which distinguishes human beings, and maybe chimpanzees too, from the most sophisticated AI engines, is that we can move around the world and, for the most part, see what's true and what isn't. Machines can only respond to input; the smart thing about AI is that it trains itself to respond better.

But while AI can certainly train itself to identify stories about politicians and pizza parlors, and at scale, what it can't do — except with close human supervision — is tell which ones are true and which ones aren't. Maybe I'm biased, but you're going to need editors to address that problem. One serious suggestion: crowdsourcing. There are lies on Wikipedia, but there's also an active community trying to spot and eradicate them.
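To make the crowdsourcing idea concrete, here's another toy sketch: reviewers vote on claims, votes are weighted by each reviewer's track record, and anything the crowd doesn't trust goes to human editors rather than being adjudicated by the machine. Every name and threshold here is an assumption for illustration; Wikipedia's actual processes are editorial, not algorithmic.

```python
# A toy model of crowdsourced fact triage: reputation-weighted community
# votes produce a trust score per claim, and low-scoring claims are
# queued for human editors. The threshold and reputations are invented
# for illustration.

REVIEW_THRESHOLD = 0.5  # assumed cutoff below which editors take a look


def weighted_verdict(votes: list[tuple[float, bool]]) -> float:
    """votes: (reviewer_reputation, judged_true). Returns a 0..1 trust score."""
    total = sum(rep for rep, _ in votes)
    if total == 0:
        return 0.5  # no signal: neither trusted nor flagged
    return sum(rep for rep, ok in votes if ok) / total


def triage(claims: dict[str, list[tuple[float, bool]]]) -> list[str]:
    """Return the claims whose community score warrants editor review."""
    return [c for c, v in claims.items()
            if weighted_verdict(v) < REVIEW_THRESHOLD]


if __name__ == "__main__":
    claims = {
        "politician visited pizza parlor basement": [(0.9, False), (0.8, False), (0.1, True)],
        "company reported Q1 earnings": [(0.7, True), (0.6, True)],
    }
    for claim in triage(claims):
        print("send to editors:", claim)
```

Note what the machine does and doesn't do here: it aggregates human judgments and routes the doubtful cases, but the verdict on truth still comes from people.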

**********

Of course, those specific challenges don't arise if you simply hand out the beverage in an enclosed space. Note, then, the partnership between Cargo, the in-car commerce platform for the world of rideshare, and Coca-Cola, no less. Rideshare drivers using Cargo will be empowered to offer the brand's beverages (and snacks) for sale; checkout happens via Cargo's digital menu on a smartphone. I am thinking in-car commerce has its limits when it comes to passengers' patience, but who can turn down a refreshing beverage? I am also now thinking about some of the very strange, not to say illicit, things cab drivers have tried to sell me over the years.

**********

Nowhere left to go but the latest news:

**********

Finally, if you're at Aptos Engage today, so are we. Say hi to Amy Onorato if you see her.