12 May

3 Big Tech CEOs, 3 ways of spinning privacy
Traditionally, developer conferences are meant to get developers fired up about building stuff on some tech company’s technology platform. These days it’s a little more complicated.
I’ve already been to two of these events this year–Facebook’s and Microsoft’s–and I watched Google’s online. In 2019 the big keynote speeches that kick off these shindigs go something like this:
- Tech company CEO talks about not being evil, especially regarding privacy.
- Company exec announces new product/feature, then talks about how it protects user privacy.
- Repeat 1 and 2.
You get the feeling these carefully crafted words aren’t just meant for the people in the room, and they’re not. They’re addressing multiple audiences.
They’re talking to Washington. Trust in tech companies has eroded sharply in the past few years, and lawmakers are looking hard at ways of legislating how Big Tech can and cannot collect and use personal data. Big Tech expects some form of privacy legislation in the style of the European GDPR, but its lobbyists are working hard in Washington to shape that legislation into something easy to comply with. Projecting a privacy-conscious image can only help. Lawmakers–like Elizabeth Warren and many others on the left–are also thinking hard about how to rein in tech companies’ outsized market power. So tech companies are trying to remind the world of the good they do, because antitrust action usually sticks when a company’s outsized power is harming consumers more than helping them.
They’re talking to Wall Street. Investors big and small want to hear about the addictive new products and features on the horizon, increases in customer numbers, and assurances that no onerous, growth-stifling privacy regulation is on the horizon.
They’re talking to developers. Yes, developers want to know how the companies are thinking about user data privacy. But developers are also concerned with making money from their apps; much of that money comes from advertising, and the targeting of those ads is guided by–you got it–personal data. At a developer conference, they’re audience number one.
They’re talking to consumers. Millions of tech-savvy consumers watch the keynote addresses of Big Tech developer conferences to get a first look at the gadgets and web services they’ll be using in the next year. But increasingly they’re listening for reassurances that their personal data won’t be mishandled or misused.
They’re talking to advertisers. At least in Facebook’s and Google’s cases, legions of marketing and advertising types are listening to find out the future of their go-to platforms for reaching customers. They too are dancing delicately, if far less publicly, around personal privacy, and they want to know what privacy concessions the companies might make that could affect the potency of the targeting data, and the industry’s data practices as a whole.
Facebook’s Mark Zuckerberg, Google’s Sundar Pichai, and Microsoft’s Satya Nadella approached the privacy issue in somewhat different ways, but they all shrewdly addressed the same audiences.
Facebook’s privacy turn
At Facebook’s F8, CEO Mark Zuckerberg argued that Facebook’s world-uniting power is temporarily being overshadowed by its harmful byproducts–like its enabling of mass incivility, election tampering, and large-scale erosion of personal privacy. (Zuckerberg’s F8 joke that his company doesn’t have a great record on privacy fell painfully flat–perhaps the most poignant demonstration of the CEO’s disconnect from real life.) He then talked in sincere tones about the changes his company would make to rebuild itself around private interactions between members, as opposed to open and social ones.
“I believe the future is private,” Zuckerberg said. “We should have private messaging, groups, payments, and private ways to share location–the private parts of our social network will be more important than our digital town squares.”
It sounded sincere. But while Zuckerberg said ephemeral (disappearing) messages were part of Facebook’s overall privacy vision, he didn’t say when–or if–Messenger, Instagram Direct, or WhatsApp would support them. WhatsApp already supports end-to-end encryption, and Zuckerberg said Messenger and Instagram Direct would get it too, but never said when. He was similarly vague in 2018 when he announced Clear History, which has never been heard from again. Same story with the Data Transfer Project portability feature. More than a week after the event, it all seems a bit, well, ephemeral.
Facebook announced a redesign to its core Facebook app that seems to play up the Groups feature. This appears to be the follow-through on the announcement the company made in 2018 in the wake of the Cambridge Analytica data scandal that it would put “meaningful content” from friends, family, and groups front and center. But there’s no significant proof Facebook users are flocking over to Groups from the open social network.
And there’s the rub. Zuckerberg’s task at F8 may have been to placate regulators on privacy while at the same time reassure Wall Street that no fundamental changes were coming to its core advertising business, which targets ads based on users’ personal data.
You would be forgiven for thinking that all the privacy talk at F8 was a carefully coordinated PR exercise. If Facebook were really moving to a private messaging business, it would need a new ad model, and the stock market would have reacted to the implied uncertainty and risk. It didn’t. Facebook’s open social network will be around for a long time, and so will its current booming ad business. A move to messaging would also require a whole new security model. When I asked Facebook AI executives how they planned to detect malicious content–like harassment, extremism, or propaganda–on an encrypted channel, they had no clear answer. That suggests to me they haven’t thought about it much, because private, encrypted channels aren’t where Facebook’s business is.
Google’s “helpful” approach
Facebook, of course, participates in a global advertising duopoly with Google. Google invented the idea of targeting ads based on personal data collected within the free services it also offers. Google has managed this not-very-virtuous-but-not-quite-evil cycle far better than Facebook. Google is proof that that model can work if the tech company is open and honest about it. Google has not been perfect, but it’s been far less shady about how its business really works than Facebook has. The company proved this yet again at its I/O conference in Mountain View last week.
CEO Sundar Pichai began his talk by reminding everybody of the company’s mission statement, which is “to organize the world’s information and make it universally accessible and useful.” Why? Google’s angle is that your relationship with the company is a transactional one. You give up some of your personal data so Google can target ads, but what you get in return is more valuable than your loss of privacy.
Pichai didn’t say that flat out, but rather focused on clearly demonstrating the value users get from Google. Then he and his executives rattled off a long list of features and new products, and most would agree it was impressive. The features, like the added capabilities in Assistant, seem like the result of a tech company listening well to the wants and needs of people, not just productizing some gee-whiz technology.
He also made the point that some of the data Google gathers directly improves the usefulness of its products. In Maps, for instance, if the app knows where your home is, it can tell you how long it’ll take to get through the traffic on the way there.
Pichai also announced some new privacy measures, like new ways to limit how advertisers track users in the Chrome browser, and some new ways of controlling or removing data from Google’s graph. Only after these features are released will we know how significant they are, but there are some important caveats: blocking web cookies, as Chrome already allows users to do, doesn’t stop Google from tracking users across its services; and removing data from Google won’t necessarily delete the inferences the company has already made about you. And, as others have pointed out, as long as these features remain opt-in, most users won’t take advantage of them.
Microsoft pushes privacy and safe elections
At Microsoft’s Build developer conference last week, Satya Nadella’s keynote, more so than Zuckerberg’s and Pichai’s, seemed aimed at developers. But make no mistake that Microsoft knew that Washington, Wall Street, and others were listening, too. While Microsoft has so far largely escaped tough scrutiny over privacy issues, two minutes into his remarks Nadella repeated his “privacy is a human right” mantra, and proceeded to talk about privacy as a sort of pact between platform companies and developers.
“As engineers, we need to truly incorporate [trust] in the design process, in the tooling around how we build things, so when we think about privacy and the fact that privacy is a human right, it’s as much of an engineering design principle as an engineering process issue,” he said. “Same thing with cybersecurity. Same thing with AI ethics.”
One announcement was surely meant for Washington’s ears. Nadella announced a new open-source election infrastructure project Microsoft is building with Portland, Oregon-based election technology developer Free & Fair. The two are developing homomorphic encryption that will help ensure “real transparency and verifiability” in voting systems, Nadella said.
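Nadella didn’t detail the cryptography, but the property that makes homomorphic encryption attractive for voting can be sketched in a few lines. The following is a toy Paillier-style scheme–tiny primes, illustration only, nothing like a real deployment–showing that encrypted ballots can be combined into an encrypted tally without ever decrypting an individual vote.

```python
# Toy Paillier cryptosystem -- illustration of homomorphic tallying only.
# Real systems use ~1024-bit primes plus zero-knowledge proofs; this is not secure.
from math import gcd

p, q = 17, 19                    # toy primes
n = p * q                        # public modulus
n2 = n * n
g = n + 1                        # standard generator choice for Paillier
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), the private key
mu = pow(lam, -1, n)             # decryption helper (valid because g = n + 1)

def encrypt(m, r):
    # Enc(m) = g^m * r^n mod n^2, with randomness r coprime to n
    assert 0 <= m < n and gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts inside.
ballots = [encrypt(1, 5), encrypt(0, 7), encrypt(1, 11)]   # yes, no, yes
tally = 1
for b in ballots:
    tally = (tally * b) % n2     # combine encrypted ballots
print(decrypt(tally))            # 2 -- the yes count, with no ballot opened
```

This is the sense in which such systems offer “real transparency and verifiability”: anyone can recompute the encrypted tally from the published ballots, while only the tally–never an individual vote–is ever decrypted.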
Next up: Apple
Apple’s developer conference is coming up in June. The company will get the last word. Apple has been in the enviable position of being able to bash with impunity the heads of its Big Tech rivals who make their bread on data collection and ad targeting. Apple has said repeatedly that a user’s personal data should be between the user and their device, and not Apple’s servers.
But that secure device isn’t cheap, as Google’s Pichai appeared to point out in a thinly-veiled jab at Apple: “For us, that means privacy cannot be a luxury good offered only to people who can afford to buy premium products and services,” he wrote in a New York Times op-ed. “We think privacy is for everyone.” (Pichai also announced a new initiative to train AI models directly on users’ devices, to minimize or avoid sending users’ personal data to the cloud.)
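One technique that fits Pichai’s description is federated learning, which Google has published work on. Here’s a minimal, hypothetical sketch of federated averaging–all names and numbers invented for illustration: each simulated device takes a training step on its own private data, and a server averages only the resulting weights, so raw data never leaves the device.

```python
# Toy sketch of federated averaging (FedAvg) -- hypothetical illustration,
# not Google's actual on-device training stack.
import random

random.seed(0)

def local_step(w, data, lr=0.1):
    # One gradient step of 1-D linear regression (y = w * x) on one device's data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

# Three "devices", each holding private noisy samples of the relation y = 3x.
devices = [[(x, 3 * x + random.gauss(0, 0.1)) for x in (1, 2, 3)]
           for _ in range(3)]

w_global = 0.0
for _ in range(50):
    # Each device trains locally from the shared global weight...
    local_ws = [local_step(w_global, data) for data in devices]
    # ...and the server averages the weights, never seeing any raw (x, y) pairs.
    w_global = sum(local_ws) / len(local_ws)

print(round(w_global, 2))   # converges near 3.0, the true slope
```

The privacy argument is in the data flow: the server sees only averaged model weights, while each device’s samples stay on the device–which is the “minimize or avoid sending users’ personal data to the cloud” idea in miniature.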
But in another sense, Apple makes the highly addictive devices that run the third-party apps that collect personal data. How far is Apple willing to go to upend the “surveillance economy”? That’s a question the company’s leadership team–and its PR, marketing, and government relations people–are likely wrestling with right now as they prepare the words its executives will tell developers–and everybody else–at its Worldwide Developers Conference on June 3rd.
Of course, the keynote messaging is also meant for the ears of journalists. I worry that because the privacy messaging comes wrapped in a developer conference setting, the tech media is a little too ready to report it at face value, whether or not the company’s “privacy vision” is backed up by real product development.
Big Tech’s careful messaging on privacy won’t end with this year’s developer conference season. The current reckoning on the issue was many years in the making. What remains to be seen is how far society wants to go in punishing tech companies for their data privacy sins. It’s likely we’ll get a stronger set of privacy laws, eventually. But our willingness to break up companies or fundamentally change how they operate will be tempered by our dependence on free services like Gmail and Facebook. Today’s “tech-lash” anger may fade back into tacit consent if Big Tech can convince us–and lawmakers, Wall Street, advertisers, and others–that the data-for-services bargain we make is still basically worth it.
Source: Fast Company