In practice, ‘showing car ads to people who read about cars’ led the adtech industry to build vast piles of semi-random personal data, aggregated, disaggregated, traded, passed around and sometimes just lost, partly because it could and partly because that appeared to be the only way to do it. After half a decade of backlash, there are now a bunch of projects trying to get to the same underlying advertiser aims - to show ads that are relevant, and get some measure of ad effectiveness - while keeping the private data private.
Apple has pursued a very clear theory that analysis and tracking is private if it happens on your device and is not private if it leaves your device or happens in the cloud. Hence, it’s built a complex system of tracking and analysis on your iPhone, but is adamant that this is private because the data stays on the device. People have seemed to accept this (so far - or perhaps they just haven’t noticed it), but acting on the same theory Apple also created a CSAM scanning system that it thought was entirely private - ‘it only happens on your device!’ - that created a huge privacy backlash, because a bunch of other people think that if your phone is scanning your photos, that isn’t ‘private’ at all. So is ‘on device’ private or not? What’s the rule? What if Apple tried the same model for ‘private’ ads in Safari? How will the public take FLoC? I don’t think we know.
On / off device is one test, but another and much broader is first party / third party: the idea it’s OK for a website to track what you do on that website but not OK for adtech companies to track you across many different websites. This is the core of the cookie question.
At this point one answer is to cut across all these questions and say that what really matters is whether you disclose whatever you’re doing and get consent. Steve Jobs liked this argument. But in practice, as we've discovered, ‘get consent’ means endless cookie pop-ups full of endless incomprehensible questions that no normal consumer should be expected to understand, and that just train people to click ‘stop bothering me’. […] Perhaps ‘consent’ is not a complete solution after all.
If you can only analyse behaviour within one site but not across many sites, or it becomes much harder to do that, then companies that have a big site where people spend lots of time have better targeting information and make more money from advertising. And if you can track behaviour across lots of different sites only by doing it ‘privately’ on the device or in the browser, then the companies that control the device or the browser have much more control over that advertising.
I think this captures the complexity of privacy in practice. “Protecting privacy” sounds good, but what exactly do we mean by “privacy,” and at what threshold do we consider it protected? Who gets to enforce and control those standards, and how?
Using the internet shouldn’t be so complicated. I want to read an article / buy a thing / watch a video without being tracked or pulled into a web of convoluted and shady data pipelines.
Not only are ads present on every physical surface you can imagine, they are also on every digital surface. Websites have ads, the most popular apps in the world are made by advertising companies, and many of the accounts people follow are walking billboards.
I don’t think ads = bad; I appreciate the art, craft, and work that goes into a good advertisement. But I feel cynical about the size and omnipresence of the ad industry, which is a staggering $600 billion globally. That kind of weight skews the priorities of the entire economy toward attention as a commodity, incentivizing superficiality and spam in the decision-making and objectives of businesses everywhere. What’s that money doing for ordinary people?