That phrasing raises my weasel-word hackles… First of all, it's unclear what it would even mean to "use your photos and videos for advertising." That sounds to me like reprinting your photos to advertise something, which nobody accused them of doing.
Perhaps more importantly, it mentions only the photos and videos themselves in relation to advertising. Analyzing the photos (as per the demo in TFA) isn't "advertising," and neither is building a user profile from them.
Then later on, when they use that user profile to let others advertise to the user, that's not "using your photos or videos for advertising" either. Nor is it "selling your personal information to anyone," since what they're selling is access to you, not specific personal dossiers.
From where I'm sitting, that still seems to leave the door open to Google itself using what it gleans from your photos to build out your profile, applying those insights across the whole company, and targeting ads at you. It also seems to leave the door open to selling "depersonalized" analyses to third parties, not to mention giving free access to whomever it sees fit (research groups, state actors, …), no?
There's also a big difference between "doesn't" and "will never." Once an analysis with value exists, it seems counter to the forces of nature and commerce for it not to find its way out eventually, just as the consumer DNA-sequencing firms pinky-swore everything was private and then gradually started spreading people's genomes farther and wider.
It's as weaselly as the wording where they say things like "we use your data to improve our services, e.g. personalised advertising. To opt out of personalised advertising […]"
It feels just as weaselly to me when, through confidence-inspiring "plain language," firms manage to give the impression that they're making Solemn Categorical Pledges forswearing the bad behavior that made users nervous, while preserving almost the entire substance of that bad behavior.
Google seems especially invested in that kind of stunt. Remember their “ad privacy” consent screens for Chrome—which, ridiculously, framed consent to additional tracking as a “privacy” measure? (https://news.ycombinator.com/item?id=37427227; Aug 2023 / 974 points / 557 comments)
More to the point, when Google sought approval to buy DoubleClick, they testified before Congress that they would not merge information gleaned from your use of Google services with your advertising profile.
If their CEO's congressional testimony on this point isn't considered binding at Google, verbiage on their website certainly isn't.
> To assuage concerns, Google told Congress and the FTC that it would not combine the user data it got from assets in search, e-mail, or GPS maps with information from DoubleClick about which consumers visited which publications. And so, the acquisition was greenlighted. Ten years later, though, Google did not hesitate to break its promise.
Google has been caught multiple times violating both its own rules and the law in order to use all the information it has on you for advertising purposes.
The only opt-out is to stop using their services.
Somehow, neither Google, nor Microsoft, nor Samsung, nor (probably) any other big tech company, can usefully extract data from photos anymore. Face recognition in particular works like one of those Shabbat-compatible appliances: something gets extracted at some point, eventually, but infrequently, and only when you're not looking - and, most importantly, it's not possible for you to control or advise the process. The AI processing runs autonomously in such a way that you may start doubting whether it's happening at all!
I assume this is the vendors' workaround for GDPR and similar laws in relevant jurisdictions, but it also makes face search/grouping nearly useless. Don't get me wrong - I'm very much with the EU on data protection and privacy - but being gaslit by the apps themselves about the extent of, and reasons for, the ML limitations in those apps is just annoying.
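Speculating a bit: that "only when you're not looking" behavior is exactly what you get when the processing is deferred to a constrained background job. Here's a minimal sketch of how that gating typically looks on Android with WorkManager - the FaceClusteringWorker class, its body, and the job name are hypothetical placeholders; the constraint and scheduling APIs are real:

    import android.content.Context
    import androidx.work.Constraints
    import androidx.work.ExistingPeriodicWorkPolicy
    import androidx.work.NetworkType
    import androidx.work.PeriodicWorkRequestBuilder
    import androidx.work.WorkManager
    import androidx.work.Worker
    import androidx.work.WorkerParameters
    import java.util.concurrent.TimeUnit

    // Hypothetical stand-in for whatever on-device face-clustering
    // pass a gallery app runs; the actual work is elided.
    class FaceClusteringWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
        override fun doWork(): Result {
            // ... scan new photos, compute embeddings, update clusters ...
            return Result.success()
        }
    }

    fun scheduleFaceClustering(context: Context) {
        // Run only while the device is idle, charging, and on an
        // unmetered network, i.e. overnight, when nobody is looking.
        val constraints = Constraints.Builder()
            .setRequiresDeviceIdle(true)
            .setRequiresCharging(true)
            .setRequiredNetworkType(NetworkType.UNMETERED)
            .build()

        val request = PeriodicWorkRequestBuilder<FaceClusteringWorker>(24L, TimeUnit.HOURS)
            .setConstraints(constraints)
            .build()

        // No UI surface exposes this job: the user can't trigger it,
        // pause it, or see when (or whether) it last ran.
        WorkManager.getInstance(context).enqueueUniquePeriodicWork(
            "face_clustering",               // hypothetical unique job name
            ExistingPeriodicWorkPolicy.KEEP, // keep any existing schedule
            request
        )
    }

From the outside, a job gated like this is indistinguishable from a feature that's broken or missing, which would explain the gaslit feeling.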