I'm continually amazed and worried about what Google knows about me, to the point where I use DuckDuckGo half the time now. For example, I was just watching a car video on my TV (Roku YouTube app) and it had a random interstitial for a "Ridge wallet". It didn't even appear to be an official ad; it was just part of the video. So I pause it, grab my iPad, and type "ri", and sure enough "ridge wallet" is the first suggestion. Useful and powerful? No doubt. Creepy and uncomfortable? Immensely.
When I worked for an advertising network (a little guy compared to the duopoly of Google/Facebook), we had a request from an advertising agency for: Women, 25-35, Latino, in the greater Los Angeles area, who have 2 children and are expecting their 3rd child. Creepy is an understatement.
The other part of the industry which was extremely upsetting was the amount of racism. Ad agencies will make blacklists of domains and/or keywords where they don't want any of their ads to run, regardless of campaign. Many entries on the list relate to terrorism, crime, etc., but some are directly about race: keywords like “black”, “ebony”, “asian”. It is depressing to say the least.
It is hard to say whether something like that is racism or laziness. There are laws around advertising jobs and real estate when it comes to race. I can totally see small agencies not having the legal expertise to determine whether an ad complies with the law and just blanket-banning anything related to race.
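To be concrete about the laziness theory: as far as I've seen, these lists often boil down to naive keyword matching against page text (a hypothetical sketch below; real ad-serving stacks vary), so a single entry suppresses ads on any page that merely mentions the word, regardless of context:

    # Hypothetical sketch of a crude brand-safety blocklist and how it over-blocks.
    # Naive substring matching suppresses ads on ANY page containing a listed word.
    BLOCKLIST = {"terrorism", "crime", "black", "ebony", "asian"}  # as described above

    def ads_allowed(page_text: str) -> bool:
        text = page_text.lower()
        return not any(word in text for word in BLOCKLIST)

    print(ads_allowed("Local crime rates fall for fifth year"))        # False
    print(ads_allowed("Interview with a Black small-business owner"))  # False: over-blocked
    print(ads_allowed("Ten tips for minimalist wallets"))              # True

With matching that crude, one lazy entry like "black" demonetizes entire communities' content wholesale.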
Oof, I don't think the world becomes a better place by shoving people into buckets. It also seems self-defeating: they're constantly reducing their audience. I'm a 37-year-old white male, but maybe I'm interested; how do they know? I've had to watch this horrible Ford commercial on YouTube about 200 times, frequently like five times per video. It wouldn't hurt if they showed me ANYTHING else.
Nothing in the Constitution says you're required to socialize or do business with every class of people. I don't know where people get these ideas from.
Acts are not part of the Constitution; they're part of the US Code. I'm not even sure that forces you to do this (outside of certain situations like public schooling, housing, and employment). In fact, the constitutionality of that is questionable.
That's a particularly interesting example, because the Ridge wallet is (supposedly) a tool for improving privacy, but your experience suggests that the process used to sell it is hardly respectful of privacy. I've also seen the ads, and I'm damn sure they're coming to me through "surveillance advertising", as OP calls it. Fear-based advertising that exacerbates the underlying cause of that (possibly quite legitimate) fear seems to be an accelerating trend.
Well, it's troubling because I don't want strangers watching what I do, which I think is normal. Saving me seven characters of typing isn't worth having an unaccountable corporation tracking every detail of my life.
How would I rank it? I wouldn't. I've never had trouble typing two words. I do have trouble when my words are creepily predicted. If you can't tell, my point is it's creepy and unnecessary.
And by the way, I don't think this is harmless. I never plan to, but what if I applied to Google? They'd have all my emails and two decades of searches. I don't have much to hide, but fuck that, I'd never apply.
DuckDuckGo sells you out to Microsoft (as reported here recently); time to find a SearXNG instance you like (this one isn't bad: https://searx.be/) or try out Brave Search.
I think this article is a must-read for any and every person who has participated in surveillance advertising.
So much of our present internet metaverse is based on these dogmas that any valid challenge to them should be taken seriously. And like many dogmas, people who rely on these as "obvious truths" will be unable to examine them. Read the comments: shoot the messenger, shoot the message and we can't do anything about it anyway.
The fix is to have laws that recognize that data is extremely valuable personal property, that stealing it is a criminal act and that the predatory "buying" of it (such as offering a $ service in exchange for $$$$$$ worth of data) should be highly regulated.
Maybe amend and strengthen the Uniform Deceptive Trade Practices Act and then make it Federal instead of just some states?
Of course this kind of lawmaking has gone out of fashion and was only around for a couple decades so I'm not holding my breath.
I prefer my data to be treated like my organs. I can still donate them and receive transplants, but they aren't for sale, and no one should profit from them.
The great thing is that we can always build a new boat. Google's and Facebook's business model is not more important than the individual right to privacy, and if Google and Facebook cannot peacefully co-exist with the individual right to privacy then Google and Facebook must simply not exist.
I'm still digesting this and might yet decide that I disagree with some parts, but as a vehicle to provoke thought it's excellent. Reexamining assumptions (note: not conclusions based on data) from time to time is often beneficial, and OP provides a lucid framework for doing so within this space.
The people who make this tech have homes and communities. There used to be a time when, if you did things that harmed the common people, you weren't welcome to attend meetups or clubs, or to associate with mainstream society.
Edit: Not sure how you immediately jump to comparisons to "pogroms" but I suppose there is always someone who goes there.
I fear that if/when the masses wake up and start anti-tech pogroms, they won't be discriminating. Such mob actions are rarely precisely targeted; they'll come for programmers regardless of our specializations.
She made several good points, but good luck transforming these into laws. First, it would be extremely difficult even to formulate them (how do you fight all possible dark patterns with laws?), and second, there is enormous pressure from the industry not to regulate these things, backed by both good and bad arguments.
"The right to privacy shall not be infringed" bill would be a good start. She can formulate it, Schumer can demand to vote on it. The bill can include a provision on sharing data without an ink-signed, up to 1 year long, contract.
>It's not infringement if you have to do some paperwork and pay a tax first.
>It affects interstate commerce even if it doesn't involve an entity in another state directly.
I laud the attempt, but unfortunately you are going to have to get much more explicit.
The right to personal privacy shall not be infringed by any actor. No business that engages in a circumvention of this prohibition shall be considered to be operating in good faith unless X, Y, and Z tests are passed.
And in reality, you will see businesses structured so that tests X, Y, and Z always technically pass. So you'll need an additional clause.
Next, you'll end up with a Supreme Court challenge on free speech grounds, as not monetizing customer data will be seen as compelled speech by the government.
I felt the same way. She seems well-meaning but any battle against the major tech companies is probably a losing effort.
Off topic, but I feel frustrated by my friends’ and family’s almost complete lack of interest in privacy. Most express a preference for privacy but don’t want to do the work.
Why do you say that I have given up? I use ProtonMail for most of my personal e-mail (I just switched back to using it), I use private browsing tabs most of the time, I often delete all web data and cookies from my browser, etc.
I have also read Surveillance Capitalism, and I am almost done with Privacy is Power.
EDIT: I reread your comment and mine: sure, elites+tech companies will “win”, but that is no reason not to push back.
> The views expressed in these remarks are my own and do not necessarily reflect the views of the Federal Trade Commission or any other commissioner.
Is that even a valid disclaimer when it's published directly on ftc.gov?
> [One dogma] I would like to challenge: that we can solve for data abuses by providing consumers with more transparency and control—in other words, more notice and choice.
This is a bit of a non-argument. Look at the EU: GDPR etc. works. So much so that some US websites will not serve the EU because they're not compliant. A clear indication that companies are influenced.
Also, I don't think she understands how privacy is supposed to work.
Further to that, most companies are refusing to comply with the spirit of GDPR and pushing the burden of transparency onto the user with those dreadful cookie UX journeys. The intent is clear: to promote negative opinion of what is actually pretty good, customer-centric legislation.
Regardless of whether her views represent the official FTC position, I thought this challenge to the dogmas of surveillance capitalism by Rebecca Kelly Slaughter was on the money, and there is widespread sentiment in the US population that supports her ideas.
The first section, on "Not just privacy", echoes my own thoughts. Privacy is too broad and ill-defined a term. Labelling some things as "privacy issues" reduces them to relative personal morals and sweeps other harms under the rug. She properly terms these broader effects - which touch on civil liberties, freedom of movement, harms to competition, misinformation and exploitation - "data abuses", the same language I've used in Digital Vegan and in Ethics for Hackers, so I am really happy to see this gaining usage. Overall, though, I still think a useful term is "Digital Dignity", because many of the harms done are hard to formulate yet seem natural affronts to personal dignity. Later she touches on the body dysmorphia, suicides and other mental health issues plaguing teenage girls as a result of digital exploitation, but does not fully link them to the business models of Facebook etc.
In the second section, "Notice and Choice Is Not the Answer", Commissioner Slaughter nicely sums up why simply telling someone that you are going to harm them, when they have no effective choice or capacity to extricate themselves, merely adds insult to injury. It is certainly no excuse or useful legal mechanism. I have long maintained that "meaningful choice" is absent, since most ordinary people, even the most intelligent and well-educated amongst us, effectively lack the capacity to consent on complex technical issues. Onerous contracts leveraging general ignorance abound in the digital world.
In a way, section three is really her conclusion, albeit an obvious one, that "Minimization is a Better Model", but, possibly because the scope of the FTC is limited, it fails to identify the broader problem of how abusive technologies are foisted onto people via other agencies, such as schools, governments and medical services. A more mature analysis would blame not only commercial data collection but also the overall societal normalisation of risky behaviours and poor data hygiene. That said, I feel she gets the deeper point in saying: "It should not be necessary to trade one’s data away as the cost of full participation in society and the modern information economy."
There is much more to this position statement, which asserts that "The FTC Can Lead the Way Forward on Data Minimalism", but whether that is possible, or whether technical counter-surveillance and other kinds of Digital Self-Defence will be needed to beat back surveillance capitalism, remains to be seen. In a sense the problem is that the FTC and similar organisations may see themselves as omnipotent, or over-rate the effects of regulation. What I think is missing, therefore, is a clear statement from bodies like the FTC that, where they exercise equity of power, they will no longer impede citizens' digital self-defence measures, including reverse engineering, cracking and hacking DRM and keys, or any kind of hacking so long as it is clearly in pursuit of protecting one's data, privacy and digital dignity. That would mean dissolving things like the DMCA, of course. It may be that the USA, being a country that upholds a strong tradition of allowing people to defend themselves and their property, might lead the way on this, in a different way from Europe.
They are made to look like they work. In practice, it's unenforceable, and backup systems still don't have any easy way to delete an individual's records, mostly because what gets backed up is the whole database, and you'd have to restore it in order to clear the individual's data.
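There is a commonly cited workaround for the backup problem, crypto-shredding (a hypothetical sketch below, not something GDPR itself prescribes): encrypt each person's records under a per-user key kept outside the backup set, and "delete" them by destroying the key, so the ciphertext sitting in every old backup becomes permanently unreadable.

    # Crypto-shredding sketch (hypothetical; uses the `cryptography` package).
    # Backups contain only ciphertext; per-user keys live outside the backup set.
    from cryptography.fernet import Fernet

    key_store = {}     # e.g. a KMS, deliberately excluded from backups
    encrypted_db = {}  # this is what gets backed up

    def store_record(user_id, record):
        if user_id not in key_store:
            key_store[user_id] = Fernet.generate_key()
        f = Fernet(key_store[user_id])
        encrypted_db.setdefault(user_id, []).append(f.encrypt(record))

    def erase_user(user_id):
        # "Erasure": destroying the key makes every copy of this user's
        # ciphertext, including copies in old backups, unreadable forever.
        del key_store[user_id]

    store_record("alice", b"expecting her 3rd child")
    erase_user("alice")
    # Any attempt to decrypt alice's records now fails: the key is gone.

It never touches the backups at all, which is the whole point; whether regulators accept key destruction as erasure is a separate question.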
Secondly, nobody has to wipe data if it's being used for law enforcement or scientific purposes. Those are two loaded parameters, but look at what Wikipedia says, because it's hard to get access to case law and the legislation search facilities in various countries are absolutely despicable: https://en.wikipedia.org/wiki/Law_enforcement
"Law enforcement is the activity of some members of government who act in an organized manner to enforce the law by discovering, deterring, rehabilitating, or punishing people who violate the rules and norms governing that society."
"Discovering" means hacking people's computer systems, as Edward Snowden highlighted, so the security services will take everything they can find, and when they don't have the tools, they have the search engines to fall back on, via court orders if need be!
The other point is that anyone who is not the police can make records and say they are for law enforcement purposes as well! Neighbourhood Watch is a voluntary law enforcement scheme engaged in by the public. It's even listed on Wikipedia in the UK law enforcement index: https://en.wikipedia.org/wiki/Category:Law_enforcement_in_th...
The other parameter is scientific purposes; again, what exactly counts as a scientific purpose?
Is training an AI scientific? Would all the tech companies, Facebook, MS, Apple and Google et al., be justified in retaining all and any data they can get their hands on in order to further develop AGI? They would be, but recently the UK data commissioner seemed to ignore the law and fined Clearview AI, a US-based facial recognition company, £7.5 million for collecting 20 billion images of people.
https://www.dailymail.co.uk/news/article-10845123/Orwellian-...
Now, the fact that UK police authorities are reportedly using their services would suggest the police are happy with what Clearview AI has done, no criminality whatsoever, but the data commissioner thinks otherwise. The other misdirection is that the fine is not that much money. Whilst it will effect a behaviour change in Clearview AI, that behaviour change may well include putting up prices, because the data commissioner gave them a free advert which police and law enforcement agencies elsewhere in the world will sit up and notice. As they are US-based, any UK enforcement notices are just hot air. This, IMO, is nothing more than an advert and a way to make Clearview AI charge more.
Some of the Northwest Passage is classed as international waters. The US Coast Guard using it instead of Panama (usage fees) gave Canada the reason to take the US to an international court, and whilst getting a fine levied on the US Coast Guard, Canada also happened to get the Northwest Passage recognised as Canadian waters so that it could start charging fees for ships using it! It also advertised the waterways as open for business because of the lack of Arctic ice; ironically, Russia also helped Canada out here by recognising the Canadian waters, so there is now an alternative to using the Panama Canal for some journeys.
If you are a soccer player, this is kicking the ball out of play for the other team.
GDPR hasn't magically fixed everything, but people are actually thinking about privacy now, which is the first step to achieving anything at all, no matter the exact route.
Scrupulous companies really do think about compliance (they usually miss something because they're not tech people, so they forget about backups, as you mention).
The financial sector has the largest amount of compliance to contend with, but even with all that compliance, history shows there are always loopholes and interpretations which need to be tested in court. What is also common is that big business doesn't advertise the testing of boundaries; they just go ahead and do it.
Or look at the big tech companies running their affairs through offshore tax havens. I think you would have to be naive to think that scrupulous companies exist; their primary objective is existence and profit-making, which can involve some risk-taking.
Imagine you're out in town and you decide to go home, so you start walking to your car. Suddenly a guy comes up and blocks your path while rambling about how he cares about your privacy. He says he won't let you through until you tell him to go away or ask to learn more. By telling him to go away, you just consented to him putting a GPS tracker on your car and looking through your windows at night.
Anyway, GDPR in its current state is broken and needs to be updated.
I suspect those pop-ups aren't really compliant, but it will probably take years for a test case. "Go away/confirm my choices" should mean "I don't consent".
Yeah, telling that guy to go away is obviously not giving consent to be tracked.
The problem is mostly impatience with how slowly the data protection agencies are putting a stop to these violating popups, especially the Irish DPA, which has jurisdiction over the big FAANG companies. This slowness isn't too surprising given that the industry developed to where it is under the eye of the DPAs in the first place, so they aren't exactly packed with folk who have shown initiative...
>By telling him to go away, you just consented to him putting a GPS tracker on your car and looking through your windows at night.
That is 100% against GDPR. The button to oppose data processing should be as big and accessible as the one to accept it. Closing the prompt does not equate to giving consent.
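In code terms, a compliant prompt is default-deny. Here's a minimal sketch (hypothetical, and not legal advice) of the rule: only an affirmative act counts as consent, and every other outcome, including closing the prompt, must be treated as refusal.

    # Hypothetical sketch: GDPR-style consent must be an explicit opt-in.
    from enum import Enum

    class PromptResult(Enum):
        ACCEPTED = 1   # user clicked an explicit "accept" button
        REJECTED = 2   # user clicked an equally prominent "reject" button
        DISMISSED = 3  # user closed the prompt without choosing

    def may_process_personal_data(result):
        # Only an affirmative act grants consent; dismissal defaults to "no".
        return result is PromptResult.ACCEPTED

    assert may_process_personal_data(PromptResult.ACCEPTED)
    assert not may_process_personal_data(PromptResult.DISMISSED)

The GPS-tracker guy in the story above maps dismissal to ACCEPTED, which is exactly what the regulation forbids.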
I'm sure not all drivers stopped at stop signs immediately when they were introduced, and I still see people not stopping at them. I still believe they're good regulation, and the more people know they make roads safer, the more they respect the signs. Same with seat belts.