If you are not typing in a passphrase or plugging in a device containing a key to unlock your disk, then the secret exists somewhere else, and chances are that secret is available to others. The root issue is that the user is not made clearly aware of where the secret is stored and which third parties have access to it, or could reasonably gain access to it.
These sorts of things should be very unsurprising to the people who depend on them...
>“I’m currently working on a mounting plate that’s 4.5 by 8 in. that needs a 40 mm bore 1/2 -in. deep located 75 mm from the edge with m10 tapped holes and two 1/4-20 set screws tangential to the bore. Please kill me.”
When I was designing stuff here in Canada, that was basically Wednesday. One big advantage of the USA withdrawing from trade is that Canada will have the opportunity to finally complete the metric conversion.
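To put numbers on the mixing, here are the quoted imperial callouts converted to metric (the 40 mm bore, 75 mm location, and M10 holes are already metric); just a throwaway conversion script, nothing more:

    # Converting the quoted imperial callouts to metric (1 in = 25.4 mm).
    MM_PER_IN = 25.4

    plate_mm = (4.5 * MM_PER_IN, 8 * MM_PER_IN)  # 114.3 mm x 203.2 mm
    bore_depth_mm = 0.5 * MM_PER_IN              # 12.7 mm
    setscrew_major_mm = 0.25 * MM_PER_IN         # 1/4-20 major dia: 6.35 mm
    setscrew_pitch_mm = MM_PER_IN / 20           # 20 TPI -> 1.27 mm pitch
    print(plate_mm, bore_depth_mm, setscrew_major_mm, setscrew_pitch_mm)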
If the user can get immediate access to older messages, then those messages will normally be available on a confiscated phone too. That's why things like Signal have you set a retention period. A retention period of zero (the message is gone when it scrolls off the screen) is safest.
If you want to protect older messages you can have the user enter a passphrase when they are in a physically safe situation. But that is only really practical for media like email. Good for organizing the protest but perhaps not so great at the protest.
If it is an actual legitimate interest then you would likely be expected to contact the site out of band to object to the use of your data. Depending on the technical details you might not be able to continue using the site after a successful objection. In some cases the site might be able to reject your request.
The cookie banner thing is intended to allow the user to explicitly provide consent, should they for some reason wish to do so.
The cookie banners are routinely used to object to "legitimate interest" uses, and the corresponding sites continue to work normally. I am not sure what your alternate understanding is based on.
The cookie banners are for initial consent. You just consent to less stuff sometimes.
A website might claim some sort of legitimate interest for the initial collection of data but might not think that they can claim that for the retention of data I suppose. That would seem kind of dodgy to me...
Just because a website claims something doesn't mean it is valid. There isn't a lot that falls under legitimate interest for a website.
What you state is provably wrong. Consent and objection to legitimate interest are two different things under the GDPR, and they are managed separately in privacy banners:
Navigate to a website of your choice [1]. Let's assume its privacy banner is served by OneTrust.
The text at the top of their "Privacy Center" says, verbatim, "We share this information with our partners on the basis of consent and legitimate interest. You may exercise your right to consent or object to a legitimate interest".
If you then unfold "Manage Consent Preferences" you will notice that you can, _separately_, provide your consent for a given purpose by sliding its switch to the right, and, _at the same time_, "Object to Legitimate Interests" by clicking the button so labeled.
Of course, this is a dark pattern to make it as cumbersome as possible to object to Legitimate Interest purposes.
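As a minimal sketch of the distinction (all names hypothetical, not any banner vendor's actual API): consent and a legitimate-interest objection are two independent flags per purpose, and either legal basis alone decides whether processing is allowed:

    # Hypothetical model: consent and LI objection tracked separately,
    # mirroring the two separate controls in the banner.
    from dataclasses import dataclass

    @dataclass
    class PurposeState:
        claims_legitimate_interest: bool  # site claims LI for this purpose
        consent_given: bool = False       # the opt-in switch
        li_objected: bool = False         # the "Object to Legitimate Interests" button

        def processing_allowed(self) -> bool:
            # Lawful if the user consented, or if the site claims LI
            # and the user has not objected.
            if self.consent_given:
                return True
            return self.claims_legitimate_interest and not self.li_objected

    # The two controls act independently: objecting to LI neither grants
    # nor revokes consent, and vice versa.
    ads = PurposeState(claims_legitimate_interest=True)
    ads.li_objected = True
    assert not ads.processing_allowed()  # objection blocks the LI basis
    ads.consent_given = True
    assert ads.processing_allowed()      # consent is a separate basis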
This seems like a fairly reasonable assumption for the Briar case. An adversary would have to get a user to accept a Briar link in error. Contrast with things like Signal[1] that base their trust on phone numbers.
Note that Briar groups are controlled by a single moderator; other participants cannot add members. Note also that Briar has the concept of introductions, so it is easy to avoid making the sort of identity errors commonly made by other schemes.
Basically it is because Forth programs are fairly flat and don't go deep into subfunctions, so the interpreter overhead is not that great and the processor spends most of its time running the machine code that underlies the primitives at the bottom of the program.
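A toy illustration of the point (ordinary Python standing in for a real Forth's machine-code primitives):

    # Minimal threaded Forth-style inner interpreter (illustrative only).
    # Each word is either a primitive (native code in a real Forth) or a
    # list of other words. A flat program like SQUARED spends nearly all
    # its time inside the primitives; the "next word" fetch in execute()
    # is the only interpreter overhead.
    stack = []

    def dup(): stack.append(stack[-1])
    def mul(): b, a = stack.pop(), stack.pop(); stack.append(a * b)
    def dot(): print(stack.pop())

    def execute(word):
        if callable(word):      # primitive: runs straight through
            word()
        else:                   # colon definition: thread through sub-words
            for w in word:
                execute(w)

    SQUARED = [dup, mul]        # : SQUARED DUP * ;
    stack.append(7)
    execute(SQUARED + [dot])    # prints 49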
Back in my day the local telephone company used waxed lacing cord for that sort of thing[1]. These days it seems that polypropylene string is popular (search on "conduit pull string").
You basically want something that is slippery and tends not to get stuck. I have used Dacron fishing line, but that is mostly because I had a bunch of it lying around.
For me the solution to subpixel issues with tiny fonts is to not do subpixels. I have been looking at bitmapped fonts my whole life, so I don't see any reason to give up all that perceptual context.
Which is why I ended up buying a replacement 24" 1920x1080 monitor recently. I needed pixels large enough to distinguish with my fave small font. If I want more pixels on the screen at once I need a larger screen.
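The arithmetic behind that choice: a 24" 1920x1080 panel works out to roughly 92 PPI, about 0.28 mm per pixel:

    # Rough pixel-density arithmetic for a 24" 1920x1080 panel.
    from math import hypot

    diag_px = hypot(1920, 1080)   # ~2202.9 pixels along the diagonal
    ppi = diag_px / 24            # ~91.8 pixels per inch
    px_size_mm = 25.4 / ppi       # ~0.277 mm per pixel
    print(f"{ppi:.1f} PPI, {px_size_mm:.3f} mm/pixel")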
I think the two cases are different. The EFAIL researchers were suggesting that the PGP code (whatever the implementation) should throw an error on an MDC integrity failure and then stop. The idea was that this would fix EFAIL in a failsafe way: the modified message would never be passed on to the rest of the system, so it could not reach the HTML interpreter.
In the gpg.fail case the researchers suggested that GPG should, instead of returning the actual message structure error (a compression error in their case), return an MDC integrity error. I am not entirely clear why they thought this would help. I am also not sure whether they intended all message structure errors to be remapped this way or just that single error. A message structure error means that all bets are off, so in a sense it is more serious than an MDC integrity error. The suggestion therefore seems to downgrade the seriousness of the error. Again, I am not sure how that would help.
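To make the two suggestions concrete, here is a sketch of the fail-closed shape the EFAIL researchers wanted. This is hypothetical pseudocode with made-up names, not GnuPG's actual code:

    # Hypothetical sketch, not GnuPG's code. On an MDC integrity failure,
    # raise and return nothing, so modified plaintext can never reach the
    # rest of the system (and thus never an HTML interpreter).
    class MDCError(Exception): pass
    class StructureError(Exception): pass  # e.g. a compression error

    def decrypt(ciphertext, do_decrypt, mdc_ok, parse):
        plaintext = do_decrypt(ciphertext)  # hypothetical callables
        if not mdc_ok(plaintext):
            raise MDCError("message was modified")  # fail closed: no output
        return parse(plaintext)  # may raise StructureError; the gpg.fail
                                 # suggestion would remap that to MDCError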
In both cases the researchers entirely ignored regular PGP authentication. You know, the thing that is specifically intended to address these sorts of attacks. The MDC was added as an afterthought to support anonymous messages. I have come to suspect that people are actually thinking in terms of how more popular systems like TLS work, so I recently wrote an article based on that idea:
It has occurred to me that the GnuPG people may be unfairly criticized precisely because of their greater understanding of how PGP actually works. They have been doing this stuff forever; presumably they are quite aware of the tradeoffs.