Sounds like a great idea. The court can imprison for contempt indefinitely; is there a limit? I don't think so. That sounds like a great way to get rid of someone: just write a LUKS header somewhere on his hard drive and call the cops. He will die in prison.
Then again the US courts have probably killed thousands of innocents. At least there is no death penalty for contempt of court (yet).
It doesn't strike me as a good road to go down, requiring judges to guess whether a defendant is being honest about remembering the password to an encrypted volume.
Especially considering that it's possible to be wrong about a volume being encrypted.
This is exactly what worries me, especially if the government legislates this and makes the do-you-remember issue an affirmative defense, which shifts the burden of proof onto the defendant. I recently forgot the very complicated passphrase I used on a months-old encrypted USB thumb drive. And I'm a doctor (implying that I should generally be good at remembering stuff).
In many criminal cases, months or years pass between evidence seizure and the criminal trial. It seems very easy to forget the decryption key, or to no longer have access to the 2-factor token. Lots of edge cases with very serious consequences.
But in March 2010, a federal judge in Michigan ruled that Thomas Kirschner, facing charges of receiving child pornography, would not have to give up his password. That's "protecting his invocation of his Fifth Amendment privilege against compelled self-incrimination," the court ruled (PDF).
from the article, it looks like a judge in 2010 agrees
I don't think the defendant is claiming to have forgotten the password in this case.
Nonetheless, I agree with you that every once in a while the FBI will seize a hard drive that is (a) not encrypted but filled with random-looking bytes, or (b) encrypted, but the owner has forgotten the password.
When I sell a used hard drive, I usually fill it with random data. Sometimes I do this by creating a TrueCrypt volume that takes up the entire drive. (This destroys all the data on every partition, as well as the MBR, so the drive appears unformatted to the buyer.) But suppose the FBI suspects me of downloading CP and buys my drive as part of a sting operation. They'll think my drive contains CP. After all, it has the typical signature of a TrueCrypt volume! But guess what, I wasn't intending to store anything on that drive, so I threw away the password as soon as I typed it into TrueCrypt. How do you prove that I cannot possibly remember the password? With a physical safe, at least it's possible to prove that I don't have the key in my possession.
"With a physical safe, at least it's possible to prove that I don't have the key in my possession."
On his immediate person, maybe. But you can't prove you can't get hold of it. That's the old proving-a-negative thing.
In fact, even if they search you, all it proves is that they didn't find it, not that you don't have it.
When governments start expecting us to prove we didn't do or don't have something, you might as well give up. It's a line no one should be able to cross, least of all governments.
Not that I'm very familiar with TC, but wouldn't high entropy without a header be considered a typical signature? It may not be proof, but it's at least a good indicator.
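That heuristic (high byte entropy with no recognizable header) is straightforward to automate. A toy sketch, where the 1MB chunk size and the 7.99 bits-per-byte threshold are arbitrary assumptions, not anything a real forensic tool necessarily uses:

```python
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: 8.0 means maximally random-looking."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def looks_encrypted(path: str, chunk_size: int = 1 << 20,
                    threshold: float = 7.99) -> bool:
    """Flag a file/device where every sampled chunk is near-maximum entropy.

    A drive made entirely of such chunks, with no recognizable partition
    table or filesystem header, matches the 'typical signature' described."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            if shannon_entropy(chunk) < threshold:
                return False
    return True

# Demo: random bytes look "encrypted", zeros do not.
print(shannon_entropy(os.urandom(1 << 20)))  # close to 8.0
print(shannon_entropy(b"\x00" * 1024))       # 0.0
```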
Use Bayes theorem. Out of all the hard drives owned by private citizens, how many are encrypted (E), and how many contain a significant chunk of random bits (R)?
Let's say you picked a hard drive at random and noticed a significant chunk of seemingly random bits. Is it encryption, or not? Well, given what you know, the best you can do is assign probability E/(E+R) to the drive being encrypted.
Now change the problem, where you suspect the owner of the drive may have reasons to encrypt it (suspicion of child porn fits perfectly). Random chunks are now even more suspect.
Personally, I suspect that the vast majority of seemingly random chunks of bits are in fact encrypted data (meaning E/(E+R) is quite close to 1). So while it's not proof, not a signature, and for various reasons not something we want courts to use as an argument, it's still damn strong evidence that encryption is going on.
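The estimate described above can be written down directly (the base rates below are invented purely for illustration):

```python
def p_encrypted(e: float, r: float) -> float:
    """P(encrypted | drive looks random), where e is the fraction of drives
    that are encrypted and r the fraction that merely contain random bits.
    Assumes both kinds look equally 'random' to an observer."""
    return e / (e + r)

# Hypothetical base rates: 5% of drives encrypted, 0.5% merely randomized.
print(p_encrypted(0.05, 0.005))  # ≈ 0.91: strong evidence, not proof
```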
So, you are evaluating the probability that this drive is encrypted by taking the ratio of E/(E+R), where E and R are frequencies?
This is simple probability, and does not involve Bayes Theorem at all.
But how do you determine the frequencies E and R (or their ratio)?
Perhaps you could sample the population of drives (E+R) and decide which of these are encrypted, and which are just randomized. And how to decide? Oh... you can't.
Forget about Bayes Theorem. I meant Bayesian probability theory, as described in Probability Theory: The Logic of Science by E. T. Jaynes.
My point was just that to me, noticing in a hard drive a big seemingly random chunk of bytes is very strong Bayesian evidence that the disk holds encrypted data. And if there is encrypted data, the owner of the disk is more likely than not able to access it.
> But how do you determine the frequencies E and R (or their ratio)?
Just ask people in non-adverse situations. With enough effort, that should get you a decent probability distribution over the possible frequencies of E and R.
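Under that approach (ask people in non-adverse situations), the survey counts translate into frequency estimates via a standard Bayesian update; with a uniform prior this is Laplace's rule of succession. A sketch with invented survey numbers:

```python
def beta_posterior_mean(k: int, n: int) -> float:
    """Posterior mean of an unknown frequency after observing k hits in n
    trials, starting from a uniform Beta(1, 1) prior."""
    return (k + 1) / (n + 2)

# Hypothetical survey of 1000 drive owners: 40 report an encrypted drive,
# 4 report a drive deliberately filled with random data.
e = beta_posterior_mean(40, 1000)   # ~0.041
r = beta_posterior_mean(4, 1000)    # ~0.005
print(e / (e + r))                  # ~0.89: "encrypted" is the strong favourite
```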
Well… exactly. When you learn that a given woman has taken a pregnancy test, your probability that she is in fact pregnant goes up.
However, her own estimate is not affected by taking the test: she takes the test for a reason she already knows about, so the mere act of taking it gives her no meaningful information. Only the result of the test will tell her anything.
Similarly, if we're talking about your wife, you most likely know when she has sex (because it's with you), and maybe she tells you about her period and such. In this case, merely learning that she took a pregnancy test doesn't tell you anything you don't know.
How do hidden volumes, as for example TrueCrypt provides them, play into this? If the defendant doesn't give them up voluntarily the prosecution will have a hard time proving they exist, no?
Also, if they do find evidence for their existence - does the defendant then have to give them up?
It is not that difficult to prove that a hidden volume exists. In TrueCrypt's implementation, the "hidden" volume is allocated entirely at the end of the visible volume. If you have a 20G TC volume with a 4G hidden volume, the file system in the non-hidden volume will never allocate a block beyond 16G. This shows up as a very anomalous file system layout at the block level: a simple visualization of the block allocations will show a clear delineation where the hidden partition starts. The TC implementation of hidden volumes is definitely not robust plausible deniability.
The police forensics investigators know to look for this already. It is in their recommended best practices for how to handle TrueCrypt volumes.
The safest way to use a TrueCrypt hidden volume is:
* Create the largest regular volume that you can.
* Create the smallest hidden volume that you can.
* Never mount the hidden volume as "protected"
The idea is that your sparsely populated cover volume won't create enough block allocations to have an obvious "end", and additionally, that those blocks will have a low likelihood of being allocated inside your hidden volume and overwriting your secret data.
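The "obvious end" anomaly is simple to test for if you can extract the outer filesystem's block-allocation bitmap. A sketch, where `bitmap` stands in for the per-block used/free flags a real tool would pull from filesystem metadata:

```python
def allocation_cutoff(bitmap: list[bool]) -> float:
    """Return the highest allocated block as a fraction of the volume size.

    On a well-used ordinary volume this tends toward 1.0; a value that
    plateaus well below 1.0 (e.g. 0.8 on a 20G volume hiding 4G) is the
    hard ceiling a forensic examiner would flag."""
    highest = max((i for i, used in enumerate(bitmap) if used), default=-1)
    return (highest + 1) / len(bitmap)

# 1000-block volume where nothing is ever written past block 800:
bitmap = [i < 800 and i % 3 == 0 for i in range(1000)]
print(allocation_cutoff(bitmap))  # 0.799 -- suspicious cutoff at ~80%
```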
I'm pretty sure this isn't true. You cannot prove the hidden volume exists. In fact, hidden volumes are completely useless without plausible deniability. The file system of the outer volume will happily overwrite your hidden volume if you tell it to. The point is that you know it's there, so you intentionally don't write more than 16G on your 20G volume. But the "unused" space looks just like random data, so you can't prove there is anything meaningful there.
Which you apparently didn't read thoroughly. I clearly state that the secure way to use TrueCrypt is to never mount the hidden volume in protected mode. That enables the scenario you describe. I even state that the reason you want to use it the way I suggest is to minimize the amount of hidden data that gets overwritten.
I think the point of TrueCrypt is simply "plausible deniability". They can suspect with a very high degree of confidence that there's a hidden volume, but how do they prove it?
They can prove it sufficiently to force you to hand over your password. File systems have a particular behaviour, they allocate blocks in certain ways. Most notably, they use all of the space available to them equally (or, pseudorandomly anyway). If a file system never allocates any data in the last N bytes, where N is a very large number, that is indication that the file system is treating the volume as Size-N. Since this behaviour is the signature of a hidden volume in a TrueCrypt container, that is "proof". It will be sufficient proof for a court of law.
Essentially you are arguing that the file system implementation exhibited implausible behavior (it allocated only from the first N% of bytes), and that TrueCrypt exhibited implausible behavior ("ok, normally that would mean a hidden volume, but not in this case!").
All of which is to say, that TrueCrypt's implementation of Hidden volumes (as typically used by end users) is not actually plausibly deniable.
Just because a filesystem isn't using all the space available in a partition does not mean that the rest of the space is being used by something else. Imagine I run `newfs -s 2097152 sd1p` where sd1p is actually 4194304 sectors. Now imagine sd1 is a softraid volume with a CRYPTO discipline. There's no way you can prove that the extra 2097152 sectors aren't being used, but there's also no way you can prove that they are.
That's certainly true. But it's also (from an investigation point of view) more suspicious than a filesystem covering the whole hard disk but only filled to 20%. That will always be a problem as long as the "visible" filesystem just maps blocks 1..N directly onto encrypted blocks 1+k..N+k (with k being a constant offset), as is currently the case e.g. in Linux LUKS (I assume the CRYPTO discipline in BSD is similar).
The proper solution would most likely be to integrate a kind of block mapping into the encryption software, which allocates randomly distributed blocks from the encrypted disk whenever a filesystem begins to write to a volume. This randomization algorithm would be aware of all currently active "hidden partitions", but due to the randomness, no pattern would emerge from which to draw conclusions about the existence of other partitions.
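A minimal sketch of that block-mapping idea, assuming the encryption layer keeps a per-volume mapping table (which would itself have to be stored encrypted): each logical block gets a physical block drawn uniformly from the free pool, so write positions reveal nothing about the layout of other volumes.

```python
import random

class RandomizedAllocator:
    """Toy block mapper: logical block -> randomly chosen physical block.

    The mapping table would itself live encrypted alongside its volume,
    so an observer sees only uniformly scattered writes across the disk."""

    def __init__(self, total_blocks: int, seed: int = 0):
        self.free = list(range(total_blocks))
        self.rng = random.Random(seed)
        self.mapping: dict[int, int] = {}  # logical -> physical

    def write(self, logical: int) -> int:
        if logical not in self.mapping:
            i = self.rng.randrange(len(self.free))
            # Swap-remove: O(1) draw of a uniformly random free block.
            self.free[i], self.free[-1] = self.free[-1], self.free[i]
            self.mapping[logical] = self.free.pop()
        return self.mapping[logical]

alloc = RandomizedAllocator(1_000_000, seed=42)
print([alloc.write(b) for b in range(5)])  # five scattered physical blocks
```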
"More suspicious" is meaningless. If you can't prove - with incontrovertible evidence and beyond any reasonable doubt - that there's something there, then there's plausible deniability.
It would be really nice if TrueCrypt implemented a feature whereby you could use a special password to render the secret partition unusable, perhaps by rendering your existing password/key worthless.
Something like this wouldn't really protect you from law enforcement. They perform forensic disk duplication before mucking around with a drive. If you provide a fake password to TrueCrypt and it starts overwriting things, it would be pretty obvious to anyone investigating the drive what's going on.
I'm not sure how such a feature would work or how useful it would be, for that matter. Maybe the TrueCrypt binary would attempt to decrypt the first X bytes of the partition under the "coercion" password and then check if it matches some known signature. If so, flip a bit in each encrypted block to scramble it.
Problem: forensics people can use a write-blocking adapter on the original disk and simply make copies to try the decryption on. So the feature sounds both irritating to implement and (worse) likely to give a false sense of security to a novice.
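For concreteness, the coercion-password idea from the parent comment might look something like the toy below (the `MAGIC` sentinel and the HMAC scheme are inventions for illustration, not anything TrueCrypt actually does; and, as noted, a forensic image made beforehand defeats the whole approach):

```python
import hashlib
import hmac

MAGIC = b"TCDURESS"  # hypothetical sentinel, not a real TrueCrypt structure

def check_duress(password: bytes, header: bytes) -> bool:
    """Return True if `password` is the coercion password for this header.

    Toy scheme: the header stores HMAC(coercion_password, MAGIC). A real
    design would derive keys properly and store nothing distinguishable."""
    tag = hmac.new(password, MAGIC, hashlib.sha256).digest()
    return hmac.compare_digest(tag, header)

def scramble(blocks: list[bytearray]) -> None:
    """Flip one bit in every encrypted block, corrupting it on decryption."""
    for block in blocks:
        block[0] ^= 0x01

header = hmac.new(b"duress!", MAGIC, hashlib.sha256).digest()
print(check_duress(b"duress!", header))  # True -> would trigger scramble()
```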
> they use all of the space available to them equally (or, pseudorandomly anyway)
This is simply false for many (if not most) filesystems, which preferentially write to blocks near the beginning of the disk. For spinning disks, random distribution of blocks would kill performance.
No, it is not false. The important thing with a spinning disk is locality of reference. You want the blocks which store the file content to be as close together as possible, to minimize the head seek times. This means you want as long a chain of contiguous blocks as possible. This does not mean that you want all those blocks to be at the beginning of the disk. In fact, the exact opposite. You want to start that chain at a random location so you are more likely to have a large number of contiguous unallocated blocks. See the implementation of HFS+ Extents, or Ext4, or UFS for examples of how this works.
A) You have forgotten basic physics. The beginning of the disk is faster. Locality is desirable but is not and has never been the only thing that matters.
B) You have just named three uncommon filesystems that few people will ever use in the first place, much less with TrueCrypt.
> A) You have forgotten basic physics. The beginning of the disk is faster. Locality is desirable but is not and has never been the only thing that matters.
If you haven't actually looked at the block allocation patterns of common filesystems, then you can't say conclusively that fuzzbang is incorrect. Arguments from first principles (e.g. "basic physics") cannot override empirical evidence.
Further, locality of reference will have a much, much bigger influence on spinning disk I/O throughput than location at the front of the disk. The difference between the outer rim and inner rim might be 120MB/s vs 70MB/s, so reading a contiguous 200MB file will take about 1.7x as long if it's stored at the inner rim (roughly 2.9s vs 1.7s). However, if that 200MB file is stored in 100 2MB fragments, and seeking to each fragment takes 4ms, fragmentation adds a flat 400ms no matter where the file sits; and for the smaller files that make up most workloads, that seek overhead, not rim position, is what dominates the read time.
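Working those drive parameters through directly (the 120/70 MB/s rates and 4ms seek are the assumed figures from the comment, not measurements; note that for a 200MB file the transfer times land in whole seconds):

```python
def read_time(size_mb: float, rate_mb_s: float, fragments: int,
              seek_s: float = 0.004) -> float:
    """Total read time: one seek per fragment plus pure transfer time."""
    return fragments * seek_s + size_mb / rate_mb_s

outer = read_time(200, 120, 1)         # contiguous at the outer rim: ~1.67 s
inner = read_time(200, 70, 1)          # contiguous at the inner rim: ~2.86 s
outer_frag = read_time(200, 120, 100)  # 100 fragments adds ~0.4 s of seeks
print(round(inner / outer, 2))         # ~1.71: cost of rim position alone
print(round(outer_frag - outer, 2))    # ~0.4: flat cost of fragmentation
```

Since the seek penalty is a fixed cost per fragment, the smaller the file, the larger the share of the read time that fragmentation (rather than rim position) accounts for.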
Based on my experience I'm inclined to accept fuzzbang's description of block allocation strategies. It used to be common wisdom that you could defragment a volume by copying all the data off, formatting it, then copying the data back on. I did this once with an NTFS volume (using ntfs-3g), then checked the resulting data in the Windows disk defragmenter. The data was primarily located around the center of the volume, with numerous gaps. Filesystems leave gaps to allow room for files to expand.
> B) You have just named three uncommon filesystems that few people will ever use in the first place, much less with TrueCrypt.
"Commonness" for the purposes of forensics is a much lower bar than for market analysis. I'd also wager that, servers included, there are at least as many ext2/ext3/ext4 volumes on the planet as NTFS volumes.
> If you haven't actually looked at the block allocation patterns of common filesystems
I have.
> you can't say conclusively that fuzzbang is incorrect
And I can.
I'm aware of the degree of difference in speed. It suffices that it is standard practice for filesystems to be restricted to the first 1/4-1/2 of a spinning disk in performance-sensitive applications. Or at least it was; in the last few years we've become more likely to just use SSDs or keep everything in RAM.
> if that 200MB file is stored in 100 2MB fragments
Thank you for assuming I don't even have the knowledge of a typical computer user, it greatly increases the likelihood I'll not waste further time with you. Raises it, in fact, to 100%.
> If you haven't actually looked at the block allocation patterns of common filesystems
I have.
And? What distribution of block allocation did you observe on said filesystems? Does it contradict the original supposition that filesystems spread out allocations to prevent fragmentation, thus possibly overwriting hidden data at the end of a partition?
> It is sufficient that it is standard practice for filesystems to be restricted to the first 1/4-1/2 of a spinning disk in performance-sensitive applications.
This has as much to do with seek times as sequential reading speed. A drive with 8ms average seek times might average 2ms if you only make the heads travel 25% of the width of the platter.
The fact that you have to restrict the filesystem to the beginning of the disk suggests that filesystems don't do this automatically.
> Thank you for assuming I don't even have the knowledge of a typical computer user, it greatly increases the likelihood I'll not waste further time with you. Raises it, in fact, to 100%.
I'm not sure how you got that impression. I was just providing numbers to complete the example. There's no need to become defensive; and if you find that you might be wrong, saying, "That's a fair point, I'll have to do some more research," goes a lot further than continuing to beat a dead horse.
Maybe it's the fact that this thread is on an article related to law, politics, and morality that is causing needless argumentation.
As a full-time Mac user for 8 years, and a Linux user for much of the decade before that, I am acutely aware of how common many filesystems, including HFS+, actually are.
If OS X ever breaks 10% (and still uses HFS+ at that time), I'll reconsider my judgement of its commonality.
I'm writing this from a computer with two partitions: one HFS+ running OSX and one Ext4 running Linux. These are the default options when installing OSX (Mountain Lion) and Ubuntu 12.04, respectively.
Of course you are. And if you took a poll of HN users, you might even find OS X + Linux near a majority. That has nothing at all to do with what filesystems the vast majority of people are using, much less what they'd be using with TrueCrypt.
That's how it does hidden volumes huh? I always thought it was something sneakier, like flag a handful of files that exist in the regular volume for it to build its hidden volume out of.
Well ... a normal PC can have up to 10-12 TB of storage these days. It is more than normal for 1 or 2 TB to be left empty. You just create the hidden volume there.
Hidden volumes aren't all that great for hiding porn. Generally, porn takes up a large amount of space. Hidden volumes are great for hiding a small amount of data (passwords, launch codes, bank statements, etc) among a large amount; not so great for hiding a large amount of data.
This would not be the best argument. It's possible to prove you're lying. If on the other hand you merely state you forgot the password, it's impossible to prove you're lying.
Although I have feelings about how I would like the case to be decided, I suspect that within 10 years it will be settled precedent that you can be forced to hand over passwords. I see the similarities with a safe, the only difference being that you can memorize the key. You are not actually admitting to wrongdoing by revealing the key, so it might not even be covered under the Fifth Amendment.
With a safe, the government can defeat the protection by cutting or drilling into it.
The proper analogy is that the government is welcome to get a warrant and attempt to break the protection provided by encryption. Since the defendant in a criminal case has an absolute right to remain silent they cannot force them to speak or otherwise convey any information that might incriminate them.
If the government cannot break the encryption then that is an unfortunate side effect of preserving people's rights.
1) With a warrant the government doesn't have to break the safe--the court can compel you to hand over a key.
2) In the U.S., there is no "absolute right to remain silent." The 5th amendment says: "No person shall be... compelled in any criminal case to be a witness against himself." This is a somewhat narrower protection. For example, someone can be compelled to give authorization to a third party to search property of the defendant in the possession of the third party.
How the 5th applies to encrypted volumes is still up in the air. On one hand, reciting a key could be seen as testimonial speech. On the other hand, it could be seen as analogous to handing over the key to a safe, which is not considered testimonial speech subject to the 5th amendment.
The grim reality facing our country today is that we currently have a percentage of our population behind bars that surpasses even the heights of the gulags in Stalinist Russia.
It's not that no one "should" be compelled. The constitution states that no one "can" be compelled. Huge difference. "Should" implies choice. There is no choice here. I hope the court sees this as a circumstance where the person is being compelled to be a witness against himself. I like laws that protect individual citizens (and I mean actual living, breathing entities and not that BS 'businesses are people, my friend').
So, I agree with you... Just a bit more firm and by the book. :-)
Says who? The historical practice, at least in the U.S., allows for some level of compulsion to help the prosecution. For example, there are cases (in connection with fraud and tax avoidance) holding that defendants can be forced to give written authorization to overseas banks to release financial records to prosecuting authorities.
"Banner said agents did find evidence that suggested Feldman was using a peer-to-peer program called eMule to exchange files with titles suggestive of child pornography."
Someone paranoid enough to encrypt 20tb worth of hard drives is dumb enough to use eMule to transfer highly illicit material?
Unless of course it was emule over something like Tor or I2P, which I wasn't even aware worked. If that was the case, I'm more curious about how they tracked him to his actual IP through those services, than the actual drives.
Was it the same way they got that anonymous hacker a few months back, where they camped outside his place and sniffed his wifi for the packets to start flying?