My threat model assumes you want an attacker advantage of less than 2^-64 once 2^64 keys have been fingerprinted, and that collisions are included in your threat model.
If I remember correctly, cloud providers assess multi-user security by assuming 2^40 users, each of which will have 2^50 keys over the service's lifetime.
If you round that assumption down to 2^34 users with at most 100 public keys each on average (for a total of about 2^41 user-keys), you can get away with an advantage of 2^-41 after 2^41 keys at about 123 bits, which for simplicity you can round up to the next whole byte and arrive at 128 bits.
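For what it's worth, here is a back-of-envelope sketch of that arithmetic. The function name is mine, and I use the slightly conservative bound n^2 / 2^b rather than the usual birthday approximation n^2 / 2^(b+1), which would shave off one bit:

    import math

    # Conservative birthday estimate: after n = 2^log2_keys random b-bit
    # fingerprints, the collision probability is at most about n^2 / 2^b, so
    # keeping it below 2^-log2_advantage needs b >= 2*log2_keys + log2_advantage.
    def min_fingerprint_bits(log2_keys: int, log2_advantage: int) -> int:
        return 2 * log2_keys + log2_advantage

    print(min_fingerprint_bits(64, 64))                      # 192 for the 2^64 / 2^-64 case
    print(min_fingerprint_bits(41, 41))                      # 123 for the 2^41 / 2^-41 case
    print(math.ceil(min_fingerprint_bits(41, 41) / 8) * 8)   # 128 after rounding up to whole bytes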
The other thing to keep in mind is how large the keys in scope are. If you have 4096-bit RSA keys and your fingerprints are only 64 bits, then by the pigeonhole principle we expect about 2^4032 distinct public keys to share any given fingerprint. The actual spacing between fingerprints will be random, but you can approximate it to be on the order of 2^32.
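Sketching that arithmetic too (I am reading the 2^32 figure as the square root of the 64-bit fingerprint space, i.e. the point where birthday collisions become likely; that reading is my assumption):

    # Counting argument for 4096-bit keys hashed down to 64-bit fingerprints.
    # This treats every 4096-bit string as a possible key, which overcounts
    # real RSA public keys but is good enough for the pigeonhole point.
    preimages_per_fingerprint = 2**4096 // 2**64
    print(preimages_per_fingerprint == 2**4032)   # True: ~2^4032 keys share each fingerprint

    # With only 64 fingerprint bits, a collision among randomly generated keys
    # is expected after roughly sqrt(2^64) = 2^32 of them.
    print(2**(64 // 2))                           # 4294967296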
In all honesty, fingerprints are probably a poor mechanism.
OK, to be clear, I am specifically contending that a key fingerprint does not include collisions. My proof is empirical: no one has come up with an attack on 64-bit PGP key fingerprints.
Collisions mean that an attacker can generate two or more messaging identities with the same fingerprint. How would that help them?
Sorry that my (perhaps poor) wording caused you to waste your time producing colliding 64-bit PGP key IDs. I should have used the term "threat model". We were discussing how long key fingerprints should be. My point was that even though 64-bit key IDs are trivially collidable (see the sketch after this comment), there did not seem to be any practical attacks based on that. So, in a sense, you provided support for my argument. :) So we can skip directly to your proposed attack...
I have to admit that I don't actually understand it. First, the attacker gets some kernel devs to sign key1, the first of the two keys with colliding key IDs. Why? How does that help the attacker? Then, I am guessing, the attacker signs some software with key1. Are the signatures important here? Then the attacker signs the malicious software with key2? Key2 isn't signed by any developers, so if that was important, the attack fails. If it wasn't important, then why mention it?
Could you please provide a more detailed description of the attack? It seems to me that the sort of attack you are describing would require tricking some trusted third party, like a TLS certificate authority, for example.
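As an aside on the "trivially collidable" point above, here is a toy birthday search against a truncated hash. It truncates SHA-256 to 32 bits so it finishes in seconds; the same approach at 64 bits is roughly 2^32 work, and producing colliding PGP key IDs additionally requires crafting real key material, which this sketch does not attempt:

    import hashlib

    def truncated_tag(data: bytes, bits: int = 32) -> int:
        # Stand-in for a short identifier: the top `bits` bits of SHA-256.
        digest = hashlib.sha256(data).digest()
        return int.from_bytes(digest, "big") >> (256 - bits)

    seen = {}      # tag -> first input that produced it
    counter = 0
    while True:
        candidate = counter.to_bytes(8, "big")
        tag = truncated_tag(candidate)
        if tag in seen:
            print("collision after", counter + 1, "inputs:",
                  seen[tag].hex(), "vs", candidate.hex())
            break
        seen[tag] = candidate
        counter += 1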
Why so high? Computers are fast and massively parallel these days. If a cryptosystem relies entirely on fingerprints, a second preimage of someone’s fingerprint for which the attacker knows the private key (or which belongs to a cleverly corrupted key pair) catastrophically breaks security for the victim. Let’s make this astronomically unlikely even when there are many potential victims.
And it’s not like 256-bit hashes are expensive.
(I’m not holding my breath on fully quantum attacks using Grover’s algorithm, at high throughput, against billions of users, so we can probably wait a while before 256 bits feels uncomfortably short.)
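For scale on the "astronomically unlikely" point above: with N potential victims and Q attacker hash evaluations against b-bit fingerprints, the multi-target second-preimage success probability is roughly N*Q / 2^b. The concrete N and Q below are illustrative choices of mine, not numbers from this thread:

    # log2 of the multi-target second-preimage success probability ~ N*Q / 2^b,
    # for N = 2^log2_targets victims, Q = 2^log2_queries attacker hash
    # evaluations, and b-bit fingerprints.
    def log2_success(log2_targets: int, log2_queries: int, bits: int) -> int:
        return log2_targets + log2_queries - bits

    print(log2_success(33, 80, 128))   # -15: ~2^-15, not comfortably negligible
    print(log2_success(33, 80, 256))   # -143: astronomically unlikely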
A key fingerprint is a usability feature. It has no other purpose; otherwise we would just use the public key itself. Key fingerprints have to be kept as short as possible. So the question is: how short can that be? I would argue that 256-bit key fingerprints are not really usable.
The Signal messenger uses 100 bits for its key fingerprint. It combines two of them into a 60-digit decimal number (the safety number). Increasing that to 2 x 256 bits would mean ending up with about 154 decimal digits. That would be completely unusable.
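Checking those digit counts (just the bits-to-digits conversion; the rounding is mine):

    import math

    def decimal_digits(total_bits: int) -> float:
        # n bits correspond to about n * log10(2) decimal digits.
        return total_bits * math.log10(2)

    print(round(decimal_digits(2 * 100), 1))   # ~60.2 -> Signal's 60-digit safety number
    print(round(decimal_digits(2 * 256), 1))   # ~154.1 -> the ~154 digits above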
I was asked about the minimum value, and gave my explanation for why some values could be considered the minimum. By all means, use 256-bit fingerprints.
But to the point, how long should something like a key fingerprint be?