
From what I understand, this would allow somebody with reasonable resources, e.g. a high-end GPU, to crack these captchas easily. You could increase the difficulty of the PoW, but then you just end up in a resource race.
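To make the resource-race point concrete, here is a minimal hashcash-style PoW sketch (illustrative only, not the submission's actual scheme). The only knob is the difficulty, and raising it slows humans and GPU farms down by the same factor:

    import hashlib
    import secrets

    def solve(challenge: bytes, difficulty_bits: int) -> int:
        # Brute-force a suffix so SHA-256(challenge || suffix) has
        # `difficulty_bits` leading zero bits. Expected work: 2^difficulty_bits hashes.
        target = 1 << (256 - difficulty_bits)
        suffix = 0
        while True:
            digest = hashlib.sha256(challenge + suffix.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return suffix
            suffix += 1

    def verify(challenge: bytes, suffix: int, difficulty_bits: int) -> bool:
        digest = hashlib.sha256(challenge + suffix.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    challenge = secrets.token_bytes(16)
    answer = solve(challenge, 20)   # ~1M hashes on a laptop CPU; background noise for a GPU farm
    assert verify(challenge, answer, 20)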

Ideally you want an attacker to burn as much computation as possible, while your normal clients spend very little on average. I believe you would want a multi-stage approach that involves some human interaction, which computers fail statistically more often, and that keeps them busy for a long time doing pointless math.

The reason it would have to be multi-staged is that you don't want the attacker doing the computing to realize they are solving a pointless list of problems with no hope of getting a token. A normal user might on average do a -> b -> c, whereas you would make an attacking machine do a -> b -> c -> d -> e -> f and then not get a token, for example. (The stage counts would have to be set up statistically so as not to encourage the attacker to bail early; a rough sketch of that follows.)
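A hypothetical sketch of that statistical setup, with made-up names and numbers: the server draws the stage count from overlapping distributions, so a long chain looks like bad luck rather than a tell that no token is coming.

    import random

    def chain_length(human_score: float, rng: random.Random) -> int:
        # human_score: server-side estimate in [0, 1] that the client is human.
        # Suspected bots get longer chains (and ultimately no token), but the
        # distributions overlap, so a client can't infer its fate from length alone.
        if human_score >= 0.5:
            return rng.choices([2, 3, 4], weights=[2, 5, 3])[0]
        return rng.choices([3, 4, 5, 6, 7], weights=[1, 3, 3, 2, 1])[0]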

I think I would make it a somewhat RAM-heavy problem, which limits how many of these problems can be solved in parallel. You would obviously set up the problem in such a way that networking is minimal.

For example, you could send a variation of a Langton's Ant map, pre-seeded with random patches on a sufficiently large grid, tell the client to run it for X time steps, and then have it return the state of some specific region.
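A minimal, standard-library sketch of that kind of challenge, assuming the server pre-seeds a large grid and asks for a small window of the final state back (the grid size, step count, and window here are placeholders, not tuned numbers):

    import random

    def run_ant(grid, size, steps):
        # grid: flat bytearray of size*size cells; 0 = white, 1 = black.
        r, c, heading = size // 2, size // 2, 0      # heading: 0=N, 1=E, 2=S, 3=W
        moves = ((-1, 0), (0, 1), (1, 0), (0, -1))
        for _ in range(steps):
            idx = r * size + c
            if grid[idx]:                            # black cell: turn left
                heading = (heading - 1) % 4
            else:                                    # white cell: turn right
                heading = (heading + 1) % 4
            grid[idx] ^= 1                           # flip the cell, then move forward
            dr, dc = moves[heading]
            r, c = (r + dr) % size, (c + dc) % size  # wrap at the edges
        return grid

    size = 4096                                      # 16 MB of cells; a real challenge could go bigger
    rng = random.Random(42)                          # the server would pick a fresh seed per challenge
    grid = bytearray(1 if rng.random() < 0.01 else 0 for _ in range(size * size))
    run_ant(grid, size, 1_000_000)                   # X time steps = the work factor
    window = [grid[r * size + c] for r in range(2048, 2080) for c in range(2048, 2080)]
    # `window` (a 32x32 patch of the final state) is what the client would send back

One wrinkle: the server has to run (or precompute) the same simulation to verify the answer, so per-challenge results would likely need to be prepared ahead of time.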



The term for this is memory-hard functions. RandomX [0] is one such example, where GPU parallelism does not net an attacker a large advantage over CPUs.

[0]: https://github.com/tevador/RandomX
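For illustration, a toy sketch of the general shape of such a function (in the spirit of scrypt/RandomX, but not any of their actual algorithms): fill a large buffer sequentially, then do many data-dependent reads back into it, so the whole buffer has to stay resident and cheap shortcuts disappear.

    import hashlib

    def memory_hard(seed: bytes, blocks: int = 1 << 20, rounds: int = 1 << 20) -> bytes:
        # Phase 1: sequential fill -- 2^20 blocks of 32 bytes, ~32 MB of hash output
        # (more with Python object overhead).
        buf = []
        block = hashlib.sha256(seed).digest()
        for _ in range(blocks):
            buf.append(block)
            block = hashlib.sha256(block).digest()

        # Phase 2: data-dependent mixing -- each read index depends on the current
        # state, so accesses can't be predicted, prefetched, or split across workers
        # without keeping the whole buffer around.
        state = block
        for _ in range(rounds):
            idx = int.from_bytes(state[:8], "big") % blocks
            state = hashlib.sha256(state + buf[idx]).digest()
        return state

    tag = memory_hard(b"server-issued challenge nonce")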


Thinking about it, this could be the way forward. Memory offers several natural bottlenecks (rough numbers follow the list):

1. Memory size - RAM is still somewhat costly, with most entry-level laptops stuck around 8 GB.

2. Access bandwidth - Moving data between the CPU and RAM takes time; improvements are incremental and fundamentally limited.

3. Thread contention - Threads compete for the same bandwidth as long as cache hit rates stay low.
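Rough numbers for how those bottlenecks could cap an attacker; every figure here is an illustrative assumption, not a measurement:

    # 1. Memory size caps parallelism on a single card.
    working_set_mb = 2048          # per-challenge memory the puzzle forces you to hold
    gpu_memory_mb = 24_000         # a high-end consumer GPU
    print(gpu_memory_mb // working_set_mb)        # ~11 challenges in flight at once

    # 2. Bandwidth puts a floor on time per solve, regardless of compute.
    mem_bandwidth_gb_s = 1000      # aggregate memory bandwidth
    bytes_touched_gb = 64          # random-access traffic the puzzle forces per solve
    print(bytes_touched_gb / mem_bandwidth_gb_s)  # >= 0.064 s per solve, best case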



