
I was referring to the network transfer process, specifically the overhead of single-file transfers.


The comment you were replying to mentioned both the excessive local disk usage and the excessive network transfer, so your comment appeared to apply to both. That is why I started my comment by explicitly restricting it to the case of local disk usage.


For hard links to work you still need to know that the brand new layer you just downloaded is the same as something you already have, i.e. running a deduplication step.

How? Well, the simplest way is to compute the digest of the content and look it up, oh wait :thinking:
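
To be concrete, that "deduplication step" is just digest-then-lookup against a content-addressed store. A minimal sketch in Python (the store path is made up, and this assumes the blob and the store live on the same filesystem so hard links are possible):

    import hashlib
    import os
    from pathlib import Path

    # Hypothetical content-addressed store, keyed by sha256 digest.
    STORE = Path("/var/lib/example-store/blobs/sha256")

    def dedupe_layer(downloaded: Path) -> Path:
        """Digest the downloaded layer and reuse an existing copy via hard link if present."""
        h = hashlib.sha256()
        with downloaded.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        digest = h.hexdigest()

        stored = STORE / digest
        if stored.exists():
            # Identical content already on disk: drop the fresh copy, link the stored one.
            downloaded.unlink()
            os.link(stored, downloaded)
        else:
            # First time we've seen this content: move it into the store, link it back.
            STORE.mkdir(parents=True, exist_ok=True)
            os.replace(downloaded, stored)
            os.link(stored, downloaded)
        return stored

The lookup is a single stat on a path derived from the digest, so the "deduplication step" costs essentially nothing beyond the hashing you already do to verify the download.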


I’m not sure what point you’re trying to make. Are you assuming that a layer would be transferred in its entirety, even in cases where the majority of the contents are already available locally? The purpose of bringing up hard links was to point out that when de-duplication is done at per-file granularity rather than per-layer granularity, it doesn’t introduce runtime overhead; see the sketch below.
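
A rough sketch of the per-file case, again in Python with a hypothetical store path (and assuming everything sits on one filesystem): after unpacking, every file whose content already exists in the store becomes another hard link to the same inode, so reads at runtime go straight through the filesystem with no extra indirection or copying.

    import hashlib
    import os
    from pathlib import Path

    # Hypothetical per-file content-addressed store.
    FILE_STORE = Path("/var/lib/example-store/files/sha256")

    def link_files(layer_root: Path) -> None:
        """Replace each regular file in an unpacked layer with a hard link into the store."""
        for path in layer_root.rglob("*"):
            if not path.is_file() or path.is_symlink():
                continue
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            stored = FILE_STORE / digest
            if not stored.exists():
                FILE_STORE.mkdir(parents=True, exist_ok=True)
                os.link(path, stored)   # first occurrence: publish it into the store
            else:
                path.unlink()
                os.link(stored, path)   # duplicate: share the existing inode

The deduplication work happens once, at unpack time; afterwards the hard-linked files are ordinary directory entries, which is the "no runtime overhead" point.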



