
If you are in an enterprise setting and are currently evaluating ArcGIS vs QGIS, pick QGIS and thank me later. ArcGIS Enterprise is a piece of software that feels straight out of the 90s and has no native Linux binary (it can be started with Wine). It is expensive as hell and resource hungry.


My brother is a GIS expert and does this for a living. At his workplace (a trans-European electrical project) they use ArcGIS, and privately he uses QGIS. He said he'd pick QGIS over ArcGIS every single day.

ArcGIS is very polished, but everything costs extra. QGIS has less polish but is supremely hackable and there are plugins for nearly everything.

Having used QGIS as a non-expert to extract mountain heightmaps from a border region spanning two datasets from different national bodies, and to look up some property borders, I can really recommend it. It took me less than an afternoon to get started.


I come from the ArcView 3 / ArcInfo days. I still maintain a non-professional home license, which is nice; however, they killed off ArcMap for non-enterprise and I just can't for the life of me get into ArcGIS Pro or QGIS. Old dog, no new tricks for me I guess.


Geopandas and QGIS are my go-to. QGIS for basic work, automate with Geopandas.

It makes the work a lot of fun!


Any tips on smoothing the transition between the two so that work isn't duplicated?


Cache interim data. Use QGIS for exploration.
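
A minimal sketch of that handoff, assuming a hypothetical parcels.shp input: GeoPandas writes interim results into a GeoPackage, which QGIS opens directly as layers, so the exploration side never has to redo the automated work.

  # Sketch: cache GeoPandas results in a GeoPackage that QGIS reads natively.
  # "parcels.shp" and the buffer distance are placeholders for illustration.
  import geopandas as gpd

  parcels = gpd.read_file("parcels.shp")                # load source data
  parcels = parcels.to_crs(epsg=3857)                   # reproject to a metric CRS
  buffered = parcels.copy()
  buffered["geometry"] = buffered.geometry.buffer(50)   # 50 m buffer

  # Write both layers into one GeoPackage; open it in QGIS for exploration.
  parcels.to_file("interim.gpkg", layer="parcels_3857", driver="GPKG")
  buffered.to_file("interim.gpkg", layer="parcels_buffered", driver="GPKG")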


+100. There is very little QGIS cannot do as well as or better than ArcGIS. For any shortcomings, there are generally other specialized tools that can fill the gaps. It's really just a training issue more than a technical one at this point, imo.


The _one_ thing I wish would be improved is the georeferencing pipeline.

The fact that Arc gives you a transparent live preview of where your image will end up is 1000x better than QGIS's "save a TIFF, load it, check it, do it again" approach.


It’s been a while since I georeferenced in qgis, but there used to be some great plugins. Looks like some of those are gone now, and the core module has improved a lot. This newer plugin looks promising, though: https://github.com/cxcandid/GeorefExtension


You're so correct. It's an odd shortcoming in the whole suite of tools and I desperately wish it would get a refresh.


There is exactly one thing I would ever have needed ArcGIS for, and that's non-rectangular map borders. That does not yet exist in QGIS, but I managed to do it using GMT.jl.


ArcGIS is a social club that issues software. How do you spot GIS people? They tell you about planning for, going to, or what went on at the ESRI conference.


YES. I made the switch 10 years ago and my professional life improved overnight


Uh, that is demonstrably not true. ArcGIS Enterprise (Portal, hosting servers, datastore, GeoEvent) all run on Linux as well.

Now, where ArcGIS Enterprise succeeds is being in an actual enterprise (thousands of users): having groups collaborate, data control, and more. None of those enterprise-y bits exist in QGIS.

And QGIS is more akin to ArcGIS Pro, not Enterprise.

Now, yes, it is definitely resource hungry. And also, if you administer it, HA isn't really HA. There are tons of footguns in how they implement HA that turn it into a SPOF.

Also, for relevancy, I was the one who worked with one of their engineers and showed that WebAdapters (the IIS reverse proxy for AGE) could be installed multiple times on the same machine, using SNI. 11.2 was the first version to include my contribution to that.

Edit: gotta love the -1s. What do you all want? Screenshots of my account on my.esri.com? Pictures of Portal and the Linux console they're running on? The fact that it's 80% Apache Tomcat and Java, with the rest Python 3? Or how about the 300-ish npm modules, 80 of which showed compromise on the last security scan I did?

Everything I said was completely true. This is what I'm paid to run. Can't say who, because we can't edit posts after an hour or so.

I would LOVE to push FLOSS everywhere. QGIS would mostly replace ArcGIS Pro, with the exception of things like Experience Builder and other weird vertical tools. But yeah, I know this industry. Even met Jack a few times.


Speaking of ArcGIS and reverse proxies, they were circulating a single-file .ashx script for about a decade that ended up being the single worst security breach at several large government customers of mine… ever. By a mile.

For the uninitiated: this proxy was a hack to work around the poor internal architecture of ArcGIS enterprise, and to make things “work” it took the target server URL as a query parameter.

So yes, you guessed right: any server. Any HTTP to HTTPS endpoint anywhere on the network. In fact you could hack TCP services too if you knew a bit about protocol smuggling. Anonymously. From the Internet. Fun!

I’m still finding this horror embedded ten folders deep in random ASP.NET apps running in production.


I'm acutely aware of that.

The folks who hired me didn't realize I was also a hacker. I did my due diligence as well, and this was more the 10.3 era. And yes, it was terrible.

I know that FEMA and the EPA are both running their public portals on 10.8, which is really bad. There are usually between 8 and 12 criticals (CVSS 3.0 score of 9 or greater) per version bump. Fuck if I know how federal acquisitions even allow this, but yeahhh.

Also, on a Hosting Server install, there are configs with commented-out internal ticket numbers. Search these on Google and you'll find that about 25% of the IPs that hit them are Chinese. Obviously, for software that's used predominantly in the US government, a whole bunch of folks in opposition to us are writing it. And damn, the writing quality is TERRIBLE.

Basically, if you have to run ArcGIS Enterprise, keep it internal-only if at all possible. Secure Portal operation is NOT to be trusted. And if you do need a public API, keep that single machine in a DMZ, or better yet, isolated on a cloud. Copy the data over as a bastion, via something like an S3 bucket or rsync. Don't connect it to your enterprise.

Oh, and even with 11.5, there are a multitude of hidden options you can set in the WebAdapter config, including full debug. Some even save local creds, like for portaladmin.

Oh yeah, and if you access the Portal Postgres DB and query the users table, you'll find 20 or so Esri accounts that are intentionally hidden from the Users list in Portal on :7443. The accounts do appear disabled... but why are they even there to begin with?


This is horrible!


It's sadly the norm for monopolistic industry-specific software. You see the same lack of due diligence in SCADA software and the like.


How is it monopolistic?


ArcGIS has essentially no competition in their industry and market. The open source alternatives are not as fully featured, etc…

It’s roughly the same story as with MS Office vs its alternatives. They exist, but 99% of enterprises will use only the Microsoft suite.


But that doesn't imply being monopolistic, does it? It's just the only game in town unless they're killing off the competition.


> demonstrably not true ... all also run on Linux

I'm not saying that it can't run on Linux, I'm saying there is no native binary for Linux.

They have bash scripts that start the Windows executables in Wine.

You can see that when you read the scripts or in htop.


> ArcGIS Enterprise (Portal, hosting servers, datastore, geoevent) all also run on Linux

This isn’t about what platform an enterprise hosts its cloud offerings on. That barely affects the customer experience, outside of lock-in situations.

The concern was on OS support for customer-run software.


> even met Jack a few times

The Danger Man!

Yes, I know his name is Jack Dangermond.


What about GRASS?

https://grass.osgeo.org


Yes, that's one missing piece. Excellent software but there is a steep learning curve, and it has its own format that you need to convert back and forth from.



This comment is especially funny to anybody who's run QGIS

Yes, it has a better UI than ArcGIS, and uses less memory, but only slightly so. It still looks like it escaped from 1995's Neckbeard Labs, is clunky as heck, and eats tons of memory as well.

It's still a great piece of software, don't get me wrong. I wouldn't trade it for any other GIS tool. But there's a long way to go for GIS software.


What about library support/APIs if you want to embed GIS functionality in other applications? Does QGIS provide widgets, etc.?


I played with it some last year. Not much has changed since I used it in a GIS class in 2007 in college.


I'd argue a lot has changed, though mostly extensions and bottlenecks in QGIS.

Can't speak much for ArcGIS, but it is usually bloated for me, so I use it sparingly.


I was talking about ArcGIS.


No one doing serious cartography uses QGIS. Also, geostatistics like kriging is fully supported and easy to use in ArcGIS.


That no one doing serious statistics uses QGIS is false, as evidenced by both the community and the sponsors. Try searching "who uses QGIS".


While survivorship bias is relevant, I strongly doubt any modern transaction stored digitally in a DB such as Postgres could last 5k years.


But they can, though??? No one said anything about transferring such data onto another disk.


The IBM 80-hole punch card will only turn 100 in 2028. Who knows what the world will look like in 2128.


I bet the clay tablet will be just fine. No word on moldy cards.


Note that it isn't the norm for clay tablets to survive. We have lots of them, far more than we're willing to provide the manpower to read, but in most cases[1] that's not because they were made to be durable.

Whenever a city was conquered, the tablets there were immortalized as the city burned down. But cities that didn't get sacked didn't burn down, and their tablets could be lost. For example, we don't have the clay records from Hammurabi's reign in Babylon, because (a) he was a strong king, and Babylon wasn't conquered on his watch; and (b) he reigned a long time ago, and that period of Babylon sank below the water table, dissolving all the records.

[1] Some tablets were intentionally fired for posterity.


I think it is all well and good, but the most affordable option is probably still to buy a used MacBook with 16, 32, or 64 GB of unified memory (depending on the budget) and install Asahi Linux for tinkering.

Graphics cards with a decent amount of memory are still massively overpriced (even used), big, noisy, and draw a lot of energy.


> and install Asahi Linux for tinkering.

I would recommend sticking to macOS if compatibility and performance are the goal.

Asahi is an amazing accomplishment, but running native optimized macOS software including MLX acceleration is the way to go unless you’re dead-set on using Linux and willing to deal with the tradeoffs.


It just came to my attention that the 2021 M1 Max with 64 GB is less than $1500 used. That's 64 GB of unified memory at regular laptop prices, so I think people will be well equipped with AI laptops rather soon.

Apple really is #2 and probably could be #1 in AI consumer hardware.


Apple is leagues ahead of Microsoft with the whole AI PC thing and so far it has yet to mean anything. I don't think consumers care at all about running AI, let alone running AI locally.

I'd try the whole AI thing on my work Macbook but Apple's built-in AI stuff isn't available in my language, so perhaps that's also why I haven't heard anybody mention it.


People don’t know what they want yet, you have to show it to them. Getting the hardware out is part of it, but you are right, we’re missing the killer apps at the moment. The very need for privacy with AI will make personal hardware important no matter what.


Two main factors are holding back the "killer app" for AI: fixing hallucinations and making agents more deterministic. Once these are in place, people will love AI when it can make them money somehow.


How does one “fix hallucinations” on an LLM? Isn’t hallucinating pretty much all it does?


Coding agents have shown how. You filter the output against something that can tell the LLM when it's hallucinating.

The hard part is identifying those filter functions outside of the code domain.
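
As a rough sketch of that filter idea (not any specific agent's implementation), assume a hypothetical generate() stand-in for the model and use Python's own compiler as the ground-truth check; the loop rejects output that fails the check and feeds the error back:

  # Sketch of the "filter the output" loop, using compile() as the ground-truth
  # check. generate() is a hypothetical stand-in for an actual LLM call.
  def generate(prompt: str) -> str:
      # placeholder: a real agent would call a model here
      return "def add(a, b):\n    return a + b\n"

  def generate_checked(prompt: str, max_attempts: int = 3) -> str:
      feedback = ""
      for _ in range(max_attempts):
          candidate = generate(prompt + feedback)
          try:
              compile(candidate, "<candidate>", "exec")  # does it even parse?
              return candidate                           # passed the filter
          except SyntaxError as err:
              # feed the error back so the next attempt can correct itself
              feedback = f"\nPrevious attempt failed: {err}"
      raise RuntimeError("no candidate passed the check")

  print(generate_checked("write an add function"))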


It's called RAG (retrieval-augmented generation), and it's getting very well developed for some niche use cases such as legal, medical, etc. I've been personally working on one for mental health, and please don't let anybody tell you that they're using an LLM as a mental health counselor. I've been working on it for a year and a half, and if we get it to production-ready in the next year and a half I will be surprised. In keeping up with the field, I don't think anybody else is any closer than we are.
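
To make the grounding idea concrete, here's a toy retrieve-then-generate sketch (word-overlap retrieval and placeholder passages, nowhere near a production system): the model only ever sees a prompt built from retrieved source text, which gives you something to check answers against.

  # Toy retrieve-then-generate sketch. The documents and the scoring
  # (plain word overlap) are placeholders, not a real embedding search.
  SOURCES = [
      "Passage A: the clinic's intake process has three steps.",
      "Passage B: appointments can be rescheduled up to 24 hours ahead.",
  ]

  def retrieve(question: str, k: int = 1) -> list[str]:
      q_words = set(question.lower().split())
      ranked = sorted(SOURCES,
                      key=lambda s: len(q_words & set(s.lower().split())),
                      reverse=True)
      return ranked[:k]

  def build_prompt(question: str) -> str:
      context = "\n".join(retrieve(question))
      return ("Answer ONLY from the context below. "
              "If the answer is not there, say you don't know.\n"
              f"Context:\n{context}\n\nQuestion: {question}")

  print(build_prompt("How far ahead can appointments be rescheduled?"))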


Wait, can you say more about how RAG solves this problem? What Kasey is referring to is things like compiling statically-typed code: there's a ground truth the agent is connected to there, so it can at least confidently assert "this code actually compiles" (and thus can't be using an entirely hallucinated API). I don't see how RAG accomplishes something similar, but I don't think much about RAG.


No no, not at all. See https://openai.com/index/why-language-models-hallucinate/, which was recently featured on the front page, for an excellent, clean take on how to fix the issue (they already got a long way with gpt-5-thinking-mini). I liked this bit for its clear outline of the problem:

> Think about it like a multiple-choice test. If you do not know the answer but take a wild guess, you might get lucky and be right. Leaving it blank guarantees a zero. In the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say "I don't know."

> As another example, suppose a language model is asked for someone's birthday but doesn't know. If it guesses "September 10," it has a 1-in-365 chance of being right. Saying "I don't know" guarantees zero points. Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty.
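
A quick back-of-envelope on that scoring argument, using the numbers from the quote (the wrong-answer penalty in the second scheme is just an illustrative assumption):

  # Expected score per question for "wild guess" vs "I don't know".
  # The -0.1 wrong-answer penalty below is an illustrative assumption.
  p_correct = 1 / 365

  # Accuracy-only grading: right = 1, wrong = 0, abstain = 0.
  guess_acc_only = p_correct * 1 + (1 - p_correct) * 0        # ~0.0027
  abstain = 0.0

  # Grading that penalizes confident wrong answers.
  penalty = -0.1
  guess_penalized = p_correct * 1 + (1 - p_correct) * penalty  # ~-0.097

  print(f"accuracy-only: guess {guess_acc_only:.4f} vs abstain {abstain}")
  print(f"with penalty:  guess {guess_penalized:.4f} vs abstain {abstain}")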


Other than that, Mrs. Lincoln, how was the Agentic AI?


You can’t fix the hallucinations


  > People don’t know what they want yet, you have to show it to them
Henry Ford famously quipped that had he asked his customers what they wanted, they would have wanted a faster horse.


We've shown people so many times and so forcefully that they're now actively complaining about it. It's a meme.

The problem isn't getting your Killer AI App in front of eyeballs. The problem is showing something useful or necessary or wanted. AI has not yet offered the common person anything they want or need! The people have seen what you want to show them; they've been forced to try it, over and over. There is nobody who interacts with the internet who has not been forced to use AI tools.

And yet still nobody wants it. Do you think that they'll love AI more if we force them to use it more?


> And yet still nobody wants it.

Nobody wants the one-millionth meeting transcription app and the one-millionth coding agent constantly, sure.

It's a developer creativity issue. I personally believe the lack of creativity is so egregious that if anyone were to release a killer app, the entirety of the lackluster dev community would copy it into eternity, to the point where you'd think that's all AI can do.

This is not a great way to start off the morning, but gosh darn it, I really hate that this profession attracted so many people that just want to make a buck.

——-

You know what was the killer app for the Wii?

Wii Sports. It sold a lot of Wiis.

You have to be creative with this AI stuff, it’s a requirement.


The Ryzen AI Max+ 395 with 64 GB of LPDDR5 is $1500 new in a ton of form factors, and $2k with 128 GB. If I have $1500 for a unified-memory inference machine, I'm probably not getting a Mac. It's not a bad choice per se, and llama.cpp supports that hardware extremely well, but a modern Ryzen APU at the same price is more of what I want for that use case; with the M1 Mac you're paying for a Retina display and a bunch of stuff unrelated to inference.


Not just LPDDR5, but LPDDR5X-8000 on a 256-bit bus. The 40 CUs of RDNA 3.5 are nice, but that's less raw compute than e.g. a desktop 4060 Ti dGPU. The memory is fast, 200+ GB/s real-world read and write (the AIDA64 thread about limited read speeds is misleading: that's what the CPU sees, given how the memory controller is configured, but GPU tooling reveals the full 200+ GB/s read and write). Though you can only allocate 96 GB to the iGPU on Windows or 110 GB on Linux.

The ROCm and Vulkan stacks are okay, but they're definitely not fully optimized yet.

Strix Halo's biggest weakness compared to Mac setups is memory bandwidth. M4 Max gets something like 500+ GB/s, and M3 Ultra gets something like 800 GB/s, if memory serves correctly.

I just ordered a 128 GB Strix Halo system, and while I'm thrilled about it, in fairness, for people who don't have an adamant insistence against proprietary kernels, refurbished Apple Silicon does offer a compelling alternative with superior performance options. AFAIK there's nothing like AppleCare for any of the Strix Halo systems either.


The 128 GB Strix Halo system was tempting me, but I think I'm going to hold out for the Medusa Point memory bandwidth gains to expand my cluster setup.

I have a Mac Mini M4 Pro with 64 GB that does quite well with inference on the Qwen3 models, but it is hell on networking with my home K3s cluster, and going deeper on that is half the fun of this stuff for me.


>The 128 GB Strix Halo system was tempting me, but I think I'm going to hold out for the Medusa Point

I was initially thinking this way too, but I realized a 128GB Strix Halo system would make an excellent addition to my homelab / LAN even once it's no longer the star of the stable for LLM inference - i.e. I will probably get a Medusa Halo system as well once they're available. My other devices are Zen 2 (3600x) / Zen 3 (5950x) / Zen 4 (8840u), an Alder Lake N100 NUC, a Twin Lake N150 NUC, along with a few Pi's and Rockchip SBC's, so a Zen 5 system makes a nice addition to the high end of my lineup anyway. Not to mention, everything else I have maxed out at 2.5GbE. I've been looking for an excuse to upgrade my switch from 2.5GbE to 5 or 10 GbE, and the Strix Halo system I ordered was the BeeLink GTR9 Pro with dual 10GbE. Regardless of whether it's doing LLM, other gen AI inference, some extremely light ML training / light fine tuning, media transcoding, or just being yet another UPS-protected server on my LAN, there's just so much capability offered for this price and TDP point compared to everything else I have.

Apple Silicon would've been a serious competitor for me on the price/performance front, but I'm right up there with RMS in terms of ideological hostility towards proprietary kernels. I'm not totally perfect (privacy and security are a journey, not a destination), but I am at the point where I refuse to use anything running an NT or Darwin kernel.


That is sweet! The extent of my cluster is a few Pis that talk to the Mac Mini over the LAN for inference stuff, which I could definitely use some headroom on. I tried to integrate it into the cluster directly by running k3s in Colima, but to join an existing cluster via IP, I had to run Colima in host networking mode, so any pods on the mini that were trying to do CoreDNS networking were hitting collisions with mDNSResponder when dialing port 53 for DNS. Finally decided that the Macs are nice machines but not a good fit as members of a cluster.

Love that AMD seems to be closing the gap on the performance _and_ power efficiency of Apple Silicon with the latest Ryzen advancements. Seems like one of these new mini PCs would be a dream setup to run a bunch of data- and AI-centric hobby projects on, particularly workloads like geospatial imagery processing in addition to the LLM stuff. It's a fun time to be a tinkerer!


It's not better than the Macs yet. There's no half-assing this AI stuff; AMD is behind even the 4-year-old MacBooks.

NVIDIA is so greedy that doling out $500 will only get you 16 GB of VRAM at half the speed of an M1 Max. You can get a lot more speed with more expensive NVIDIA GPUs, but you won't get anything close to a decent amount of VRAM for less than $700-1500 (well, truly, you will not even get close to 32 GB).

Makes me wonder just how much secret effort is being put in by the MAG7 to strip NVIDIA of this pricing power, because they are absolutely price gouging.


Ryzen 9 doesn't exist in Europe


I recently got an M3 Max with 64 GB (the higher-spec Max) and it's been a lot of fun playing with local models. It cost around $3k though, even refurbished.


M1 doesn't exactly have stellar memory bandwidth for this day and age though


M1 Max with 64GB has 400GB/s memory bandwidth.

You have to get into the highest 16-core M4 Max configurations to begin pulling away from that number.


Oh sorry I thought it was only about 100. I'd read that before but I must have remembered incorrectly. 400 is indeed very serviceable.


Get an Apple Silicon MacBook with a broken screen and it’s an even better deal.


The mini PCs based on the AMD Ryzen AI Max+ 395 (Strix Halo) are probably pretty competitive with those. Depending on which one you buy, it's $1700-2000 for one with 128 GB of RAM that is shared with the integrated Radeon 8060S graphics. There are videos on YouTube about using these with the bigger LLM models.


If Moore's Law Is Dead's leaks are to be believed, there are going to be 24 GB GDDR7 5080 Super and maybe even 5070 Ti Super variants in the $1k (MSRP) range, with, one assumes, fast Blackwell NVFP4 tensor cores.

Depends on what you're doing, but at FP4 that goes pretty far.


You don't even need Asahi; you can run ComfyUI on it, but I recommend the Draw Things app, it just works and holds your hand a LOT. I am able to run a few models locally, and the underlying app is open source.


I used Draw Things after fighting with ComfyUI.


What about the AMD Ryzen AI Max+ 395 mini PCs with up to 128 GB of unified memory?


Their memory bandwidth is the problem. 256 GB/s is really, really slow for LLMs.

Seems like at the consumer hardware level you just have to pick your poison, or which one factor you care about most. Macs with a Max or Ultra chip can have good memory bandwidth but low compute, along with ultra-low power consumption. Discrete GPUs have great compute and bandwidth but low to middling VRAM, plus high costs and power consumption. The unified-memory PCs like the Ryzen AI Max and the Nvidia DGX deliver middling compute, higher VRAM capacity, and terrible memory bandwidth.
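
A rough back-of-envelope for why the bandwidth number dominates token generation (assuming a memory-bound decode that reads every weight once per token, and an illustrative ~40 GB of quantized weights; this ignores MoE sparsity, KV cache traffic, and prompt processing, which is compute-bound):

  # Very rough ceiling on decode speed for a memory-bound model:
  # tokens/s <= memory bandwidth / bytes of weights read per token.
  # The 40 GB figure is an illustrative quantized-weight size, not a benchmark.
  def max_tokens_per_s(bandwidth_gb_s: float, weights_gb: float) -> float:
      return bandwidth_gb_s / weights_gb

  WEIGHTS_GB = 40  # e.g. a large dense model after quantization (assumption)
  for name, bw in [("Ryzen AI Max, 256 GB/s", 256),
                   ("M4 Max, ~500 GB/s", 500),
                   ("M3 Ultra, ~800 GB/s", 800)]:
      print(f"{name}: ceiling ~{max_tokens_per_s(bw, WEIGHTS_GB):.0f} tok/s")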


It's an underwhelming product in an annoying market segment, but 256 GB/s really isn't that bad when you look at the competition: 150 GB/s from hex-channel DDR4, 200 GB/s from quad-channel DDR5, or around 256 GB/s from Nvidia Digits or an M Pro (which you can't get in the 128 GB range). For context, it's about what low-to-mid-range GPUs provide, and 2.5-5x the bandwidth of the 50-100 GB/s memory that most people currently have.

If you're going with a Mac Studio Max you're going to be paying twice the price for twice the memory bandwidth, but the kicker is you'll be getting the same amount of compute as the AMD AI chips have which is going to be comparable to a low-mid range GPU. Even midrange GPUs like the RX 6800 or RTX 3060 are going to have 2x the compute. When the M1 chips first came out people were getting seriously bad prompt processing performance to the point that it was a legitimate consideration to make before purchase, and this was back when local models could barely manage 16k of context. If money wasn't a consideration and you decided to get the best possible Mac Studio Ultra, 800GB/s won't feel like a significant upgrade when it still takes 1 minute to process every 80k of uncached context that you'll absolutely be using on 1m context models.


But for matrix multiplication, isn't compute more important, as there are N³ multiplications but just N² numbers in a matrix?

Also, I don't think power consumption is important for AI. Typically you do AI at home or in the office, where there is a lot of electricity.


>But for matrix multiplication, isn't compute more important, as there are N³ multiplications but just N² numbers in a matrix?

Being able to quickly calculate a dumb or unreliable result because you're VRAM starved is not very useful for most scenarios. To run capable models you need VRAM, so high VRAM and lower compute is usually more useful than the inverse (a lot of both is even better, but you need a lot of money and power for that).

Even in this post with four RPis, Qwen3 30B A3B is still an MoE model and not a dense model. It runs fast with only 3B active parameters and can be parallelized across computers, but it's much less capable than a dense 30B model running on a single GPU.

> Also I don't think power consumption is important for AI. Typically you do AI at home or in the office where there is lot of electricity.

Depends on what scale you're discussing. If you want to get similar VRAM to a 512 GB Mac Studio Ultra with a bunch of Nvidia GPUs like RTX 3090 cards, you're not going to be able to run that on a typical American 15-amp circuit; you'll trip a breaker halfway there.


Works very well and very fast with this Qwen3 30B A3B model.


I think it could be worthwhile to fork Alpine and maintain a glibc variant. That way we would keep nearly all of Alpine’s advantages while avoiding the drawbacks of musl.


I think I am using something which is essentially https://zapps.app, where I can just give it a glibc app and it gives me a folder structure with all the dependencies. I feel it might be similar to Nix in only that sense (and not in reproducibility or language).

I recently tried to run it on Alpine after seeing this blog post, and here's what I can say: if you have, let's say, Debian or any system where the binary works, you run the script and it outputs a folder/tar, and then you can move it anywhere and run it, including on Alpine.

I am thinking of writing an article. But in the meantime I have created an asciinema recording to show you guys what I mean. Open to feedback as always.

https://asciinema.org/a/qHGHlU0o4V7VgyyWxtHY2PG5Y


Very interesting, thank you. I run Chimera Linux, which is also musl-based, so I have the same issue raised in this article.

I mostly consider it a non-issue because I use Distrobox. An Arch Distrobox gives me access to all the Arch software, including the AUR. Graphical apps can even be exported to my desktop launcher. They are technically running in a container, but I just click on them like anything else and they show up in their own Wayland window just like everything else. Or I can run from the command line, including compiling on top of Glibc if I want/need to. And keeping everything up to date just means running yay or pacman.

I can see the advantage of Zapps in some cases though, especially for CLI stuff. Very cool.


What are the disadvantages of musl? It is really just compatibility with Glibc. And the only reason we say that, instead of saying that Glibc is incompatible with musl, is popularity.

If POSIX compliance and Glibc compatibility were the same thing, it would not be a problem.


Bare-metal servers sound super cheap when you look at the price tag, and yeah, you get a lot of raw power for the money. But once you’re in an enterprise setup, the real cost isn’t the hardware at all, it’s the people needed to keep everything running.

If you go this route, you’ve got to build out your own stack for security, global delivery, databases, storage, orchestration, networking ... the whole deal. That means juggling a bunch of different tools, patching stuff, fixing breakage at 3 a.m., and scaling it all when things grow. Pretty soon you need way more engineers, and the “cheap” servers don’t feel so cheap anymore.


A single, powerful box (or a couple, for redundancy) may still be the right choice, depending on your product / service. Renting is arguably the most approachable option: you're outsourcing the most tedious parts + you can upgrade to a newer generation whenever it becomes operationally viable. You can add bucket storage or CDN without dramatically altering your architecture.

Early Google rejected big iron and built fault tolerance on top of commodity hardware. WhatsApp used to run their global operation employing only 50 engineering staff. Facebook ran on Apache+PHP (they even served index.php as plain text on one occasion). You can build enormous value through simple means.


If you use a cloud, you need a solution for:

- security (ever heard of "shared responsibility"?)

- global delivery (a big cloud will host you all over, and this requires extra effort on your part, kind of like how having multiple rented or owned servers requires extra effort)

- storage (okay, I admit that S3 et al are nice and that non-big-cloud solutions are a bit lacking in this department)

- orchestration (the cloud handles only the lowest level; you still need to orchestrate your stuff on top of it)

- fixing breakage at 3 a.m. (the cloud can swap you onto a new server, subject to availability; so can a provider like Hetzner. You still need to fail over to that server successfully)

- patching stuff (other than firmware, the cloud does not help you here)


I used to say "oh yeah, just run qemu-kvm" until my girlfriend moved in with me and I realized you do legitimately need some kind of infrastructure for managing your "internal cloud" if anyone involved isn't 100% on the same page, and then that starts to be its own thing you really do have to manage.

Suddenly I learned why my employer was willing to spend so much on OpenStack and Active Directory.


> until my girlfriend moved in with me

lol, why was this the defining moment? She wasn't too keen on hearing the high-pitched wwwwhhhhuuuuurrrrrrr of the server fans?


She was another software engineer and needed VMs too so I thought I'd just let her use some of my spare compute.


I hadn't thought the game was actually that hard. However, HATETRIS actually is: https://github.com/qntm/hatetris


Haha, thank you for this. I will implement this in my personal version of tetris!


Tetris is already kind of like that. My version avoids dropping the same piece consecutively (re-roll once on duplicate) in order to get a more realistic Tetris experience.
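
A tiny sketch of that kind of reroll-once randomizer, using the standard seven piece names (the history depth of one previous piece is the only state):

  import random

  PIECES = "IJLOSTZ"

  def reroll_once_randomizer():
      """Yield pieces, rerolling a single time when the pick repeats the last one."""
      last = None
      while True:
          piece = random.choice(PIECES)
          if piece == last:                  # duplicate: take one extra roll
              piece = random.choice(PIECES)  # and accept whatever comes up
          last = piece
          yield piece

  gen = reroll_once_randomizer()
  print("".join(next(gen) for _ in range(20)))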


Huh, I always thought it was a bag randomizer, but I looked it up and that's how NES Tetris worked (reroll on duplicate).


I have three randomizers in my version here: https://www.susmel.com/stacky/ and you can switch between them with 't' on your keyboard (no phones, sorry). If you expand the controls with 'c' you will see which one is active. NES and 7-bag are in there, as well as the "random"/stacky one. Shift+Enter in the main menu to pick a level to start with.


I wish I had bookmarked the incredible deep dive into the multiple algorithm schemes for Tetris piece picking, which families of the game used which algorithms, and why. The modern standard (as now dictated by rights holder The Tetris Company) is a bag approach, but it has interesting nuances; I believe it's 7-bag these days (bags of all 7 pieces). I've also heard a lot of love for 35-bag, which is the approach used by Tetris: The Grand Master 3. (From what little I know about it, TGM is a fascinating "family", especially its use in both tournament and speedrunning cultures. But because of all the usual drama of Tetris, it can sometimes be hard to play TGM in the US, as the license holder is Japanese.)
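
For comparison, a minimal sketch of the 7-bag idea: shuffle all seven pieces, deal them out, refill. This bounds droughts and prevents long repeat streaks.

  import random

  PIECES = list("IJLOSTZ")

  def seven_bag():
      """Deal pieces from a shuffled bag of all seven tetrominoes, then refill."""
      while True:
          bag = PIECES[:]
          random.shuffle(bag)
          yield from bag

  gen = seven_bag()
  print("".join(next(gen) for _ in range(21)))  # three full bags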



In case known VPN providers are blocked, you can pick up a small VPS from a hoster like Hetzner and set up your own VPN.


Used ThinkPads from eBay, especially the smaller models from the X series, can make surprisingly cost-effective "MiniPCs".

If you need multiple devices, as mentioned in the article, you can even stack the laptops and build a small tower of "MiniPCs" all with different purposes.

Another advantage is that they already come with a built-in screen and keyboard allowing for quick debugging, without needing to connect external peripherals.


Do you have a green filter on all of your images to make it look more creepy?



Wow, that looks to be very well-lit. The bad lighting was the only really spooky thing about the station.


Hmm. The article photo is misleading enough that I won't ever click on any other of their posts...


OK, it's not too bad.

The only spooky thing is how many stairs you must climb, not the lighting, lol.


Can anyone translate the sign?


ようこそ 日本一のモグラえき 土合へ Translates to something like: "Welcome to Doai, Japan's number one mole station (mogura-eki)".


Welcome to "Japan's No. 1 Mole Station"

-The staircase is 338 meters long and has 462 steps.

Climb the stairs to the top, then go through the 143 m connecting passage (24 steps), and you will reach the ticket gate.

The elevation of the down platform is 583 meters above sea level, and the elevation of the station building is 653.7 meters, meaning the difference in elevation between the station building and the down platform is 70.7 meters.

It takes about 10 minutes to get to the ticket gate.

Please watch your step when climbing.

Doai Station


Autotranslation below. The '[unclear]' was added by me; it originally read "Welcome to Japan's No.1 Google", which seems like it might be an error.

Welcome to "Japan's No.1 [unclear]"

・This staircase is 338 meters long and has 462 steps. Climb up the steps and go through a 143 meter (24 step) connecting passage to reach the ticket gate.

Also, the altitude of this downhill platform is 583 meters above sea level, and the altitude is 653.7 meters, and there is a difference in elevation of 70.7 meters between this and the downhill platform.

It takes approximately 10 minutes to reach the ticket gate.

Please be careful where you step.


Was thinking so too, green is a colour that makes humans feel something is 'off' / makes us feel uncomfortable. The Matrix used the same colour tone to differentiate inside/outside The Matrix.


Why do surgeons wear green?


Visual contrast with blood and organs.


Same reason they wear blue too: Less visual strain on the eyes.


> that makes humans feel something is 'off'

Uhm, you do realize that the human eye can differentiate the most colors in the green part of the spectrum? Green is literally inscribed in our genes to not be "off" but rather to be our home.


Most colors are differentiated in the segment between yellowish green and reddish orange, passing through yellow and orange.

Inside the green segment, there is little color differentiation. All the green hues between 510 nm and 540 nm wavelength look pretty similar, while in the yellow-orange segment a change in wavelength of 1 nm may cause an easily noticeable change in hue.

Also in the blue-green segment, between blue and green, there is easy color differentiation, with the hue changing strongly even for small wavelength differences.

Inside the red, green and blue segments there is little ability to differentiate colors, unlike in the regions between these segments. This is exactly as expected, because only in the segments between the primary colors do you have 2 photoreceptors in the eye that are excited simultaneously, in a ratio that is a function of the color frequency/wavelength. In the frequency/wavelength segments where only 1 of the photoreceptors is excited, the ability to differentiate hues is lost.


I'd assume that's context dependent? Nature (natural green) vs things that look natural but aren't. e.g, green hue on a building? But I'm no expert on this :)


Seems like there probably is a green filter, but from my memory, the station was quite dark, so the filter might be setting the right mood.


TL;DR: 100 prompts, which is roughly my daily usage, use about 24 Wh total, which is like running a 10 W LED for 2.4 hours.
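
The arithmetic behind that, spelled out (the 0.24 Wh per prompt is simply the figure implied by 24 Wh over 100 prompts):

  # 100 prompts/day at the implied ~0.24 Wh each, compared to a 10 W LED.
  prompts_per_day = 100
  wh_per_prompt = 24 / 100          # implied by the 24 Wh total
  total_wh = prompts_per_day * wh_per_prompt
  led_watts = 10
  print(f"{total_wh:.0f} Wh/day = a {led_watts} W LED running {total_wh / led_watts:.1f} h")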

