
O'Sassy or whatever is certainly Source Available, and not Open Source. DHH can pound sand.

I used to think the pedantry was foolish, but I've grown to understand the distinction. It's one thing to criticize the OSI's claim to the term, and I do think they could do a better job of getting out ahead of new licenses and whatnot, but even if you ignore the OSI entirely, the distinction is of substantial value.

I do think we need more Source Available licenses in the world. Certainly I would greatly appreciate being able to browse the source of the many proprietary software systems I've administered over the years.

At the same time it is not worth it if the spirit of Open Source is watered down.


> I do think we need more Source Available licenses in the world. Certainly I would greatly appreciate being able to browse the source of the many proprietary software systems I've administered over the years.

Yeah. Releasing a project under a source-available proprietary license and calling it Open Source, or doing a rugpull and changing an established Open Source license to a source-available proprietary license, is the kind of thing that causes the most grief. If you release something under a source-available proprietary license and make no pretenses about it being something else, and the alternative was not releasing it at all, it's a (slight) improvement.


> I do think we need more Source Available licenses in the world. Certainly I would greatly appreciate being able to browse the source of the many proprietary software systems I've administered over the years.

I think we need more differentiation and different terms. A license like O'Sassy / FSL / whatever, which just forbids other companies from selling the same software as a Service, is quite different from the source merely being available with no rights at all, or with restrictions on who can use it and when (size of company, for-profit or not, production, etc).


I've written this about four times for two employers and two clients: ABC: Always Be Cycling

The basic premise is to encode behavior, be it via lifecycle rules or a cron, such that instances are cycled after at most 7 days, and that there should always be an instance cycling (with some cool-down period, of course).
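Roughly, the cron-driven variant is only a couple dozen lines. A minimal sketch in Python, assuming AWS/boto3, a made-up abc:cycle tag on participating instances, and an ASG (or similar) that replaces whatever gets terminated:

    import datetime
    import boto3

    COOL_DOWN = datetime.timedelta(hours=1)   # pause between replacements
    ec2 = boto3.client("ec2")

    def cycle_once():
        # Invoked from cron. Find the fleet (hypothetical abc:cycle tag),
        # skip if anything launched within the cool-down window, otherwise
        # retire the oldest instance and let the ASG/orchestrator replace it.
        # Run the cron often enough that everything turns over inside 7 days.
        resp = ec2.describe_instances(Filters=[
            {"Name": "tag-key", "Values": ["abc:cycle"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ])
        instances = [i for r in resp["Reservations"] for i in r["Instances"]]
        if not instances:
            return
        now = datetime.datetime.now(datetime.timezone.utc)
        if now - max(i["LaunchTime"] for i in instances) < COOL_DOWN:
            return
        oldest = min(instances, key=lambda i: i["LaunchTime"])
        ec2.terminate_instances(InstanceIds=[oldest["InstanceId"]])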

It has never not improved overall system stability and in a few cases even decreased costs significantly.


If we can produce a substantial volume of software that can cope with allocation failures, then the idea of using something other than overcommit as the default becomes feasible.

It's not a stretch to imagine that a different namespace might want different semantics, e.g. to allow a container to opt out of overcommit.

It is hard to justify the effort required to enable this unless it'll be useful for more than a tiny handful of users who can otherwise afford to run off an in-house fork.


> If we can produce a substantial volume of software that can cope with allocation failures, then the idea of using something other than overcommit as the default becomes feasible.

Except this won't happen, because "cope with allocation failure" is not something that 99.9% of programs could even hope to do.

Let's say that you're writing a program that allocates. You allocate, and check the result. It's a failure. What do you do? Well, if you have unneeded memory lying around, like a cache, you could attempt to flush it. But I don't know about you, I don't write programs that randomly cache things in memory manually, and almost nobody else does either. The only things I have in memory are things that are strictly needed for my program's operation. I have nothing unnecessary to evict, so I can't do anything but give up.

The reason that people don't check for allocation failure isn't because they're lazy, it's because they're pragmatic and understand that there's nothing they could reasonably do other than crash in that scenario.


Have you honestly thought about how you could handle the situation better than a crash?

For example, you could finish writing data into files before exiting gracefully with an error. You could (carefully) output to stderr. You could close remote connections. You could terminate the current transaction and return an error code. Etc.
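Even a rough version of that is more useful than the default. A sketch of the shape in Python terms (the C version is the same idea around failed malloc calls), with a stand-in process() and a connection object:

    import sys

    def process(req):
        # Stand-in for the real per-request work, which may allocate heavily.
        return "%s\n" % req

    def run(requests, out_file, conn):
        for req in requests:
            try:
                out_file.write(process(req))
            except MemoryError:
                # Can't make progress, but we can still fail cleanly:
                out_file.flush()                 # finish writing what we have
                conn.close()                     # drop remote connections
                # careful: even this can allocate, so keep it minimal
                sys.stderr.write("out of memory, aborting\n")
                return 1                         # error code, not a crash
        return 0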

Most programs are still going to terminate eventually, but they can do so a lot more usefully than with a segfault from some instruction at a randomized address.


I used to run into allocation limits in Opera all the time. Usually what happened was a failure to allocate a big chunk of memory for rendering or image decompression purposes, and if that happened you could give up on rendering the current tab for the moment. It was very resilient to those errors.


Even when I have a cache, it is probably in a different code path / module, and it would be a terrible architecture that let me access that code from the allocation site.


A way to access an "emergency button" function is a significantly smaller sin than arbitrary crashes.


I question that. I would expect that in most cases, even if you manage to free up some memory, you only have a little longer to run before something else uses up all the memory and you are back to the original out-of-memory problem, but with nothing left to free. Not to mention those caches you just cleared should exist for a good reason, so your program is running slower in the meantime.


What if for my program, 99.99% of OOM crashes are preventable by simply running a GC cycle?


> If we can produce a substantial volume of software that can cope with allocation failures, then the idea of using something other than overcommit as the default becomes feasible.

What would "cope" mean? Something like returning an error message like "can't load this image right now"? Such errors are arguably better than crashing the program entirely but still worth avoiding.

I think overcommit exists largely because of fork(). In theory a single fork() call doubles the program's memory requirement (and the parent calling it n times in a row multiplies the requirement by n+1). In practice, the OS uses copy-on-write to avoid both this requirement and the expense of copying. Most likely the child won't really touch much of its memory before exit or exec(). Overallocation allows taking advantage of this observation to avoid introducing routine allocation failures after large programs fork().

So if you want to get rid of overallocation, I'd say far more pressing than introducing alloc failure handling paths is ensuring nothing large calls fork(). Fortunately fork() isn't really necessary anymore IMHO. The fork pool concurrency model is largely dead in favor of threading. For spawning child processes with other executables, there's posix_spawn (implemented by glibc with vfork()). So this is achievable.
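For what it's worth, the no-fork path is not much code these days. A quick sketch using Python's binding of posix_spawn (the C call from <spawn.h> has the same shape):

    import os
    import sys

    # Spawn a child executable without first duplicating the parent's
    # (possibly huge) address space the way fork() would.
    pid = os.posix_spawn(
        "/bin/echo",                       # path to the executable
        ["echo", "hello from the child"],  # argv
        os.environ,                        # environment for the child
    )
    _, status = os.waitpid(pid, 0)
    sys.exit(os.waitstatus_to_exitcode(status))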

I imagine there are other programs around that take advantage of overcommit by making huge writable anonymous memory mappings they use sparsely, but I can't name any in particular off the top of my head. Likely they could be changed to use another approach if there were a strong reason for it.


tar/pax are kind of terrible formats. They are hard to implement correctly. I'm glad they are not used more often.

cpio is pretty reasonable though.

zip is actually pretty great and I've been growing increasingly fond of it over the years.


The thing is, there is always tar(1), even in the most basic of distributions. And everyone uses tar.gz's or .bz2's or whatever for distributing all kinds of things, so tar is pretty ubiquitous. But the moment you want to do some C development, or anything binutils-related, nope, install and use ar(1), which is used for literally one single purpose and nothing else. Because reasons.


At the time, ar existed and tar didn't. Tar came later.


At the time, a.out existed and COFF and ELF didn't. The switch from STABS to DWARF has also never happened, right?


I'm not sure how ar does it, but tar has no centralised directory. The only way to get file 100 is to walk through the 99 files before it. This kills random access speed.


Ar puts a file called "/" as the first file of the archive. Inside, there is a number N, then a list of N file offsets, and then a list of N null-terminated strings. It's a symbol table of sorts: each null-terminated string is a symbol name, and the corresponding file offset points at the archive header for the object file that contains the symbol. The filenames themselves are not recorded centrally since it's not really needed.
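For the System V/GNU flavor (BSD ar differs a bit), reading that index back out is roughly this; a hedged sketch, not a full ar parser:

    import struct

    def read_ar_symbol_index(path):
        with open(path, "rb") as f:
            assert f.read(8) == b"!<arch>\n"       # global magic
            hdr = f.read(60)                       # header of the first member
            assert hdr[0:16].rstrip() == b"/"      # the symbol table member
            size = int(hdr[48:58])                 # member size, decimal ASCII
            body = f.read(size)
        n = struct.unpack(">I", body[:4])[0]       # number of symbols
        offsets = struct.unpack(">%dI" % n, body[4:4 + 4 * n])
        names = body[4 + 4 * n:].split(b"\0")[:n]  # NUL-terminated symbol names
        return dict(zip(names, offsets))           # symbol -> member header offset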


> tar/pax are kind of terrible formats. They are hard to implement correctly. I'm glad they are not used more often.

I'll grant you "kind of terrible", but what's hard to correctly implement about tar? It's just a bunch of files concatenated together with a tiny chunk of metadata stuck on the front of each.
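To illustrate, walking a plain ustar archive and listing its members is about this much (a sketch; it ignores the GNU/pax extension headers):

    import sys

    BLOCK = 512

    def list_tar(path):
        with open(path, "rb") as f:
            while True:
                hdr = f.read(BLOCK)
                if len(hdr) < BLOCK or hdr == b"\0" * BLOCK:
                    break                                   # end of archive
                name = hdr[0:100].split(b"\0", 1)[0]        # member name
                size = int(hdr[124:136].rstrip(b"\0 ") or b"0", 8)  # octal size
                print(name.decode("utf-8", "replace"), size)
                # member data is padded out to whole 512-byte blocks
                f.seek((size + BLOCK - 1) // BLOCK * BLOCK, 1)

    if __name__ == "__main__":
        list_tar(sys.argv[1])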


Having never done it myself, I don't know, but I do know that the "microtar" library I picked up off GitHub is buggy when expanding GNU tar archives but perfect when expanding its own archives. Correctly creating one valid archive is a lot easier than reliably extracting all valid archives. The code appeared competent; I assume tar just has a bunch of historical baggage that you can get wrong or fail to implement.


I struggled with this too and it took me a while to accept that there is no right way. There are many ways, and there is a lot of legacy style out there, but ultimately you have to do what works for your own productivity/sanity.


I'm not conflicted. Nothing compares to nix. I've been using it on macOS, for Linux hosts, for years now, and it's been incredibly rock solid. I stopped using homebrew years ago and I couldn't be happier about that.

> Consistently through the 25.05 period nix-darwin and nixpkgs would fall out of sync. I learned not to `nix flake update` too often as a result.

I find using a singular nixpkgs version is almost always a recipe for things breaking if you are on unstable. I usually end up juggling multiple nixpkgs versions; for example, you might want to pin the input to nix-darwin separately.

This is squarely a nixpkgs problem. It's the largest, most active package repository known to man; I am pretty sure GitHub has special-cased infrastructure just for it to even function. Things are much more stable in release branches. If that causes you pain because you want the latest and greatest, it's worth considering that you'd experience the same problem with other package repositories (e.g. Debian), and then asking yourself what it is you are actually trying to accomplish. There's a reason they call it unstable.

> but if you squint and reason that mise and nix solve the same issue, why not use the less opinionated, easier to reason about mise?

If mise works for you then great, use it. When I squint and reason, they do not solve the same issue. I don't know how you come to the same conclusion either. Why are you using nix-darwin at all? What is the overlap between nix-darwin and mise? I don't see it.

If all you want is dev environments, I recommend flox.

At the end of the day I'll continue using nix, and especially nix-darwin, _solely_ because it let me set up a new machine in under 5 minutes and hit the ground running. Nothing else compares.


They do, and apparently the scale of the repo is actively breaking things: https://discourse.nixos.org/t/nixpkgs-core-team-update-2025-...


This is all great feedback, thanks!

I got here through devenv, I was fully bought in on its proposal and once I found its edges I started peeking under the covers to understand how it worked.

At that point I was pretty deep into mise for everything that wasn’t using devenv. This perhaps helps frame why I see them as solving the same problem.

I definitely had my “aha!” moment and ditched mise because nix seemed to have solved my problems. But now, in a new gig, I’m running into lots of edge cases that mise could solve at the drop of a hat and nix (/ my poor understanding of the fundamentals) struggles with.

So, with that all said, I suppose my point is that you get a lot of overlap between the two, and mise is easier to use and get buy-in on. There are certainly elements I find appealing about nix which mise doesn’t touch (promise of repeatable builds, the entire package ecosystem, etc), however.


mise will be a better mise than nix will. You should use mise.

Especially because installing Nix is still a pain for most users.


Furthermore it's incredibly convenient to mentally cache volumes like "10mL for this one, 24mL for that one" for ~6-12 months at a time.


Silos are so much worse though. Open communication/collaboration is great but needs to be rate limited to enable focused work.


That’s why we have the daily and other “ceremonies”.

But if I got $1 for every comment nagging about daily standups, I would be rich.

I guess the ones nagging about the daily would rather be interrupted every 30 minutes.

Sometimes there is this guy. We had a guy who would come over to me with a “quick question” before I had even taken my jacket off in the office. FFS, we have the daily for that.

Want to have cake and wish someone a happy birthday, or just got back from vacay and want to tell all your amazing stories? FFS, do it after the daily, not the moment you come in, and not with every person separately.

You have a question / celebration / fluff? Do it after the daily meeting.


I'm doing something very similar but even simpler and Gemini 3 is absolutely crushing it. I tried to do this with other models in the past, but it never really felt productive.

I don't even generate diffs, just full files (though I try and keep them small) and my success rate is probably close to 80% one-shotting very complex coding tasks that would take me days.


The problem without diffs is that it overwrites your changes, which gets really old and annoying.

Earlier models couldn't generate diffs and I had to generate them myself, which was janky since sometimes the model would produce unmergeable code.


This behavior has practically nothing to do with Labradors. Many, many dogs regardless of breed can do this. Cats too. And foxes and wolves and rats and... well, pretty much all quadrupeds with reasonably sized limbs relative to their body. You might notice it's more or less the same motion as walking. Animals that drown usually do so from exhaustion, not because they can't keep their head above water.

Primates are relatively unique in their complete lack of innate swimming abilities.


> Primates are relatively unique in their complete lack of innate swimming abilities.

Human babies can swim, so maybe it's more that the ability is initially innate and then gets lost. Though they won't be able to keep their head above water by default, if that's what you meant (they can be trained to as toddlers). But I'm talking about swimming on the umbilical in water births, etc., showing that there isn't a complete lack of innate swimming ability.


Yes, while these motor reflexes are not innate, autonomic responses remain. Search for the "mammalian diving reflex".


Is it "primates" or is it the strange semi/erect limb attachment that primates have?

