
Unfortunately, this board seems to be using the CIX CPU that has power management issues:

> 15W at idle, which is fairly high


I have a pre-Ryzen AMD thin client that draws at most 10W under load and 5W at idle. A lot less CPU power, but still: 15W for a modern SBC is a joke.


For comparison, an N150 mini PC uses around 6 watts at idle.


And here I was thinking the Pi 5, which idles at 3W, was unreasonably high.


The AGPL does not prevent offering the software as a service. It's got a reputation as the GPL variant for an open-core business model, but it really isn't that.

Most companies trying to sell open-source software probably lose more business if the software ends up in the Debian/Ubuntu repository (and the packaging/system integration is not completely abysmal) than if some cloud provider starts offering it as a service.


These days, people solve similar problems by wrapping their data in an OCI container image and distributing it through one of the container registries that do not have a practically meaningful pull rate limit. Not really a joke, unfortunately.


Even Amazon encourages this, probably not intentionally (more as a band-aid for EKS misconfigurations people can create by mistake), but still: you can pull 5 terabytes from ECR for free each month under their free tier.


I'd say it's just that Kubernetes in general should've shipped with a storage engine and an installation mechanism.

RKE2's distributed internal registry feels like a very hacky add-on: it only works if you enable it and use it in a very specific way.

Given how much people love just shipping a Helm chart, it's actually absurdly hard to ship a self-contained installation that doesn't try to hit internet resources.


People have tried, and so far, achieving safety through trusted compilers and (fairly complicated) run-time support has been much more efficient. A small team could probably design a RISC-V CPU with extensions for hardware-assisted bounds checking and garbage collection, but any real CPU that they could build would likely have performance levels typical of research-oriented RISC-V CPUs. Doing the same thing in software on a contemporary commercially established CPU is going to be much, much faster.


See, that's the problem. Unless this is government mandated, no sane vendor is going to pay the performance penalty.

> Doing the same thing in software on a contemporary commercially established CPU is going to be much, much faster.

In what sense? Do you know if there's been proper research done in this area? Surely implementing the bounds checking / permissions would be faster in hardware.


I'm worried that if memory tagging becomes mandatory, it sucks the air out of the room for solutions that might have a more long-lasting impact. Keep in mind that memory tagging is just heuristics beyond very specific bug scenarios (linear buffer overflows are the prime example). The whole thing does not seem fundamentally resistant to future adaptations of exploitation techniques. (Although oddly enough, I have been working on memory tagging lately.)

Regarding performant implementations of capability architectures, Fil-C running on modern CPUs is eventually going to overtake Arm's Morello reference board because it doesn't look like there's going to be a successor to the board. Morello was based on Arm's Neoverse-N1 core and produced using TSMC's N7 process. It was a research project, but it's really an outlier because such projects hardly ever have access to these kinds of resources (both CPU IP and tape-out on a previous-generation process). It seems all other implementations of CHERI are FPGA-based.


I find it strange that this web site completely ignores the Java ecosystem, which offers memory-safe implementations for most of the protocols and services listed.


Java does fine on memory safety, but it does not do great on null safety (and the overall invariant protection / "make invalid states unrepresentable" ethos), has difficult-to-harden concurrency primitives, and won't be adopted in many scenarios due to runtime cost and performance pitfalls. Future Valhalla work fixes some of these issues, but leaves many things spiky.


I dislike Java's abstraction-through-indirection approach, which is related to the non-representable invalid states you mention. But I think it's more of a matter of taste.

Somewhat controversially, I think Java is actually doing fine on null safety: it uses the same approach for it as it does for array index safety. The latter is a problem for any language with arrays without dependent types: out-of-bounds accesses (if detectable at all) result in exceptions (often named differently because exceptions are controversial).

Java's advantage here is that it doesn't pretend that it doesn't have exceptions. I think it's quite rare to take down a service because handling a specific request resulted in an exception. Catching exceptions, logging them, and continuing seems to be rather common. It's not like Rust and Go, where unexpected panics in libraries are often treated as security vulnerabilities because panics are expected to take down entire services, instead of just stopping processing of the current request.
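
A minimal sketch of that pattern in Java (Request, nextRequest, and handle are made-up stand-ins for a real server's types):

```java
// "Log and continue": an exception in one request does not take
// down the whole service, only that request's processing.
final class Server {
    record Request(String payload) {}

    void serve() {
        while (true) {
            Request req = nextRequest();
            try {
                handle(req);
            } catch (RuntimeException e) {
                // Includes NullPointerException and
                // ArrayIndexOutOfBoundsException.
                System.err.println("request failed: " + e);
            }
        }
    }

    Request nextRequest() { return new Request("..."); }

    void handle(Request req) { /* application logic; may throw */ }
}
```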


I'm not talking about null safety in the sense of null pointers. Null pointers and out-of-bounds pointers are still in the realm of memory safety, which of course Java has solved for the most part.

Proper null safety (sometimes called void safety) means systematically eliminating null values, using the type system to force a path of either handling the missing value or explicitly crashing. This is what many newer expressive multi-paradigm languages have been able to achieve (and something functional programming languages have been doing for ages), but it remains out of reach for Java. Java does throw an exception on an errant null value access, but it allows the programmer to forget to handle it by making it a `RuntimeException`, and by the time you might try to handle it, you've lost all of the semantics of what went wrong: what value was actually missing and what a missing value truly means in the domain.

> Catching exceptions, logging them, and continuing seems to be rather common. It's not like Rust and Go, where unexpected panics in libraries are often treated as security vulnerabilities because panics are expected to take down entire services, instead of just stopping processing of the current request.

Comparing exceptions to panics is a category error. Rust for example has great facilities for bubbling up errors as values. Part of why you want to avoid panicking so much is that you don't need to do it, because it is just as easy to create structured errors that can be ignored by the consumer if needed. Java exceptions should be compared to how errors are actually handled in Rust code, it turns out they end up being fairly similar in what you get out of it.


Java introduced Optional to remove nulls. It also introduced a bunch of things to make it behave like functional languages. You can use records for immutable data, sealed interfaces for domain states, you can switch on the sealed interface for pattern matching, use the sealed interfaces + consumers or a command pattern to remove exception handling and have errors as values.


Using an instance of a sealed class in a switch expression also has the nice property that the compiler will produce an error if the cases are incomplete (and as such there's also no need for a default case). So a good case for the "make invalid states unrepresentable" argument.
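
A small sketch of both points, using only standard Java 21 features (the ParseResult names are made up):

```java
// Errors as values: a sealed hierarchy enumerates every outcome.
sealed interface ParseResult permits Parsed, Invalid {}
record Parsed(int value) implements ParseResult {}
record Invalid(String reason) implements ParseResult {}

class Parser {
    static ParseResult parse(String s) {
        try {
            return new Parsed(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return new Invalid("not a number: " + s);
        }
    }

    static String describe(ParseResult r) {
        // Exhaustive switch: no default branch needed, and adding a
        // new ParseResult case turns this into a compile error.
        return switch (r) {
            case Parsed p -> "parsed " + p.value();
            case Invalid i -> "invalid: " + i.reason();
        };
    }
}
```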


I understood what you meant. I just disagree about priorities. Conceptually, every array access (absent dependent types) can produce a null value because the index might be out of bounds. Languages that eliminate null values in other areas typically fail to deal with the array indexing issue at the type level, which seems at least as prevalent in real-world code as null pointer dereferences, if not more so.

Regarding the category error, on many platforms, Rust panics use the same underlying implementation mechanism as C++ exceptions. In general, Rust library code is expected to be panic-safe. Some well-known Rust tools use panics for control flow (in the same way one would abuse exceptions). The standard test framework depends on recoverable panics, if I recall correctly. The Rust language gives exceptions a different name and does not provide convenient syntax for handling them, but it still has to deal with the baggage associated with them, and so do Rust library authors who do not want to place restrictions on how their code is reused. It's not necessarily a bad approach, to be clear: avoiding out-of-bounds indexing errors completely is hard.


unwrap


That's not a reason for this page to ignore the Java ecosystem, which fits extremely well with Prossimo's mission.


There's nothing stopping you from writing code that is completely functional and devoid of nulls these days. It's just that Java obviously still allows nulls if someone needs to use them (partly for interoperability with legacy code).

But if you're going to argue about the mere presence of null being problematic, you might as well complain about the ability to use "unsafe" code in Rust too.


It mentions Java,

> Memory safe languages include Rust, Go, C#, Java, Swift, Python, and JavaScript. Languages that are not memory safe include C, C++, and assembly.

https://www.memorysafety.org/docs/memory-safety/


Can I use the Java implementations in another language without significant headache?


It's possible in practice (at least more so than with Go), but it's highly unusual. Back when free Java became a thing, I used it at first to obtain a memory-safe TLS implementation. It worked out well, I think, but there is a strong tendency for the JVM to become the trunk of your application that holds everything together.


> at least more so than with Go

It's actually quite easy to create C bindings for a Go library, using CGo and -buildmode=c-shared.

I'm not sure what effect the Go runtime has on the overall application, but it doesn't seem like it would be "less possible" than with Java.


Depends on what is done; GraalVM supports native libraries.

https://www.graalvm.org/latest/reference-manual/native-image...

Then again, many times OS IPC is a much better option.
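
A minimal sketch of the native-library route (the entry point name and build command are illustrative; @CEntryPoint is part of GraalVM's native-image API):

```java
import org.graalvm.nativeimage.IsolateThread;
import org.graalvm.nativeimage.c.function.CEntryPoint;

// Build with something like: native-image --shared -o libdemo
// native-image then emits libdemo.so plus a C header declaring demo_add.
public final class Demo {
    // Exported C symbol; the IsolateThread parameter is required by
    // the API and identifies the attached runtime context.
    @CEntryPoint(name = "demo_add")
    static int add(IsolateThread thread, int a, int b) {
        return a + b;
    }
}
```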


These approaches can only detect linear overflows deterministically. Use-after-frees (temporal safety violations) are only detected with some probability. It's mostly a debugging tool. And MTE requires special firmware, which is usually not available in the cloud because the tag memory reservation is a boot-time decision.


Still better than the status quo on most systems.

It is kind of interesting how all attempts to improve security run into arguments akin to questioning the usefulness of seatbelts because people still die wearing them.


At a certain point, it's a trade-off. A systems language will offer facilities that can be used to break encapsulation and abstractions, and to access memory as a sequence of bytes. (Anything capable of file I/O on stock Linux can write to /proc/self/mem, for example.) The difference from (typical) C and C++ is that these facilities are less likely to be invoked by accident.
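
To make the /proc/self/mem point concrete, a sketch in Java (the address is a placeholder; a real write needs a mapped, writable address and will usually crash or corrupt the process):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Ordinary file I/O is enough to overwrite this process's own
// memory on stock Linux; no unsafe language features required.
class ProcSelfMem {
    static void poke(long virtualAddress, byte value) throws IOException {
        try (RandomAccessFile mem = new RandomAccessFile("/proc/self/mem", "rw")) {
            mem.seek(virtualAddress); // file offset == virtual address
            mem.write(value);         // bypasses all language-level checks
        }
    }
}
```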

Reasonable people will disagree about what memory safety (and type safety) mean to them. Personally, I find bounds checking for arrays and strings, some solution for safe deallocation of memory, and an obviously correct way to write manual bounds checks more interesting than (for example) no access to machine addresses and no FFI.

Regarding bounds checking, GNAT offers some interesting (non-standard) options: https://gcc.gnu.org/onlinedocs/gnat_ugn/Management-of-Overfl... Basically, you can write a bounds check in the most natural way, and the compiler will evaluate the check with infinite precision (or almost, to improve performance). With the standard semantics, you might end up with an exception in some corner cases where the check should pass. I wish more languages would offer something like this. Among widely used languages, only Python offers this capability, because it uses infinite-precision integers.
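
The same pitfall and workaround can be sketched in Java (the method names are mine; Math.addExact is standard):

```java
class Bounds {
    // The natural way to write the check: offset + len can overflow
    // int and wrap negative, so the check passes when it should fail.
    static boolean naive(int offset, int len, int size) {
        return offset + len <= size;
    }

    // Widening to long is a manual version of what GNAT does when it
    // evaluates the check with (near-)infinite precision: the sum of
    // two ints cannot overflow a long.
    static boolean widened(int offset, int len, int size) {
        return (long) offset + len <= size;
    }

    // Alternatively, make the overflow loud instead of silent.
    static boolean checked(int offset, int len, int size) {
        return Math.addExact(offset, len) <= size;
    }
}
```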


The standard does not assign meaning to this sequence of execution, so an implementation can detect this and abort. This is not just hypothetical: existing implementations with pointer capabilities (Fil-C, CHERI targets, possibly even compilers for IBM i) already do this. Of course, such C implementations are not widely used.

The union example is not particularly problematic in this regard. Much more challenging is pointer arithmetic through uintptr_t because it's quite common. It's probably still solvable, but at a certain point, changing the sources becomes easier, even at scale (say, if something uses the %p format specifier with sprintf/sscanf).


> The standard does not assign meaning to this sequence of execution, so an implementation can detect this and abort.

Real C programs use these kinds of unions, and real C compilers ascribe bitcast semantics to this union. LLVM has a lot of heavy machinery to make sure that the programmer gets exactly what they expected here.

The spec is brain damage. You should ignore it if you want to be able to reason about C.

> This is not just hypothetical: existing implementations with pointer capabilities (Fil-C, CHERI targets, possibly even compilers for IBM i) already do this

Fil-C does not abort when you use this union. You get memory safe semantics:

- you can use `i` to change the pointer’s intval. But the capability can’t be changed that way. So if you make a mistake you’ll end up with an OOB pointer.

- you can use `i` to read the pointer’s current intval, just as if you had done a ptrtoint cast.

I think CHERI also does not abort on the union itself. I think storing to `i` removes the capability bit so `p` crashes on deref.

> The union example is not particularly problematic in this regard. Much more challenging is pointer arithmetic through uintptr_t because it's quite common.

The union problem is one of the reasons why C is not memory safe, because C compilers give unions the expected structured assembly semantics, not whatever nonsense is in the spec.


I'm not involved in Go development, only watching from the sidelines. I think it's very likely, due to the project dynamics, that after the first (published) exploit against real software, the compiler will be changed so that low-level data races can no longer result in type confusion. There will be some overhead, but it's going to be quite modest. I think this is realistic because there's already a garbage collector. Indirection to fresh heap allocations can be used to make writes to multiple fields appear atomic.

So I think Go is absolutely not in the same bucket as C, C++, or unsafe Rust.
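
A sketch of that indirection trick, translated into Java (Slice is a made-up stand-in for a Go multi-word header such as a slice or interface value):

```java
import java.util.concurrent.atomic.AtomicReference;

// A two-field value. In Go, racing unsynchronized writes to the two
// fields can tear, which is what enables race-driven type confusion.
record Slice(Object data, int len) {}

class Publisher {
    // Publishing a fresh immutable object through one reference makes
    // both fields appear to change atomically: readers observe either
    // the old pair or the new pair, never a mix of the two.
    private final AtomicReference<Slice> current =
            new AtomicReference<>(new Slice(null, 0));

    void update(Object data, int len) {
        current.set(new Slice(data, len)); // a single atomic pointer store
    }

    Slice snapshot() {
        return current.get();
    }
}
```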


Did it involve bitfields? GCC is notoriously bad at optimizing them. There are some target-specific optimizations, but pretty much nothing in the middle-end.


It did, yes. On an architecture without bit field extracts.

