OpenBSD Innovations (openbsd.org)
163 points by vogon_laureate on July 14, 2023 | 63 comments


Even if you're not an OpenBSD user, it's always good to see just what sort of innovations they (and the other BSDs) are regularly coming up with to solve all sorts of operating system problems, audit code, and improve security and performance.


It was discouraging to realise that I don’t know anyone who has even tried a BSD.


We ran FreeBSD on a few racks full of highly used web servers (and some auxiliary servers) from 2000-2012.

I liked it because it was rock solid and configurations were simple and straightforward. It also made some things, like read-only bind mounts, easier than Linux did at the time.

However, we finally gave up because the update/upgrade process was a big pain compared to Debian, which made it harder to schedule updates, and because it was hard to find people familiar enough with it to take anything off my plate.

Now we only use it for firewalls. Everything else is on Debian Linux, which is slowly getting more annoyingly opaque with stuff like systemd infusing magic and binary-format logs into things that used to be easy to debug. Who knows, maybe we'll be begging it to come back into our arms again one day.


Binary upgrades have made all that a lot less painful.


One of my biggest knowledge gaps was networking, so many years ago I bought a little single board computer and committed to learn OpenBSD & roll my own router.

I learned a ton, and definitely recommend this as the next step for someone who installed a BSD in a VM and is intrigued.


That sounds similar to my "origin story", I managed to get an older PC from a gamer friend who had replaced it and had been too lazy to get rid of it. I installed NetBSD on it after I failed to get my ISDN card to work on FreeBSD and set it up as a dial-on-demand router. And I also tried out a lot of other stuff related to system administration and networking - Apache, Squid, BIND. A few years later, I inherited an old SparcStation 20 and set up diskless boot to run NetBSD on that as well. Fun times, I can highly recommend something like this to anyone new to IP networking and Unix administration.


This is on my list of projects I want to try. Can you recommend a hardware platform to start from?

Edit: hardware for building a router that is.


Odroid H3 is pretty good. Bought one myself as my (yet to be fully operational) home router.

It's small and quiet, has two Ethernet ports for WAN and LAN, and you can plug in an official USB Wi-Fi dongle add-on and it's good to go as a router.

You need to pick a few add-ons for the bare machine, like memory (I went with 8GB to run many containers), storage (I went with an M.2 SSD instead of eMMC), and a case.

It's x86, so you get maximum compatibility and don't have to worry about architectural differences.

https://www.hardkernel.com/shop/odroid-h3/


Unfortunately, the APU series from PCEngines was recently EOL'd, but they might still have some stuff available in the shop: https://www.pcengines.ch/

Throwing a NIC in an old box works well, but any board with 2 or more network connections will suffice.


As mentioned, the APU2s were great but are EOL now. They also didn’t push line rate when I upgraded to gigabit Ethernet.

I just went to a local computer shop and picked up an HP desktop that was likely off a business lease, then tossed another NIC in. It works a charm and routes gigabit just fine.


My personal suggestion would be anything you have lying around your place that has supported graphics. Graphics are always the big end-user pain point for any OS.


Oh I meant for building a router


Protectli [0] has a bunch of systems that should cover a wide range of prices and network needs. I've not personally tried OpenBSD on them, but I see nothing that should cause any problems. They're also small, fanless systems, which I really appreciate, and they're usable as general-purpose machines, so you're not limited to simple networking.

[0] https://protectli.com/product-comparison/


A router doesn't need a ton of power. My router is still a single core. There hasn't been a need to update it because my internet connection is only 50 megabit. If you have a lot of users and heavy traffic, then you will need more, but if it's just you and the WAN, you really don't need much at all.


You know, 32-core NIB EPYC 7551s are $100 on eBay and would make a very nice router with the right motherboard: https://www.ebay.com/itm/394632582514


Honestly, any off the shelf or eBay x86 mini-ITX board is fairly well supported. Probably stay away from Atom CPUs (Celeron or i3 is fine) and any exotic hardware, use whatever ITX case you want and a PicoPSU, stuff it with a 2-4 port NIC and you're good to go.


I’m interested in the same thing. Also, what resources did OP use for learning?


The thing about the BSDs, and especially OpenBSD, is that their man pages are phenomenally good. Documentation on their respective websites covers most of the rest of what you need. If you want to really nerd out, I highly recommend all of the No Starch Press BSD books by Michael W. Lucas (the Absolute BSD books are a good place to start), but they are entirely supplemental.


“OpenBSD’s man pages are good” is a meme for a reason:

https://man.openbsd.org/pf.conf.5

That + https://www.openbsd.org/faq/pf/example1.html were more than enough to get me going.
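For a sense of what those docs get you to, a bare-bones home-router pf.conf ends up looking roughly like this (a sketch only; em0/em1 and the addressing are placeholders, so check pf.conf(5) and the FAQ before using anything like it):

    # placeholder interface names: em0 = WAN, em1 = LAN
    ext_if="em0"
    int_if="em1"

    set skip on lo

    # NAT everything from the LAN out the WAN interface
    match out on $ext_if from $int_if:network to any nat-to ($ext_if)

    # default deny, then allow LAN clients in and all outbound traffic
    block all
    pass in on $int_if from $int_if:network to any
    pass out on $ext_if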


As the other poster said, the man pages are great, but OpenBSD's homegrown daemons are also all very similar in style: a daemon ${service}d and a control interface ${service}ctl (e.g. ntpd/ntpctl, ripd/ripctl, ospfd/ospfctl, relayd/relayctl, pf/pfctl). The control interfaces are all designed to work similarly to each other, so they are very intuitive once you learn one. It's also worth noting that most services have separate manual pages for the daemon, the control interface, and the configuration file. I've also noticed that many tools are very similar to their Cisco IOS equivalents.
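A few examples of how uniform those control tools feel (assuming the relevant daemons are enabled; exact output varies by release):

    ntpctl -s all            # NTP peer and clock status
    ospfctl show neighbor    # OSPF adjacencies
    relayctl show summary    # relayd hosts and tables
    pfctl -s rules           # currently loaded pf ruleset
    pfctl -s states          # current state table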


I decided to try it out on my laptop sometime in 2020. I really enjoyed using it in the terminal, but found it less usable in desktop environments (I used GNOME Shell). It was certainly usable, but less refined than I had hoped.

Then I installed FreeBSD on an ARM-based SBC I use as a server and NAS. It has been a joy to use there. Basically rock solid, and the scheduler seems to prioritize interactive processes so that even under heavy load I can quickly diagnose what’s going on. I now prefer it to Linux in shell environments. I also have OpenBSD on a ThinkPad laptop, which has been fun.


If your aesthetics for desktop use are at all influenced by 90s Unix-like workstations, it doesn't take much to be comfortable on the BSDs.


BSDs may not have a significant presence on desktops, but they're well known in the networking world for their reliability. They have also been the foundation for building OSes aimed at specific applications: OPNsense and XigmaNAS, for example, are two excellent FreeBSD-based systems aimed at firewalling/security and NAS/services, respectively.

https://opnsense.org/

https://xigmanas.com/xnaswp/


Isn't OPNsense a fork of pfSense, which itself derives from FreeBSD?


Sorta. The original firewall distribution derived from FreeBSD was m0n0wall; when its development stopped it was forked into pfSense, then pfSense was forked into OPNsense, and the two projects are now completely separate things.

As for which one to use, Manuel Kasper, the original m0n0wall author, encourages people to use and contribute to OPNsense, which is also the one I would personally choose after [0] happened.

0: https://opnsense.org/opnsense-com/


As a sysadmin, they are 100% better in a server environment. Desktop usage takes more tweaking and time to get working, but it will work eventually. I used OpenBSD and FreeBSD as my main OS on my Dell Latitude until I found NixOS.


I'm the only OpenBSD user I know in person. But I'm okay with that because it's my favorite OS and makes me happy. :)


I've tried it, including running it as a VM for a while with the hopes of using it more, feeling like I was missing out on something. It is different than what I am used to with *nixes, and for my interests and efforts, I just haven't had the time to sit and make it work for my needs as well as something like Debian does out of the box. If you use macOS, you're using a cousin of BSD, if that scratches an itch.


This could mean that you need to expand your social circle. To be clear, I'm not saying it's common, just that it might benefit you.


Or maybe I should speak about it at local meetups.


You can always give it a try yourself :)


Oh yeah, I used to run FreeBSD on desktop and now I play with servers. I have big gaps in networking and it’s a problem. I’m eager to learn though.


They run great in VMs too if you don’t want to commit to running them on bare metal hardware just yet.


CARP is probably my favorite innovation from OpenBSD. It is a direct competitor to the Cisco patent-encumbered VRRP, and I find it easier to use than Linux's keepalived (I'm assuming this came out post-VRRP licensing?).
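For anyone who hasn't used it: a CARP address is configured like any other interface on OpenBSD. A minimal sketch (addresses, vhid, password, and advskew values are all placeholders; see carp(4) and the PF FAQ):

    # /etc/hostname.carp0 on the master firewall
    inet 192.168.1.1 255.255.255.0 192.168.1.255 vhid 1 carpdev em1 pass mysecret advskew 0

    # same line on the backup, but with a higher advskew so it loses the election
    # inet 192.168.1.1 255.255.255.0 192.168.1.255 vhid 1 carpdev em1 pass mysecret advskew 100

LAN clients just point their default gateway at the shared 192.168.1.1, and whichever box is currently master answers for it.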



No way, that was great! Thank you for sharing.

> This is a Cisco HSRP patent document with the word "Cisco" crossed out and the word "IETF" written in crayon.

That one got me.


I see this when poking around on routers and such, but I always wonder if it is useful or common to see for consumers. Is this something that only ISPs will implement? Or is it closer to the acceptable self hosted solutions like running pihole? It seems only useful for like 0.1% of the year if something goes very wrong? I don't think I have understood it enough to know why I might even be interested in using it.


I'm an OG OpenBSD user (literal 386 firewall) so I've been using it at home since before CARP was released.

When the first CARP release hit I immediately set it up on a pair of SUN Ultra1 pizza boxes I had gotten on eBay after the .com crash (with a third cold-spare) and ran that way for years. My ISP even called me at one point to find out what "those weird mac addresses" were on the SUN hardware.

They, of course, ended up being too power hungry and I moved to PC-Engines Alix boxes. When I wanted more horsepower as my internet speed increased, I moved first to a pair of PC-Engines APU boxes, and now to one APU and one virtualized OpenBSD firewall.

CARP has always been rock solid throughout, both on the internal and external interfaces of my firewalls. Rolling reboots for patches, updates, OS upgrades or hardware failures are a non-issue. No one in my family ever notices. Combine that with multi-homed ISPs and the internet at my house is more solid than a lot of enterprises and there's no expensive hardware or software involved.

I guess that was a long-winded way to say: Home consumers CAN benefit from high availability! It just isn't packaged in an easy to use or cheap enough form factor for them.


High availability isn't 'useful' or 'common' to consumers [home equipment]; consumer-class ISP services would generally be incompatible with highly available routers (dual WAN/ISP on a single router is going to be more common).

I used CARP for HAProxy in lab environments, but that is as 'close to home' as it got.

CARP is useful any time you need to take down a service that's sitting behind it. Updating an HAProxy box, for example: fail over or disable the node you'll be performing maintenance on, which is more than 0.1% of the year.
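On OpenBSD that planned failover is basically a one-liner; you bump the demotion counter on the box you're about to touch and its peer takes over (a sketch, not gospel; see ifconfig(8) and carp(4)):

    # on the node going down for maintenance: demote every carp interface
    ifconfig -g carp carpdemote 50

    # ...patch, reboot, whatever...

    # once it's healthy again, remove the demotion so it can become master again
    ifconfig -g carp -carpdemote 50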


For what it's worth, I run into weird CARP issues every now and then with 7.1. Stuff like both machines thinking they're master, or a down machine not triggering failover in the secondary. Always seems to be worse with the multiprocessor kernel.

It's one of those problems that don't happen often enough for me to really spend time debugging, but is annoying nonetheless.


~5 years ago I decided to use keepalived as a simpler high-availability solution. It's got a lot of weirdness that I've had to work through and around, largely related to services running on the nodes. In the end my conclusion was that it wasn't really simpler than corosync+pacemaker, so I'm switching back.


One funny thing: CARP on the same L2 segment can cause strange behavior on VRRP nodes - they're that similar :p


One protocol has the patent bit set to 1 while the other is set to 0.


Yep, they're the same protocol number, 112, so they conflict. In things like Wireshark, you'll want to switch to the CARP dissector.
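Same deal with tcpdump: on the wire the two are indistinguishable by protocol number alone, so a capture filter like this (interface name is a placeholder) grabs both CARP and VRRP announcements:

    tcpdump -ni em0 ip proto 112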


The OpenBSD project tried to get a protocol number for CARP, but IANA made it more difficult than the project was able to comply with, so they made an executive decision to use the VRRP protocol number as the least wrong option.

Long story short: pay extra close attention when mixing VRRP and CARP on the same network segment.


Um. No. You can't blame IANA for this. See:

https://queue.acm.org/detail.cfm?id=2090149

Key paragraph:

"The OpenBSD team, led as always by their Glorious Leader (their words, not mine), decided that a RAND license just wasn't free enough for them. They wrote their own protocol, which was completely incompatible with VRRP. Well, you say, that's not so bad; that's competition, and we all know that competition is good and brings better products, and it's the glorious triumph of Capitalism. But there is one last little nit to this story. The new protocol dubbed CARP (Common Address Redundancy Protocol) uses the exact same IP number as VRRP (112). Most people, and KV includes himself in this group, think this was a jerk move. "Why would they do this?" I hear you cry. Well, it turns out that they believe themselves to be in a war with the enemies of open source, as well as with those opposed to motherhood and apple pie. Stomping on the same protocol number was, in their minds, a strike against their enemies and all for the good. Of course, it makes operating devices with both protocols in the same network difficult, and it makes debugging the software that implements the protocol nearly impossible."


It is hard to say; I am not involved in either project. CARP was definitely created in response to perceived deficiencies (both technical and political) in VRRP. I agree it does sound like they picked the same number out of not a little spite. However, the OpenBSD project has this to say about picking the protocol number:

"As a final note of course, when we petitioned IANA, the IETF body regulating "official" internet protocol numbers, to give us numbers for CARP and pfsync our request was denied. Apparently we had failed to go through an official standards organization. Consequently we were forced to choose a protocol number which would not conflict with anything else of value, and decided to place CARP at IP protocol 112. We also placed pfsync at an open and unused number. We informed IANA of these decisions, but they declined to reply."

https://www.openbsd.org/lyrics.html#35

Obviously the correct thing to do is to get numbers via IANA, but what is the least wrong thing to do when your project is too small to do this? Camp on unused numbers, hoping they will eventually be granted if your project is successful enough? Use whatever number matches the closest fit? Pick some screwball assignment that failed to gain any actual use?


But why are the VRRP proponents so opposed to apple pie? We should dig deeper.


ucarp is CARP on Linux. It's what I use on Linux.


Is this still maintained? The pureftpd.org link is dead and the GitHub repo is archived.

https://github.com/jedisct1/UCarp


It does look like ucarp doesn't have a current maintainer unfortunately.


The impact of AnonCVS is probably understated. GitHub was 2008. Remember SourceForge? It's easy to forget how casual browsing of commit history just wasn't a thing.


I likewise remember maintaining svnsync mirrors to have locally browsable Subversion history. Not only was it faster, but it also meant I could work offline. At least until I had to sync up my local commits to a remote server - oh boy, that's a pain I had long forgotten.

There are very good reasons Git won; this is among them.


> This is a list of software and ideas developed or maintained by the OpenBSD project

(Emphasis mine.)


So many good open source operating systems out there, yet nearly all suffer from a lack of drivers. I think there should be legislation forcing hardware manufacturers to open source all of the code needed to run their products. Not doing so prevents users from using the hardware as they see fit and leaves them limited to low-quality products such as Windows.

Imagine if, as a result, all hardware were compatible with all OSes. Suddenly OpenBSD, FreeBSD, and Linux distros would be dominating the desktop.


That random relinking at boot is such a simple and elegant way to make things harder for an attacker. I would like to think I could have come up with it myself.


When memory safe programming language?


Here's Theo's response on integrating safe languages into openbsd, circa 2017: https://marc.info/?l=openbsd-misc&m=151233345723889&w=2

To summarize: he mentioned a lack of base POSIX-compliant utilities written in these languages, that changes like this take a long time (the stack protector took 10 years to implement), that these new toolchains would dramatically increase build times (compare Haskell's cgrep to OpenBSD's grep), and that OpenBSD requires that base can build base on all platforms, while some of these languages won't work on some supported platforms (there wasn't enough address space on i386 for Rust to compile itself, for example).

I realize some of this may be dated by now, but I assume the above are the types of concerns they would want to address.


On OpenBSD? I don't think it will happen any time soon. The developers seem pretty happy with C and instead enforce a strict code style, commit reviews, and auditing. There was a user on the lists a while back also running some static analysis and submitting bugs (moon-something, sorry, it was years ago).


Conceivably when the current team is retired and a different group of people with a different set of priorities takes over.


You say innovations, I say counterproductive bullshit. The idea to write a "secure" operating system on the basis of a) C, the single worst and most actively anti-secure language in the history of computing b) unix with its ambient authority/everything is global and myriad of other design flaws is just fucking crazy and about as helpful as efforts to "fix" cattle slavery by tinkering with the design of whips and chains to make them more humane.

So which of these layers upon layers of crap actually really, fundamentally fixes any of the core underlying problems (like using, in 2023, a memory unsafe language full of UB footguns to implement network facing services and not having the semblance of a proper security model)? Rather than just making them incrementally harder to exploit at the cost of ever increasing complexity and overheads that also get imposed on saner technologies which do not suffer from these problems in the first place?

How much faster (never mind secure, simple and robust) would our core computing infrastructure actually be if not everything was organized around a quixotic quest to partially mitigate some insane design decisions of C and the fact we're using a half-century outdated OS design?


I rather like OpenBSD. I find its simplicity and modularity to be a positive, and personally I find it much easier to secure a system that I fully understand. Plus, its liberal license and open source nature make it easy for anyone to fully audit the entire system.

And if you don't like OpenBSD, you can use something more secure, made with modern safe language or whatever it is you expect an OS to be.


what's cattle slavery? we talkin dairy cows or what?



