
For IoT in particular, you hit a crossroads where the embedded devs haven't really dealt with advanced security concepts so you kinda have to micromanage the implementation. And, in small teams it's hard to justify the overhead of managing x509 certs and all the processes that come along with it. Just my personal experience.


Surely white lists and certificate pinning are not advanced security concepts?


Yes, they are.

Perhaps not in a practical or educational sense, but in the real world of people whose jobs aren't cryptography or security, a certificate is a PITA that goes beyond the functional requirements.

I have seen many insecure building automation systems that are maintained by reclassified HVAC technicians. The movies about hackers taking over an elevator are entirely accurate.
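
To make the disagreement concrete, here is a minimal sketch of certificate pinning in Python's standard ssl module (the same library used in the example further down the thread); the hostname and fingerprint are placeholders, and a real deployment would also need a backup pin and a rotation plan:

    import hashlib
    import socket
    import ssl

    # Placeholder values: your device's backend host and the SHA-256
    # fingerprint (hex) of the DER-encoded certificate you intend to pin.
    HOST = "iot.example.com"
    PINNED_SHA256 = "0123abcd..."

    def connect_with_pin(host: str = HOST, port: int = 443) -> ssl.SSLSocket:
        # Normal verification first: chain to a trusted CA plus hostname check.
        ctx = ssl.create_default_context()
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        # Then the pin: hash the presented certificate and compare.
        der_cert = sock.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        if fingerprint != PINNED_SHA256:
            sock.close()
            raise ssl.SSLError("peer certificate does not match pinned fingerprint")
        return sock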


The hassles of cert pinning, etc. should not be laid at the feet of the customer/integrator/whatever. Regardless of whether that person is an HVAC tech who learned about serial ports & busybox yesterday or is a seasoned expert with Ghidra & Wireshark & binwalk.

Companies are being incredibly lazy (at our expense), and the author states this obliquely:

>virtually the entire software landscape has been designed with the assumption of internet connectivity


The issue is the alternative does not scale.

It's not that companies are being lazy at our expense; it's that nobody wants to pick up the bill. If you write something to work against an online system, the fact it is online implies it adheres to some standard that you can work with, so solving the problem for one online client creates an artifact that is likely applicable to many clients.

Air-gapped systems drift. They get bespoke. They get very out of date. So you have the two practical problems of labor: (a) the product created solves the problem here, today, but nobody else benefits from repurposing that solution and (b) the developer isn't gaining as many transferrable skills for the next gig, and they know it, and so the developers who are willing to do the air-gapped work are harder to find and more expensive.

(I believe this is also the reason you see air-gap a lot more often in government security and banks: they can afford to retain talent past the current project with the certitude there will be more projects in the future).


The issue is the alternative does not scale.

That's a feature, not a bug.

Almost the entire downfall of the modern tech industry can be attributed to two things: greed, and the fetishization of "scale."

Not everything has to scale. Not everything should scale. Scale is too often used as an excuse to pinch pennies. If your business model only works at massive scale, then your business model might be broken. (Not always, but more often than most people think.)


How isn't (b) a contradiction? You're stating the demand is there, but the developer is not seeing it? Did you mean to say the opportunity to remain in the same kind of gig is not as profitable/career-advancing?


Basically. I mean the demand is there, but the developer recognizes a small island of architecture is a risk for long-term skill dev and wants compensation for that risk. For a developer to take the kind of gig that requires working bespoke air-gapped tech that sees few updates, they're going to want to be paid X+N over the median salary X (or have some guarantee of / expectation of job security).

It's a sucker's play to take the gig at price X, work on it for a year or two, and then get tossed to the curb when the project wraps with the only skills growth to show for it being a combination of those ineffable fundamentals ("everything Turing-complete is fundamentally equivalent") that are useful forever (but can be picked up on any job) and some knowledge of Bob's House of Air-Gapped Machines' circa-1997 Flash install that their in-house kiosk infrastructure ran on.

There are jobs that'll pay for that Flash experience, but they're a lot harder to find than if Bob's House had been using some modern web architecture and you'd picked up, say, AWS experience.


Hey, that's the problem with global homogenous culture too!


Embedded devs can come from a variety of backgrounds (e.g. Electrical engineering) that don't necessarily concern themselves with software security. They're not dumb, it just isn't something they (typically) are knowledgeable in.


Then they need to learn it. Otherwise they’re being unprofessional and bad at their job.


They were hired by a company which is bad at its job of delivering secure or securable products. The products were purchased by someone bad at their job of selecting secure products. They were deployed by someone who was told that having the signs working ASAP is more important than anything else, so the management is bad at their job of securing the company.

But I won't say that the designing engineer was bad at their job, I would say that the product manager was bad at their job... but probably got promoted, because the company made a bigger profit and delivered faster because security didn't get any attention.

And that's why we need regulation, because "this product is secure" is not easily and cheaply verifiable and carries no penalties for being incorrect. The market can't tell, so everything is a lemon.


Sounds like not taking responsibility to me.

And don’t get me wrong: I’ve had managers that made it impossible to do the actual development job well. But it’s still my responsibility to do my job well so I escalated that. Most times I caused changes to improve things. If not I quit the job.

Personal accountability doesn’t just evaporate when someone else passes on bad orders. It’s not a fun position to be in but I think if engineers in general actually take responsibility for their own work, and confront management if that’s the source of issues, then that would improve things.

If you let yourself be pushed around into doing subpar work for deadlines you’re just signalling that it’s ok.


The makers of PC BIOSes are arguably the firmware developers who are closest to being normal PC programmers. They've been at it for 40+ years, and they have long provided network-connected features like network boot and remote management.

And yet over 200 motherboards and laptops have their secure boot root-of-trust key set to a long-ago-leaked example key from a development kit, named "DO NOT TRUST - AMI Test PK" [1].

The firmware industry at large just ain't good at this stuff.

(Of course, from the perspective of the firmware industry, they can make a non-internet-connected heating timer or a washing machine control board that will work fine and reliably with no software updates for 25+ years - while we PC software cowboys make software so bad that crashes are just a fact of life, and bug fix/security updates are a daily occurrence. So the firmware industry isn't all bad - only when they start putting things onto the internet.)

[1] https://news.ycombinator.com/item?id=41071708


Well, they used to be able to do that. They seem to be starting to assume connectivity and getting as sloppy as everybody else at basic functionality. I have a new Bosch cooktop that has WiFi and has downloaded at least one software update. The accompanying oven (also Bosch) had a timer that wouldn’t count down past 1:01, but doesn’t have WiFi, so it got its update by a tech coming out and replacing the entire controller board.

BTW, I looked at the board and noted that Bosch doesn’t even make the controller. They get it from Diehl Controls, an OEM who only makes appliance controllers.


OK, that's fine. Not everyone has to know everything.

So why aren't their employers investing in educating their devs & PMs about security? (rhetorical - we all know why)


Yeah, you know, just roll out our MVP, let's see where the business goes with it, and then we'll fix it. Whaat? The budget for fixing it is 2x the product itself? Hm. Let's have meeting after meeting to postpone the decision until the next one, indefinitely - we can't really make the decision not to do it, of course.


> And, in small teams it's hard to justify the overhead of managing x509 certs and all the processes that come along with it. Just my personal experience.

If you're using (say) Python in your client code, call SSLSocket.getpeercert() and check if your company's domain is in the subjectAltName:

* https://docs.python.org/3/library/ssl.html#ssl.SSLSocket.get...

You can ensure it is a valid cert from a valid public CA (like Let's Encrypt) instead of doing your own private CA (which you would specify with SSLContext.load_verify_locations()).
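
A minimal sketch of that check, using only the standard ssl and socket modules; EXPECTED_DOMAIN is a placeholder for whatever name your company's backend certificate should actually carry:

    import socket
    import ssl

    # Placeholder: the DNS name your company's certificate is issued for.
    EXPECTED_DOMAIN = "devices.example.com"

    def check_peer_domain(host: str, port: int = 443) -> None:
        # The default context already verifies the chain against the system's
        # public CAs (e.g. the Let's Encrypt root) and checks the hostname.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                cert = tls.getpeercert()  # parsed dict, populated after verification
                sans = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
                if EXPECTED_DOMAIN not in sans:
                    raise ssl.SSLError(
                        f"{EXPECTED_DOMAIN} not in subjectAltName {sans}")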


I think the parent refers to the infrastructure required to (automatically?) sign certificates with an internal CA and to manage the distribution of those certificates. I don't think verification is the issue.


This is correct. You have to consider every step from when the device is manufactured to when something goes catastrophically wrong in the field: all the internal documentation and tooling so that Joe from support can help customers and Bob in manufacturing can provision devices on his own, all while maintaining controls around that process so nothing gets leaked or abused.
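
For a rough sense of what the issuance step alone involves, here is a sketch using the third-party cryptography package; the file paths, key type, and naming scheme are assumptions, and it deliberately skips the hard parts described above (protecting the CA key, access control, revocation, and getting the credentials onto the device):

    import datetime
    from pathlib import Path

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    # Assumed locations; a real pipeline would keep the CA key in an HSM or
    # secrets manager, not in a file on the provisioning workstation.
    CA_KEY_PATH = "internal-ca.key"
    CA_CERT_PATH = "internal-ca.pem"

    def issue_device_cert(serial: str) -> tuple[bytes, bytes]:
        ca_key = serialization.load_pem_private_key(
            Path(CA_KEY_PATH).read_bytes(), password=None)
        ca_cert = x509.load_pem_x509_certificate(Path(CA_CERT_PATH).read_bytes())

        # Fresh keypair for the device being provisioned.
        device_key = ec.generate_private_key(ec.SECP256R1())
        subject = x509.Name(
            [x509.NameAttribute(NameOID.COMMON_NAME, f"device-{serial}")])
        now = datetime.datetime.now(datetime.timezone.utc)

        cert = (
            x509.CertificateBuilder()
            .subject_name(subject)
            .issuer_name(ca_cert.subject)
            .public_key(device_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=3650))
            .sign(ca_key, hashes.SHA256()))

        # PEM cert and private key, to be installed on the device at manufacture.
        return (cert.public_bytes(serialization.Encoding.PEM),
                device_key.private_bytes(
                    serialization.Encoding.PEM,
                    serialization.PrivateFormat.PKCS8,
                    serialization.NoEncryption()))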


Sure, for “small teams”. Does that apply to the companies with huge impact from this issue? Is Delta Airlines IT run by a small team? I hope not.



