
I work in embedded systems, and the best advice I can offer is: resist the urge to speculate when problems arise. Stay quiet, grab an oscilloscope, and start probing the problem area. Objective measurements beat conjecture every time. It's hard to argue with scope captures that clearly show what's happening. As Jack Ganssle says, "One test is worth a thousand opinions."


I thoroughly disagree with this sentiment.

In my experience, the most helpful approach to performing RCA on complicated systems involves several hours, if not days, of hypothesizing and modeling prior to testing. The hypothesis guides the tests, and without a fully formed conjecture you're practically guaranteed to fit your hypothesis to the data ex post facto. Not to mention that in complex systems there are usually ten benign things wrong for every one real issue you might find - without a clear hypothesis, it's easy to go chasing down rabbit holes with your testing.


That's a valid point. What I originally meant to convey is that when issues arise, people often assign blame and point fingers without any evidence, just based on their own feelings. It is important to gather objective evidence to support any claims. Sounds somewhat obvious but in my career I have found that people are very quick to blame issues on others when 15 minutes of testing would have gotten to the truth.


Very reasonable, I fully agree on that front


I think the GP is in a different world than you.

If you can grab an oscilloscope and gather meaningful data in 15 minutes, why would you spend several hours hypothesizing and modeling?

If you can't, then spending several hours or days modeling and hypothesizing is better than just guessing.

So I think that data beats informed opinions, but informed opinions beat pure guesses.


I agree with both of you. I think it's really a hybrid, a spectrum of how much of each you do first.

When you test part of the circuit with the scope, you are using prior knowledge to determine which tool to use and where to test. You don't just take measurements blindly. You could test a totally different part of the system, because there might be some crazy coupling, but you don't. In this system it seems like taking the measurement is really cheap, and a quick analysis of what to measure is likely to give relevant results.

In a different system it could be that measurements are expensive and it’s easy to measure something irrelevant. So there it’s worth doing more analysis before measurements.

I think both cases fight what I've heard called intellectual laziness. It's sometimes hard to make yourself be intellectually honest and do the proper unbiased analysis and measuring for RCA. It's also really easy to sit around and conjecture compared to taking the time to measure. It's really easy for your brain to say "oh, it's always caused by this thing cuz it's junk" and move on because you want to be done with it. Is this really the cause? Could something else be causing it? Would you investigate this more if other people's lives depended on it?

I learned about this model of viewing RCA from people who work on safety critical systems. It takes a lot of energy and time to be thorough and your brain will use shortcuts and confirmation bias. I ask myself if I’m being lazy because I want a certain answer. Can I be more thorough? Is there a measurement I know will be annoying so I’m avoiding it?


Another disagreeing voice, but I try to employ speculation when a problem arises. I'm working in a cross-company, cross-team project, where everyone's input interacts in interesting ways. When we come across a weird problem, we get folks together and ask, "what does issue x sound like it's caused by?" This gets people thinking about where certain functionality lives, the boundary points between that functionality, and a way to test the hypothesis.

It's helped a dozen times so far - essentially we play 20 questions until we can point to the exact problem and have it resolved quickly.

This is a semi-embedded system: FPGAs, SoCs, drivers, userspace, userspace drivers, etc. There's lots of stuff to go wrong, and speculation gives a place to start.


Speculating is a great way to prioritize what to investigate, but I've worked with many senior engineers (albeit not in embedded) who have made troubleshooting take longer because they disregarded potential causes based on pattern-matching against their past experiences.


This has become a personal debate for me recently, ever since I learned that there are several software luminaries who eschew debuggers (the equivalent of taking an oscilloscope probe to a piece of electronics).

I’ve always fallen on the side of debugging being about “isolate as narrowly as possible” and “don’t guess what’s happening when you can KNOW what’s happening”.

The argument against this approach is that speculation and static analysis of a system reinforce your mental model of that system and make you more effective overall in the long run, even if it may take longer to isolate a single defect.

I’ll stick with my debuggers, but I do agree that you can’t throw the baby out with the bathwater.

The modern extreme is asking Cursor's AI agent "why is this broken?" I recently saw a relatively senior engineer who had just joined a new company lean too heavily on Cursor to understand the company's systems. They burned a lot of cycles getting poor answers. I think this is a far worse extreme.


For me, it's about being aware of the entire stack, and deliberate about which possibilities I am downplaying.

At a previous company, I was assigned a task to fix requests that were timing out for certain users. We knew those users had far more data than average, so the team lead created a ticket that was something like "Optimize SQL queries for...". It turns out the issue was that our XML transformation pipeline (I don't miss this stack at all) was configured to spool to disk for any messages over a certain size.

Because I started by benchmarking the query, I realized fairly quickly that the slowness wasn't in the database; and because I was familiar with all the layers of our stack, I knew where else to look.

Instrumentation is vital as well. If you can get metrics and error information without having to gather and correlate it manually, it's much easier to gain context quickly.
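
As a rough sketch of what that kind of instrumentation can look like (in Python for brevity; every name, stage, and duration here is hypothetical, not the actual stack from the story), a per-stage timer makes it obvious which layer is eating the time without any manual correlation:

    import logging
    import time
    from contextlib import contextmanager

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("request_timing")

    @contextmanager
    def timed(stage):
        # Log wall-clock time for one stage of the request pipeline.
        start = time.perf_counter()
        try:
            yield
        finally:
            logger.info("%s took %.3f s", stage, time.perf_counter() - start)

    # Hypothetical stand-ins for the layers described above.
    def run_query(user_id):
        time.sleep(0.05)              # the query everyone suspected
        return [user_id]

    def transform_to_xml(rows):
        time.sleep(0.8)               # e.g. spooling large messages to disk
        return "<doc/>"

    def handle_request(user_id):
        with timed("sql_query"):
            rows = run_query(user_id)
        with timed("xml_transform"):
            return transform_to_xml(rows)

    handle_request(42)

One run of something like this would point at the XML layer immediately, rather than the query the ticket blamed.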


To me, it's the method for deciding where I put the oscilloscope/debugger.

Without the speculation, how do you know where to put your breakpoint? If you have a crash, cool, start at the stack trace. If you don't crash but something is wrong, you have a far broader scope.

The speculation makes you think about what could logically cause the issue. Sometimes you can skip the actual debugging and logic your way to the exact line without much wasted time.
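
For instance (a minimal sketch in Python rather than any particular debugger; the threshold and function are made up for illustration), you can encode the hypothesis as a conditional breakpoint, so you only drop into the debugger for the suspect case instead of single-stepping through everything:

    # Hypothesis: the failure only happens for oversized messages.
    MAX_INLINE_SIZE = 4096  # hypothetical threshold under suspicion

    def process_message(payload: bytes):
        if len(payload) > MAX_INLINE_SIZE:
            breakpoint()  # drop into pdb only when the hypothesis holds
        ...  # normal processing continues here

Most debuggers support the same idea natively via breakpoints with an attached condition, so the speculation directly narrows where and when you stop.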


It's probably different depending on how much observability you have into the system.

Hardware, at least to me, seems impossible to debug from first principles: too many moving parts, phenomena too tiny to see, and components from multiple different vendors.

Software is at a higher level of abstraction, and you can associate a bug with some lines of code. Of course, this means you're writing way more code, so the eventual complexity can grow to infinity: say, four different software systems with subtly different invariants that cause a program to crash in a specific way.

Math proofs are the extreme end of this - technically all the words are there! Am I going to understand all of them? Maybe, but definitely not on the first pass.

Meh, you can make the argument that if all the thinking is in the abstract, it becomes hard to debug again, which is fair.

That doesn't mean any one is harder than the others, and obviously different problems within those disciplines come with different levels of observability. But yeah, I don't know.


Implicit or explicit, you need a hypothesis to be able to start probing. Many issues can surely be found with an oscilloscope. Many others can't and an oscilloscope does not help in any way. It's experience that tells you which symptom indicates which class of issues this could be, so you use the right tool for debugging.

That's not to say that at some point you don't need to get your hands dirty. But it's equally important to balance that with thinking and theory building. It's whoever gets that balance right who will be most effective at debugging, not the one with the dirtiest hands.


Love this.

The most dangerous words during debug are: “…but it should work this way!” This is a mantra I try hard to instill in all EEs I mentor.

“Should” isn’t worth a damn to me. You test your way out of hardware bugs - you don’t logic your way out.


This also works in general-purpose corporate programming.


Without speculation, what test do you decide to do?

Speculation is fine, but you need to ground it in reality.



