I don’t. I worked on firmware where inexplicable behavior would occur; digging around the code, you start to feel like it's going to take some serious work to even begin to comprehend the root cause, and then suddenly you find the one line of code that sets the wrong byte somewhere as a side effect, and what you thought would fill your week ends up taking 2 hours.
I just find it so oversimplified that I can't believe you're sincere. You really have no internal heuristic at all for even a coarse estimate of a few minutes, hours, or days? I would say you're either not being very introspective or just exaggerating.
Working on drivers, a relatively recent example: we started looking at a "small" image corruption issue in some really specific cases, and it slowly spidered out into what was fundamentally a hardware bug affecting an entire class of possible situations; this one case just happened to be noticed first.
There was even talk about a hardware ECO at points during this, though an acceptable workaround was eventually found.
I could never have predicted that when I started working on it, and it seemed that every time we thought we had a decent idea of what was happening, even more was revealed.
And then there have been many other issues where you stumble onto the cause pretty much instantly, and a trivial fix can be completed and put into testing faster than updating the bug tracker with an estimate.
True, there's probably a decent proportion, maybe even 50%, where you can make a reasonable guess after putting in some amount of time and be correct within a factor of 2 or so, but I always felt the "long tail" was large enough to make the overall estimates pretty damn inaccurate.
I can explain it to you. A bug description at the beginning is some observed behaviour that seems to be wrong. Then the process of UNDERSTANDING the bug starts. Once that process has concluded, it becomes possible to make a rough guess at how long fixing it will take. Very often, the answer then is a minute or two, unless major rewrites are necessary. So the problem is that you cannot put an upfront bound on how long you need to understand the bug. Understanding can be a long-winded process that may itself include attempts to fix the bug.
> A bug description at the beginning is some observed behaviour that seems to be wrong.
Or not. A bug description can also be a ticket from a fellow engineer who knows the problem space deeply and has an initial understanding of the bug, its likely cause and possible complications. As always, it depends, and IME the kind of bugs that end up in those "bugathons" are the annoying "yeah, I know about it, we need to fix it at some point because it's a PITA" ones.
So you can know, before starting to work on the ticket, whether it's a boring few-minutes job, whether it could take hours or days, or whether it's going to be something bigger.
I can understand the "I don't do estimates" mantra for bigger projects, but ballpark estimates for bugs - even if you turn out to be wrong in the end - should not be labelled 100% impossible all the time.
Why did the other developer who passed you the bug not make an estimate then?
I understand the urge to quantify something that is impossible to quantify beforehand. There is nothing wrong with making a guess, but people who don't understand my argument usually also don't understand the meaning of "guess". A guess is something based on my current understanding, and as that may change substantially, my guess may also change substantially.
I can make a guess right now on any bug I will ever encounter, based on my past experience: It will not take me more than a day to fix it. Happy?
My team once encountered a bug that was due to a supplier misstating the delay timing needed for a memory chip.
The timings we had in place worked for most chips, but they failed for a small percentage of chips in the field. The failure was always exactly identical: the same memory address got corrupted, so it looked exactly like an invalid pointer access.
It took multiple engineers months of investigating to finally track down the root cause.
But what was the original estimate? And even so, I'm not saying an estimate must always be completely correct. I'm saying it seems wild to have no starting point at all, to simply give up.
Have you ever fixed random memory corruption in an OS without memory protection?
Best case you trap on memory access to an address if your debugger supports it (ours didn't). Worst case you go through every pointer that is known to access nearby memory and go over the code very very carefully.
Of course, it doesn't have to be a nearby pointer; it can be any pointer anywhere in the code base causing the problem. You just hope it's a nearby one, because the alternative is a needle in a haystack.
I forget exactly how we did find the root cause. I think someone guessed it was a bit flip in a pointer (vs an overrun), un-flipped each of the possible bits one by one (not that many: with only a few MB of memory there aren't many meaningful bits in a pointer), looked at what was near each candidate address to figure out what the pointer was originally supposed to be, and started investigating from there.
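Just to illustrate the kind of brute-force search that was, here's a minimal sketch (in TypeScript purely for readability, the real thing was firmware; the address width, the region table and the corrupted value are all made-up assumptions):

type Region = { name: string; start: number; end: number };

// Hypothetical table of known buffers/allocations in the few MB of RAM.
const knownRegions: Region[] = [
  { name: "rxBuffer", start: 0x20001000, end: 0x20001400 },
  { name: "frameQueue", start: 0x20002000, end: 0x20002800 },
];

// Enumerate every single-bit "un-flip" of the corrupted pointer value and keep the
// candidates that land inside a known region; those are the ones worth investigating first.
function candidateOriginals(corrupted: number, addressBits = 32) {
  const candidates: { addr: number; region: Region }[] = [];
  for (let bit = 0; bit < addressBits; bit++) {
    const addr = (corrupted ^ (1 << bit)) >>> 0; // flip one bit back
    const region = knownRegions.find(r => addr >= r.start && addr < r.end);
    if (region) candidates.push({ addr, region });
  }
  return candidates;
}

console.log(candidateOriginals(0x20000210)); // made-up corrupted pointer value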
Then, after confirming it was a bit flip, you have to figure out why the hell a subset of your devices is reliably seeing the exact same bit flipped, once every few days.
So to answer your question, you get a bug (memory is being corrupted), you do an initial investigation, and then provide an estimate. That estimate can very well be "no way to tell".
The principal engineer on this particular project (Microsoft Band) had a strict rule of zero user-impacting bugs. Accordingly, after one of my guys spent a couple of weeks investigating, the principal engineer assigned one of the top firmware engineers in the world to track down this one bug and fix it. It took over a month.
This is why a test suite and a mock application running on the host are so important. Tools like valgrind can be used to validate that you won't have any memory errors once you deploy to the platform that doesn't have protections against invalid accesses.
It wouldn't have caught your issue in this case. But it would have eliminated a huge part of the search space your embedded engineers had to explore while hunting down the bug.
There is a divide in this job between people who can always provide an estimate but accept that it is sometimes wrong, and people who would prefer not to give an estimate because they know it’s more guess than analysis.
You seem to be in the first club, and the other poster in the second.
It rather depends on the environment in which you are working - if estimates are, well, estimates, then there is probably little harm in guessing how long something might take to fix. However, some places treat "estimates" as binding commitments, and then it can be risky to make any kind of guess, because someone will hold you to it.
More than some places. Every place I've worked has been a place where you estimate at your own peril. Even when the manager says "Don't worry. I won't hold you to it. Just give me a ballpark.", you are screwed.
I used to work for a Japanese company. When we'd have review meetings, each manager would have a small notebook on the table, in front of them.
Whenever a date was mentioned, they'd quickly write something down.
Both are by design. Array covariance is a common design mistake in OOP languages, one that the designer of TypeScript had already made in C#, though there it is at least checked at runtime. And the latter was already declared not-a-bug, IIRC.
TypeScript designers insist they're ok with it being unsound even on the strictest settings. Which I'd be ok with if the remaining type errors were detected at runtime, but they also insist they don't want the type system to add any runtime semantics.
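To make that concrete, here's roughly what the unsoundness looks like (a minimal sketch of my own, not the original example): this compiles cleanly under --strict, and unlike C# there is no runtime check to catch the bad store.

const nums: number[] = [1, 2, 3];
const wider: (number | string)[] = nums; // accepted: TS treats mutable arrays as covariant
wider.push("oops");                      // legal through the wider alias, and never checked
const n: number = nums[3];               // statically a number, actually the string "oops"
console.log(n.toFixed(2));               // runtime TypeError: n.toFixed is not a function

In C#, the analogous write through a covariant array reference throws ArrayTypeMismatchException at the point of the store, which is the runtime check being referred to above.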
"By design", for me, doesn't say that it can't be changed — maybe the design was wrong, after all. Would it be a major hurdle or create some problems if fixed today?
In the first example you deliberately give something an ambiguous type when you already know it isn't ambiguous. You told the compiler you know more than it does.
The second is a delegate that could be triggered at any point during runtime. How can the compiler know what x will be?
First example: you're mistaking the annotation for a cast, but it isn't one; it won't work the other way around. What you're seeing there is array covariance, an unsound (i.e. broken) subtyping rule for mutable arrays. C# has it too, but it at least has the decency to check it at runtime.
Second example: that's the point. If the compiler can't prove that x will be initialised before the call, it should reject the code until you make it x: number | undefined, forcing the closure to handle the undefined case.
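A minimal sketch of that second case as I understand it (my own example, with made-up names): the used-before-assigned check doesn't look inside function bodies, so this compiles cleanly under --strict and fails at runtime.

let x: number;                       // declared, never provably assigned before use
const callbacks: Array<() => void> = [];
callbacks.push(() => console.log(x.toFixed(1))); // no compile error: the check skips closures
callbacks.forEach(cb => cb());       // runtime TypeError: x is still undefined here
x = 42;                              // the assignment the compiler hoped for arrives too late
// Declaring `let x: number | undefined` would force the closure to handle the undefined case.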
For the first one, the compiler should not allow the mutable list to be assigned to a more broadly typed mutable list. This is a compile error in Kotlin, for example:
val items: MutableList<Int> = mutableListOf(3)
val brokenItems: MutableList<Any> = items
If it only works when you write the types correctly with no mistakes, what's the point? I thought the point of all this strong typing stuff was to detect mistakes.
Because adding types adds constraints across the codebase that detect a broader set of mistakes. It's like saying what's the point of putting seatbelts into a car if they only work when you're wearing them - yes you can use them wrong (perhaps even unknowingly), but the overall benefit is much greater. On balance I find that TypeScript gives me huge benefit.
Are these devices popular? My friend has two and is excited about them, but I have no exposure to them outside of that, so it's cool to see it pop up here.
I went with the 47 mm wide roll of tape because that was the easiest to find on the shelf at the big box store. 3M painter's tape, because it will generally come off cleanly well past its rated time of about two weeks.
The appeal is the ability to make decent labels which can withstand almost all indoor use and abuse for a reasonable amount of time.
I generally hand-label my boxes and things with specialized ink, and the labels hold up well even after a decade.
But if I'm going to label a spice jar or something that's going to be handled a lot, I use the printer. It's legible, resistant/resilient enough, and reprinting things is easy.
I think part of it is that these printers end up offering so much more flexibility than your traditional labeler. Single-font, single-line labels are boring, and the crummy built-in symbol sets are a poor excuse for emoji…
> Having seen LLMs so many times produce coherent, sensible and valid chains of reasoning to diagnose issues and bugs in software I work on, I am at this point in absolutely no doubt that they are thinking.
People said the same thing about ELIZA
> Consciousness or self awareness is of course a different question,
Then how do you define thinking if not a process that requires consciousness?