Hacker News | triyambakam's comments

I haven't used it recently but perhaps TabNine


> It’s virtually impossible for me to estimate how long it will take to fix a bug, until the job is done.

Now I find that odd.


I don’t. I worked on firmware, where inexplicable behavior occurs: digging around the code, you start to feel it’s going to take some serious work even to start to comprehend the root cause, and then suddenly you find the one line of code that sets the wrong byte somewhere as a side effect, and what you thought would fill up your week ended up taking 2 hours.

And sometimes, the exact opposite happens.


You might get humbled by overwhelming complexity one day. Enjoy the illusion of perfect insight until then.


I didn't say it must always be correct


Yeah, I’m obviously a terrible programmer. Ya got me.


I just find it so oversimplified that I can't believe you're sincere. Like you have no internal heuristic at all for even a coarse estimate of a few minutes, hours, or days? I'd say you're either not being very introspective or just exaggerating.


I think it's very sector dependent.

Working on drivers, a relatively recent example is when we started looking at a "small" image-corruption issue in some really specific cases. It slowly spidered out into what was fundamentally a hardware bug affecting an entire class of possible situations; this one case just happened to be noticed first.

There was even talk about a hardware ECO at points during this, though an acceptable workaround was eventually found.

I could never have predicted that when I started working on it, and every time we thought we had a decent idea of what was happening, even more was revealed.

And then there's been many other issues when you fall onto the cause pretty much instantly and a trivial fix can be completed and in testing faster than updating the bugtracker with an estimate.

True, there's probably a decent fraction, maybe even 50%, where after putting in some amount of time you can make a decent guess and be correct within a factor of 2 or so, but I always felt the "long tail" was large enough to make that pretty damn inaccurate.


I can explain it to you. A bug description at the beginning is some observed behaviour that seems to be wrong. Now the process of UNDERSTANDING the bug starts. Once that process has concluded, it will be possible to make a rough guess of how long fixing it will take. Very often, the answer then is a minute or two, unless major rewrites are necessary. So the problem is that you cannot put an upfront bound on how long you need to understand the bug. Understanding can be a long-winded process that includes trying to fix the bug along the way.


> A bug description at the beginning is some observed behaviour that seems to be wrong.

Or not. A bug description can also be a ticket from a fellow engineer who knows the problem space deeply and has an initial understanding of the bug, its likely cause, and possible problems. As always, it depends, and IME the kind of bugs that end up in those "bugathons" are the annoying "yeah, I know about it, we need to fix it at some point because it's a PITA" kind.


That just means that somebody else has already started the process of understanding the bug, without finishing it. So what?


So you can know, before starting to work on the ticket, whether it's a boring few-minute job, whether it could take hours or days, or whether it's going to be something bigger.

I can understand the "I don't do estimates" mantra for bigger projects, but ballpark estimates for bugs, even if you can be wrong in the end, should not be labelled 100% impossible every time.


Why did the other developer who passed you the bug not make an estimate then?

I understand the urge to quantify something that is impossible to quantify beforehand. There is nothing wrong with making a guess, but people who don't understand my argument usually also don't understand the meaning of "guess". A guess is something based on my current understanding, and as that may change substantially, my guess may also change substantially.

I can make a guess right now on any bug I will ever encounter, based on my past experience: It will not take me more than a day to fix it. Happy?


My team once encountered a bug that was due to a supplier misstating the delay timing needed for a memory chip.

The timings we had in place worked for most chips, but they failed for a small % of chips in the field. The failure was always exactly identical: the same memory address got corrupted, so it looked exactly like an invalid pointer access.

It took multiple engineers months of investigating to finally track down the root cause.


But what was the original estimate? And even so, I'm not saying it must be completely correct every time. I'm saying it seems wild to have no starting point, to simply give up.


Have you ever fixed random memory corruption in an OS without memory protection?

Best case you trap on memory access to an address if your debugger supports it (ours didn't). Worst case you go through every pointer that is known to access nearby memory and go over the code very very carefully.

Of course it doesn't have to be a nearby pointer, it can be any pointer anywhere in the code base causing the problem, you just hope it is a nearby pointer because the alternative is a needle in a haystack.

I forget exactly how we found the root cause. I think someone may have just guessed a bit flip in a pointer (vs an overrun), un-bit-flipped each of the possible bits one by one (not that many; with only a few MB of memory there aren't many active bits for pointers...), looked at what was nearby (figuring out what the originally intended address of the pointer was), and started investigating which pointer it was originally supposed to be.
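The "un-bit-flip" search described above can be sketched roughly like this (a hypothetical reconstruction in TypeScript for illustration, not the actual firmware tooling; the function name and addresses are made up):

```typescript
// Given a corrupted pointer value, enumerate every single-bit "un-flip"
// candidate so you can check which ones land inside a known allocation.
function unBitFlipCandidates(corrupted: number, addressBits: number): number[] {
  const candidates: number[] = [];
  for (let bit = 0; bit < addressBits; bit++) {
    // XOR with a single-bit mask undoes a hypothetical flip of that bit.
    candidates.push(corrupted ^ (1 << bit));
  }
  return candidates;
}

// With only a few MB of addressable memory (~2^22 bytes), 22 address bits
// suffice, so the candidate list stays small enough to inspect by hand.
const candidates = unBitFlipCandidates(0x1a2b4c, 22);
```

Each candidate that points at (or near) a real object is a lead for which pointer the corrupted value was originally supposed to be.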

Then after confirming it was a bit flip you have to figure out why the hell a subset of your devices are reliably seeing the exact same bit flipped, once every few days.

So to answer your question, you get a bug (memory is being corrupted), you do an initial investigation, and then provide an estimate. That estimate can very well be "no way to tell".

The principal engineer on this particular project (Microsoft Band) had a strict zero-user-impacting-bugs rule. Accordingly, after one of my guys spent a couple weeks investigating, the principal engineer assigned one of the top firmware engineers in the world to track down this one bug and fix it. It took over a month.


This is why a test suite and a mock application running on the host are so important. Tools like Valgrind can be used to validate that you won't have any memory errors once you deploy to the platform that doesn't have protections against invalid accesses.

It wouldn't have caught your issue in this case. But it would have eliminated a huge part of the search space your embedded engineers had to explore while hunting down the bug.


Custom OS, cross-compiling from Windows, using Arm's old C compiler, so tools like Valgrind weren't available to us.

Since it was embedded, no malloc. Everything being static allocations made the search possible in the first place.

This wasn't the only HW bug we found, ugh.


Valgrind (and the sanitizers) are only as good as your test coverage.

Static analysis can cover all your code, though generally with a significant rate of false positives that you will need to analyse.


There is a divide in this job between people who can always provide an estimate but accept that it is sometimes wrong, and people who would prefer not to give an estimate because they know it’s more guess than analysis.

You seem to be in the first club, and the other poster in the second.


It rather depends on the environment in which you are working. If estimates are, well, estimates, then there is probably little harm in guessing how long something might take to fix. However, some places treat "estimates" as binding commitments, and then it can be risky to make any kind of guess because someone will hold you to it.


More than some places. Every place I've worked has been a place where you estimate at your own peril. Even when the manager says "Don't worry. I won't hold you to it. Just give me a ballpark.", you are screwed.

I used to work for a Japanese company. When we'd have review meetings, each manager would have a small notebook on the table, in front of them.

Whenever a date was mentioned, they'd quickly write something down.

Those dates were never forgotten.


"Don't worry. I won't hold you to it. Just give me a ballpark."

Anytime someone says that you absolutely know they will treat whatever you say as being a commitment written in blood!


That scenario is usually either misuse of escape hatches (especially at API boundaries) or a misunderstanding of what Typescript actually guarantees.


Not really, I provided these examples a couple weeks ago on another HN thread. TypeScript is simply unsound.

https://www.typescriptlang.org/play/?#code/MYewdgzgLgBAllApg...

https://www.typescriptlang.org/play/?#code/DYUwLgBAHgXBB2BXA...


Aren't these bugs that could be "simply" reported and fixed? Or maybe those would get a label "not a bug" attached by the TS creators for some reason?


Both are by design. Array covariance is a common design mistake in OOP languages, one the designer of TypeScript had already made in C#, but there they at least check it at runtime. And the latter was declared not-a-bug already, IIRC.

TypeScript designers insist they're ok with it being unsound even on the strictest settings. Which I'd be ok with if the remaining type errors were detected at runtime, but they also insist they don't want the type system to add any runtime semantics.


"By design", for me, doesn't say that it can't be changed — maybe the design was wrong, after all. Would it be a major hurdle or create some problems if fixed today?


Perfect examples of the kind of thing I'm talking about, thank you.


In the first example you deliberately give the value an ambiguous type when you already know it isn't ambiguous. You told the compiler you know more than it does. The second is a delegate that will be triggered at some point during runtime. How can the compiler know what x will be?


First example: you're confusing the annotation with a cast, but it isn't one; it won't work the other way around. What you're seeing there is array covariance, an unsound (i.e. broken) subtyping rule for mutable arrays. C# has it too, but they've got the decency to check it at runtime.

Second example: that's the point. If the compiler can't prove that x will be initialised before the call, it should reject the code until you make it x: number | undefined, forcing the closure to handle the undefined case.
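For the record, here is my own minimal reconstruction of both holes (not necessarily the exact linked playground snippets); both compile cleanly yet misbehave at runtime:

```typescript
// 1) Array covariance: the assignment type-checks, yet lets us smuggle
//    a string into a number[] with no runtime check.
const nums: number[] = [1, 2, 3];
const wider: (number | string)[] = nums; // accepted: arrays are covariant
wider.push("oops");                      // legal against (number | string)[]
const n: number = nums[3];               // statically a number, actually "oops"

// 2) Definite assignment and closures: the checker assumes closures run
//    after the surrounding code, so x is treated as already assigned.
let x: number;
const read = () => x;       // no error, even though x may be unassigned here
const got: number = read(); // statically a number, actually undefined
x = 42;
```

The C# analogue of the first case would at least throw at the point of the bad store; TypeScript does nothing, and the `number` annotations on `n` and `got` are simply wrong at runtime.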


For the first one, the compiler should not allow the mutable list to be assigned to a more broadly typed mutable list. This is a compile error in Kotlin, for example:

    val items: MutableList<Int> = mutableListOf(3)
    val brokenItems: MutableList<Any> = items


> The second is a delegate, that will be triggered at any point during runtime. How can the compiler know what x will be?

x is clearly defined to be a number. The compiler should produce an error if the delegate captures x before it has a value assigned.


If it only works when you write the types correctly with no mistakes, what's the point? I thought the point of all this strong typing stuff was to detect mistakes.


Because adding types adds constraints across the codebase that detect a broader set of mistakes. It's like asking what the point of putting seatbelts in a car is if they only work when you're wearing them: yes, you can use them wrong (perhaps even unknowingly), but the overall benefit is much greater. On balance, I find that TypeScript gives me huge benefit.


Seeing a loading spinner like that makes me feel like I'm back in the Flash days


Are these devices popular? My friend has two and is excited about them, but I have no exposure to them outside of that, so it's cool to see it pop up here.


They are quite handy for some people. Once you get one, you'll start labeling everything. It's fun and also helps you find stuff faster.


Meanwhile once I bought a roll of blue painters' tape I started labeling freaking everything.


Painter’s tape is where I started, too… then I learned that gaffer’s tape comes in 1” rolls, and I’ve never looked back.


I went with the 47 mm wide roll of tape because that was the easiest to find on the shelf at the big-box store. 3M painter's tape, because it will generally come off cleanly well past its rated time of about two weeks.


This is the way. Tape and a sharpie. No wires, drivers, usb, bluetooth, or wifi needed.


I assume part of the appeal is much cheaper label supplies than eg Epson?


The appeal is the ability to make decent labels which can withstand almost all indoor use and abuse for a reasonable amount of time.

I generally hand-label my boxes and things with specialized ink, and they hold very well even after a decade.

But if I'm going to label a spice jar or something that's going to be handled a lot, I use the printer. It's legible, resistant/resilient enough, and reprinting things is easy.


I think part of it is that these printers end up offering so much more flexibility than your traditional labeler. Single-font, single-line labels are boring, with crummy built-in excuses for emoji…


> Having seen LLMs so many times produce coherent, sensible and valid chains of reasoning to diagnose issues and bugs in software I work on, I am at this point in absolutely no doubt that they are thinking.

People said the same thing about ELIZA

> Consciousness or self awareness is of course a different question,

Then how do you define thinking if not a process that requires consciousness?


Why would it require consciousness, when we can't even settle on a definition for that?


I mean I know it sounds snarky but it just sounds like you weren't awaiting the tasks properly


I don't really see how you're comparing Pydantic AI here to Typescript. I'm assuming you meant simply Pydantic.


Just comparing an agent framework written in Python (with a focus on being "typesafe") to one (any) written in TypeScript


That's a very poor comparison then and not very useful?


Yeah, I also find it lovely to speak terrible Denglisch with Claude.


Really I don't see how you can have a footer at all on a page with infinite scroll

