scott_w's comments | Hacker News

Only if you’re a Guesser ;-)

Seriously though, it depends on the boss and the relationship you have with them. It can really fall into either camp and it might even be situational with the same person!

I would say that, generally, I'd prefer to be direct in these relationships unless you both know each other really well. It does make things easier for all involved.


> Seriously though, it depends on the boss and the relationship you have with them.

Those are the power dynamics the GP is referring to.


> I've seen tables where 50%-70% were soft-deleted, and it did affect the performance noticeably.

Depending on your use case, soft deletes don't prevent you from cleaning out old deleted data anyway. You may want a process that grabs everything soft-deleted more than X years ago and hard-deletes it.
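A retention job like that can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `users` table where soft deletion is recorded in a nullable `deleted_at` timestamp column; the table name, schema, and two-year cutoff are made up for the example.

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, deleted_at TEXT)"
)
conn.executemany(
    "INSERT INTO users (name, deleted_at) VALUES (?, ?)",
    [
        ("alice", None),                        # live row, never deleted
        ("bob", "2015-01-01T00:00:00"),         # soft-deleted long ago
        ("carol", datetime.now().isoformat()),  # soft-deleted just now
    ],
)

# Hard-delete rows soft-deleted before the cutoff (here, roughly 2 years ago).
# ISO-8601 strings compare correctly as plain text, so a string comparison works.
cutoff = (datetime.now() - timedelta(days=2 * 365)).isoformat()
conn.execute(
    "DELETE FROM users WHERE deleted_at IS NOT NULL AND deleted_at < ?",
    (cutoff,),
)
conn.commit()

remaining = [row[0] for row in conn.execute("SELECT name FROM users ORDER BY id")]
print(remaining)  # bob's row is purged; alice and the recent soft-delete survive
```

In a real system you'd run this on a schedule and batch the deletes to avoid long locks, but the core is just one `DELETE` filtered on the soft-delete timestamp.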

> Depends on whether undoing even happens, and whether the act of deletion and undeletion require audit records anyway.

Yes, but this is no more complex than the current situation, where you always have to create the audit records.


True unlimited PTO can exist in the UK, though "unlimited" PTO with no guaranteed floor can't, as employees can't agree to less than the statutory minimum.

That said, unscrupulous employers try to get around this by putting stupid requirements on taking PTO that practically mean taking your legal allocation isn't possible. Things like "we need minimum staff levels to cover" when, shockingly enough, you don't have enough staff to actually grant everyone's PTO. Combine this with long lead times to book it, managers "forgetting" you had PTO booked, and insecure job contracts, and you have a recipe for grinding your staff to dust.


> First: it's not the best use of our time.

I want to push back strongly on this. I think this attitude leads to more bugs as QA cannot possibly keep up with the rate of change.

I understand that you, personally, may not have exhibited this based on your elaboration. However that doesn’t change the fact that many devs do take exactly this attitude with QA.

To take a slightly different comparison, I would liken it to the old school of “throwing it over the wall” to Ops to handle deployment. Paying attention to how the code gets to production and subsequently runs on prod isn’t a “good use of developer time,” either. Except we discarded that view a decade ago, for good reason.


It’s entirely dependent on the situation. In some areas, additional charges work best. In others, it’s possible/necessary to redesign road and street layouts to prioritise higher-density modes of transport and physically discourage low-density modes like cars. This might be priority lights for public transport, lowering speed limits, and narrowing streets. In some contexts, it’s necessary to exclude cars entirely with things like bus lanes and bike/pedestrian-only areas. Separated tram/metro lines, too.

Most of this infrastructure, in practice, also aids emergency vehicle use as they can usually fit down bike lanes and are obviously able to fit in bus lanes.


Cut him some slack, he might have been having a heart attack at the time and in need of one of those ambulances!

> Yes, I actually do think if Sanjay Ghemawat were instead Wojciech Przemysław Kościuszko-Wiśniewski, white European but otherwise an equal engineer, and I chose to elevate Jeff Dean over him, I would later feel equally bad about it.

You need to take a breath, read what people write, and stop trying to win the argument.


> stop trying to win the argument

Mr Rayiner is a lawyer by profession ;) https://news.ycombinator.com/item?id=11340543


Thanks! This explained to me very simply what the benefits are in a way no article I’ve read before has.


That’s great to hear! We are happy it helped.


I was at a podiatrist's yesterday; he explained that he's trying to "train" an LLM agent on the articles and research papers he's published, to create a chatbot that can answer the most common questions more quickly than his reception team can.

He's also using it to speed up writing his reports to send to patients.

Longer term, he was also quite optimistic on its ability to cut out roles like radiologists, instead having a software program interpret the images and write a report to send to a consultant. Since the consultant already checks the report against any images, the AI being more sensitive to potential issues is a positive thing: giving him the power to discard erroneous results rather than potentially miss something more malign.


> Longer term, he was also quite optimistic on its ability to cut out roles like radiologists, instead having a software program interpret the images and write a report to send to a consultant.

As a medical imaging tech, I think this is a terrible idea. At least for the test I perform, a lot of redundancy and double-checking is necessary because results can easily be misleading without a diligent tech or critical-thinking on the part of the reading physician. For instance, imaging at slightly the wrong angle can make a normal image look like pathology, or vice versa.

Maybe other tests are simpler than mine, but I doubt it. If you've ever asked an AI a question about your field of expertise and been amazed at the nonsense it spouts, why would you trust it to read your medical tests?

> Since the consultant already checks the report against any images, the AI being more sensitive to potential issues is a positive thing: giving him the power to discard erroneous results rather than potentially miss something more malign.

Unless they had the exact same schooling as the radiologist, I wouldn't trust the consultant to interpret my test, even if paired with an AI. There's a reason this is a whole specialized field -- because it's not as simple as interpreting an EKG.


> I've gotten a lot of value out of reading the views of experienced engineers; overall they like the tech, but they do not think it is a sentient alien that will delete our jobs.

I normally see things the same way you do, however I did have a conversation with a podiatrist yesterday that gave me food for thought. His belief is that certain medical roles will disappear as they become redundant. In his case, he mentioned radiology, and he presented his case thus:

A consultant gets a report + X-Ray from the radiologist. They read the report and confirm what they're seeing against the images. They won't take the report blindly. What changes is that machines have been learning to interpret the images and are able to use an LLM to generate the report. These reports tend not to miss things but will over-report issues. As a consultant will verify the report for themselves before operating, they no longer need the radiologist. If the machine reports a non-existent tumour, they'll see there's no tumour.


I've seen this sort of thing a few times: "Yes, I'm sure AI can do that other job that's not mine over there." Now maybe foot doctors work closer to radiologists than I'm aware of. But the radiologists that I've talked to aren't impressed with the work AI had managed to do in their field. Apparently there are one or two incredibly easy tasks it can sort of do, but they comprise a very small amount of the job of an actual radiologist.


> But the radiologists that I've talked to aren't impressed with the work AI had managed to do in their field.

Just so I understand correctly: is it over-reporting problems that aren't there or is it missing blindingly obvious problems? The latter is obviously a problem and, I agree, would completely invalidate it as a useful tool. The former sounded, the way it was explained to me, more like a matter of degrees.


I'm afraid I don't have the details. I was reading about certain lung issues the AI was doing a good job on and thought, "oh well that's it for radiology." But the radiologist chimed in with, "yeah that's the easiest thing we do and the rates are still not acceptable, meanwhile we keep trying to get it to do anything harder and the success rates are completely unworkable."


AI luminary and computer scientist Geoffrey Hinton predicted in 2016 that AI would be able to do everything radiologists do within five years. We're still not even close. He was full of shit, and now, almost 10 years later, he's changed his prediction while still pretending he was right, moving the goalposts. His new prediction is that radiologists will use AI to be more efficient and accurate, half suggesting he meant that all along. He didn't. He was simply bullshitting, bluffing, making an educated wish.

This is the nonsense we're living through: predictions, guesses, promises that cannot possibly be fulfilled and which will inevitably change to something far less ambitious with much longer timelines, and everyone will shrug it off as if we weren't being misled by a bunch of fraudsters.


"History doesn't repeat itself, but it often rhymes", except in the world of computer science where history does repeat.


I doubt this, simply because of the inertia of medicine. The industry still does not have a standardized method for handling automated claims the way banking does. It gets worse for services that require prior authorization; they settle this over the phone! This might sound like irrelevant ranting, but my point is that they haven't even addressed the low-hanging fruit, let alone complex ailments like cancer.


IMO prior authorization needing to be done on the phone is a feature, not a bug. It intentionally wastes a doctor's time so they are less incentivized to advocate for their patients and this frustration saves the insurance companies money.


Heard. I do wonder why hospitals haven't automated their side though. Regardless, the recent prior auth situation is a trainwreck. If I were dictator, insurance companies would be non-profit and required to have a higher loss ratio.

2 quibbles: 1) a more ethical system would still need triage-style rationing given a finite budget, 2) medical providers are also culpable given the eye-watering prices for even trivial services.


I would love to know how much rationing is actually necessary. I have literally 0 evidence to support this but my intuition says that this is like food stamps in that there is way less frivolous use than an overly negative media ecosystem would lead people to believe.


Radiology has proven to be one of the most defensible jobs in medicine; radiologists beat AI once already!

https://www.worksinprogress.news/p/why-ai-isnt-replacing-rad...


