Hacker News | Traster's comments

This seems a bit like a needlessly publicized finding. Surely our baseline assumption is that there are lots of systems that aren't very good at finding cancer. We're interested in findings that are good. You only need 1 good system to adopt. Yes, it's good scientific hygiene to do the study and publish it going "Well, this particular thing isn't good, let's move on". But my expectation is you just keep going until you design a system that does do well and then adopt that system.

If I pluck a guy off the street, get him to analyze a load of MRI scans and he doesn't correctly identify cancer from them, I'm not going to publish an article saying "Humans miss X% of breast cancers", am I?


I think finding that AI, or at least a specific model sold as being able to do something, can't reasonably do it is an entirely reasonable thing to publish.

In the end it is on the model's marketer to prove that what they sell does what it says. And counterexamples are a fully valid thing to then release.


I've been adjacent to this field for a while, so take this for what it is. My understanding is that developing a system that can accurately identify a specific form or sub-form of cancer to a degree equal to or better than a human is doable now. However, developing a system that can generalize to many forms of cancer is not.

Why does this matter? Because procurement in the medical world is a pain in the ass. And no medical center wants to be dealing with 32 different startups each selling their own specific cancer detection tool.


Many people are confused and think the Bitter Lesson is that you can just build a bigger and bigger model and eventually it becomes omnipotent.

They promised us that AI was The Solution. Now they have to deliver.

If the TechBros fail us here, we may then assume they may fail us everywhere else as well.


And we'd be wrong about that. Different domains are showing wildly different characteristics, with some ML models showing superhuman or expert level performance in some domains (chess, face and handwriting recognition for example) and promising but as yet just not good enough in other domains (radiography, self-driving cars, question answering, prose writing). Currently coding is somewhere in the middle; superhuman in some ways, disappointingly unusable in others.

I don't think we can reach any conclusive verdict about the promise of ML for radiography right now; given the life-critical nature of the application, it's in the unusable middle, but it might get better in a few years or it might not. Time will tell.


Fundamentally, when you think about it, what people know today as AI are things like ChatGPT, and all of those products run on cloud infrastructure, mainly via the browser or an app. So it makes perfect sense that customers just get confused when you say "This is an AI PC". Like, what a weird thing to say - my smartphone can do ChatGPT, why would I buy a PC to do that? It's just a totally confusing selling point. So you ask the question why is it an AI PC and then you have to talk about NPUs, which apart from anything else are confusing (Neural what?) but bring you back to this conversation:

What is an NPU? Oh it's a special bit of hardware to do AI. Oh ok, does it run ChatGPT? Well no, that still happens in the cloud. Ok, so why would I buy this?


And importantly, in this analogy - most people here aren't even able to play that lottery. He founded a company based on the research he did whilst studying for a government funded PhD. Most people are not in a position in their life where they could even spend time trying to do research that would result in this type of eventual wealth.

This is one of the easiest paths to gain a competitive advantage that can be monetized. You are much less likely to fall into a pool of money.

Just like becoming a MD has much better odds at getting you some amount of money than dropping out of school. About the same path by the way.

But you can keep playing the lottery if you think it has better odds or even the same odds...


Firstly,

>Series 3 will be the most broadly adopted and globally available AI PC platform Intel has ever delivered.

The true competitor is Ryzen AI, Nvidia doesn't produce these integrated CPU/GPU/AI products in the PC segment at all.


It is the nature of YC that you're going to get instances like Pickle. YC invests at a very early stage in lots of companies. 40% are literally just an idea. It isn't a scam that one of the companies pivots, it's expected. They're meant to work on their idea, and if it doesn't work or they have a better idea they pivot.

What Pickle is doing is essentially falling on the wrong side of the "fake it till you make it" line. It would be totally fine to do what they're doing (allowing pre-orders with a $200 deposit for a Q4 '26 product) if they just weren't lying about the specs. It's pretty clear they aren't going to deliver anything like what they've promised, but that is just ambition. The whole point of YC is that 1 out of 1000 of these companies is going to deliver something revolutionary, and you don't get that without 1000 of them trying to do something revolutionary.

Having said that, you only need to watch the launch video to realise the CEO is a total moron ("If everyone wore the same pair of glasses, what would they look like?"). But the way YC works, they don't actually have the power to tell Pickle what to do. YC are going to lose their investment on that company.


> It is the nature of YC that you're going to get instances like Pickle. YC invests at a very early stage in lots of companies. 40% are literally just an idea

But the whole supposed point of YC is investing in people, not ideas. If that's the pitch and you invest in a moron, that makes you look bad too. YC should be good at telling if people are morons - that's kind of their entire job.

> But the way YC works, they don't actually have the power to tell Pickle what to do.

They get 7% of your company. They do actually have some power.


This stuff is so difficult because it's all situational. In my first job I got hired into a small team, and within a year or so my boss quit. I essentially stepped into that job. But in order to be the manager of that team you needed to be a certain level of seniority, so I was doing the job, but I couldn't have the title or the money. I talked to my manager, made clear that I wanted the job, and asked how I get there. The answer was that I needed to be a grade 7 and I was only a (new) grade 5, and since there are only a handful of promotions they can give each year, all things being equal I would get promoted to the seniority level needed to be doing that job in about 6-10 years.

I was essentially doing what this article was advising, but because of the corporate structure I was in, I was just volunteering to be taken advantage of. The correct strategy actually was just to leave. I wasn't going to be successful in that structure, it wasn't a meritocracy, and the business results over time went how you would expect. In the end I leveraged a job offer to put myself on track to get into that role within 2 years, but in reality that was 2 years wasted. I should've left immediately.


It's pretty clear from other articles the answer is no. Although from what I can see it claims to be about 50 times better than conventional methods. One claim is they tracked a plane to within a few hundred meters over a 300 mile flight. Which is technically impressive but entirely useless for the stated purpose in this article.

It's kind of annoying that they don't go into specifics about why this is good. The problem with traditional accelerometers is that error accumulates, so even small errors become large over time. The article doesn't really address what makes this different - and in fact I don't think it is different. You're still just measuring acceleration using a sensor, and that sensor will have errors that accumulate. So the question is how much better is it?
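To make the accumulation point concrete, here's a toy sketch (my own illustration with made-up numbers, not anything from the article): dead reckoning double-integrates acceleration to get position, so even a tiny constant sensor bias produces position error that grows with the square of time.

```python
# Toy illustration: why accelerometer error blows up after double integration.
# The bias and timestep values are arbitrary, chosen only to show the scaling.
import random

def dead_reckon(n_steps, dt, bias, noise_sd, seed=0):
    """Double-integrate noisy acceleration and return the position drift."""
    rng = random.Random(seed)
    v = x = 0.0
    for _ in range(n_steps):
        a = bias + rng.gauss(0.0, noise_sd)  # true acceleration is zero
        v += a * dt   # first integration: velocity error grows ~ t
        x += v * dt   # second integration: position error grows ~ t^2
    return x

# With a constant bias alone, drift is roughly 0.5 * bias * t^2:
drift_10s = dead_reckon(1000, 0.01, bias=0.001, noise_sd=0.0)
drift_100s = dead_reckon(10000, 0.01, bias=0.001, noise_sd=0.0)
print(drift_10s, drift_100s)  # ~100x more drift for 10x the time
```

A 50x better sensor therefore buys you roughly 7x longer before you hit the same position error, which is why "50 times better" and "useful for a 300 mile flight" can both be true and still not solve the general problem.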

I would wager that actually this is probably just a way of funnelling money into research around quantum rather than genuinely trying to solve this specific problem. This specific problem sounds like it could be solved for a lot less money using conventional accelerometers in combination with some other local location data (optical sensors for example, you're in a very controlled environment).


> I would wager that actually this is probably just a way of funnelling money into research around quantum rather than genuinely trying to solve this specific problem.

You’re absolutely correct, from the government’s perspective the interest in the technology is for high accuracy inertial navigation systems for defence purposes, not for the London Tube. If you look at the other companies involved in this project, there’s a number of defence contractors involved.

This project isn’t really new, and historically the pitch has always been: We want to develop GPS grade navigation that doesn’t depend on satellites, and is smaller and better existing inertial navigation units. Oh look the London Underground is the perfect test bed for our technology!

It’s underground, so no GPS or many external signals. It’s already well mapped so we have something to compare against. Tube trains are loud, hot and vibrant a lot, which makes it a challenging environment for inertial systems. Plus it’s cheap and very easy to roll a box on an existing train, drive a few km under the city, and then compare your results to GPS from when you go underground, to when you surface again.

https://www.theguardian.com/science/article/2024/jun/15/lond...

The idea of using it to map the underground I think is a bit of a red herring. Makes a good story, and TfL will probably be grateful for the data. But it's not the kinda thing anyone thinks is worth developing quantum accelerometers for.


Given all the GPS jamming in Russia/Ukraine the defense world needs something not GPS based that works. The civilian world also needs this since they are often hit with the same jamming (both as collateral damage and intentional harm to the enemy)

At every station at least it should be very cheap to install optical markers that could allow for many opportunities for recalibration.

Agreed, but then said optical markers regularly positioned in tunnels and a map of the tube is enough to position yourself already. That's likely how it works right now, and it's fine.
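A rough sketch of that marker-plus-map scheme (my own toy model, not how any real tube signalling system is implemented): drift accumulates between fixes, and passing a surveyed marker snaps the estimate back, so worst-case error is bounded by marker spacing rather than trip length.

```python
# Toy model of dead reckoning with recalibration at known optical markers.
# Drift rate and marker positions are invented for illustration.
def track_position(true_positions, markers, drift_per_step=0.05):
    """Return position estimates; error grows until a marker resets it."""
    est = []
    error = 0.0
    for pos in true_positions:
        error += drift_per_step   # unbounded drift between fixes
        if pos in markers:        # marker at a surveyed location
            error = 0.0           # recalibrate: estimate snaps to truth
        est.append(pos + error)
    return est

positions = list(range(10))       # train moves 1 unit per step
estimates = track_position(positions, markers={4, 9})
# Worst-case error depends on the gap between markers, not distance travelled.
```

That's the whole appeal: cheap fixed infrastructure turns an ever-growing error into a bounded one, which is why fancy accelerometers are hard to justify for this particular job.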

The article says they just use wheel sensors. Most of the time they don't need to be very accurate, so that is more than good enough, but in stations much higher accuracy should be needed and so they need an additional correction there. (Note that I said should: I'm a strong believer in automated platform edge doors, which require the train to stop so its doors align. Few systems in the world have this though.)

A number of parts of the underground already have platform edge doors, and train stopping locations are tightly controlled regardless of the presence of platform edge doors.

The accumulation is due to integration when converting acceleration to distance, not the accelerometer itself

The accumulation might be from the integration, but the error is clearly from the accelerometers. Smaller error means smaller accumulated error, which means your magic box is usefully accurate for longer.

I wonder if it theoretically could (or already does) benefit from something like this: https://www.youtube.com/watch?v=nCg3aXn5F3M

It’s quantum so it’s better

I was quite surprised the direction this article took. I wasn't expecting reheated whinging about the toolchain.

FPGAs do need a new future. They need a future where someone tapes out an FPGA! Xilinx produced UltraScale+ roughly a decade ago and hasn't done anything interesting since. Their Versal devices went off on a tangent into SoCs, NoCs, AI engines - you know what they didn't do? Build a decent FPGA.

Altera did something ambitious back in 2014 when they proposed the hyper-register design, totally failed to execute on it and have been treading water because of the Intel cluster**. They're now an independent company but literally don't have anyone who knows how to tape out a chip.

I'm less familiar with the Lattice stuff, but since their most advanced product is still 16nm finfet I suspect they aren't doing anything newer than Xilinx or Altera.

We need a company that builds an FPGA. It doesn't matter what tooling you have because the fundamental performance gap between a custom FPGA solution and a CPU or GPU based solution is entirely eaten up by the fact the newest FPGA you can buy is a decade old and inexplicably still tens of thousands of dollars.

If FPGA technology had progressed at the same rate Nvidia or Apple have pushed forward CPU/GPU performance, there'd be some amazing opportunities to accelerate AI (on the path to then creating ASICs). But it hasn't, so all the scaling laws have worked against FPGAs, and the places they have a performance benefit have shrunk massively.


There's a few things. Let's start with the core of what they say is their value. They have forward deployed engineers - this is a totally new, previously unknown innovation - who go to a company, understand their needs and build data processing tools to give them insights. Then they generalize these tools so that they can essentially sell them as SaaS software, giving them SaaS-type economics.

What other people say is their secret sauce is that they do consulting work for the government (a forward deployed engineer is just a consultant) and they make incredible margins because their senior management and early investors have connections to the government, which gets them exclusive access to incredibly juicy contracts. As these contracts paid off they leant heavily into the social media meme stock trend, so their CEO spends time talking like a psychopath and doing various non-economic things like spending huge amounts of money running adverts about how they're going to use AI to unleash America's workers (America's workers aren't able to buy Palantir software or services, but they can buy its stock).


How is this "forward deployed engineers" model different from that of basically any tech consulting firm?


It isn't.


OK. The first part sounds a bit like an innovation.

I was kind of expecting someone to say it had e.g. really sophisticated ETL tools that can normalise loads of different data, or can query across disparate data sources, or something.


The first part sounds like a basic ERP implementation. Only instead of leaning on your in-house domain experts who have years of experience/relevant knowledge/know the relevant caveats, you pay consulting rates to train up new domain experts who don't understand/know the caveats and who will charge you consulting rates to gain access to the results of the training you overpaid for.

'But they cleaned the data up'. That data was also cleaned up during all the last major system updates. And during the implementation of those systems. And the implementation of the systems before that.


I mean they do advertise themselves as an ERP system so that part tracks.


> What other people say is their secret sauce, is they do consulting work for the government

Also for the IDF

