
> “In my trips to Wall Street,” Dyer told the panel, “one of my analyst friends took me to lunch one day and said, ‘Joe, you have to get iRobot out of the defense business. It’s killing your stock price.’ And I countered by saying ‘Well, what about the importance of DARPA and leading-edge technology? What about the stability that sometimes comes from the defense industry? What about patriotism?’ And his response was, ‘Joe, what is it about capitalism you don’t understand?’”

I find this article a pretty compelling critique of the extractive incentives of Wall Street and a good argument for government stepping in from time to time to adjust those incentives. Where is the societal good in the engine of capitalism prioritizing short-term extraction over long-term value creation?


It's also wrong to think that company performance has anything to do with stock prices nowadays, anyway. Look at Oracle, supposedly an established company with a predictable runway for future earnings, jumping 40% (40%!) and then shedding that jump over the following months.

Or... wait for it... Gamestop. Not just what happened in 2021. What happened in 2024. What's happening now. (Compare its market cap to its cash, and then how it compares to competitors, and then price-to-earnings, and then again to competitors).

Look at the market as-a-whole. Falling earnings, stock prices going up.

I wouldn't be surprised to find that iRobot was simply marked for death. For any company not named Apple that manufactures in China, Wall Street has decided it will face headwinds from IP theft and from competitors backed by the full faith and force of the Chinese Communist Party, so it gets busy squeezing every ounce of value out, potential be damned (because, as far as traders and shareholders are concerned, such companies already are).


What is the current AI/data center mega-boom if not forgoing short-term extraction for long-term value creation?


Given how many people are getting rich from the inflated stock prices of every AI-adjacent company right now, including the ones with no obvious path to profitability, I could make the argument that they're already in a short term extraction phase.

(I'm also not sure if putting a significant % of the population out of work will create long term value to society.)


The stocks are inflated because Wall Street believes current $$$$$ capital investments will create massive long-term value!!


As a lowly retail investor, I'm only investing in AI because everyone else is investing in AI. I hope for a greater fool to dump my AI stocks on before the music stops. I am sure plenty of Wall Street sees it the same way.

We're certainly seeing short term value at companies who grew profits by replacing workers with cheap AI tools. But the true cost of those AI tools is still being paid by investors, not customers. (Not to mention the indirect costs being paid by society, from the rising cost of RAM & electricity to global warming.)

In the long term these AI companies will need to raise their prices substantially if they're to break even. Will the value still be there for their customers when it's no longer cheap?

And if AI puts enough people permanently out of work, the GDP will drop, leading to demand for any product made with AI dropping too. It is an industry that could eventually eat itself.


> Wall Street believes current $$$$$ capital investments will create massive long-term value

Clearly not. The stock market has a correction at least every few years. So Wall Street only believes they can sell the stock for higher within a few years. Not very long term is it?


If you could predict a stock market correction before it happens, you'd be very, very rich. The fact that corrections sometimes happen does not negate the existence of market-wide expectations for any given stock.


> If you could predict a stock market correction before it happens, you'd be very, very rich.

Same goes for if you can predict the price of a stock... but analysts do it anyway and set targets for stocks.

> The fact that corrections sometimes happen does not negate the existence of market-wide expectations for any given stock.

Whether a crash happens or not is part of the expectation. Regardless of what you read in articles, those fund managers often sit out situations they don't deem worthy of investing in.


> So Wall Street only believes they can sell the stock for higher within a few years.

Or they think the returns from holding the stock will be higher.


> What is the current AI/data center mega-boom

Short term extraction.

The long term value is in AI research not scaling LLMs.


Sure, but the author is arguing that the outcome you're describing is tightly coupled to the perverse incentives that he describes in the article. Investors pushed the company towards extraction over innovation and the end product suffered as a result.


Explore vs exploit?

Let's run an experiment where we just run exploit forever: let's restructure the private sector, our country's moral baselines, and eventually our executive leadership to be maximally exploitative, let's do that for about 45 years and see where it lands us. -Some greed-is-good guys in the 80s, probably.


It just doesn’t seem like these other vacuum robotics companies spend so much money on research and development.

I’m sure they could have built more advanced robots if they had focused on research, but virtually every competitor is cheaper and offers better technology. It seems like their competitors just applied something off the shelf, not some grand big-brain advancement.


Missing the point.

iRobot was far more than vacuums until they weren't.

Read the article. The author spells it out.

I lived it. I read about them and bought a Roomba back when they first sold them. They had so much in the pipeline, consumer and otherwise. Hell, they even had a STEM kit programmable Roomba.

History repeats itself because people forget.


It just says that they sold off their defense robot division and launched a consumer products company.

They just aren’t consumer centric. The Neato was so much better than the Roomba, and that was so long ago.


That is probably true. But Roomba sucked in the early 2000s too. They never got better.


I believe the author's thesis is that if they had invested in innovation over a couple decades, the product probably would have sucked less.


Or perhaps would have sucked more where it needed to, and sucked less where it didn't.


It's a vacuum cleaner. All you want it to do is suck.


But not at navigation.


It does seem like that upon reading the article, but it’s not what the title of the article suggests.


The innovation being shut down wasn't innovation towards making robot vacuum cleaners better. It was innovation directed towards military applications, like building robotic hands.


Exactly this. If they had been innovating in vacuum technology then maybe this article would have a point. But they were building stuff for the military and for space, and there's a good reason investors wanted them to get out of that because it was sucking up money and not resulting in better vacuum cleaners.


Well, it's 2025, and we've spent the better part of the year discussing the bitter lesson. It seems clear that solving the more general problem is key to innovation.


Hardware is not like software. A general-purpose humanoid cleaning robot will be superior to a robot vacuum, but it will always cost an order of magnitude more. This is different from software, where costs decrease exponentially and you can do the compute in the cloud.


I'm not sure advancements in AI and advancements in vacuum cleaners are at similar stages in terms of R&D. I'd be very wary of trying to apply lessons from one to the other.


And the commenter above is highlighting the article's hypothesis about why they never got better.


Yet they were about far more than just vacuums!

In the 2000s, no one was doing what they were doing.


I'm not so sure I buy the premise that engineers are really dismissing AI because it's still not good enough. At the very least, this framing does not get to the heart of why certain engineers dislike AI.

Many of the people I've encountered who are most staunchly anti-AI are hobbyists. They enjoy programming in their spare time and they got into software as a career because of that. If AI can now adequately perform the enjoyable part of the job in 90% of cases, then what's left for them?


Have you seen the prices of pre-owned Honda/Toyota sedans that are less than 5 years old? There are absolutely cars out there where trading in your new car after 3-4 years can make sense depending on the cost of the car, the depreciation curve, and whether you want to always be driving a relatively new car. Of course it's almost always going to be a better value proposition to drive the car for 10 years if you can, but that can still depend on depreciation.


The math doesn't work when you calculate the same thing based on buying low mileage used cars or leases.

You're throwing large amounts of equity away every 4 years.

Also electric cars get killed on the depreciation curve.
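A rough back-of-the-envelope sketch of the equity point (all numbers hypothetical, and assuming a simple exponential depreciation curve, which understates how steep the first years really are):

```python
# Compare depreciation paid when trading in every 4 years vs. holding one car.
# All figures are hypothetical illustrations, not real market data.

def resale_value(price, years, annual_depreciation=0.12):
    """Estimated resale value after `years`, assuming flat 12%/yr depreciation."""
    return price * (1 - annual_depreciation) ** years

NEW_PRICE = 35_000  # hypothetical new-car price

# Strategy A: buy new, trade in every 4 years, over 12 years (3 cycles).
cost_trade_in = sum(NEW_PRICE - resale_value(NEW_PRICE, 4) for _ in range(3))

# Strategy B: buy new once and hold for 12 years.
cost_hold = NEW_PRICE - resale_value(NEW_PRICE, 12)

print(f"Trade-in every 4 years: ${cost_trade_in:,.0f} in depreciation")
print(f"Hold for 12 years:      ${cost_hold:,.0f} in depreciation")
```

Under these made-up numbers the trade-in strategy pays for the steep early part of the curve three times over, which is the "throwing equity away" effect.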


Low mileage used cars don't come with a warranty, or probably have a more limited warranty if they're CPO.

Leases can be better, but again they are usually better choices in high depreciation scenarios (like luxury vehicles or EVs, as you point out), not low depreciation scenarios.


> Also electric cars get killed on the depreciation curve.

I have heard this a couple of times now, and I believe it. Is the cause battery wear or pure demand (buyers don't want used EVs for various non-technical reasons)?


In California, at one point, you could get a rebate of a few thousand dollars, and if you were in the Central Valley, an additional few thousand on top of that. Some local cities gave rebates beyond those, plus the federal tax rebate. Buy a $45k Model 3 and get back $13k-$15k just for buying it. Rebates like that are going to play havoc with resale values. On top of that, new Teslas went down in price over the past several years. I think as these incentives taper off we'll see a more stable drop-off.


I think buyers just understand the value of a battery that they've cared for and babied compared to a battery with unknown history.


Those things also require more willpower than taking a medication. Willpower is generally determined by your particular psychology which is determined by genetics and environmental factors. People don't have a choice in the matter as much as your comment seems to imply. Getting GLP-1s to everyone who could benefit from them is extremely important for overall health.


"Real industry" also has quite a hard time getting things done these days. If you look around at the software landscape, you'll notice that "getting things done" is much easier for companies whose software interfaces less with the real world. Banking, government, defense, healthcare etc. are all places where real-life regulation has a trickle-down effect on the actual speed of producing software. The rise of big tech companies as the dominant economic powerhouses of our time is only further evidence that it's easier to just do a lot of things over the internet and even preferred, because the market rewards it. We would do well to figure out how to get stuff done in the real world again.


I think the problem is false positives, not false negatives. The people you interact with during the interview process have all sorts of reasons to embellish the experience of working at their company.


> The people you interact with during the interview process have all sorts of reasons to embellish the experience of working at their company.

That's true, but you have to be kind of smart about it. If you just ask "Is working here fulfilling?", of course they'll say "Yes, super!". You cannot take that at face value; your questions need to be shaped so you can infer whether working there is fulfilling, by asking other questions that give you clues to that answer.


You hit the nail on the head. There is no place on the internet more broadly susceptible than this one to the same kinds of "founder brain" malaise that has afflicted so many in Silicon Valley--i.e. "I am good at software development, so therefore I am confident I have a good understanding of (and opinion on) all sorts of intellectual topics".


Maybe that's an apt analogy in more ways than one, given the recent research out of MIT on AI's impact on the brain, and previous findings about GPS use deteriorating navigation skills:

> The narrative synthesis presented negative associations between GPS use and performance in environmental knowledge and self-reported sense of direction measures and a positive association with wayfinding. When considering quantitative data, results revealed a negative effect of GPS use on environmental knowledge (r = −.18 [95% CI: −.28, −.08]) and sense of direction (r = −.25 [95% CI: −.39, −.12]) and a positive yet not significant effect on wayfinding (r = .07 [95% CI: −.28, .41]).

https://www.sciencedirect.com/science/article/pii/S027249442...

Keeping the analogy going: I'm worried we will soon have a world of developers who need GPS to drive literally anywhere.
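For anyone parsing those bracketed intervals: 95% CIs for correlations like these are conventionally computed with a Fisher z-transform. A minimal sketch (the sample size here is hypothetical, and the paper's intervals come from a meta-analytic pooling, so these numbers won't exactly reproduce its CIs):

```python
import math

def correlation_ci(r, n, z_crit=1.96):
    """95% CI for a Pearson correlation r from n samples, via Fisher z."""
    z = math.atanh(r)                      # Fisher z-transform of r
    se = 1 / math.sqrt(n - 3)              # standard error on the z scale
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)    # back-transform to the r scale

# Hypothetical n = 200 participants for the environmental-knowledge effect.
print(correlation_ci(-0.18, 200))
```

The wide wayfinding interval quoted above (crossing zero) is exactly what makes that effect "positive yet not significant".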


I’m navigationally clueless but I don’t drive professionally


I think it's a bit fallacious to imply that the only way we could be in an AI investment bubble is if people are reasoning incorrectly about the thing. Or at least, it's a bit reductive. There are risks associated with AI investment. The important people at FAANG/AI companies are the ones who stand to gain from investments in AI. Therefore it is their job to downplay and minimize the apparent risks in order to maximize potential investment.

Of course at a basic level, if AI is indeed a "bubble", then the investors did not reason correctly. But this situation is more like poker than chess, and you cannot expect that decisions that appear rational are in fact completely accurate.

