Hard to have a conversation when critics of LLM output so often get replies like "What, you used last week's model?! No, no, no, this one is a generational leap"
Too many people are invested in AI's success to have a balanced conversation. Things will return to normal after a market shakeout of a few of the larger AI companies.
On HN I think you overestimate the number of optimists who are optimists because they have some vested interest. Everyone everywhere arguably has a vested interest. I would also argue that all of the folks on HN who are hostile and dismissive of coding agents have a vested interest too (just for the sake of contrasting your argument). If coding agents were really crappy I wouldn't be using them, just like I didn't use them until the end of 2025.
What conversation is hard to have? If you mean trying to convince people that coding agents can or cannot do a specific thing, then that may never go away. If you take an overall theme or capability, in some cases it will "just work", in other cases it needs some serious steering or scaffolding, and in other cases it will just waste as much time as you let it. It's an imperfect tool and it may always be, and two people, one insisting it can do something and the other insisting it cannot, may both be right.
What is troubling to me is the attitude of folks who are heavily hostile towards these models and the people who use them. People routinely conflate market promises with the actual delivered tools and capabilities, and lump those who enjoy and get lots of mileage out of these tools into what appears to be a big strawman camp of fawning fans who don't understand or appreciate Real Software Engineering; people who would write bad code anyway and not know it. It's quite insulting, but also wrong. Not saying you are part of this camp! But as one lonely optimist in a sea of negativity, that's certainly the perspective I've developed from the "conversations" I've seen on HN.
It's been just two years since the OLED release, so I think we're closing in on a refresh, unless a new Deck is only a year away from a generational bump. A refresh could include the updated joysticks featured on the Steam Controller, though.
Until then, I think it'd do more good for Valve to focus on their Steam app and store experience.
I was musing this summer about whether I should get a refurbed ThinkPad P16 with 96GB of RAM to run VMs purely in memory. Now 96GB of RAM costs as much as a second P16.
I feel you, so much. I was thinking of getting a second 64GB node for my homelab and I thought I'd save that money… now the RAM alone costs as much as the node, and I'm crying.
Lesson learned: you should always listen to that voice inside your head that says, "but I need it…" lol
I rebuilt a workstation after a failed motherboard a year ago. I was not very excited about being forced to replace it on a day's notice and cheaped out on the RAM (only got 32GB). This is like the third or fourth time I've taught myself the lesson not to pinch pennies when buying equipment/infrastructure assets. It's the second time the lesson was about RAM, so clearly I'm a slow learner.
I don't know about the ecosystem overall, but Fedora has been working for me with secureboot enabled for a long time.
Having the option to disable secureboot was probably due to backlash at the time and antitrust concerns.
Aside from providing protection against "evil maid" attacks (right?), secureboot is in the interest of software companies, just like platform "integrity" checks.
Technically, a property-based test caught the issue.
What I found surprising is that the "__proto__" string comes from a fixed sampling set of strings, whereas I'd have expected the function to return random strings in the given range.
But maybe that's my biased expectation from being introduced to property-based testing with random values. It also feels like a stretch to call this a property-based test, because what is the property, "setters and getters work"? Because I expect that from all my classes.
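For concreteness, the property at stake is usually just a set/get round-trip on a plain object used as a map. Here's a minimal TypeScript sketch of why "__proto__" breaks it; the names are illustrative, not taken from the article:

    // A "set then get returns the same value" property for a plain object
    // used as a string map.
    function roundTrips(key: string, value: string): boolean {
      const map: Record<string, string> = {};
      map[key] = value;
      return map[key] === value;
    }

    console.log(roundTrips("name", "x"));      // true for ordinary keys
    console.log(roundTrips("__proto__", "x")); // false: the assignment hits the
                                               // __proto__ setter, which silently
                                               // ignores non-object values, so the
                                               // read returns Object.prototype

So the property isn't the trivial "setters and getters work"; it's "this object behaves like a map for every key", which handpicked unit-test keys rarely exercise.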
Good PBT code doesn't simply generate values at random; it skews the distributions so that known problematic values are more likely to appear. In JS, "__proto__" is a good candidate for strings, as shown here; for floating-point numbers you'll probably want to skew towards generating things like infinities, NaNs, denormals, negative zero and so on. It'll depend on your exact domain.
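A rough sketch of what that skew can look like, in plain TypeScript with no particular PBT library assumed (frameworks such as fast-check offer weighted combinators for this kind of thing):

    // Known-problematic values that should show up far more often than
    // uniform random generation would ever produce them.
    const NASTY_STRINGS = ["__proto__", "constructor", "", "0", "NaN"];
    const NASTY_FLOATS = [Infinity, -Infinity, NaN, -0, Number.MIN_VALUE];

    function pick<T>(xs: T[]): T {
      return xs[Math.floor(Math.random() * xs.length)];
    }

    // Roughly a quarter of generated strings come from the nasty list;
    // the rest are random short alphanumeric strings.
    function biasedString(): string {
      if (Math.random() < 0.25) return pick(NASTY_STRINGS);
      return Math.random().toString(36).slice(2, 10);
    }

    // Same idea for floats: mix infinities, NaN, negative zero and the
    // smallest denormal in alongside "ordinary" values.
    function biasedFloat(): number {
      if (Math.random() < 0.25) return pick(NASTY_FLOATS);
      return (Math.random() - 0.5) * 1e6;
    }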
I had to use it a couple times recently in Firefox on Android, and it's a nice thing to have.
The UX is not polished, and not responsive. There's no indicator that translation is happening; then the interface disappears for the translation to materialize, with multi-second delays. All understandable if the model is churning on my mobile CPU, but it needs a clear visual indicator that something is happening.