It's funny because he tried to excuse it in another comment by saying he's using speech-to-text. I really don't understand what this lie was supposed to accomplish since he seems to be the only person using a STT system incapable of proper capitalisation.
A bit like we should trust RFK on how "vaccines don't work" thanks to his wide experience?
The idea here is not to say that antirez has no knowledge of coding or software engineering. The idea is that if he says "hey, we have the facts", and then when people ask "okay, show us the facts" he replies "just download Claude Code and play with it for an hour and you'll have the facts", we don't trust that. That's not science.
That's a great example in support of my argument here, because RFK Jr clearly has no relevant experience at all - so "figuring out, based on prior reputation and performance, who you should trust" should lead you to not listen to a word he says.
Well guess what, a lot of people will "trust him" because he is a "figure of power" (he's a cabinet member in the current administration). So that's exactly why "authority arguments" are bad... and why we should rely on science and studies.
It entirely depends on the language you were using. The difference in quality of both questions and answers between e.g. Go and JavaScript is staggering. Even as a relative beginner in JS I could not believe the amount of garbage I came across, something that rarely happened with Go.
Half the vendor software I come across asks you to mount devices from the host, add capabilities, or run the container in privileged mode, because their outsourced lowest-bidder developers barely even know what a container is. I doubt more than a tiny minority of their customers protest against this, because apparently the place I work at is always the first one to have a problem with it.
If you're at the point where you're exposing services to the internet but you don't know what you're doing, you need to stop. Choosing which interface to listen on is one of the first configuration options in pretty much everything; if you're putting in 0.0.0.0 because that's what you read on some random blogspam "tutorial", then you are nowhere near qualified to have a machine exposed to the internet.
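For context, a minimal Python sketch (illustrative, not from the thread) of what that configuration choice actually does: binding to 127.0.0.1 keeps a service reachable only from the machine itself, while 0.0.0.0 accepts connections on every network interface the host has.

```python
import socket

# Bind to the loopback interface: only processes on this machine can connect.
local_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local_only.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
print(local_only.getsockname()[0])  # 127.0.0.1

# Bind to all interfaces: reachable from any network the host is attached to.
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 0))
print(all_interfaces.getsockname()[0])  # 0.0.0.0

local_only.close()
all_interfaces.close()
```

The copy-pasted-tutorial failure mode is exactly the second bind: it "works" in the sense that the service comes up, but it also answers on the public interface.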
This post doesn't read anything like ChatGPT. Correct grammar does not indicate ChatGPT. Em-dashes don't indicate ChatGPT. Assessing whether something was generated by an LLM requires multiple signals; you can't simply decry a piece of text as AI-generated because you noticed an uncommon character.
Unfortunately, I think posts like this only detract from valid criticisms. There is an actual ongoing epidemic of AI-generated content on the internet, and it is perfectly valid for people to be upset about it. I don't use the internet to be fed an endless stream of zero-effort slop engineered to make me feel good; I want real content produced by real people. Yet posts like OP's only serve to muddy the waters around these critiques: they latch onto the opinions of random internet bottom-feeders (a dash now indicates ChatGPT? Seriously?) and try to minimise the broader skepticism toward AI content.
I wonder whether people like the author will regret their stance once a sufficient number of people are indoctrinated and their content becomes irrelevant. Why would anyone read what you have to say if the magic writing machine can keep shitting out content tailored for them 24/7?
Yes, that's an absolutely deranged opinion. Most tech jobs can be done on a $500 laptop. You realise some people don't even make your computer budget in net income every year, right?
That's a bizarrely extreme position. For almost everyone, a ~$2000-3000 PC from several years ago is indistinguishable, from a productivity standpoint, from one they can buy now. Nobody is talking about $25 ten-year-old smartphones. Of course, claiming that a $500 laptop is sufficient is also a severe exaggeration; a used desktop, perhaps...
Overspending on your tools is a misallocation of resources. An annual $22k spend on computing is around a 10-20x overspend even for a wealthy individual. I'm in the $200-300k/year, self-employed, buys-my-own-shit camp, and I can't imagine spending 1% of my income on computing needs, let alone close to 10%. There is no way to make that make sense.
Look at it a different way: if you'd invested that $10K/year you've been blowing on hardware, how much more money would you have today? How about that $800/month car payment too?
I don’t understand a world where spending $1k/mo on business equipment that is used to earn dozens of times more than that is crazy. It’s barely more than my minuscule office space costs.
My insurance is the vast majority of that $800, fwiw.
Having a 10% faster laptop does not enhance your ability to earn money in any meaningful way. Just like driving around in a luxury car doesn't enhance your ability to travel from point A to point B in any meaningful way.
It's okay to like spending money on nice things, it's your money and you get to decide what matters to you. What you're getting hate for here is claiming it's justified in some way.
Yes, you don't want to underspend on your tools to the point where you suffer. But I think you're missing the flip side. I can do my work comfortably with 32GB of RAM, and my 1%-a-year budget could get me more. But why not pocket the difference?
The goal is the right tool for the job, not the best tool you can afford.
Do you expect Sam Altman to come on stage and tell you the whole thing is a giant house of cards when the entire western economy seems to be propped up by AI? I wonder whose "sober" analysis you would accept, because surely the people who are making money hand over fist will never admit it.
Seems to me like any criticism of AI is always handwaved away with the same arguments: either it comes from companies who missed the AI wave, or the models are improving so quickly that if it's shit today you just have to wait one more year, or, if you're not seeing 100x productivity improvements, you must be using it wrong.