> Claude is very useful but it's not yet anywhere near as good as a human software developer. Like an excitable puppy it needs to be kept on a short leash.
The skill of "a human software developer" is in fact a very wide distribution, and your statement is true only for an ever-shrinking tail end of that distribution.
The thing that gets installed, if it is an executable, usually also has permissions to do scary things. Why is the installation process so scrutinized?
I think there's a fundamental psychological reason for this - people want to feel like some ritual has been performed that makes at least some level of superficial sense, after which they don't have to worry.
You see this in all the obvious examples of physical security.
In the case of software, it's the installation that's the ritual, I guess. Complete trust must be placed in the software itself by definition, so people just feel better knowing for near certain that the software installed is indeed 'the software itself'.
It would raise the same kind of alert for me if someone used wget to download a binary executable instead of a shell script.
The issue is not the specific form in which code is executed on your machine, but rather whom you allow to run code on your computer.
I don't trust arbitrary websites from the Internet, especially when they are not cryptographically protected against malicious tampering.
However, I do trust, for instance, the Debian maintainers, as I believe they have thoroughly vetted and tested the executables they distribute, cryptographically signed, to millions of users worldwide.
In Rust, you can always create a new tokio runtime and use that to call an async function from a sync function. Ditto with Python: just create a new asyncio event loop and call `run`. That's actually exactly what an Io object in Zig is, but with a new name.
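The Python version of this bridge is a one-liner. A minimal sketch (the coroutine `fetch_value` is a hypothetical stand-in for real async work), where `asyncio.run` creates a fresh event loop, drives the coroutine to completion, and tears the loop down:

```python
import asyncio

async def fetch_value() -> int:
    # Stand-in for real async work (e.g. a network request).
    await asyncio.sleep(0)
    return 42

def sync_wrapper() -> int:
    # Bridge from sync to async: asyncio.run() spins up a new
    # event loop just to drive this one coroutine.
    return asyncio.run(fetch_value())

print(sync_wrapper())  # prints 42
```

The Rust equivalent is `tokio::runtime::Runtime::new()?.block_on(some_async_fn())`. The catch in both languages is the same: this only works from code that isn't already running inside an event loop.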
Looking back at the original function coloring post [1], it says:
> It is better. I will take async-await over bare callbacks or futures any day of the week. But we’re lying to ourselves if we think all of our troubles are gone. As soon as you start trying to write higher-order functions, or reuse code, you’re right back to realizing color is still there, bleeding all over your codebase.
So if this is isomorphic to async/await, it does not "solve" the coloring problem as originally stated, but I'm starting to think it's not much of a problem at all. Some functions just have different signatures from other functions. It was only a huge problem for JavaScript because the ecosystem at large decided to change the type signatures of some giant portion of all functions at once, migrating from callbacks to async.
It could first judge whether the PR is frivolous, then try to review it, then flag a human if necessary.
The problem is that GitHub, or whatever system hosts the process, should actively prevent projects from being DDoSed with PR reviews, since using AI costs real money.
It's been stated like a consultant giving architectural advice. The problem is that it is socially acceptable to use LLMs for absolutely anything, and in bulk too. Before, you strove to live up to your own standards and people valued authenticity. Now it seems like we are all striving for the holy grail of conventional software engineering: The Average.
It is absolutely not socially acceptable, and people like yourself blithely declaring that it is is getting tiring. Maybe it’s socially acceptable in your particular circles to not give a single shit, take no pride in the slop you throw at people, and expect them to wade through it no questions asked? But not for the rest of us.
Maybe I didn't state my point clearly. That was a comment about an earlier experience of mine here on HN: someone was asked whether or not they'd used AI to write, and their response was "why not use it if it's better than my own". If that is the reasoning people give, and they are not self-aware enough to be embarrassed about it, I think it must mean that a lot of people think like that.