
>What's wrong with a big end of day commit?

Hoo boy, I guess you never tried to use `git blame` on years-old shit, huh? Don't push a commit for every line; commit logical units like one particular feature or issue.

>But then somebody comes along and decides to just flush your well-curated history down the toilet (i.e., delete it and start from scratch somewhere else), and then all the valuable metadata stored in the history is lost.

This doesn't just happen by accident. There are tools to migrate repositories and to prune ancient commits from huge ones. If you curate your commit history, this is probably never necessary, or only becomes necessary after decades.
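For example, one common way to avoid the weight of ancient history without rewriting anything is a shallow clone (the URL here is a placeholder):

    # Fetch only the most recent history; everything older stays on the server
    git clone --depth 1000 https://example.com/huge-repo.git

    # Or keep only commits newer than a cutoff date
    git clone --shallow-since=2020-01-01 https://example.com/huge-repo.git

Actually rewriting old commits out of the repository is a bigger hammer (e.g. git filter-repo), and that's the kind of thing you'd only reach for after decades.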

>Maybe consider putting your energy into good documentation inside the repository.

Commit messages are basically documentation for the code. `git blame` associates the messages with individual lines and lets you step through all the revisions.
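For example, to dig into the history of a particular stretch of code (the file name and line range here are made up):

    # Which commit (and message) last touched each of these lines
    git blame -L 120,140 src/parser.c

    # The full patch history of just that line range
    git log -L 120,140:src/parser.c

Both of these surface the commit messages right next to the code they explain, which is exactly why terse or sloppy messages hurt years later.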

>I would love to have more projects with documentation which covers the timeline and ideas during development, instead of having to extract this information from metadata - which is what commit messages are, in the end.

The commit messages are for detailed information, not so much for architectural or API documentation. This doesn't mean you should get rid of commit metadata! Eventually, you will find a situation where you wonder what the hell you or someone else was doing, and the commit message will be a critical piece of the puzzle. You can also leave JIRA links or whatever in the message, although that adds a dependency on JIRA for more details.


>A disadvantage of git add -p is that it allows you to create a commit that never existed on the development system, and, hence, cannot have been tested.

The things you commit can always be different from what you test. In fact, if you know what you are doing, it is a feature to be able to create commits like this without testing. I certainly don't need to test my "Fixed typo in comments" commits...

>How do people handle that? One way would be to follow it up with git stash, running tests, and, if necessary, git amend, but that can get cumbersome soon.

Well, your CI system should be testing for you. But if you must test locally, `git rebase` can pause between steps or run commands between steps. Finish making your commits; if you made, say, three of them, run `git rebase -i HEAD~3` and insert `break` or `exec` lines as desired.
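The todo list that `git rebase -i HEAD~3` opens might end up looking like this (the hashes, subjects, and `make test` are placeholders for your own):

    pick a1b2c3d Refactor config parser
    exec make test
    # the exec line runs the test suite after the commit above is applied
    pick e4f5a6b Add retry logic to the client
    break
    # rebase pauses at the break; test by hand, then run `git rebase --continue`
    pick c7d8e9f Fix typo in comments

If you want to test every commit, there's a shortcut: `git rebase -i --exec 'make test' HEAD~3` inserts an `exec` line after each pick for you.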


I've thought like this for a while, even before AI. I figure that if I do write, it will either be ripped off, outdone by a person, or suspected of having been ripped off from somewhere. There are also topics I would not feel comfortable blogging about because they are not PC. I think it is worth developing the skill of writing, however. I haven't found a way to solve these issues 100%, but it occurred to me that the odds of having these problems would be lower if I paywalled my stuff, or at least required registration for access. Free registration would not work if everyone did it the same way, but perhaps there is a way to have your own system that no crawler would bother trying to interface with.

There are already tax schemes for productive enterprise, and this is not the first time people have been displaced by technology. It happens all the time. Also, does it matter if it's AI doing the production vs overseas labor? If you're worried that people won't be able to afford to buy the output of the AI, that kind of implies that they can work for cheaper than the AI (and can thus outcompete it, at least on average). In the long run, things will reach a new and probably more abundant state. In the short or medium term, we may have some pain and need to strategize how to help people adapt.

If I had to guess, it is just extra work to do it twice, and you may not need to wait for it for some use cases. The better way would be to add a flag or alternative function to make the sync a blocking operation in the first place.

No, you are witnessing democracy in action. People can disagree about "facts" and that is ok. People need to stop trying to impose things like this on others, have conversations with people they disagree with, and otherwise mind their business instead of advocating for a different group to impose on everyone whatever they think is true, disregarding individual autonomy and basic liberties.

I think you're right, but technically having too many constraints doesn't mean the spec is wrong: some may be redundant, and that can be OK or even helpful. A lack of contradictions doesn't mean it's right, either. I would argue that not knowing whether you got all the constraints specified is the same problem as not knowing whether all generic requirements are specified. It's more work to formally specify anything, and the constraints are more difficult to casually interpret, but in either case doneness is more of a continuum than a binary attribute.

I don't see why two LLMs together (or one, alternating between tasks) could not separately develop a spec and an implementation. The human input could be a set of abstract requirements, and the two systems could interact and cross-check each other against those requirements, perhaps with some code/spec reviews by humans. I really don't see it ever working without one or more humans in the loop, if only to confirm that what is being done is actually what the human(s) intended. The humans would ideally be able to say as little as possible to get what they want. Unless/until we get powerful AGI, we will need technical human reviewers.

> I really don't see it ever working without one or more humans in the loop, if only to confirm that what is being done is actually what the human(s) intended.

That is precisely my point.


>The trouble with formal specification, from someone who used to do it, is that only for some problems is the specification simpler than the code.

I think most problems that one would encounter professionally would be difficult to formally specify. Also, how do you formally specify a GUI?

>The proofs are separate from the code. The notations are often different. There's not enough automation. And, worst of all, the people who do this stuff are way into formalism.

I think we have to ask what exactly we are trying to formally verify. There are many kinds of errors that can be caught by a formal verification system (including some that exist only in the formal spec and have no impact on the results). It may actually be a benefit to have proofs separate from code, if they can be reconciled mechanically and definitively. Then you essentially have two specs, and can cross-reference them until you get them both to agree.


Those are all good outcomes, up to a point. But if this stuff works TOO well, most or maybe all of us will have to start looking at other career options. Whatever autonomy you think you have in deciding what the AI does can ultimately be trained away as well, and it will be, the more people use it.

I personally don't like it when others who don't know how to code are able to get results using AI. I spent many years of my life and a small fortune learning scarce skills that everyone swore would be the last to ever be automated. Now, in a cruel twist of fate, those skills are being automated and there is seemingly no worthwhile job that can't be automated given enough investment. I am hopeful because the AI still has a long way to go, but even with the improvements it currently has, it might ultimately destroy the tech industry. I'm hoping that Say's Law proves true in this case, but even before the AI I was skeptical that we would find work for all the people trying to get into the software industry.


I get the feeling, but I can't help saying it sounds like how I imagine professional portrait artists felt about the photograph. Or scribes about audio recordings. Or any other occupation that was similarly more or less replaced by a technological advance.

Those jobs still exist, but by and large they are either very niche or have come to work with that tech in some way.

It is not wrong to feel down about the risk of so much time and training rapidly losing value. But it's also true that change isn't inherently bad, and sometimes it includes adjusting how we use our skills and/or developing new ones. Nobody gets to be elite forever; everyone is eventually replaced or becomes common or unneeded. So it's probably more helpful, for yourself and those who may want to rely on you, to be forward-thinking rather than complaining. That doesn't mean you have to become pro-AI, but it may be helpful to be pragmatic and work where it can't.

As to work supply... I figure that will always be a problem as long as money is the main point of work. If people could just work where they specialize, without so much concern for issues like not starving, maybe it would be different. I dunno.


> I personally don't like it when others who don't know how to code are able to get results using AI.

Sounds like for many programmers AI is the new Visual Basic 6 :-P


It's worse than that lol. At least with VB 6 and similar scripting languages, there is still code getting written. Now we have complete morons who think they're software developers because they got some AI to shit out an app for them. This is going to affect how people view the profession of software engineering all around.

