
This is unfortunately a pure “feels over logic” comment that doesn’t engage with the parent poster’s argument at all. The point is impact, not what anyone has “in mind”.

There are plenty of locked down computers in my life already. I don't need or want another system that only runs crap signed by someone, and it doesn't really matter whether that someone is Microsoft or Red Hat. A computer is truly "general purpose" only if it will run exactly the executable code I choose to place there, and Secure Boot is designed to prevent that.

> Even SSD would be fine on HN, but it probably wouldn't be fine outside HN.

The set of people who know the term "solid state drive" is likely a strict subset of the people (mostly tech enthusiasts of some shape) who know "SSD". Same for "USB" and many other terms that have entered the mainstream primarily as an abbreviation.

So the question is not whether to use an abbreviation or spell out the full term as a matter of principle; the question is whether it's the abbreviation or the full term that's more commonly known. I'd argue that way fewer people recognize "CTA" than know the term "call to action". I personally have done some front-end development, and didn't know the abbreviation either.


And "ATM machine" tells me most people think the acronym is simply the name of the thing, rather than an acronym.

> Apple cares a lot about phone gaming

The kind of gacha games that dominate the in-app sales charts, sure. Actual gaming, they don't care about or even understand.


My theory: Some manager's KPI is to increase the number of sold GitHub runner minutes. So they did some market research -- not enough to have a clear picture, but barely enough to be dangerous -- and found that some companies use self-hosted runners for cost reasons. So they deploy a two-pronged strategy: lower the cost of GitHub runners, and charge for the use of self-hosted runners, to incentivize switching.

This fails for several reasons that someone who actually uses the product might have intuited:

(a) For some use-cases, you can't switch to GitHub's runners. For us, it's a no-go for anything that touches our infrastructure.

(b) Switching CI providers isn't hard, we had to do it twice already. Granted, most of our CI logic is in a custom build script that you can run locally, and not in the proprietary YAML file. But to be honest, I'd recommend that sort of setup for any CI provider, as you always want the ability to debug things locally.

(c) GitHub Actions doesn't get the amount of love you'd expect from something billed as a "premium service". In fact, it often feels quite abandoned, barely kept working. Who knows what they're brewing internally, but they didn't coordinate this with a major feature announcement, and didn't rush to announce anything now that they got backlash, which leads me to believe they don't have anything major planned.

(d) Paying someone -- by the minute, no less -- to use my own infrastructure feels strange and greedy. GitHub has always had per-user pricing, which feels fair and predictable. If for some reason they need more money, they can always increase that price. The fact that they didn't do that leads me to believe this wasn't about cost per se. Hence the KPI theory I mentioned above: this wasn't well-coordinated with any bigger strategy.


> Switching CI providers isn't hard, we had to do it twice already. Granted, most of our CI logic is in a custom build script that you can run locally, and not in the proprietary YAML file. But to be honest, I'd recommend that sort of setup for any CI provider, as you always want the ability to debug things locally.

I believe this has been a CI/CD best practice for over a decade. Even in venerable Jenkins, this is one of the core principles when designing pipelines[0]: don't give in to the temptation to do fancy Groovy stuff, just use simple shell commands in steps, and you will thank yourself many times over the years.

[0] https://www.jenkins.io/doc/book/pipeline/pipeline-best-pract...
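As a sketch of what that setup can look like (the file name, step names, and commands here are all hypothetical placeholders), the provider's YAML shrinks to a one-line call into a script like this, which runs identically on a laptop and in CI:

```python
#!/usr/bin/env python3
"""Hypothetical ci.py: all build logic lives here, so the CI provider's
config only needs to invoke `python ci.py <step>` and every step can be
reproduced and debugged locally."""
import subprocess
import sys


def run(cmd):
    # Echo the command before running it, fail the build on a non-zero exit.
    print(f"+ {' '.join(cmd)}")
    subprocess.run(cmd, check=True)


# Placeholder steps; a real script would call linters, test runners, etc.
STEPS = {
    "lint": lambda: run(["echo", "linting"]),
    "test": lambda: run(["echo", "testing"]),
    "build": lambda: run(["echo", "building"]),
}


def main(step="all"):
    if step == "all":
        for s in STEPS.values():
            s()
    else:
        STEPS[step]()


if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "all")
```

The point is that the proprietary CI file carries no logic of its own, so swapping providers means rewriting one trivial wrapper, not the build.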


It has been best practice for over a decade, but for reasons I don't understand, nearly every developer I've worked with just wants to go the lock-in/proprietary route and is entirely unpersuaded by the "portability" argument. I've seen it burn teams hard multiple times now. At that point people realize the wisdom of the external scripts, but then a new wave of devs comes in and starts the whole cycle over.

I don’t know why, but the linked page only shows the table of contents on iPhone Safari; when I switch to reader mode it shows the actual best practices. Anyway, thanks for sharing!

https://news.ycombinator.com/item?id=46189692 from a few days ago pretty much tells me that any company that cares even slightly about security cannot possibly depend on GitHub runners for their CI (except maybe the smallest/simplest projects). It is just one compromised package away from ruining everything.

(e): Community projects like act and Forgejo/Gitea Actions have made it a lot easier to run GitHub Actions workflows without involving GitHub and are decreasing the friction of migration.

"Hey ChatGPT, how do I increase the number of GitHub runner minutes? DO NOT suggest anything illegal, research hard"

I agree with (c) - I can't quite pinpoint it, but I've had that feeling myself several times.

They have all kinds of costs hosting GitHub, which is why there's per seat pricing for companies. If those prices are too low, they can always increase them. Charging on top of that per minute of using your own infrastructure felt greedy to me. And the fact that this was supposed to be tied to one of the lesser-maintained features of GitHub raised eyebrows on top of that.

One problem is that GitHub Actions isn't good. It's not like you're happily paying for some top tier "orchestration". It's there and integrated, which does make it convenient, but any price on this piece of garbage makes switching/self-hosting something to seriously consider.

GitHub being a single pane of glass for developers with a single login is pretty powerful. GitHub hosting the runners is also pretty useful; ask anyone who has had to actually manage/scale them what their opinion of Jenkins is. Being a "Jenkins Farmer" is a thankless job that means a lot of on-call work to fix the build system at 2am on a Sunday. Paying a small monthly fee is absolutely worth it to rescue the morale of your infra/platform/devops/sre team.

Nothing kills morale faster than wrenching on the unreliable piece of infrastructure everyone hates. Every time I see an alert in Slack that GitHub is having issues with Actions (again), all I think is, "I'm glad that isn't me," and go about my day.


I run Jenkins (have done so at multiple jobs) and it's totally fine. Jenkins, like other super customizable systems, is as reliable or crappy as you make it. It's decent out of the box, but if you load it down with a billion plugins and whatnot then yeah it's going to be a nightmare to maintain. It all comes down to whether you've done a good job setting it up, IMO.

Lots of systems are "fine" until they aren't. As you pointed out, Jenkins being super-customizable means it isn't strongly opinionated, and there is plenty of opportunity for a well-meaning developer to add several foot-guns with some simple point-and-click in the GUI. Or the worst-case scenario: cleaning up someone else's Jenkins mess after they leave the company.

Contrast with a declarative system like github actions: "I would like an immutable environment like this, and then perform X actions and send the logs/report back to the centralized single pane of glass in github". Google's "cloud run" product is pretty good in this regard as well. Sure, developers can add foot guns to your GHA/Cloud Run workflow, but since it is inherently git-tracked, you can simply revert those atomically.

I used Jenkins for 5-7 years across several jobs and I don't miss it at all.


Yeah, it seems like a half-assed version of what Jenkins and other tools have been doing for ages. Not that Jenkins is some magical wonderful tool, but I still haven't found a reasonable way to test my actions outside of running them on real GitHub.

That's a good point: entropy is only a heuristic for what you actually want to optimize, the worst-case number of guesses (though it's probably a very good heuristic).

> Basically, using the entropy produces a game tree that minimises the number of steps needed in expectation

It might be even worse than that for problems of this kind in general. You're essentially using a greedy strategy: you optimize for early information gain.

It's clear that this doesn't optimize the worst-case, but it might not optimize the expected number of steps either.

I don't see why it couldn't be the case that an expected-steps-optimal strategy gains less information early on, and thus produces larger sets of possible solutions, but through some quirk those larger sets are easier to separate later.


> For wordle, «most probable» is mostly determined by letter frequency

I don't think that's a justified assumption. I wouldn't be surprised if wordle puzzles intentionally don't follow common letter frequency to be more interesting to guess. That's certainly true for people casually playing hangman.


When it comes to quickly reducing the search space of possible words, it is - that’s how you solve it optimally, even if (or in fact, especially if) the word they chose intentionally does not use the most frequent letters.

The faster you can discard all words containing «e» because of a negative match, the better.

If you want to be really optimal, you’ll use their list of possible words to calculate the actual positional frequencies and pick the best match based on those - that’s what «mostly» was meant to imply, but the general principle of reducing the search space quickly is the same.
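A minimal sketch of that positional-frequency heuristic (the word list and function names are illustrative, not from any real solver):

```python
from collections import Counter


def positional_counts(words):
    """For each letter position, count how often each letter appears
    there across the candidate list."""
    length = len(words[0])
    return [Counter(w[i] for w in words) for i in range(length)]


def score(word, counts):
    """Score a word by how common each of its letters is at its
    position; a repeated letter is only counted the first time it
    appears, so it doesn't inflate the score."""
    return sum(
        counts[i][ch] for i, ch in enumerate(word) if ch not in word[:i]
    )


def best_probe(words):
    # Pick the candidate whose letters are most common in their
    # positions - a cheap proxy for splitting the search space fast.
    counts = positional_counts(words)
    return max(words, key=lambda w: score(w, counts))
```

This is cheaper than computing full entropies for every guess, at the cost of being a rougher approximation of information gain.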


I would guess Wordle picks from a big bag'o'words. The words are all fairly common - "regel" is not going to show up - but I see no evidence the list favors "zebra" over "taint" (which has occurred, BTW).

The original Wordle had a hard-coded ordering that was visible in the source. I toyed around with the list (as did many other people) a few years back; you can see my copy of the word list here: https://github.com/andrewaylett/wordle/blob/main/src/words.r...

It's not an assumption - it's a factual statement about how Wordle works.

IANAL either, so my own legal theories are as creative as yours, but I'd like to offer the following data point: All unrestricted open-source licenses that were written by actual lawyers, from MIT to CC0, have found it necessary to include such a liability clause.

In what sense is the MIT license "unrestricted"?

In the sense that when people want to use a piece of MIT-licensed software in another piece of software, they don't in practice find themselves restricted from doing so by the conditions of the license. "Permissive" might be a word I should rather have used.

The MIT license does place one specific license restriction on its users. Specifically: "subject to the following conditions: the above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software"

This is what I was getting at. The MIT license has restrictions, so calling it "unrestricted" doesn't make sense.
