Sprites pricing is based on usage, not reserved capacity, so depending on what you're doing I think it can actually be cheaper than Shellbox. You'll have to stay below 1GB of memory and keep the CPU mostly idle, though, which I'm not sure common workloads will do.
Nope, unless they changed this recently. It's an ssh-like way to connect and get a console/terminal, but it's not ssh, and there is no transfer capability.
It depends how highly you value your headaches, and your org's downtime. Github not working accrues at the hourly rate of every developer affected, which is likely $70-$100 an hour. 10 hours of outage in a year affecting a team of 100 would cost north of $70k, enough to hire a part-time SRE dedicated just to tending your Gitlab installation.
Your favorite search engine or LLM will show you in a second; it's really easy.
The problem is that it's not enough. The fact that Github uses Git specifically is a technical detail; it could use Mercurial just as easily, as Bitbucket used to. Github Actions, OWNERS files, PRs and review tools, the issue tracker, the wiki: none of these are Git features.
Not a chance. I think you need to spend some time in low-ball corporate IT. It's just monkeys throwing faeces at the wall. We only just levered them off Subversion...
(For reference, I use Fossil 100% offline for personal projects.)
I mean, there are solutions, but none of them seems to have enough mindshare or to work efficiently enough. (Even though Github's code review tools are pretty spartan.)
They can be. A PR can be made and code review conducted by submitting a patch to a mailing list. That's how the kernel and, I think, Git itself are developed.
CI/CD is really a methodology. It just means integrating/deploying stuff as soon as it's ready. So you just need maintainers to be able to run the test suite and deploy, which seems like a really basic thing.
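As a rough sketch of what that can look like mechanically (the make targets and paths here are placeholders, not any particular project's setup):

    #!/usr/bin/env python3
    # Minimal "CI/CD as a methodology" sketch: apply a submitted patch,
    # run the test suite, and deploy only if the tests pass.
    # "make test" / "make deploy" are hypothetical targets.
    import subprocess, sys

    def run(cmd):
        print("+", " ".join(cmd))
        return subprocess.run(cmd).returncode == 0

    patch = sys.argv[1]                    # e.g. a patch saved from the mailing list
    ok = (run(["git", "am", patch])
          and run(["make", "test"])
          and run(["make", "deploy"]))
    sys.exit(0 if ok else 1)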
> If the "half a million tons" figure were accurate, a single 1 GW data center would consume 1.7% of the world's annual copper supply. If we built 30 GW of capacity—a reasonable projection for the AI build-out—that sector alone would theoretically absorb almost half of all the copper mined on Earth.
Quickly doing such "back of an envelope" calculations, and calling out things that seem outlandish, could be a useful function of an AI assistant.
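It is also a calculation that fits in a few lines. A quick check in Python, with the annual copper supply as an assumed round number (roughly the scale of world refined copper use):

    # Sanity check of the quoted copper figures (supply value is a rough assumption)
    copper_per_gw_t = 500_000           # the claimed "half a million tons" per 1 GW
    world_supply_t  = 29_000_000        # assumed annual copper supply, ~29 Mt

    per_site = copper_per_gw_t / world_supply_t
    print(f"one 1 GW site:   {per_site:.1%} of annual supply")      # ~1.7%
    print(f"30 GW build-out: {30 * per_site:.0%} of annual supply") # ~52%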
Using your brain is so vastly more energy efficient that we might only need half of that 30 GW capacity if fewer people had these leftpad-style knee-jerk reactions.
A Gemini query uses about a kilojoule. The brain runs at 20 W (though the whole human costs 100 W). So the human uses less energy if you can get it done in under 50 seconds.
Each person uses about 100W (2000kcal/24h=96W). Running all of humanity takes about 775GW.
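Putting these numbers together with the kilojoule-per-query figure above (all rough, order-of-magnitude values):

    query_j  = 1_000                     # ~1 kJ per Gemini query (figure from upthread)
    brain_w  = 20                        # brain alone
    person_w = 2000 * 4184 / 86_400      # 2000 kcal/day, about 96 W whole body

    print(f"break-even vs brain alone: {query_j / brain_w:.0f} s")   # ~50 s
    print(f"break-even vs whole body:  {query_j / person_w:.0f} s")  # ~10 s
    print(f"all of humanity: {8.0e9 * person_w / 1e9:.0f} GW")       # ~775 GW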
Sure, the difference in energy between using and not using your brain is negligible, so if you aren't using it you really should, for energy efficiency's sake. But I don't think the claim that our brains are more energy efficient is obviously true on its own. The issue is more about induced demand from having all this external "thinking" capacity at your fingertips.
Is there an AI system with functionality equal to that of a human brain that operates on less than 100W? It's currently the most efficient model we have. You compare all of humanity's energy expenditure, but to make that comparison you need to consider the cost of replicating all of that compute with AI (assuming we had an AGI at human level in all regards, or a set of AIs that, operated together, could replace all human intelligence).
So, this is rather complex, because you can turn AI energy usage down to nearly zero when it's not in use. Humans have this problem of needing to consume a large amount of resources for 18-24 years with very little useful output during that time, and they have to be kept running 24/7 or you lose your investment. And even then there is a lot of risk that they turn out to be gibbering idiots and represent a net loss on your resource expenditure.
For this I have a modern Modest Proposal: that we use young children as feedstock for biofuel generation before they become a resource sink. Not only do you save the child from a life of being a wage slave, you can now power your AI data center. I propose we call this the Matrix Efficiency Saving System (MESS).
No one will ever agree on when AI systems have equivalent functionality to a human brain. But lots of jobs consist of things a computer can now do for less than 100W.
Also, while a body itself uses only 100W, a normal urban lifestyle uses a few thousand watts for heat, light, cooking, and transportation.
> Also, while a body itself uses only 100W, a normal urban lifestyle uses a few thousand watts for heat, light, cooking, and transportation.
Add to that the tier-n dependencies this urban lifestyle has—massive supply chains sprawling across the planet, for example the thousands upon thousands of people and goods involved in making your morning coffee happen.
Wikipedia quoted global primary energy production at 19.6 TW, or about 2400W/person, which is obviously not even close to equally distributed. Per-country it gets complicated quickly, but naively taking the total from [1] brings the US to 9kW per person.
And that's ignoring sources like food from agriculture, including the food we feed our food.
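For what it's worth, the per-capita arithmetic, with the population and US totals as rough assumed values (the actual total from [1] may differ):

    global_primary_w = 19.6e12           # Wikipedia's 19.6 TW figure
    population       = 8.1e9             # assumed world population
    print(f"world average: {global_primary_w / population:.0f} W/person")   # ~2400 W

    us_primary_w  = 3.1e12               # assumed US primary power (~93 quadrillion BTU/yr)
    us_population = 340e6
    print(f"US: {us_primary_w / us_population / 1e3:.1f} kW/person")        # ~9 kW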
To be fair, AI servers also use a lot more energy than their raw power demand if we use the same metrics. But after accounting for everything, an American and an 8xH100 server might end up in about the same ballpark.
Which is not meant as an argument for replacing Americans with AI servers, but it puts AI power demand into context.
Obviously we don't have AGI, so we can't compare many tasks. But on tasks where AI does perform at comparable levels (certain subsets of writing, greenfield coding, and art) it performs fairly well. They use more power but are also much faster, and that about cancels out. There are plenty of studies that try to put numbers on the exact tradeoff, usually focused more on CO2. Plenty find AI better by some absurd degree (800 times more efficient at 3D modelling, 130 to 1500 times more efficient at writing, or 300 to 3000 times more efficient at illustrating [1]). The one I'd trust the most is [2], where GPT-4 was 5-19 times less CO2 efficient than humans at solving coding challenges.
I did some math for this particular case by asking Google’s Gemini Pro 3 (via AI Studio) to evaluate the press release. Nvidia has since edited the release to remove the “tons of copper” claim, but it evaluated the other numbers at a reported API cost of about 3.8 cents. If the stated pricing just recovers energy cost, that implies 1500kJ of energy as a maximum (less if other costs are recovered in the pricing). A human thinking for 10 minutes would use about 6kJ of direct energy.
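In round numbers, with the electricity price as an assumed value (and a ~10 W marginal draw, which is what the 6kJ/10min figure implies):

    api_cost_usd = 0.038
    usd_per_kwh  = 0.09                  # assumed electricity price, ~9 c/kWh
    max_kj = api_cost_usd / usd_per_kwh * 3600    # kWh -> kJ
    print(f"energy bound if the price only covered electricity: {max_kj:.0f} kJ")  # ~1500 kJ

    marginal_w = 10                      # assumed marginal draw of "thinking"
    print(f"human, 10 minutes of thinking: {marginal_w * 10 * 60 / 1000:.0f} kJ")  # ~6 kJ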
I agree with your point about induced demand. The “win” wouldn’t be looking at a single press release with already-suspect numbers, but rather looking at essentially all press releases of note, a task not generally valuable enough to devote people towards.
That being said, we normally consider it progress when we can use mechanical or electrical energy to replace or augment human work.
While I don't know whether it's more or less efficient, WolframAlpha works well for these sorts of approximations, and it shows its work more clearly than the AI chatbots I've used.
Nobody on HN is a bigger AI stan than I am -- well, maybe that SimonW guy, I guess -- but the truth is that problems involving unit conversions are among the riskiest things you can ask an LLM to handle for you.
It's not hard to imagine why, as the embedding vectors for terms like pounds/kilograms and feet/yards/meters are not going to be far from each other. Extreme caution is called for.
I edited the post with a speculation, but it's just a guess, really. In the training data, different units are going to share near-identical grammatical roles and positions in sentences. Unless some care is taken to force the embedding vectors for units like "pounds" and "kilograms" to point in different directions, their tokens may end up being sampled more or less interchangeably.
Gas-law calculations were where I first encountered this bit of scariness. It was quite a while ago, and I imagine the behavior has been RLHF'ed or otherwise tweaked to be less of a problem by now. Still, worth watching out for.
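If you do need a conversion, one option is to take the arithmetic out of the model's hands entirely. A minimal sketch using the third-party pint library (assuming it's installed):

    import pint

    ureg = pint.UnitRegistry()
    mass = 12.5 * ureg.pound
    print(mass.to(ureg.kilogram))        # ~5.67 kilogram, computed rather than sampled

    # Dimensional analysis also catches outright category errors:
    try:
        mass.to(ureg.meter)
    except pint.DimensionalityError as err:
        print("refused:", err)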
> In the training data, different units are going to share near-identical grammatical roles and positions in sentences.
Yes, but I would also expect the training data to include tons of examples of students doing unit-conversion homework, resources explaining the concept, etc. (So I would expect the embedding space to naturally include dimensions that represent some kind of metric-system-ness, because of data talking about the metric system.) And I understand the LLMs can somehow do arithmetic reasonably well (though it matters for some reason how big the numbers are, so presumably the internal logic is rather different from textbook algorithms), even without tool use.
It's almost always the engineers, analysts, MBA spreadsheet pushers, and other people removed from the physical consequences outputting these mistakes, because it's way easier to not notice a misplaced decimal or an incorrect value when you deal in pure numbers and know what they "should" be. When you are the person actually figuring out how to make it happen, the difference between needing 26666666.667 and 266666666.667 <units> of <widget> is pretty meaningful. Engineers don't output these mistakes as often as analysts or whatever because they work in organizations that invest more in catching them, not because they make them all that much less often.
Whether you're talking weight or bulk, a decimal place is approximately the difference between needing a wheelbarrow, a truck, a semi truck, a freight train, or a ship.
Around here, asking "does this number make sense?" when coming across a figure is second nature, reinforced since early in engineering school. The couple of engineers from the US that I know behave similarly, which makes sense because when your job is to solve practical problems and design stuff, precision matters.
> difference between needing 26666666.667 and 266666666.667 <units> of <widget> is pretty meaningful
To be fair, that’s why we’d use 2.6666666667e7 and 2.66666666667e8, which makes it easier to think about orders of magnitude. Processes, tools and methods must be adapted to reduce the risk of making a mistake.
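A trivial illustration of the point: printed in scientific notation, the slip shows up in the exponent rather than being buried in a string of digits.

    a, b = 26_666_666.667, 266_666_666.667
    print(f"{a:.4e}  vs  {b:.4e}")       # 2.6667e+07  vs  2.6667e+08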
There are multiple overlapping specifications for things like X.509. There are the RFCs (3280 and 5280 are the "main" ones) which OpenSSL generally targets, while the Web PKI generally tries to conform to the CABF BRs (which are almost a perfect superset of RFC 5280).
RFC 5280 isn't huge, but it isn't small either. The CABF BRs are massive, and contain a lot of "policy" requirements that CAs can be dinged for violating at issuance time, but that validators (e.g. browsers) don't typically validate. So there's a lot of flexibility around what a validator should or shouldn't do.
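To make the split concrete, here's a sketch of the validator-side view using Python's cryptography package: it builds a throwaway self-signed certificate and dumps what a parser actually exposes (names and extensions as structured data), which is the layer RFC 5280 governs; whether the contents also satisfy CABF policy is a separate question. Illustrative only, not how any particular browser validates.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Throwaway self-signed cert, just to have something to parse.
    key  = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
    now  = datetime.datetime.now(datetime.timezone.utc)
    cert = (x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=1))
            .add_extension(x509.BasicConstraints(ca=False, path_length=None), critical=True)
            .sign(key, hashes.SHA256()))

    # Roughly the validator's view: subject, plus each extension and its criticality.
    print(cert.subject.rfc4514_string())
    for ext in cert.extensions:
        print(ext.oid.dotted_string, "critical" if ext.critical else "non-critical", ext.value)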
The spec is often such a confused mess that even the people who wrote it are surprised by what it requires. One example was when someone on the PKIX list spent some time explaining to X.509 standards people what it was that their own standard required, which they had been unaware of until then.
Technically yes, because I saved the messages, which I saw as a fine illustration of the state of the PKI standards mess. However, I'd have to figure out which search term to use to locate them again ("X.509" probably won't cut it). I'll see what I can do.