The financial engineering with the Twitter/X takeover was already pretty bold, but Tesla would probably still be a chunk of money an order of magnitude larger than that.
There is some nuance to this. Adding comments to the stated goal "Everyone who interacts with Debian source code (1) should be able to do so (2) entirely in git":
(1) "should be able" does not imply "must"; people are free to continue to use whatever tools they see fit
(2) Most Debian work is of course already git-based, via Salsa [1], Debian's self-hosted GitLab instance. This is more about what is stored in git and how it relates to a source package (= what .debs are built from). For example, currently most Debian git repositories base their work on "pristine-tar" branches built from upstream tarball releases, rather than using upstream branches directly.
> For example, currently most Debian git repositories base their work in "pristine-tar" branches built from upstream tarball releases
I really wish all the various open source packaging systems would get rid of the concept of source tarballs to the extent possible, especially when those tarballs are not sourced directly from upstream. For example:
- Fedora has a “lookaside cache”, and packagers upload tarballs to it. In theory they come from git as indicated by the source rpm, but I don’t think anything verifies this.
- Python packages build a source tarball. In theory, the new best practice is for a GitHub action to build the package and for a complex mess to attest that it really came from GitHub Actions.
- I’ve never made a Debian package, but AFAICT the maintainer kind of does whatever they want.
IMO this is all absurd. If a package hosted by Fedora or Debian or PyPI or crates.io, etc., claims to correspond to an upstream git commit or release, then the hosting system should build the package from the commit or release in question, plus whatever package-specific config and patches are needed, and publish that. If it stores a copy of the source, that copy should be cryptographically traceable to the commit in question, which is straightforward: the commit hash is a hash over a bunch of data including the full source!
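To make that concrete, the whole chain is visible with git plumbing (a quick sketch; works in any repo):

    git cat-file -p HEAD           # commit object: "tree <hash>", parents, author, message
    git rev-parse 'HEAD^{tree}'    # hash of the full source snapshot
    git cat-file -p 'HEAD^{tree}'  # tree entries, each blob/subtree listed by its own hash

Tamper with any file and every hash up the chain, including the commit hash, changes with it.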
This was one of the "lessons learnt" from the XZ incident. One of the (many) steps they took to avoid scrutiny was making modifications that existed in the release tarball but not in the repo.
For lots of software projects, a release tarball is not just a gzipped repo checked out at a specific commit. So this would only work for some packages.
A simple version of this might be a repo with a single file of code in a language that needs compilation, versus a tarball with one compiled binary.
Just having a deterministic binary can be non-trivial, let alone a way to confirm "this output came from that source" without recompiling everything again from scratch.
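Even the naive "recompile and compare" check illustrates the problem (a sketch; assumes a C toolchain):

    # Same source, two builds -- only meaningful if the build is deterministic:
    mkdir -p build1 build2
    cc -o build1/hello hello.c
    cc -o build2/hello hello.c
    cmp build1/hello build2/hello && echo bit-identical

Timestamps, embedded build paths, and parallelism can all break that comparison, which is why reproducible-builds efforts pin things like SOURCE_DATE_EPOCH and use flags like -ffile-prefix-map.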
For most well designed projects, a source tarball can be generated cleanly from the source tree. Sure, the canonical build process goes (source tarball) -> artifact, but there’s an alternative build process (source tree) -> artifact that uses the source tarball as an intermediate.
In Python, there is a somewhat clearly defined source tarball. uv build will happily build the source tarball and the wheel from the source tree, and uv build --from <appropriate parameter here> will build the wheel from the source tarball.
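Roughly (a sketch based on uv's documented flags; run from a checkout):

    uv build            # builds both the sdist and the wheel from the source tree
    uv build --sdist    # just the source tarball
    uv build --wheel    # just the wheel
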
And I think it’s disappointing that one uploads source tarballs and wheels to PyPI instead of uploading an attested source tree and having PyPI do the build, at least in simple cases.
In traditional C projects, there’s often some script in the source tree that turns it into the source tarball tree (autogen.sh is pretty common). There is no fundamental reason that a package repository like Debian or Fedora’s couldn’t build from the source tree and even use properly pinned versions of autotools, etc. And it’s really disappointing that the closest widely used thing to a proper C/C++ hermetic build system is Dockerfile, and Dockerfile gets approximately none of the details right. Maybe Nix could do better? C and C++ really need something like Cargo.
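The autotools version of "the tree generates its own release form" is a couple of commands (a sketch; script names vary by project):

    ./autogen.sh    # or: autoreconf -i -- regenerate configure, Makefile.in, etc.
    ./configure
    make dist       # produce the canonical source tarball from the tree

Nothing in that pipeline requires a human-uploaded tarball; a build service could run it with a pinned toolchain.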
Launchpad does this for everything, as does sbuild/buildd in debian land. They generally make it work by both running the build system in a neutered VM (network access generally not permitted during builds, or limited to only a debian/ubuntu/PPA package mirror) and doing some degree of invasive patching to make build systems work without just-in-time network access.
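The local equivalent is roughly (a sketch; the package name is a placeholder):

    # unshare mode builds in an isolated namespace with the network cut off:
    sbuild --chroot-mode=unshare -d unstable hello_1.0-1.dsc
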
SUSE and Fedora both do something similar I believe, but I'm not really familiar with the implementation details of those two systems.
I’m only familiar with the Fedora system. The build is hermetic, but the source inputs come from fedpkg new-sources, which runs on the client machine used by the package developer.
This seems no worse than GitHub Actions executing whatever random code people upload.
It’s not so hard to do a pretty good job, and you can have layers of security. Start with a throwaway VM, which highly competent vendors like AWS will sell you at a somewhat reasonable price. Run the build as a locked-down, unprivileged user inside a container. Then add a tool like gVisor.
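Layered, that might look like this (a sketch; the image and script names are placeholders, and it assumes gVisor's runsc runtime is registered with Docker):

    # gVisor runtime + unprivileged user + no network, all inside a throwaway VM:
    docker run --rm --runtime=runsc --user 65534:65534 --network none \
        build-env ./run-build.sh
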
Also… most pure Python packages can, in theory, be built without executing any code. The artifacts just have some files globbed up as configured in pyproject.toml. Unfortunately, the spec defines the process in terms of installing a build backend and then running it, but one could pin a couple of trustworthy build backend versions and constrain them to configurations where they literally just copy things. I think uv-build might be in this category. At the very least I haven’t found any evidence that current uv-build versions can do anything nontrivial unless generation of .pyc files is enabled.
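For a pure-Python package, the whole build configuration can be as inert as the following (a hypothetical pyproject.toml; the backend version pin and project name are illustrative):

    [build-system]
    requires = ["uv_build>=0.6,<0.7"]
    build-backend = "uv_build"

    [project]
    name = "example-pkg"
    version = "0.1.0"

With a pinned, audited backend, "run the build" reduces to globbing files into an archive.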
If it isn't at least a gzip of a subset of the files of a specific commit of a specific repo, someone's definition of "source" would appear to need work.
Shallow clones are a thing. And it’s fairly straightforward to create a tarball that includes enough hashes to verify the hash chain all the way to the commit hash. (In fact, I once kludged that up several years ago, and maybe I should dust it off. The tarball extracted just like a regular tarball but had all the git objects needed hiding inside in a way that tar would ignore.)
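The verification side doesn't even need the extra objects if you only care about content: a git tree hash is a pure function of the files (a sketch; paths are illustrative, and file modes/.gitattributes can complicate the comparison):

    git -C upstream rev-parse 'v1.0^{tree}'   # expected tree hash
    cd extracted
    git init -q && git add -A
    git write-tree                            # matches iff the content is identical
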
I don't actually see why you'd need to verify the hash chain anyway. The point of a source tarball, as I understand it, is to be sure of what source you're building, and to be able to audit that source. The development path would seem to be the developer's concern, not the maintainer's.
> The point of a source tarball, as I understand it, is to be sure of what source you're building
Perhaps, in the rather narrow sense that you can download a Fedora source tarball and look inside yourself.
My claim is that upstream developers produce actual official outputs: git commits and sometimes release tarballs. (But note that release tarballs on GitHub are often a mess and not really desired by the developer.) And I further think that verification that a system like Fedora or Debian or PyPI is building from correct sources should involve byte-for-byte comparison of the source tree and that, at least in the common case, there should be no opportunity for a user of one of these systems to upload sources that do not match the claimed upstream sources.
The sadly common workflow where a packager clones a source tree, runs some scripts, and uploads the result as a “source tarball” is, IMO, wrong.
I’m not sure why this would make a difference. The only thing special about the head is that there is a little file (that is not, itself, versioned) saying that a particular commit is the head.
> If a package hosted by Fedora or Debian or PyPI or crates.io, etc claims to correspond to an upstream git commit or release, then the hosting system should build the package, from the commit or release in question plus whatever package-specific config and patches are needed, and publish that.
Shoutout to the AUR. I’m trying Arch for the first time (Omarchy) and wasn’t planning on using the AUR, but I realized how useful it is when 3 of the tools I wanted to try were distributed through other channels. The AUR made it insanely easy… (namely, I had issues with Obsidian and Google Antigravity)
This is a misunderstanding of what Git does. Git is a Merkle-hash-tree, content-addressed, immutable/append-only filesystem, with commits as objects that bind a filesystem root by its hash. The diffs that make up a commit are not really its contents -- they are computed as needed. Now most of the time it's best to think of Git as a patch-quilting porcelain, but it's really more than that, and while you can get very far with that model, at some point you need to understand that it goes deeper.
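A few plumbing commands make the model obvious (a sketch; run in any repo):

    echo hello | git hash-object --stdin   # object id is a pure function of the content
    git ls-tree -r HEAD                    # a commit stores a full snapshot, object by object
    git diff 'HEAD~1' HEAD                 # diffs are derived on demand, not stored
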
That point is not reached during packaging though.
I prefer rebasing git histories over messing with the patch quilting that debian packaging standards use(d to use).
Though last I had to use the debian packaging mechanisms, I roundtripped them into git for working on them. I lost nothing during the export.
A lot of this coincides with my own experiments passing consumer AMD GPUs through to VMs [1], which the Debian ROCm Team uses in their CI.
The Debian package rocm-qemu-support ships scripts that facilitate most of this. I've since generalized this by adding NVIDIA support, but I haven't uploaded the new gpuisol-qemu package [2] to the official Archive yet. It still needs some polishing.
Just dumping this here to add more references (especially the further reading section; the Gentoo and Arch wikis had a lot of helpful data).
Coincidentally, the first issue (referencing Navi 21) was the one I started these experiments with, and this turned out to be pretty informative.
Our Navi 21 would almost always go AWOL after a test run had been completed, requiring a full reboot. At some point, I noticed that this only happened when our test runner was driving the test; I never had an issue when testing interactively. I eventually realized that our test driver was simply killing the VM when the test was done, which is fine for a CPU-based test, but this messed with the GPU's state. When working interactively, I was always shutting down the guest cleanly, which apparently resolved this. A patch to our test runner to cleanly shut down VMs fixed this.
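In case it helps anyone hitting the same thing, the distinction is a graceful guest powerdown versus killing the process. With a libvirt-managed guest (an assumption; the domain name is a placeholder):

    virsh shutdown gpu-test-vm    # graceful ACPI shutdown; the guest driver quiesces the GPU
    # not: virsh destroy gpu-test-vm -- a hard kill that leaves the passed-through GPU wedged
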
And I've had no luck with iGPUs, as referenced by the second issue.
From what I understand, I don't think that consumer AMD GPUs can/will ever be fully supported, because the GPU reset mechanisms of older cards are so complex. That's why things like vendor-reset [3] exist, which apparently duplicate a lot of the in-kernel driver code but ultimately only twiddle some bits.
This was worked around ages ago in OpenBSD, and the workaround was already included in Debian (and by extension, Ubuntu) when I started maintaining it in 2010 (I no longer do).
Here's a link to a patch [1] from a version from when the package still kept standalone patches.
It's been a long, long time and I honestly don't remember the details, but Debian's cron(8) still says [2]:
> Special considerations exist when the clock is changed by less than 3 hours, for example at the beginning and end of daylight savings time. If the time has moved forwards, those jobs which would have run in the time that was skipped will be run soon after the change. Conversely, if the time has moved backwards by less than 3 hours, those jobs that fall into the repeated time will not be re-run.
Edit: According to this bug report [3], this workaround first entered Debian in 1999.
Most likely, yes. The author mentions vixie-cron, which was the name of the project before Paul Vixie joined/founded(?) ISC; after that it was released as ISC cron.
Debian's fork is still based on vixie-cron, but it couldn't have been the one in question, because of the aforementioned patch.
There is precedent: SpaceShipOne was successfully launched from an airplane [1].
The great force downward is (mostly) irrelevant if there is nothing below. Just hang the rocket between two towers over a void, with the atmosphere below.
Most carriers have a rule that on clear days you always hand-fly the landing.
This is a competence you do not want to lose.
It's also the case that you can have a whole approach set up in your flight computer and at the last minute the controller gives you a runway change. You could drop your head down and start typing a bunch into the FMC, but you're generally better off just disabling the autopilot and manually making the adjustment.
But two interesting data points from the Wikipedia article I linked are that the first aircraft certification for ILS Cat III was in 1968, and Cat IIIB in 1975.
And IIRC by the 1980s, autoland was already a pretty common feature.
This is a bit misleading as the decline reported is year-on-year and of course sales haven't fully recovered yet.
Quarter-on-quarter or month-on-month would have been much more interesting, as it could have shown a change in trend, especially after Musk's departure from the administration.
I'm out of the loop here. Recovered from what? A few months ago in similar threads the going theory was that people were waiting for the new Y, and that's why there was a slump. The new Y is out and available, no? Are they just not up to full rate production?
At this point it's much more interesting how well the Robotaxi is doing. It's still not driving like Waymo (I didn't like how it disregards the rules), but if it is able to scale up over the next few years, it will scale much, much faster and more cheaply.
The shell treats the first line as a comment. It executes the second line, which eventually exec's the binary, so the rest of the file does not matter to the shell.
And the compiler treats the first line as a preprocessor directive, so it ignores the second line.
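For anyone who hasn't seen the trick, here's a minimal file in the same spirit (the article's actual file may differ; the output path is arbitrary):

    #if 0
    cc -o /tmp/hello "$0" && exec /tmp/hello "$@"
    #endif
    #include <stdio.h>
    int main(void) { puts("hello from both worlds"); return 0; }

Run it with sh file.c: the shell skips line 1 as a comment, line 2 compiles the file and exec's the result, and the compiler's preprocessor drops line 2 inside the #if 0 block.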
I initially mistook the first line for a shebang.
There was also Boehm's 1981 book "Software Engineering Economics" that went into this, but I can't find the details right now.
This NASA publication [1] cites a number of studies and the cost increase factors they estimate for various stages of development, including the Boehm study.