
Zig team member here: we've migrated to Forgejo Actions [0], a CI system built into Forgejo (the Git forge used by Codeberg) that is very similar to GitHub Actions. In fact, while 1-to-1 compatibility is a non-goal, it's almost compatible in practice: many GHA workflows run with minimal (or no!) changes, and most Actions written for GHA work fine (e.g. my setup-zig Action [1] worked without changes). I don't necessarily love the design of GitHub Actions, and obviously Forgejo Actions inherits all of that, but the issues I have with GitHub's implementation are pretty much all solved in Forgejo (plus they're receptive to PRs if you do need to improve something!).

Codeberg offer a couple of free hosted runners (x86_64-linux), though they have quite aggressive usage limits (understandably, since Codeberg can't just throw money at free compute for everyone!), so self-hosting is probably kind of necessary for big-ish projects. That's pretty easy, though: the runner [2] is trivial to build (including cross-compiling) and run, and is on the whole just a much more solid piece of software, so it's already been very painless compared to what it was like to self-host GitHub's runner.

On the whole, Forgejo Actions has really just felt like a much more refined and cared-for version of GitHub Actions; I'm quite happy with it so far.

[0]: https://forgejo.org/docs/latest/user/actions/reference/

[1]: https://codeberg.org/mlugg/setup-zig/

[2]: https://code.forgejo.org/forgejo/runner/


GitHub's API has extremely aggressive rate limits which make migrating large numbers of existing issues and PRs off of the platform borderline impossible. AIUI, this is why Gitea's main repo is on GitHub: they couldn't figure out a way to cleanly migrate! The tinfoil hat in me absolutely sees this as an attempt at vendor lock-in on GitHub's end.


Forgejo Actions is what Zig has migrated to. It's very similar to GitHub Actions; the downside of that is that you inherit questionable design choices, but the big upside is that migration is super easy. While they don't target 1:1 compatibility, things are similar enough that you basically only need to tweak workflow files very slightly. Our experience so far is that it fixes most of our serious problems with GitHub Actions; in particular, their runner software is significantly easier to deploy and configure, has much better target support (GitHub's runner is essentially impossible to use outside of x86_64/aarch64 linux/windows/macos; we tried to patch it to support riscv64-linux and got stuck on some nonsensical problems on GitHub's side!), and actually accepts contributions & responds to issues. My issues with the GitHub Actions' backend & web interface (of which I have many) are pretty much all gone, too, with no new issues taking their place.


This is extremely misleading. "Membership" is about direct contribution to and influence over the non-profit; it'd be somewhat analogous to being a GitHub shareholder. The very first question on Codeberg's FAQ [0] makes this abundantly clear, as does the "Join" page [1]. I don't see any part of the website that could give you a different impression.

[0]: https://docs.codeberg.org/getting-started/faq/#what-do-i-nee...

[1]: https://join.codeberg.org/


This was the very first thing I noticed when we (the Zig team) started seriously trialing Codeberg. Honestly, the transition was worth it just for the ability to navigate the website without a 3-5 second wait every time I click a link.


Codeberg performance is not good today: 12 seconds per click before anything updates. Not sure if they're able to scale.


I think this thread caused a bit of a hug of death; I too was seeing pretty bad page loads earlier today, but that seems to have sorted itself out. Understandable imo, because Codeberg simply haven't had to deal with this level of traffic so far. I'm optimistic that they'll be able to scale as (thanks to projects like Zig making the switch) their needs grow.


> This has been pointed out to them many times, and it's seemingly not something they're willing to fix.

On the exact page you're on is a link to an issue [0] acknowledging that the CAPTCHA is inaccessible and expressing that they plan to drop it (albeit with no concrete time-frame). I don't at all understand your argument that Codeberg must be slow at replying to emails (the "manual fallback path") because Wikimedia are; these are two completely unrelated entities and I don't see why you would make inferences about one from the other.

[0]: https://codeberg.org/Codeberg/Community/issues/1797


PRs are not optional: there is no way to disable them on GitHub. I can't be sure that this is intentional, but it certainly works out well for them that this is one of many properties which make it quite difficult to migrate away from the platform.


There's technically a way [1], but you'd have to redo it every 6 months, which is not great.

https://docs.github.com/en/communities/moderating-comments-a...


Yeah, that's actually what we've done on the Zig GitHub repository. However, it doesn't stop pushes to existing PRs, which isn't ideal; and, yes, it's quite hard to escape the conclusion that there being no "until I turn it back on" option is intentional.


It's completely intentional, and goes back to when GitHub was founded. GitHub was intended as a collaborative software development platform, not "look but don't touch".


I suppose you can fork a repository if you want to collaborate with others though. Reviewing pull requests and engaging with a community is a lot of work and has possible legal ramifications; in many cases it’s faster to just do things yourself. Some teams/companies deliberately refuse outside contributions for this reason.


You can close them and limit discussion to contributors I guess? Not ideal but at least they wouldn’t appear in the pull requests tab.

Alternatively you can use a bot or a GitHub Action to automatically change the description and title of the pull request to something like “[PRs are not allowed and deleted automatically]”. But yeah not a perfect solution either…


Yikes, the PRs on the Linux repo are quite terrible. At least there's a bot to auto-reply with the correct procedure.

https://github.com/torvalds/linux/pull/1370


I guess you could make a bot that closes any opened PR with a message that PRs are not accepted on Github and a link to the contribution docs.


Not quite:

* Global variables still exist and can be stored to / loaded from by any code

* Only convention stops a function from constructing its own `Io`

* Only convention stops a function from reaching directly into low-level primitives (e.g. syscalls or libc FFI)

However, in practice, we've found that such conventions tend to be fairly well-respected in most Zig code. I anticipate `Io` being no different. So, if you see a function which doesn't take `Io`, you can be pretty confident (particularly if it's in a somewhat reputable codebase!) that it's not interacting with the system (e.g. doing filesystem accesses, opening sockets, sleeping the thread).
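For a concrete sketch of what that convention looks like in practice (assuming the proposed `std.Io` interface; `parsePort` and `readConfig` are hypothetical, and the body of `readConfig` is stubbed out because the filesystem API on `Io` is still being designed):

    const std = @import("std");
    const Io = std.Io; // assumes the proposed interface lands under this name

    // No `Io` parameter: by convention, pure computation with no system interaction.
    fn parsePort(text: []const u8) !u16 {
        return std.fmt.parseInt(u16, std.mem.trim(u8, text, " \r\n"), 10);
    }

    // Takes `Io`: by convention, the only kind of function that may touch the
    // filesystem, open sockets, sleep the thread, and so on. The signature is
    // the interesting part here; the body is a stub.
    fn readConfig(io: Io, path: []const u8) ![]u8 {
        _ = io;
        _ = path;
        return error.NotImplementedInThisSketch;
    }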


What about random number generation; is that something that will also fall under Io?


I think random numbers can safely be considered non-blocking.


I mean... you use `await` if you've used `async`. It's your choice whether or not you do; and if you don't want to, your callers and callees can still freely `async` and `await` if they want to. I don't understand the point you're trying to make here.

To be clear, where many languages require you to write `const x = await foo()` every time you want to call an async function, in Zig that's just `const x = foo()`. This is a key part of the colorless design; you can't be required to acknowledge that a function is async in order to use it. You'll only use `await` if you first use `async` to explicitly say "I want to run this asynchronously with other code here if possible". If you need the result immediately, that's just a function call. Either way, your caller can make its own choice to call you or other functions as `async`, or not to; as can your callees.
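As a rough sketch of that difference (the `io.async`/`await` spelling follows the current proposal and may still change; `fetchUser` and `fetchPosts` are hypothetical stand-ins for functions that do real I/O):

    const std = @import("std");
    const Io = std.Io; // assumes the proposed interface

    // Hypothetical helpers; they take `Io` because they would do network work.
    fn fetchUser(io: Io, id: u64) !u64 {
        _ = io;
        return id;
    }
    fn fetchPosts(io: Io, id: u64) !u64 {
        _ = io;
        return id * 10;
    }

    fn sequential(io: Io, id: u64) !u64 {
        // Plain call: no `await` needed just because the callee does I/O.
        return try fetchUser(io, id);
    }

    fn concurrent(io: Io, id: u64) !u64 {
        // Explicitly opt in to running these concurrently with `async`...
        var user_fut = io.async(fetchUser, .{ io, id });
        var posts_fut = io.async(fetchPosts, .{ io, id });
        // ...and only then does `await` appear.
        const user = try user_fut.await(io);
        const posts = try posts_fut.await(io);
        return user + posts;
    }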


> in Zig that's just ...

Well, no. In Zig that's `const x = foo(io)`.

The moment you take or even know about an io, your function is automatically "generic" over the IO interface.

Using stackless coroutines and green threads results in a completely different codegen.

I just noticed this part of the article:

> Stackless Coroutines
>
> This implementation won’t be available immediately like the previous ones because it depends on reintroducing a special function calling convention and rewriting function bodies into state machines that don’t require an explicit stack to run.
>
> This execution model is compatible with WASM and other platforms where stack swapping is not available or desireable.

I wonder what will happen if you try to await a future created with a green thread IO using a stackless coroutine IO.


> Well, no. In zig that's `const x = foo(io)`.

If `foo` needs to do IO, sure. Or, more typically (as I mentioned in a different comment), it's something like `const x = something.foo()`, and `foo` can get its `Io` instance from `something` (in the Zig compiler this would be a `Compilation` or a `Zcu` or a `Sema` or something like that).
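A minimal sketch of that pattern (the `Compilation`-like type and `emitObject` are invented for illustration; only the shape matters):

    const std = @import("std");
    const Io = std.Io; // assumes the proposed interface

    const Compilation = struct {
        io: Io,
        gpa: std.mem.Allocator,

        // Callers just write `comp.emitObject(path)`; the `Io` travels with the
        // `Compilation` rather than being threaded through as an extra argument.
        fn emitObject(comp: *Compilation, path: []const u8) !void {
            _ = comp.io; // a real implementation would use this for the file writes
            _ = path;
        }
    };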

> Using stackless coroutines and green threads results in a completely different codegen.

Sure, but that's abstracted away from you. To be clear, stackless coroutines are the only case where the codegen of callers is affected, which is why they require a language feature. Even if your application uses two `Io` implementations for some reason, one of which is based on stackless coroutines, functions using the API are not duplicated.

> I wonder what will happen if you try to await a future created with a green thread IO using a stackless coroutine IO.

Mixing futures from any two different `Io` implementations will typically result in Illegal Behavior -- just like passing a pointer allocated with one `Allocator` into the `free` of a different `Allocator` does. This really isn't a problem. Even with allocators, it's pretty rare for people to mess this up, and with allocators you often do have multiple of them available in one place (e.g. a gpa and an arena). In contrast, it will be extraordinarily rare to have more than one `Io` lying around. Even if you do mess it up, the IB will probably just trip a safety check, so it shouldn't take you too long to realise what you've done.
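To make the allocator analogy concrete, here's a small sketch using the real `std.heap.ArenaAllocator` and the testing allocator; running this test is expected to trip a safety check rather than pass:

    const std = @import("std");

    test "freeing with the wrong allocator is illegal behavior" {
        const gpa = std.testing.allocator;

        var arena_state = std.heap.ArenaAllocator.init(gpa);
        defer arena_state.deinit();
        const arena = arena_state.allocator();

        const buf = try arena.alloc(u8, 16);

        // Wrong: `buf` came from the arena, not from `gpa`. This is Illegal
        // Behavior; in a safe build the testing allocator's bookkeeping catches
        // the bogus free and reports it rather than silently corrupting memory.
        gpa.free(buf);
    }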


I find these two statements to be contradictory:

> Sure, but that's abstracted away from you

> Mixing futures from any two different `Io` implementations will typically result in Illegal Behavior

Thinking about it more, you've possibly added even more colors. Each executor adds a different color, and while each function is color-agnostic (but not colorless), futures aren't.

> it will be extraordinarily rare to have more than one `Io`

Will it? I can immediately think of a use case where a program might want to block for files on disk, but defer fetching from network to some background async executor.


Also, I do find it funny that we went from "Zig has completely defeated function coloring" to "Zig has colored objects".


But that's not even the case: it's perfectly possible to write a function that accepts an object which holds onto an io (and uses it in its vtable calls) just as readily as it accepts an object that has nothing to do with io [0]. The consumers of those objects don't have to care, so there's no coloring.

[0] And this isn't even really a theoretical matter; colorblind object passing is extremely useful for, say, mocking. Oh, I have a database lookup / remote API call, which obviously requires io, but I want fast tests, so I can mock it with an object with preseeded values/expects; hey, that doesn't require IO.


I think in practice the caller still needs to know.

If I call `a.foo()`, and `a` holds and uses a stackless-coroutine IO, but the caller is executing on a green-thread IO, then, as was said before, I'm hitting UB.

But I do like that you could skip/mock IO, for instance. That's pretty neat.


Here is example code. You won't "use the wrong io":

    const VTable = struct {
      f: *const fn (*VTable) void,
    };

    const A = struct {
      io: IO,
      v: VTable = .{ .f = &A.uses_io },
      fn uses_io(this: *VTable) void {
        const self: *A = @fieldParentPtr("v", this);
        self.io.some_io_fn(...);
      }
    };

    // B satisfies the same vtable without any io at all.
    const B = struct { v: VTable = .{ .f = &void_fn } };
    fn void_fn(_: *VTable) void {}

    // Consumers only see a *VTable; they neither know nor care whether io is involved.
    pub fn calls_vtable(v: *VTable) void {
      v.f(v);
    }


> it depends on reintroducing a special function calling convention

This is an internal implementation detail rather than something usually exposed to the user; it's essentially just saying that the Zig compiler needs to figure out which functions are async and lower them differently.

We do have an explicit calling convention, `CallingConvention.async`. This was necessary in the old implementation of async functions in order to make runtime function pointer calls work; the idea was that you would cast your `fn () void` to a `fn () callconv(.async) void`, and then you could call the resulting `*const fn () callconv(.async) void` at runtime with the `@asyncCall` builtin function. This was one of the biggest flaws in the design; you could argue that it introduced a form of coloring, but in practice it just made vtables incredibly undesirable to use, because (since nobody was actually doing the `@asyncCall` machinery in their vtable implementations) they effectively just didn't support async.

We're solving this with a new language feature [0]. The idea here is that when you have a virtual function -- for a simple example, let's say `alloc: *const fn (usize) ?[*]u8` -- you instead give it a "restricted function pointer type", e.g. `const AllocFn = @Restricted(*const fn (usize) ?[*]u8);` with `alloc: AllocFn`. The magic bit is that the compiler will track the full set of comptime-known function pointers which are coerced to `AllocFn`, so that it can know the full set of possible `alloc` functions; so, when a call to one is encountered, it knows whether or not the callee is an async function (in the "stackless async" sense). Even if some `alloc` implementations are async and some are not, the compiler can literally lower `vtable.alloc(123)` to `switch (vtable.alloc) { impl1 => impl1(123), impl2 => impl2(123), ... }`; that is, it can look at the pointer, and determine from that whether it needs to dispatch a synchronous or async call.
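A sketch of the usage side (none of this compiles today, since the feature is an unimplemented proposal; the `@Restricted` spelling is taken from the example above and the linked issue):

    // The compiler tracks every function coerced to `AllocFn`, so it knows the
    // complete set of possible callees behind this pointer type.
    const AllocFn = @Restricted(*const fn (usize) ?[*]u8);

    const VTable = struct {
        alloc: AllocFn,
    };

    fn implSync(len: usize) ?[*]u8 {
        _ = len;
        return null;
    }

    // Imagine this implementation suspends internally, so it is inferred async.
    fn implAsync(len: usize) ?[*]u8 {
        _ = len;
        return null;
    }

    fn callAlloc(vtable: *const VTable, len: usize) ?[*]u8 {
        // Written as a normal virtual call, but conceptually lowered to:
        //   switch (vtable.alloc) {
        //     implSync  => implSync(len),   // plain call
        //     implAsync => implAsync(len),  // async-calling-convention call
        //   }
        return vtable.alloc(len);
    }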

The end goal is that most function pointers in Zig should be used as restricted function pointers. We'll probably keep normal function pointers around, but they ideally won't be used at all often. If normal function pointers are kept, we might keep `CallingConvention.async` around, giving a way to call them as async functions if you really want to; but to be honest, my personal opinion is that we probably shouldn't do that. We end up with the constraint that unrestricted pointers to functions where the compiler has inferred the function as async (in a stackless sense) cannot become runtime-known, as that would lead to the compiler losing track of the calling convention it is using internally. This would be a very rare case provided we adequately encourage restricted function pointers. Hell, perhaps we'd just ban all unrestricted default-callconv function pointers from becoming runtime-known.

Note also that stackless coroutines do come with a couple of inherent limitations: in particular, they don't play nicely with FFI (you can't suspend across an FFI boundary; in other words, a function with a well-defined calling convention like the C calling convention is not allowed to be inferred as async). This is a limitation which seems perfectly acceptable, and yet I'm very confident that it will impact significantly more code than the calling convention thing might.

TL;DR: depending on where the design ends up, the "calling convention" mentioned is either entirely, or almost entirely, just an implementation detail. Even in the "almost entirely" case, it will be exceptionally rare for anyone to write code which could be affected by it, to the point that I don't think it's a case worth seriously worrying about unless it proves itself to actually be an issue in practice.

[0]: https://github.com/ziglang/zig/issues/23367


From my experience, the calling convention was, in 0.9.x, just an implementation detail, until it wasn't. I think I may still reserve judgment for when async is fully implemented. Then I'll torture it again.

