Nope. Only Firefox and Chrome have it, in their latest versions. No Safari or Edge support yet. So this article is a bit premature (unless you use the polyfill).
I think there are several advantages of stack allocation:
* freeing stack-allocated memory is O(1) with a small constant factor: simply set the stack pointer back. In a generational garbage collector like OCaml's, minor garbage collection is O(amount of retained memory) with a larger constant factor.
* judiciously stack allocating memory can improve data locality.
* unboxed data takes up less space, again improving locality.
Overall, I think this is about improving constant factors, which makes a big difference in practice!
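To make the first point concrete, here's a minimal sketch using OxCaml's `local_` annotation (an OxCaml extension, not standard OCaml; treat the exact syntax as an assumption on my part):

```ocaml
(* Sketch, assuming OxCaml's local_ mode annotation. *)
let average a b =
  (* The pair is marked local, so it may be stack allocated: freeing it
     is just popping the stack frame, and it never becomes work for the
     next minor collection. *)
  let local_ p = (a +. b, 2.0) in
  match p with
  | (sum, count) -> sum /. count
```

The compiler checks that `p` does not escape the function, which is what makes the O(1) deallocation safe.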
Well... I enjoyed the author's enthusiasm. What I read was interesting, but I didn't read all of it. Why not? I wasn't sure where it was going. I think there is a tighter post to be written that explains the new formulation without so much wandering around in abstractions. I also think the post uses jargon where it isn't necessary. I could just about follow it, but it made reading unnecessarily hard work. For me a better post would explain the new selective functor in the most concrete terms possible, and only then talk about the abstract things it is related to.
I hear you but I think you are simply asking for an entirely different blog post. I don't think Verity's aim here is to give an introduction to `Selective`, but rather to introduce a formalization for it; something which has been notably missing for those who think about these sorts of things.
I understand the original Selective Functor, so an introduction to that is not what I'm after. I want to understand this new formalization, because it's the kind of thing I use, but I'm not a theoretician. If the goal of this post is simply to explain the formalization to the small number of people who are already deep into (category) theory, I guess it does a fine job. However, I think a better post would be more accessible.
I think the blog post does a good job describing the idea of Selective ("finite-case" etc.) but for me it falls apart shortly afterwards. If I were writing it, based on what I understood, I would start with the overview, then describe `CaseTree`, and then go into which abstractions this is an instance of.
As a small example of how I think the writing could be improved, take this sentence:
"This is in contrast to applicative functors, which have no “arrow of time”: their structure can be dualized to run effects in reverse because it has no control flow required by the interface."
This uses jargon where it's not necessary. There is no need to mention duality, and the "arrow of time" isn't very helpful unless you've had some fairly specific education. I feel it's sufficient to say that applicatives don't represent any particular control flow, and their effects can therefore be run in any order.
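To make that concrete, here is a small sketch of my own (not from the post): a logging applicative where two opposite effect orders both satisfy the interface:

```ocaml
(* Sketch: an applicative whose "effect" is appending to a log. *)
type 'a logged = { value : 'a; log : string list }

let pure v = { value = v; log = [] }

(* Run the function's effects first, then the argument's... *)
let apply f x = { value = f.value x.value; log = f.log @ x.log }

(* ...or the reverse. Both are lawful: the interface itself imposes
   no control flow, so nothing forces one order over the other. *)
let apply_backwards f x = { value = f.value x.value; log = x.log @ f.log }
```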
Using scare quotes everywhere just makes it read like the author is engaging in bad faith. And I don't think they really address the issue.
The discussion in the "The Problem of the Individual-Element Mindset" section seems fairly arrogant, and ignorant of the economic realities of why people don't use manual memory management. "Individual-Element code" is not stupid, as they claim; it optimizes for criteria other than performance.
Their core arguments seem to be 1) they don't want to program in a way that excludes null pointers, and 2) non-nullable references preclude arena-based memory management.
Regarding 1), you cannot make any useful statements: their preference is their preference. That's fine, and it's a fair argument as far as I'm concerned; they can create the language they want to create.
Regarding 2), you can easily distinguish nullable and non-nullable references in the type system. At the more experimental end are type systems that address these problems more directly. OxCaml has the concept of modes (https://oxcaml.org/documentation/modes/intro/) that track what has happened to a value. Using modes you can track whether a value is initialized, and thus prevent use before initialization. Capture checking in Scala (https://docs.scala-lang.org/scala3/reference/experimental/cc...) is similar, and can prevent use-after-free (and maybe use before initialization? I'm not sure.) So it's not like this cannot be done safely, and I believe OxCaml at least is used in production code.
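For the basic nullable/non-nullable split, even plain OCaml suffices; a sketch of the standard idiom (nothing OxCaml-specific):

```ocaml
(* Non-nullable: the type contains no null, so no check is needed. *)
let greet (name : string) = "hello, " ^ name

(* Nullable: the option type says so, and the compiler forces the match. *)
let greet_opt (name : string option) =
  match name with
  | Some n -> greet n
  | None -> "hello, stranger"
```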
I mean, yeah, type erasure does give parametricity, but you can instead design your language so that you monomorphize but insist on parametricity anyway. If you write stable Rust, your implementations get monomorphized but you aren't allowed to specialize them: the stable language doesn't provide a way to write two distinct versions of the polymorphic function.
And if you regard parametricity as merely valuable rather than essential, you can relax that and say: OK, you're allowed to specialize, but if you do then you're no longer parametric and the resulting lovely consequences go away, leaving it to the programmers to decide whether parametricity is worth it here.
I don't understand your first paragraph. Monomorphization and parametricity are not in conflict; the compiler has access to information that the language may hide from the programmer. As an existence proof, MLTon monomorphizes arrays while Standard ML is very definitely parametric: http://www.mlton.org/Features
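The classic illustration, sketched in ML terms:

```ocaml
(* A value of this type cannot inspect 'a, so (setting aside divergence)
   the only implementation is the identity function. *)
let id : 'a -> 'a = fun x -> x

(* A MLTon-style compiler may emit a specialized copy of id for each type
   it is used at, but the source language gives you no way to make those
   copies behave differently, so parametricity survives monomorphization. *)
let _ = (id 1, id "one")
```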
I agree that maintaining parametricity or not is a design decision. However, recent languages that break it (e.g. Zig) don't seem to understand what they're doing in this regard. At least I've never seen a design justification for this, but I have seen criticism of their approach. Given that type classes and their ilk (implicit parameters; modular implicits) give the benefits of ad-hoc polymorphism while maintaining parametricity, and are well established enough that Java is considering adding them (https://www.youtube.com/watch?v=Gz7Or9C0TpM), I don't see any compelling reason to drop parametricity.
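OCaml today gets the same effect by passing dictionaries (modules) explicitly; modular implicits would just make the passing implicit. A sketch using standard first-class modules:

```ocaml
(* Ad-hoc polymorphism via explicit dictionary passing: the manual
   ancestor of type classes and modular implicits. *)
module type Num = sig
  type t
  val add : t -> t -> t
end

let double (type a) (module N : Num with type t = a) (x : a) : a =
  N.add x x

module Int_num = struct type t = int let add = ( + ) end
module Float_num = struct type t = float let add = ( +. ) end

let _ = double (module Int_num) 2       (* 4 *)
let _ = double (module Float_num) 2.0   (* 4. *)
```

The dictionary is an ordinary value, so `double` itself stays parametric in `a`; all the type-specific behavior lives in the dictionary the caller chooses.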
My point was that you don't need to erase types to get parametricity. It may be that my terminology is faulty, and that what Rust is doing here does in fact constitute "erasing" the types. In that case, what describes the distinction between, say, a Rust function which is generic over a function to be invoked and a Rust function which merely takes a function pointer as a parameter and then invokes it? I would say the latter is type erased.
The Scala solution is the same as Haskell's: for comprehensions are the same thing as do notation. The future is probably effect systems, i.e. writing direct-style code instead of using monads.
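The OCaml analogue, for the record, is binding operators; a sketch:

```ocaml
(* let* is sugar over monadic bind, playing the same role as Haskell's
   do notation and Scala's for comprehensions. *)
let ( let* ) = Option.bind

let add_opt a b =
  let* x = a in        (* Option.bind a (fun x -> ...) *)
  let* y = b in
  Some (x + y)

let _ = add_opt (Some 1) (Some 2)   (* Some 3 *)
let _ = add_opt (Some 1) None       (* None *)
```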
It's interesting that effect system-ish ideas are in Zig and Odin as well. Odin has "context". There was a blog post saying it's basically for passing around a memory allocator (IIRC), which I think is a failure of imagination. Zig's new IO model is essentially "pass around the IO implementation". Both capture some of the core ideas of effect systems, without the type system work that makes effect systems extensible and more pleasant to use.
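The shared idea, sketched as plain capability passing (the `io` record here is hypothetical, just to show the shape; it is not Zig's or Odin's actual API):

```ocaml
(* "Pass the IO around" as an explicit capability. *)
type io = {
  print : string -> unit;
  read_line : unit -> string;
}

let greet (io : io) =
  io.print "name? ";
  let name = io.read_line () in
  io.print ("hello, " ^ name ^ "\n")

(* Production code passes real IO; tests pass a stub. An effect system
   automates this threading and tracks in the types which effects are
   available, which is the part Zig and Odin leave out. *)
let test_io = { print = ignore; read_line = (fun () -> "tester") }
let () = greet test_io
```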
One problem with the simulation route is that games in the D&D lineage are usually wildly unbalanced. A, say, level 5 monster could run through endless level 1 NPCs. Also, much of the machinery of our world (e.g. commerce) doesn't really work when there are incredibly dangerous and malevolent critters scattered throughout.
It's more about the combat model. Everyone is a fanatic who fights to the death, no matter what casualties their friends and allies have suffered. And weapons and other attacks are mostly harmless: they deal limited damage measured in hit points, which does not affect the combat effectiveness of the target, heals quickly, and does not leave any lasting effects.
In a different combat model, an equally unbalanced monster would avoid unnecessary fights against groups of armed opponents. Not because it's afraid it would lose, but due to the risk of permanent injuries. Determined defenders could then try to take advantage of that behavior to drive the monster away.
Yup. Same reason I feel safe hiking in big cat territory. You look like a large predator. Only things that consider a large predator as possible prey will seek conflict--and most of the US has no such animal.
The cats know they would win, but a predator at our size range might injure them and keep them from getting their next meal. Thus it's virtually certain they will not attack--and the news supports this. People get hurt when the animal feels it needs to defend itself.
This is one of the things I like about Kenshi. Losing a fight doesn't necessarily mean you died. Sometimes you're just knocked out and the enemy is satisfied and moves on. Sometimes you're knocked out and made into a slave, which gives you another story arc to follow and challenges to overcome.
The key concept in "parametric polymorphism", which is what programming language nerds mean by generics, is "parametricity" [1]. This is basically the idea that all calls to a generic function should act in the same way regardless of the concrete types it is called with. The very first example breaks parametricity, as it multiplies for float, adds for i32, and isn't defined for other types.
Its implementation has the same issues as generics in Zig, which is also not parametric.
It's ok to explore other points in the design space, but the language designer should be aware of what they're doing and the tradeoffs involved. In the case of ad-hoc (non-parametric) polymorphism, there is a lot of work on type classes to draw on.
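To spell out the non-parametricity, here is what the article's function amounts to, made explicit in OCaml (the names are hypothetical, mine, for illustration):

```ocaml
(* "Multiplies for float, adds for i32" is really two unrelated
   monomorphic functions sharing a name. OCaml forces the split. *)
let combine_int (x : int) (y : int) = x + y
let combine_float (x : float) (y : float) = x *. y

(* A single generic combine whose behavior depends on the instantiation
   gives callers no type-independent reasoning principle, which is
   exactly what parametricity would provide. *)
```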
I don’t see how that Wikipedia page supports your claim “The key concept in "parametric polymorphism", which is what programming language nerds mean by generics, is "parametricity"”. That page doesn’t even contain the character sequence “generic”.
IMO, https://en.wikipedia.org/wiki/Generic_programming is more appropriate. It talks of “data types to-be-specified-later”, something that this and C's `_Generic` lack. That's one of the reasons that I wrote “I _somewhat_ disagree”.
Also, I don’t see how one would define “act in the same way”. A function that fully acts in the same way regardless of the types of its arguments cannot do much with its arguments.
For example, a function “/” doesn't act in exactly the same way on floats and integers in many languages: 5.0/2.0 may return 2.5 while 5/2 returns 2. If you say it should return 2.5 instead, you have a function from T×T to T for floats but a function from T×T to U for ints; why would you call that “acting in the same way”? Likewise, “+” may or may not wrap around depending on the actual type, etc.
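OCaml's answer to that particular puzzle, for what it's worth, is to refuse the overloading outright:

```ocaml
(* Ints and floats get distinct, monomorphic division operators
   instead of one overloaded "/". *)
let a = 5 / 2       (* : int,   2   *)
let b = 5.0 /. 2.0  (* : float, 2.5 *)
```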
Geometric proofs are really accessible. You don't need any algebra to prove Pythagoras' theorem, or that the sum of the inner angles of a triangle is 180 degrees, for example. Compass and straight-edge construction of simple figures is also fun.
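For instance, the angle-sum result needs only one auxiliary line; a sketch:

```latex
% Through the apex C of triangle ABC, draw the line parallel to AB.
% The two angles that line makes with CA and CB equal \alpha and \beta
% (alternate angles), and together with \gamma they fill a straight line:
\alpha + \beta + \gamma = 180^\circ
```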