That depends on the language. I have used (and implemented) languages where arrays are modeled as a function from an index space to some expression. During compilation, this is used to drive various optimisations. Arrays that do need a run-time representation may be stored in the classic way (a dense region of memory accessed with offsets computed from indexes), but also in more complicated ways, such as some kind of tree structure. These are still, semantically, arrays at the language level.
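As a rough sketch of the idea (in Haskell, which is not one of the languages I had in mind, and with made-up names): an array can be represented as nothing but a function from indices to values, manipulated symbolically, and only manifested into memory when actually needed.

    -- A sketch: an "array" as a function from an index space to values.
    type FunArray a = Int -> a

    squares :: FunArray Int
    squares i = i * i

    -- Mapping is just function composition; no memory is touched.
    mapA :: (a -> b) -> FunArray a -> FunArray b
    mapA f a = f . a

    -- Only here do we commit to a run-time representation (a dense
    -- list, but it could just as well be a tree or anything else).
    manifest :: Int -> FunArray a -> [a]
    manifest n a = map a [0 .. n - 1]

For example, manifest 5 (mapA (+ 1) squares) yields [1,2,5,10,17] without ever building the intermediate array of squares.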
The post explains that 'a[i]' can easily enough be written as 'a i'. Your suggestions do not resemble the current function application syntax in the language discussed in the post. The question is not whether a terse slice syntax can exist (clearly it can), but whether a syntactic similarity between indexing and application can also be extended to a syntactic similarity between slicing and application.
Depending on how you look at things, functions can also be mutated at run-time. Most impure languages allow you to define a function that has some internal state and changes it whenever it is applied. In C you would use 'static' variables, but languages with closures allow for a more robust approach. Scheme textbooks are full of examples that use this trick to define counters or other objects via closures. You can well argue that these functions do not "mutate", they merely access some data that is mutated, but there is no observable difference from the perspective of the caller.
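The classic counter from the Scheme textbooks, transcribed into Haskell as a sketch (using IORef, so the mutation is at least visible in the IO type):

    import Data.IORef

    -- Returns a "function" that closes over private, mutable state.
    makeCounter :: IO (IO Int)
    makeCounter = do
      count <- newIORef 0
      pure $ do
        modifyIORef count (+ 1)   -- the state changes on every call
        readIORef count

    main :: IO ()
    main = do
      tick <- makeCounter
      tick >>= print              -- 1
      tick >>= print              -- 2: same "function", new result

From the caller's perspective, tick behaves exactly like a function that mutates.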
Functions are rarely used this way; it's considered bad practice. Typically, when it is done, the mutating state is scoped in an object, and the function is not called a "function" but a "method".
In some sense, Go does not allow you to change the major version: packages with the same name but different major versions are treated as different packages (for example, example.com/mod/v2 is a distinct import path from example.com/mod).
It is basically dependent types, but there is a specific and intentional omission (no true dependent products) that interacts with another feature (the ability to hide sizes), and that interaction is what ultimately causes the mess. I elaborated on it here: https://futhark-lang.org/blog/2025-09-26-the-biggest-semanti...
This blog post showcases V in a positive light. I suppose it is good that people can have productive experiences with it now, although I don't see from this post why it is a significant improvement on Go.
The problems discussed (performance, compiler fragility) are somewhat worrying, though. My impression is still that V is not particularly robust and focuses on flashy things instead of getting the basics right. I must admit that it is still hard to look at V objectively, however, given the near-fraudulent presentation it had when it was first announced.
"near-fradulent presentation it had when it was first announced"?
It's not just in the past; the lies are still there. A very simple example: https://vlang.io/ proudly says "No null (allowed in unsafe code)", yet going to the V playground and typing
x := []&int { len: 10, cap: 0 }; println(x[4])
still prints "&nil" (note how there is no unsafe in sight).
The V team are either intentionally misleading people or have only a vague idea of how languages are designed. Stay away.
Such code generates a warning: "arrays of references need to be initialized right away, therefore `len:` cannot be used (unless inside `unsafe`, or if you also use `init:`)". By the way, the other optional parameter besides `len` and `cap` is `init` (it is in the documentation too). The programmer is being told to use `unsafe` or do something else.
Warnings are given to allow the programmer to experiment or to solve the problem by other means. Beta means the language is still in development. Lastly, for V specifically, the warning means that in production mode (the -prod flag), that kind of code will not compile.
What are you talking about? Rust was always very clear about which features are implemented and which are not.
Not to mention that their early design docs were excellent - I remember reading them and being impressed. And when designs changed, it was all there, with posts about what was changed and why.
Compare that to V's autofree-without-GC debacle - I remember reading about it and thinking "no way they can do this; they would need something innovative like Rust's model, and all they have is vague handwaving". And guess what? They could not. Some time later they silently added a third-party GC to the language - no blog post, no announcement; you have to go to the Wayback Machine to even know that they promised autofree without a GC.
I'd say Vlang's communication style is approximately the opposite of Rust's. Don't put them next to each other.
The greatest value brought by compiler optimisations is removing the overhead of convenience. Sometimes that is about avoiding the boxing that is a necessity in many high level languages, but in other cases it serves to allow a more modular programming style without overhead. Stream fusion is a good example: it lets you structure your program as small and composable units, without the cost of manifesting intermediate results. That is not merely about avoiding the inherent inefficiency of e.g. Haskell, but about permitting different styles of programming, and the argument is that a low level language simply cannot allow such a style (without overhead), because the required optimisations are not practical to implement.
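A standard (Haskell) illustration of that style, as a sketch; exactly which fusion rules fire depends on the compiler version and optimisation flags:

    {-# LANGUAGE BangPatterns #-}

    -- Each stage is a small, reusable unit; GHC's list fusion can
    -- compile the composition into a single loop that never builds
    -- the intermediate lists.
    sumOfSquaredEvens :: [Int] -> Int
    sumOfSquaredEvens = sum . map (^ 2) . filter even

    -- The hand-fused loop a low-level language would force you to
    -- write to get the same performance:
    sumOfSquaredEvens' :: [Int] -> Int
    sumOfSquaredEvens' = go 0
      where
        go !acc []       = acc
        go !acc (x : xs)
          | even x       = go (acc + x * x) xs
          | otherwise    = go acc xs

The first version keeps the units composable; the second hard-wires them together.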
It is not hard to remember what integer division does when your types in the code are ints. It also comes up almost never, and it isn't what floating-point rounding error means. You aren't multiplying money 99% of the time, and when you are, you don't care about exacting precision (e.g. a 20% discount). Floating-point rounding error, on the other hand, is about how 0.1 + 0.2 != 0.3.
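The standard demonstration, in GHCi:

    ghci> 0.1 + 0.2 == (0.3 :: Double)
    False
    ghci> 0.1 + 0.2 :: Double
    0.30000000000000004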
The problem with fixed point is in its, well, fixed point. You assign a fixed number of bits to the fractional part of the number. This gives you the same absolute precision everywhere, but the relative precision (the distance to the next representable number, relative to the magnitude of the value) is worse for small numbers - which is a problem, because those tend to be pretty important. It's just overall a less efficient use of the bit encoding space (not just performance-wise, but also in the accuracy of the results you get back). Remember that fixed point does not mean the absence of rounding errors, and if you use binary fixed point, you still cannot represent many decimal fractions, such as 0.1.
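To illustrate that last point, a sketch of a made-up Q16.16 binary fixed-point format (16 fractional bits) as a scaled integer:

    -- Q16.16 binary fixed point: values are multiples of 2^-16.
    toFix :: Double -> Integer
    toFix v = round (v * 65536)

    fromFix :: Integer -> Double
    fromFix n = fromIntegral n / 65536

    -- 0.1 has no finite binary expansion, so it rounds to the
    -- nearest multiple of 2^-16:
    --   fromFix (toFix 0.1) == 0.100006103515625
    -- Note also that this fixed step of 2^-16 is a much larger
    -- *relative* error for 0.1 than it would be for, say, 1000.1.

So a binary fixed-point representation rounds 0.1 just as floating point does, only with uniform absolute (rather than relative) spacing.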
Ah okay, fair enough. But what sort of transcendental functions would you use for HFT?
I understood GGGGP's comment about using fixed point for currency to be about accounting. I'd expect floating point to be used in the trading algorithms themselves, but that's mostly statistics, and I presume you'd switch back to fixed point before actually making trades, etc.