
Before Minecraft, basically all voxel engines used some form of non-axis-aligned normals to hide the sharp blocks. Those engines did this either through explicit normal mapping, or at the very least, by deriving intermediate angles from the Marching Cubes algorithm. Nowadays, the blocky look has become stylish, and I don't think it really even occurs to people that they could try to make the voxels smooth.
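
For illustration only (this is not from any particular engine): the usual smoothing trick is to treat the voxel grid as a scalar density field and derive normals from its gradient via central differences, which is roughly how Marching-Cubes-style renderers end up with non-axis-aligned normals. A minimal sketch, where the grid size, the density array, and the solid/empty convention are all assumptions:

    #include <math.h>

    #define N 64
    static float density[N][N][N];   /* assumed: 1.0f = solid, 0.0f = empty */

    typedef struct { float x, y, z; } Vec3;

    /* Smooth, non-axis-aligned normal at an interior cell (1..N-2 per axis):
     * the negative gradient of the density field, estimated with central
     * differences, points from solid material toward empty space. */
    static Vec3 smooth_normal(int x, int y, int z) {
        Vec3 n = {
            density[x - 1][y][z] - density[x + 1][y][z],
            density[x][y - 1][z] - density[x][y + 1][z],
            density[x][y][z - 1] - density[x][y][z + 1],
        };
        float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
        return n;
    }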


Voxels have been around since the 1980s. The smoothness came from that beautiful CRT and its inability to display crisp images. Normals weren’t really used until the early ’90s, when they were used heavily by games like Comanche by NovaLogic.

The reason Minecraft voxels are blocks is that Notch (Markus Persson) famously said he was “Not good at art”. He didn’t implement the triangulation and kept them unit blocks. Games that came before Minecraft with triangulated voxels include Red Faction, Delta Force, and Outcast, just to name a few.

The point is, voxels aren’t anything special, no more than a texel, or a vertex, or a splat, a normal, or a UV. It’s just a representation of 3D space (occupied/not occupied) and can just as easily be used for culling as it can for rendering. The Minecraft style became popular because it reminded people of pixels, it reminded people of legos, and Minecraft was so popular.


It depends on how the voxels relate to the gameplay.

Regardless of the original intent, in Minecraft the voxel grid itself is a very important aspect of the core gameplay loop. Smoothing the voxel visual representation disguises the boundaries between individual logical voxels and makes certain gameplay elements more difficult or frustrating for the player. When the visuals closely (or exactly) match the underlying voxel grid, it's easy for the player to see which specific voxel is holding back some lava or if they're standing on the voxel they're about to break.

In Minecraft you can, for example, clearly count how many voxels wide something is from a distance, because the voxels are visually obvious.

In Red Faction, you're never concerned with placing or breaking very specific voxels in very specific locations, so it's not an issue.


So your point is, Minecraft uses voxels on a unit scale. Red Faction uses voxels differently, so Minecraft wins?

I get the appeal of Minecraft but Notch didn’t invent this stuff as much as you would love to believe. He simply used it in a way that made it easy to understand. To the point where people like you are explaining it to me like I have never played it. I have. I was one of the first testers.

Almost all of Minecraft is ripped off from other games. World creation: Dwarf Fortress. Mining: Dig Dug. The only original thing was the Creeper.


This seems like a needlessly antagonistic response? GP was only pointing out that the voxel shape is fundamentally important to Minecraft. It's not just a matter of Notch's artistic talent, as you said.

Anyway I don't think anybody is saying Notch invented this stuff or Minecraft was the first to do certain things. But it's probably worth pointing out that, ripped off or no, those other games haven't become remotely close to the popularity of Minecraft, so Notch clearly did something right... maybe the Creepers are why?


> So your point is, Minecraft uses voxels on a unit scale. Red faction uses voxels differently, so Minecraft wins?

What? That’s not my point at all.


> it reminded people of legos,

I don't think this should be understated. LEGO are easy and fun to build with and don't require a lot of artistic talent. The same goes for block-based games like Minecraft.


I think marching cubes is still decently popular in games with modifiable terrain; we just stopped referring to it as voxels.


To my eyes, this author doesn't write like ChatGPT at all. Too many people focus on the em-dashes as the giveaway for ChatGPT use, but they're a weak signal at best. The problem is that the real signs are more subtle, and the em-dash is very meme-able, so of course, armies of idiots hunt down any user of em-dashes.

Update: To illustrate this, here's a comparison of a paragraph from this article:

> It is a new frontier of the same old struggle: The struggle to be seen, to be understood, to be granted the same presumption of humanity that is afforded so easily to others. My writing is not a product of a machine. It is a product of my history. It is the echo of a colonial legacy, the result of a rigorous education, and a testament to the effort required to master the official language of my own country.

And ChatGPT's "improvement":

> This is a new frontier of an old struggle: the struggle to be seen, to be understood, to be granted the easy presumption of humanity that others receive without question. My writing is not the product of a machine. It is the product of history—my history. It carries the echo of a colonial legacy, bears the imprint of a rigorous education, and stands as evidence of the labor required to master the official language of my own country.

Yes, there's an additional em-dash, but what stands out to me more is the grandiosity. Though I have to admit, it's closer than I would have thought before trying it out; maybe the author does have a point.


The article is engaging. That's true of practically zero GPT output. Particularly once it stretches beyond a single paragraph.

As a reader, I persistently feel like I just zoned out. I didn't. It's just the mind responding to having absorbed zero information despite reading a lot of text that, at face value, seems like it was written with purpose.


The telltale is using lots of words to say nothing at all. LLMs excel at this sort of puffery and some humans do the same.


You're doing it the wrong way imo. If you ask GPT to improve a sentence that's already very polished, it will only add grandiosity, because what else could it do? For a proper comparison you'd have to give it the most raw form of the thought and see how it would phrase it.

The main difference I see between the author's writing and an LLM's is that the flourish and the structure mentioned are used meaningfully. They circle around a bit too much for my taste, but it's not nearly as boring as reading AI slop, which usually stretches a simple idea over several paragraphs.


Why can't the LLM refrain from improving a sentence that's already really good? Sometimes I wish the LLM would just tell me, "You asked me to improve this sentence, but it's already great and I don't see anything to change. Any 'improvement' would actually make it worse. Are you sure you want to continue?"


> Why can't the LLM refrain from improving a sentence that's already really good?

Because you told it to improve it. Modern LLMs are trained to follow instructions unquestioningly, they will never tell you "you told me to do X but I don't think I should", they'll just do it even if it's unnecessary.

If you want the LLM to avoid making changes that it thinks are unnecessary, you need to explicitly give it the option to do so in your prompt.


That may be what most or all current LLMs do by default, but it isn't self-evident that it's what LLMs inherently must do.

A reasonable human, given the same task, wouldn't just make arbitrary changes to an already-well-composed sentence with no identified typos and hope for the best. They would clarify that the sentence is already generally high-quality, then ask probing questions about any perceived issues and about the context in which, and the ends to which, it must become "better".


Reasonable humans understand the request at hand. LLMs just output something that looks like it will satisfy the user. It's a happy accident when the output is useful.


Sure, but that doesn't prove anything about the properties of the output. Change a few words, and this could be an argument against the possibility of what we now refer to as LLMs (which do, of course, exist).


They aren't trained to follow instructions "unquestioningly", since that would violate the safety rules, and would also be useless: https://en.wikipedia.org/wiki/Work-to-rule


This is not true. My LLM will tell me it already did what I told it to do.


For me the ChatGPT one is worse due to factual inaccuracies. The "presumption of humanity" in the human version is "afforded so easily to others" - fair enough - while in the LLM version it becomes a "presumption of humanity that others receive without question", which is not true - lots of people get questioned.

Beyond the stylistic bits like "history—my history", which I don't really mind, what makes it bad to me is the detachment from reality.


I've almost always used the different dash types as they're meant to be used. I don't care that LLMs write like that — we have punctuation for a reason.

We were also taught in Content Lab at uni to prefer short, punchy sentences. No passive voice, etc. So academia is in some ways pushing that same style of writing.


Armies of idiots hunt down em dashes because they're too stupid to understand the proper use of them.


They are probably like me: if punctuation isn't on my keyboard, I don't use it.


[AltGr][Shift][-]

Without shift it's an en dash (–), with shift an em dash (—). Default X11 mapping for a German keyboard layout, zero config of mine.


>They are probably like me: if punctuation isn't on my keyboard, I don't use it.

LPT: on Android, pressing and holding a punctuation key on the on-screen keyboard reveals additional variations of it — like the em-dash, for example.

This is the №1 feature I expect everyone to know about (and explore!), but, alas, it doesn't appear to be the case even on Hackernews¹.

On Windows, pressing Win+. pops up an on-screen character keyboard with all the symbols one may need (including math symbols and emojis).

MacOS has a similar functionality IIRC.

And let's not forget that software like MS Word automatically corrects dashes to em-dashes when appropriate — and some people may simply prefer typing text in a word processor and copy-pasting from it.

Anyway...

_____

¹ For example, holding "1" yields the superscript version, enabling one to format footnotes properly with less effort than using references in brackets², yet few people choose to do that.

² E.g. [2]


This is one of the reasons I love macOS: it is on the keyboard if you hold Option.


⸘WHAT‽


Neat, I didn't know there was an upside down interrobang.


Yeah, this is what I don't understand: surely people aren't "using" em dashes deliberately. I assumed MS Word was just inserting them automatically when the user typed a minus symbol between two words. Kind of like angled quotes.


I've been using em dashes for much longer than transformers have existed. It's easily accessible on at least the Android and macOS keyboards.


I use them when they're easy to type. For me, that's on Android, macOS, and anywhere I've configured a compose key.

Angled quotes I use only on systems on which I've configured a compose key, or Android when I'm typing Chinese.

I don't like any kind of auto-replacement with physical keyboards, so I turn off "smart quotes" on macOS.

Anyway I use characters like that all the time, but it's never auto-replace.


> surely people aren't "using" em dashes deliberately

I've had a "trigger finger" for Alt+0151 on Windows since 2010 at least.


When I worked at a company that did content marketing and had a lot of writers, one of the coffee mugs they gave us had Alt+0151 on it!

Em-Dash was really popular with professional writers.


> surely people aren't "using" em dashes deliberately

I am, it's on the default German X11 keyboard layout. Same for · × ÷ …

And that's without going to the trusty compose key (Caps Lock for me)… wonders like ½ and H₂O await!


Update: I read that Word will place an em dash if you use two dashes "--"


I'm used to simply using a single dash - and I am surprised that anyone who isn't an AI would feel strongly enough about the em dash character to insist on using it deliberately. I will admit the use of a dash (really an em dash in disguise) in that previous sentence felt clunky, but I just felt I needed to illustrate. I mostly write text in text boxes where a dash or pair of dashes will not be converted to an em dash when appropriate, and I often have double dashes (--long-option-here) auto-converted to em dashes when it is inappropriate, so I really dislike the em dash and basically don't use it. It doesn't really seem to be a useful character in English.


Once you notice the pattern, you see it everywhere:

> Stability isn’t declared; it emerges from the sum of small, consistent forces.

> These aren’t academic exercises — they’re physics that prevent the impossible.

> You don’t defend against the impossible. You design a world where the impossible has no syntax.

> They don’t restrain motion; they guide it.

I don't just ignore this article. I flag it.


Please elaborate. I thought the article was interesting and would love a contrasting take.

Edit: thank you for the answers, I don't know how I missed that em dash clue.


Not just the em dash, but the pervasive "not X, but Y" construction.

(oh no am i one of them?)


it has a very LLM style of writing


It is a sign of ChatGPT's style.


Are the rumors still hinting at a VR-only experience, as they did a couple of years ago when Half-Life: Alyx was released, or is that no longer the speculation? Because that would be unfortunate for me; I'd have to play with a bucket in hand.


From interviews with the Alyx devs, it really sounded like the only reason they didn't call it HL3 was fear of not living up to the name.

Given the org structure at Valve, it's going to take someone with massive hubris to say "I can be the one to lead the HL3 project."

That or Gabe getting off his megayacht to lead it (or tell someone their project is worthy of being called HL3).


They decided to make it a prequel for fear of not living up to the name, the decision was made much earlier. If you're at all familiar with the contents of Alyx and the Half-Life franchise it wouldn't have made any sense to call it HL3.


In a narrow sense, it did move along the story point at the end of HL2 (I won’t explain how because there’s no way to do so without massive spoilers). But yeah, it would be weird to call it HL3 just because of that.


Ah, haven't gotten the chance to play it yet. But the same implication - we'll need someone at Valve with a big enough ego to take up the mantle.


I don't think that's how the people at Valve think. These people usually have lifetime dreams of working at Valve well before they do, because they admired those early games so much - games which, if you know the story, were repeatedly held up to very high standards internally. They spent their lives being hypercritical of their own craft and admire Valve because, as a rule, it doesn't release a subpar product. So they'll have a LOT of collective ego tied up in whatever that product is, but it's not like "I own this"; that would be anathema.


I think that fits with how they think. HL3 has mythical status. We've already seen Valve devs make a major game in the HL universe and intentionally avoid "making HL3" due to this mythos. It takes a lot of confidence to say "Hey guys, I'm putting together the team to make Sistine Chapel 2."

(To be clear, I'm not saying it's a matter of ownership and personal brand. But someone needs to start the project and form a team around it. I don't think they're worried about personal brand, it's more an issue of reverence for the franchise.)


pretty sure they don't have a totally flat org structure anymore but I might be wrong


Valve recently said outright that they have no VR titles in development.

https://www.roadtovr.com/valve-no-first-party-vr-game-in-dev...


They also said there was nothing coming for SteamDeck in terms of better hardware about a week before they launched the OLED.


Did they? AFAICT what they actually said was not to expect a faster Steam Deck any time soon, which was true, because the OLED version had basically the same performance as the original and in the two years since they still haven't released anything faster.

https://www.theverge.com/2023/9/21/23884863/valve-steam-deck...


OLED has the same HW as the LCD, with only very minor differences


Maybe they finished it...


I believe the latest rumors indicate the next Half-Life is actually back to 2D. During the press event last month, they were also pretty clear that no VR game is currently in development at Valve.

A huge missed opportunity imo, but maybe playing HL3 on a theater-sized screen is nice enough.


I'm sure they've tried making it hybrid, aka VR optional. I'm curious if they'd be able to make it work. If not, I don't expect a VR only HL game again.


Some rumors from ~1yr ago indicated they were looking into making it an asymmetric co-op game where one player would be Gordon Freeman on PC and one would be Alyx in VR. Of course, they could have dropped that by now.


Calling Half-Life 2D somehow feels right and wrong at the same time but I get what you mean.


Leaks disproved this in 2023. HLX is a single-player non-VR PC game.


Seems unlikely with the steam machine coming? I haven't heard any sign of it specifically being frame only


YouTube Premium was originally called YouTube Red. Grandparent poster may have made a Freudian slip. :)


I know, I was just being... sassy. Partly because I didn't actually need to google it.


I’ll never forget how out of touch they are :)


The HN moderation system seems to hold, at least mostly. But I have seen high-ranking HN submissions with all the subtler signs of LLM authorship that have managed to get lots of engagement. Granted, it's mostly people pointing out the subtle technical flaws or criticizing the meandering writing style, but that works to get the clicks and attention.

Frankly, it only takes someone a few times to "fall" for an LLM article -- that is, to spend time engaging with an author in good faith and try to help improve their understanding, only to then find out that they shat out a piece of engagement bait for a technology they can barely spell -- to sour the whole experience of using a site. If it's bad on HN, I can only imagine how much worse things must be on Facebook. LLMs might just simply kill social media of any kind.


You were proven right three minutes after you posted this. Something happened, I'm not sure what and how. Hacking became reduced to "hacktivism", and technology stopped being the object of interest in those spaces.


> and technology stopped being the object of interest in those spaces.

That happened because technology stopped being fun. When we were kids, seeing Penny communicating with Brain through her watch was neat and cool! Then when it happened in real life, it turned out that it was just a platform to inject you with more advertisements.

The "something" that happened was ads. They poisoned all the fun and interest out of technology.

Where is technology still fun? The places that don't have ads being vomited at you 24/7. At-home CNC (including 3d printing, to some extent) is still fun. Digital music is still fun.


A lot of fun new technology gets shouted down by reactionaries who think everything's a scam.

Here on "hacker news" we get articles like this, meanwhile my brother is having a blast vibe-coding all sorts of stuff. He's building stuff faster than I ever dreamed of when I was a professional developer, and he barely knows Python.

In 2017 I was having great fun building smart contracts, constantly amazed that I was deploying working code to a peer-to-peer network, and I got nothing but vitriol here if I mentioned it.

I expect this to keep happening with any new tech that has the misfortune to get significant hype.


It's not ads, honestly. It's quality. The tool being designed to empower the user. Have you ever seen something encrusted in ads be designed to empower the user? At least, it necessitates reducing the user's power to remove the ads.

But it's fundamentally a correlation, and this observation is important because something can be completely ad-free and yet disempowering and hence unpleasant to use; it's just that vice-versa is rare.


> It's not ads, honestly. It's quality. The tool being designed to empower the user. Have you ever seen something encrusted in ads be designed to empower the user? At least, it necessitates reducing the user's power to remove the ads.

Yes, a number of ad-supported sites are designed to empower the user. Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want. When I was growing up, TV executives picked a small set of videos to make available at 10 am, and if I didn’t want to watch one of those videos I didn’t get to watch anything. It’s not even a tradeoff, TV shows had more frequent and more annoying ads.


I agree actually, I said rare not nonexistent.

But note that I, as the user, want to block the ads. If I can easily do so (and I usually can) it’s fine. But the moment I can’t, in that very small way, I am disempowered.

And that’s usually where the junk shows up: in what way is the software not as good as it could be both because it needs to show ads and because it wants it to be hard to disable them (the second is worse).

The thesis is that jank is not quite imperfect software (it will never be perfect!) but rather something which is clearly not at a local minimum, and it’s pretty hard to have a local minimum with ads (even if the global ecosystem requires them for sustainability; something something evolutionarily stable something something always defect).

On a secondary point, when ads are locally optimal, we call it an effective sponsorship. Especially interesting when you don’t know that it’s an ad. How many times have you paid to see something with an agenda? Note that’s not a bad thing; I’d say every decent work of art needs at least some agenda. But it’s interesting because ads generally are not, in this sense, art; though on the flip side I’ve seen sponsorships on YouTube which are genuinely as entertaining as, if not more entertaining than, the video itself and are still clearly sponsored and hence not deceptive.


> Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want.

But they'd prefer if it was shorts.


No, they wouldn't. On Youtube, for example, videos were consistently trending longer over time, and you used to see frequent explainers (https://www.wired.com/story/youtube-video-extra-long/) on why this was happening and how Youtube benefits from it. Short-form videos are harder to monetize and reduce retention, but users demand them so strongly that most platforms have built a dedicated experience for them to compete with TikTok.


If that was true, I would be able to turn off shorts from my recommendation feed.


You can. It’s not a hermetic seal, I assume because they live in the same database as normal videos, but if you’re thinking of the separate “shorts” section there’s a triple dot option to turn it off.


I've clicked those triple dots many times. I never saw such an option. I saw "show fewer shorts", and even that seems to be temporary.


> That happened because technology stopped being fun.

Exactly, and I'm sure it was our naivete to think otherwise. As software became more common, it grew, regulations came in, corporate greed took over, and "normies" started to use it.

As a result, now everything is filled in subscriptions, ads, cookie banners and junk.

Let's also not kid ourselves but an entire generation of "bootcamp" devs joined the industry in the quest of making money. This group never shared any particular interest in technology, software or hardware.


The ads are just a symptom. The tsunami of money pouring in was the corrosive force. Funny enough - I remain hopeful on AI as a skill multiplier. I think that’ll be hugely empowering for the real doers with the concrete skill sets to create good software that people actually want to use. I hope we see a new generation of engineer-entrepreneurs that opt to bootstrap over predatory VCs. I’d rather we see a million vibrant small software businesses employing a dozen people over more “unicorns”.


>The "something" that happened was ads. They poisoned all the fun and interest out of technology.

Disagree. Ads hurt, but not as much as technology being invaded by the regular masses who have no inherent interest in tech for the sake of tech. Ads came after this, since they needed an audience first.

Once that line was crossed, it all became far less fun for those who were in it for the sheer joy, exploration, and escape from the mundane social expectations wider society has.

It may encompass both "hot takes" to simply say money ruined tech. Once future finance bros realized tech was an easier route to the easy life than being an investment banker, all hope was lost.


I don't think that just because something becomes accessible to a lot more people that it devalues the experience.

To use the two examples I gave in this thread: digital music is more accessible than ever before and it's going from strength to strength. While at-home subtractive CNC is still in the realm of deep hobbyists, 3d printing* and CNC cutting/plotting* (Cricut, others) have been accessible to and embraced by the masses for a decade now, and those spaces are thriving!

* Despite the best efforts of some of the sellers of these to lock down and enshittify the platforms. If this continues, this might change and fall into the general tech malaise, and it will be a great loss if that happens.


my guess is something like detailed in this article: https://meaningness.com/geeks-mops-sociopaths


Obviously?


"Matter" can in practice also mean "Matter over Wi-Fi", and lots of vendors use it that way.


That’s my issue with it: iot devices shouldn’t have access to the internet by default. With Matter it’s possible. No one is going to create outbound firewall rules for these things.


I think it's only a matter of time before it's the same for Thread + Matter. Currently they get a ULA IPv6 address on (most?) border routers and you can ping the devices on the local network. It will be too attractive to extend the standard to permit phoning home for 'analytics to improve the product' (I don't think this is possible yet with the current standard? But hard to tell.).


It's easily imaginable that there are new CPU features that would help with building an efficient Java VM, if that's the CPU's primary purpose. Just off the top of my head, one might want a form of finer-grained memory virtualization that could enable very cheap concurrent garbage collection.

But having Java bytecode as the actual instruction set architecture doesn't sound too useful. It's true that any modern processor has a "compilation step" into microcode anyway, so in an abstract sense, that might as well be some kind of bytecode. But given the high-level nature of Java's bytecode instructions in particular, there are certainly some optimizations that are easy to do in a software JIT, and that just aren't practical to do in hardware during instruction decode.

What I can imagine is a purpose-built CPU that would make the JIT's job a lot easier and faster than compiling for x86 or ARM. Such a machine wouldn't execute raw Java bytecode, rather, something a tiny bit more low-level.


> What I can imagine is a purpose-built CPU that would make the JIT's job a lot easier and faster than compiling for x86 or ARM. Such a machine wouldn't execute raw Java bytecode, rather, something a tiny bit more low-level.

This is approximately exactly what Azul Systems did, doing a bog-standard RISC with hardware GC barriers and transactional memory. Cliff Click gave an excellent talk on it [0] and makes your argument around 20:14.

[0] https://www.youtube.com/watch?v=5uljtqyBLxI


I imagine that's where the request for finer grained virtualization comes from


Running Java workloads is very important for most CPUs these days, and both ARM and Intel consult with the Java team on new features (although Java's needs aren't much different from those of C++). But while you're right that with modern JITs, executing Java bytecode directly isn't too helpful, our concurrent collectors are already very efficient (they could, perhaps, take advantage of new address masking features).

I think there's some disconnect between how people imagine GCs work and how the JVMs newest garbage collectors actually work. Rather than exacting a performance cost, they're more often a performance boost compared to more manual or eager memory management techniques, especially for the workloads of large, concurrent servers. The only real cost is in memory footprint, but even that is often misunderstood, as covered beautifully in this recent ISMM talk (that I would recommend to anyone interested in memory management of any kind): https://youtu.be/mLNFVNXbw7I. The key is that moving-tracing collectors can turn available RAM into CPU cycles, and some memory management techniques under-utilise available RAM.
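
To make the "turn available RAM into CPU cycles" point concrete, here is a minimal bump-pointer allocation sketch (assumed sizes and layout; this is not HotSpot's actual TLAB code): in a moving/compacting collector, allocation is just a pointer increment, and the more free heap the collector is given, the longer it can keep bumping before it has to spend CPU on a collection.

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed: a 1 MiB region standing in for one thread's allocation buffer. */
    static uint8_t heap[1 << 20];
    static size_t  top;

    /* Allocation after a moving/compacting collection: align, check, bump.
     * No free lists, no per-object bookkeeping; spare RAM directly buys
     * fewer collections. */
    static void *allocate(size_t size) {
        size = (size + 7u) & ~(size_t)7u;   /* 8-byte alignment */
        if (top + size > sizeof heap)
            return NULL;                    /* a real VM would trigger a GC here */
        void *obj = &heap[top];
        top += size;
        return obj;
    }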


So, the guys at Azul actually had this sort of business plan back in 2005, but they found that it was unsustainable and turned their attention to the software side, where they have done great work. I remember having a discussion with someone about Java processors, and my comment was just “Lisp machines.” It’s very difficult to outperform code running on commodity processor architectures. That train is so big and moving so fast, you really have to pick your niche (e.g. GPUs) to deliver something that outperforms it. Too much investment ($$$ and brainpower) flowing that direction. Even if you’re successful for one generation, you need to grow sales and have multiple designs in the pipeline at once. It’s nearly impossible.

That said, I do see opportunities to add “assistance hardware” to commodity architectures. Given the massive shift to managed runtimes, all of which use GC, over the last couple decades, it’s shocking to me that nobody has added a “store barrier” instruction or something like that. You don’t need to process Java in hardware or even do full GC in hardware, but there are little helps you could give that would make a big difference, similar to what was done with “multimedia” and crypto instructions in x86 originally.
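
To give a rough idea of what that "store barrier" work looks like in software today, here is a card-marking write barrier sketch (the card size and table are illustrative assumptions, not any JVM's actual layout); it's this handful of extra instructions on every reference store that a small hardware assist could, in principle, absorb.

    #include <stdint.h>

    #define CARD_SHIFT 9                    /* assumed 512-byte cards */
    #define CARD_COUNT (1u << 20)
    static uint8_t card_table[CARD_COUNT];  /* one "dirty" byte per card */

    /* Card-marking write barrier: after every reference store, mark the card
     * containing the updated field so the collector can find old-to-young
     * pointers without scanning the whole heap. (The modulo only keeps this
     * toy table in bounds; a real VM indexes relative to the heap base.) */
    static inline void store_reference(void **field, void *new_value) {
        *field = new_value;
        card_table[((uintptr_t)field >> CARD_SHIFT) % CARD_COUNT] = 1;
    }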


> The only real cost is in memory footprint

There are also load and store barriers which add work when accessing objects from the heap. In many cases, adding work in the parallel path is good if it allows you to avoid single-threaded sections, but not in all cases. Single-threaded programs with a lot of reads can be pretty significantly impacted by barriers:

https://rodrigo-bruno.github.io/mentoring/77998-Carlos-Gonca...

The Parallel GC is still useful sometimes!


Sure, but other forms of memory management are costly, too. Even if you allocate everything from the OS upfront and then pool stuff, you still need to spend some computational work on the pool [1]. Working with bounded memory necessarily requires spending at least some CPU on memory management. It's not that the alternative to barriers is zero CPU spent on memory management.

> The Parallel GC is still useful sometimes!

Certainly for batch-processing programs.

BTW, the paper you linked is already at least somewhat out of date, as it's from 2021. The implementation of the GCs in the JDK changes very quickly. The newest GC in the JDK (and one that may be appropriate for a very large portion of programs) didn't even exist back then, and even G1 has changed a lot since. (Many performance evaluations of HotSpot implementation details may be out of date after two years.)

[1]: The cheapest, which is similar in some ways to moving-tracing collectors, especially in how it can convert RAM to CPU, is arenas, but they can have other kinds of costs.


The difference with manual memory management or a parallel GC is that concurrent GCs create a performance penalty on every read and write (modulo what the JIT can elide). That performance penalty is absolutely measurable even with the most recent GCs. If you look at the assembly produced for the same code running with ZGC and Parallel, you’ll see that read instructions translate to way more CPU instructions in the former. We were just looking at a bug (in our code) at work this week, on Java 25, that was exposed by the new G1 barrier late expansion.

Different applications will see different overall performance changes (positive or negative) with different GCs. I agree with you that most applications (especially realistic multi threaded ones representative of the kind of work that people do on the JVM) benefit from the amazing GC technology that the JVM brings. It is absolutely not the case however that the only negative impact is on memory footprint.
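
As an illustration of where those extra read-path instructions come from, a concurrent collector's load barrier conceptually looks something like the following (the mask and slow path are made-up placeholders, not ZGC's or G1's real code): every reference load the JIT cannot elide pays a mask-and-branch, even though the slow path is rarely taken.

    #include <stdint.h>

    /* Hypothetical "bad color" metadata bits in the pointer; a real collector
     * chooses these so that healthy references test to zero. */
    #define BAD_COLOR_MASK UINT64_C(0x0000F00000000000)

    /* Placeholder slow path: a real collector would remap/heal the reference. */
    static void *gc_slow_path(void **field) { return *field; }

    /* Load barrier: the extra test is paid on (almost) every reference load,
     * which is the read-path cost discussed above. */
    static inline void *load_reference(void **field) {
        void *ref = *field;
        if (((uint64_t)(uintptr_t)ref & BAD_COLOR_MASK) != 0)
            ref = gc_slow_path(field);
        return ref;
    }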


> The difference with manual memory management or parallel GC is that concurrent GCs create a performance penalty on every reads and writes

Not on every read and write, but it could be on every load and store of a reference (i.e. reading a reference from the heap to a register or writing a reference from a register to the heap). But what difference does it make where exactly the cost is? What matters is how much CPU is spent on memory management (directly or indirectly) in total and how much latency memory management can add. You are right that the low-latency collectors do use up more CPU overall than a parallel STW collector, but so does manual memory management (unless you use arenas well).


> It's true that any modern processor has a "compilation step" into microcode anyway, so in an abstract sense, that might as well be some kind of bytecode.

This.

> What I can imagine is a purpose-built CPU that would make the JIT's job a lot easier and faster than compiling for x86 or ARM. Such a machine wouldn't execute raw Java bytecode, rather, something a tiny bit more low-level.

My prediction is that eventually a lot of software will be written in such a way that it runs in "kernel mode" using a memory-safe VM to avoid context switches, so that reading/writing to pipes and accessing pages corresponding to files reduce down to function calls, which easily happen billions of times per second, as opposed to "system calls" or page faults, which only happen 10 or 20 million times per second due to context switching.

This is basically what eBPF is used for today. I don't know if it will expand to be the VM that I'm predicting, or if kernel WASM [1] or something else will take over.

From there, it seems logical that CPU manufacturers would provide compilers ("CPU drivers"?) that turn bytecode into "microcode" or whatever the CPU circuitry expects to be in the CPU during execution, skipping the ISA. This compilation could be done in the form of JIT, though it could also be done AOT, either during installation (I believe ART in Android already does something similar [0], though it currently emits standard ISA code such as aarch64) or at the start of execution when it finds that there's no compilation cache entry for the bytecode blob (the cache could be in memory or on disk, managed by the OS).

Doing some of the compilation to "microcode" in regular software before execution rather than using special CPU code during execution should allow for more advanced optimisations. If there are patterns where this is not the case (eg, where branch prediction depends on runtime feedback), the compilation output can still emit something analogous to what the ISAs represent today. The other advantage is of course that CPU manufacturers are more free to perform hardware-specific optimisations, because the compiler isn't targeting a common ISA.

Anyway, these are my crazy predictions.

[0] https://source.android.com/docs/core/runtime/jit-compiler

[1] https://github.com/wasmerio/kernel-wasm (outdated)


I know it prolly isn't the most practical, but neither were Lisp machines, and we still love them.

