
Another thing I found impressive is how fast Carmack grew in terms of technical skill.

In 1989 and 1990 he was still programming Apple ][ tile-engine games in the Ultima mold. Not bad, but not a new thing either. Once he was at Softdisk (around 1990) it took him only a few months to figure out smooth horizontal scrolling, and in little more than a year after that, by mid-1992, he had gone from tile engines to a mature ray-casting engine; you can see his evolution from Hovertank 3D to Wolfenstein 3D. It took him roughly another year to go from Wolfenstein through ShadowCaster to Doom. Quake took longer, but it was still impressive considering he had to solve true 3D before anyone else did.

I mean, this guy learned really fast. And it's not only the rendering engine, mind you; he had to write a lot of other parts of the engine too.



I see a bunch of "games were simpler" comments.

Yes, but then again you had to create the whole thing from scratch. Nowadays there are libraries that will do basically anything you can dream of. There's middleware. Whole game engines.

That era? Want to blit to the screen? Do it yourself. Want to load an asset? There weren't many pixmap formats to choose from, and you probably had to write the loader yourself. Want to play some beeps? Better control the PC speaker frequencies by turning it on and off (without spending too many cycles on it). Sound cards didn't make the problem any simpler.
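
To make that concrete, here's roughly what "control the PC speaker" meant: program PIT channel 2 for a square wave and gate the speaker through port 0x61. This is a minimal sketch, not any shipped game's code, and it assumes a Borland-style DOS compiler that provides outportb/inportb:

    /* Era-typical PC speaker tone: PIT channel 2 drives the speaker
       with a square wave; port 0x61 gates it on and off. */
    #include <dos.h>   /* outportb, inportb (Borland/DJGPP-style) */

    #define PIT_CLOCK 1193182UL   /* PIT input clock, in Hz */

    static void beep_on(unsigned freq_hz)
    {
        unsigned divisor = (unsigned)(PIT_CLOCK / freq_hz);
        outportb(0x43, 0xB6);                   /* channel 2, lo/hi byte, mode 3 */
        outportb(0x42, divisor & 0xFF);         /* divisor low byte  */
        outportb(0x42, (divisor >> 8) & 0xFF);  /* divisor high byte */
        outportb(0x61, inportb(0x61) | 0x03);   /* enable gate + speaker data */
    }

    static void beep_off(void)
    {
        outportb(0x61, inportb(0x61) & ~0x03);  /* silence the speaker */
    }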

You can't even code on one window and watch the results on another window. Unless you had loads of cash, like Carmack at some point, and could buy a NeXT. It was: write code, run the compiler, run the game. If it crashed it might take the entire machine with it. Reboot and try to figure out what went wrong. You _might_ be able to get a debugger, but they were extremely primitive; and with no memory protection, the debugger (which would likely modify the code and add side effects) could crash too.

Yeah, simpler games. But glacially slow to code anything. And very little publicly available information.

You try figuring out a highly performant ray casting engine without the internet. Or implement smooth scrolling after entire companies had tried and failed.


> You try figuring out a highly performant ray casting engine without the internet.

Not that it detracts from what he accomplished in any way, but Carmack's alternative to the internet seems to have been books and research papers.

From the Game Engine Black Book: DOOM:

"John started searching around for 3D research papers. He had several VHS tapes of math conferences, and compendiums of graphics papers from conferences because game books were a rare thing back then, and there was nothing printed that could help us create the engine we were building – he had to figure out where to get information that was not directly applicable to games and figure out how to adapt it to his problem.

Bruce Naylor’s May 1993 AT&T Bell Labs paper was titled "Constructing Good Partitioning Trees" and was published in the proceedings of Graphics Interface ’93. John had this book in his collection. Bruce’s explanation of BSPs was mostly to cull backfaces from 3D models, but the algorithm seemed like the right direction, so John adapted it for Wolfenstein 3D. — John Romero"


This is still the way things work now. A lot of techniques new to games come from earlier academic research; there's even cross-pollination back from industry as well.


Didn't a particularly noteworthy event happen during the production of the movie, "Interstellar"?

Edit:

Yes, and here is one of many stories:

https://www.google.com/amp/s/www.wired.com/2014/10/astrophys...


I believe only the SNES port of Wolf3D used BSP. I may be misremembering.


Doom uses BSP trees for its levels, and Quake uses them as well, but in full 3D. The main difference was the tool pipeline: Doom's 2D BSPs were cheap to build with a simple node builder, while Quake's maps had to be compiled ahead of time by a whole toolchain. A BSP compiler, a visibility precomputation pass (vis), and a lighting program all had to be run to produce the playable map file.
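
To show the payoff, here's a minimal sketch (hypothetical structs, not id's actual code) of the walk both engines do at render time: descend the viewer's side of each splitting line first, and you visit the convex leaves in strict front-to-back order from any viewpoint:

    /* Hypothetical 2D BSP node; both child pointers NULL marks a leaf. */
    #include <stddef.h>

    typedef struct bsp_node {
        double x, y, dx, dy;            /* splitting line: point + direction */
        struct bsp_node *front, *back;  /* children */
        int subsector;                  /* leaf payload: which subsector to draw */
    } bsp_node;

    static void render_subsector(int subsector)
    {
        (void)subsector;  /* draw the walls of this convex subsector */
    }

    /* Which side of the node's splitting line is the viewpoint on?
       (The sign convention is arbitrary here; only consistency matters.) */
    static int viewer_on_front(const bsp_node *n, double px, double py)
    {
        return (px - n->x) * n->dy - (py - n->y) * n->dx <= 0.0;
    }

    /* Recursive walk: nearer child first gives front-to-back order. */
    static void render_bsp(const bsp_node *n, double px, double py)
    {
        if (n->front == NULL && n->back == NULL) {
            render_subsector(n->subsector);
            return;
        }
        const bsp_node *nearer  = viewer_on_front(n, px, py) ? n->front : n->back;
        const bsp_node *farther = (nearer == n->front) ? n->back : n->front;
        if (nearer)  render_bsp(nearer, px, py);
        if (farther) render_bsp(farther, px, py);
    }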


Smooth scrolling on the PC was a particularly difficult problem because of design decisions made when creating PC expansion card type video systems.

Smooth scrolling, in general, was a known technique on other hardware. It took either enough CPU throughput, at a given display resolution, for the CPU to drive the display fast enough by itself, or a video chip offering the kind of control needed for success, all while leaving enough of the system's resources free for an actual game featuring the effect.


I’ve been making games since I was a kid in the 80s, and whilst what you say is kind of true, hardware, and the interfaces to it, were spectacularly simpler as well. There was also more public info than you might suspect, as making games was a popular thing to do back then too.


> You can't even code on one window and watch the results on another window. Unless you had loads of cash like Carmack at some point and could buy a NeXT.

Yes, you could. PCs from the beginning could support dual monitor setups with MDA plus any of the graphics cards, and this wasn't an uncommon setup for a long time because MDA had sharper text than any of the IBM color graphics standards until VGA, which had a text mode with slightly better resolution.

> And very little publicly available information.

There was lots of publicly available information. Less of it was free, but it was in books: the less esoteric material filled decently sized sections of general bookstores and computer hardware/software stores, and magazines you could subscribe to, while the more esoteric material was hard to find outside of (especially university) libraries. It was more work to find some information, but the signal-to-noise ratio was better.


You would buy books like this one,

https://www.amazon.com/PC-Intern-Encyclopedia-Programming-De...

Borland debuggers were quite good.


I think the comparison is also implicitly biased.

Back in the 80s on the Apple ][ (both Johns started on the Apple ][), whoever wanted to develop competent games would switch to assembly ASAP. Whoever couldn't figure it out would eventually drop out. So whoever managed to join professional gamedev (be it Origin or Softdisk) had to know a lot of low-level stuff and be fluent in assembly, Pascal, and perhaps C. Example: Carmack had to code the rendering engine and the network code for DOOM, probably singlehandedly.

Today's world is a lot easier, so a higher percentage of aspiring developers are retained. It's a lot easier to write C#, use Unity, and get a demo out in, say, a few days or weeks. No low-level programming skills are needed, at all.

This creates a bias when we compare "classic professional programmers" versus "modern professional programmers". They simply don't need the same set of knowledge.


I find constraints are good. Limitations inspire creativity. It might be easier because the problem is obvious, difficult but obvious. Modern day programming with 1k nodejs module dependencies is a nightmare.


> You can't even code on one window and watch the results on another window.

I am quite sure you could hook up two or more computers in a network for that if you wanted to have separate windows.


Quake and beyond was when he and id brought in Michael Abrash, and IIRC Abrash helped a lot with the low-level graphics of the Quake engine. Not to say Carmack couldn't handle it or that it was beyond him, but Abrash, a legend in the graphics programming world at the time, definitely helped move the technology forward.


From what I remember reading back then, Carmack was already a huge fan of Abrash's tech articles in magazines before Quake, so he had already been learning from Abrash for a long time before they worked together.


Abrash’s book is great.


Not a lot of people, but others were building similar games at the same time. Carmack gets most of the recognition because id's games were spectacularly successful. For example, System Shock was released just a little after Doom and was technically superior. Ultima Underworld was released shortly before and had an engine better than Wolf3D; it implemented texture mapping first and actually inspired Carmack to do the same.


>technically superior.

Curious what makes you say that. While it was full 3D, it ran so much slower than DooM that, to me, "technically superior" is very subjective.

>Ultima Underworld was released shortly before and had an engine better than Wolf3D

This one is definitely true, but with a caveat: Wolf3D could be played fullscreen, while Ultima Underworld's first-person view occupied only a small part of the screen. The rest was inventory and other UI, which let it run faster, since the actual rendered area was significantly smaller.


Hey, just noticed this reply, HN doesn't notify people so you're lucky I skimmed down the first page of my own comments! :D

I agree "technically superior" is a subjective notion, particularly as it relates to games and how good a game is. I do think, broadly, that a full 3D engine is more complex than a 2.5D engine like Doom's; there were quite a few new challenges in making Quake, for example. System Shock and Doom are also very different games in terms of their complexity. So I'd justify my opinion on the basis of both the game and the technology underpinning it being more complex.

For Ultima that becomes more of a production choice: do they optimize the renderer for fullscreen resolution, or do they build the game? Again, a significantly more complex game than Wolf3D, so I can understand choosing the latter.

None of this takes away from the achievements of the respective teams but we do well to remember what actually happened rather than just lionizing one man.


>we do well to remember what actually happened rather than just lionizing one man

This statement alone was worth the price of admission. As a fan of game engine history (highly recommend Fabien's books: https://fabiensanglard.net/), I have to remember that all of these things were built on the shoulders of giants. It wasn't just one person that contributed, even if it may be only their code that lives on.


Yup, I have his books and am old enough to have been around for all these games back in the day, although I didn't start modding games until Duke3D and Quake. I was actually poking around in the source of Doom the other day to see how it handled vector normalization in fixed point (answer: convert the vector to a direction, i.e. an angle, then back to a length-one vector by virtue of the conversion). It's very fun to go back and pootle through.
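
For anyone curious, here's a self-contained sketch of that trick. It's my reconstruction of the idea, not Doom's actual code (which uses R_PointToAngle and its fine sine/cosine tables; plain atan2 stands in for the angle lookup here):

    /* Normalize a 16.16 fixed-point vector by converting it to an angle
       and back; the round trip discards the length. */
    #include <math.h>
    #include <stdio.h>

    #define FRACBITS 16
    #define FRACUNIT (1 << FRACBITS)
    typedef int fixed_t;

    #define FINEANGLES 8192            /* table resolution, hypothetical */
    static fixed_t finesine[FINEANGLES];
    static fixed_t finecosine[FINEANGLES];

    static void init_tables(void)     /* built once at startup */
    {
        for (int i = 0; i < FINEANGLES; i++) {
            double a = 2.0 * M_PI * i / FINEANGLES;
            finesine[i]   = (fixed_t)(sin(a) * FRACUNIT);
            finecosine[i] = (fixed_t)(cos(a) * FRACUNIT);
        }
    }

    static void normalize(fixed_t x, fixed_t y, fixed_t *ux, fixed_t *uy)
    {
        double a = atan2((double)y, (double)x);   /* vector -> angle */
        int idx = (int)floor(a / (2.0 * M_PI) * FINEANGLES) & (FINEANGLES - 1);
        *ux = finecosine[idx];                    /* angle -> unit vector */
        *uy = finesine[idx];
    }

    int main(void)
    {
        init_tables();
        fixed_t ux, uy;
        normalize(300 * FRACUNIT, 400 * FRACUNIT, &ux, &uy);
        printf("unit vector: %f, %f\n",
               ux / (double)FRACUNIT, uy / (double)FRACUNIT);
        return 0;
    }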


It seems we have something in common! Cheers!


A lot of the skills were very similar back then, with almost everything being about how to blit 2D data to the screen in the shortest amount of time. In the period you're talking about, communities were starting to appear, certainly on Usenet and BBSes, where coding techniques were discussed and traded. There was a huge amount of competition too between coders back then, to get more and more out of the limited hardware we had to work with.

And you had to code everything yourself from scratch. There was no throwing a bunch of vertices and textures at some hardware and getting back a fully rendered scene. It was a fun era. I know I did the same as Carmack and went from writing 2D platformers to writing Phong-shaded, perspective-corrected texture-mapped worlds in under a year.

Everything that Carmack did had been done before, but Carmack made it cool. He had way more vision and imagination than the guys that had done it before.


> Everything that Carmack did had been done before,

Except for the smooth scrolling (that everyone thought impossible on that era's hardware)?

And the incredibly performant Doom engine?


The smooth scrolling was easy; the EGA had more or less the same hardware scrolling capabilities as a C64. It's just that few or no PC games had tried to use them. For VGA, Mode X was the real game changer because it allowed you to use both double buffering and smooth scrolling at the same time. A later 2D platformer that took Keen's smooth scrolling to the next level was Jazz Jackrabbit.
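
To show what that hardware help looks like, here's a hedged sketch of standard EGA/VGA scroll programming (not any particular game's code; again assumes a DOS compiler with outportb/inportb): coarse panning via the CRTC display start address, fine panning via the attribute controller's pel-panning register:

    /* Standard VGA register-level scrolling. Call once per frame,
       ideally during vertical retrace to avoid tearing. */
    #include <dos.h>   /* outportb, inportb */

    static void set_scroll(unsigned start_addr, unsigned char pel_pan)
    {
        /* CRTC start address high/low (indexes 0x0C/0x0D at 0x3D4/0x3D5):
           coarse horizontal/vertical panning in whole-byte steps. */
        outportb(0x3D4, 0x0C);
        outportb(0x3D5, (start_addr >> 8) & 0xFF);
        outportb(0x3D4, 0x0D);
        outportb(0x3D5, start_addr & 0xFF);

        /* Reading 0x3DA resets the attribute controller's index/data
           flip-flop; then program pel panning (index 0x13) for the
           remaining sub-byte pixels. Bit 5 keeps the screen enabled. */
        (void)inportb(0x3DA);
        outportb(0x3C0, 0x13 | 0x20);
        outportb(0x3C0, pel_pan);
    }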

But as CPUs got more powerful, Carmack recognized that you didn't need hardware help anymore and that removed constraints on what games he could program. Keen to Wolf3D to Doom is when id games went from good to wow to insane.


The graphics bus was also a huge constraint. Using dirty rectangles to push only what changed was effective but tedious.


Agreed. I have a lot of respect for programmers of that era. Nowadays 3D engines are so complex that it probably takes a lot of practice for someone to reach the level of UE firefighter: https://allarsblog.com/2018/03/17/confessions-of-an-unreal-e...


things were simpler, so the scope of what small teams could achieve was greater... but there was also complexity in different ways. (weak memory protection so lots of reboots, buggier tooling, near and far pointers, memory and cpu constraints)

i think maybe the most exciting part of that time was the rate of innovation in hardware. new capabilities were opening up every few years and you didn't need an enormous team or tons of capital to push the envelope.


Synthesis is underrated. Yes, front-to-back BSP tree rendering "had been done before" by like two academics, who didn't use it to make a game, let alone a great game.


Read "Masters of Doom"

To tackle a new problem space, Carmack would buy a bunch of college textbooks, then check into a random motel for a few weeks and just dissect all the topics and books, until he'd come up with the design of, for instance, a new ray-casting engine.

Crazy smart, focused, and talented.



