I'm not sure I understand the criticism. You have three functions of type (int, int) -> bool. Where the type system accepts one, it accepts the others. I fail to see the issue here? How is the type system supposed to distinguish them? It's up to you, the developer, to do it. If you have two ints x and y, and you mess up and pass y to a function where you wanted to write x, the compiler is never going to catch it for you either.
I have three different functions, and what you wrote is not their type; that's their signature. Actually, they are really only two different functions - a < b and b > a are the same thing for this integer type - but it will not always be obvious whether the optimiser knows that.
Yes, the C++ type system doesn't express this, that's the defect (well, it's the consequence here of a larger defect).
Once they have different types, there are two interesting things we can do. We could just coerce them into a function pointer type based on the signature, which is what you've seen in C++ and seem to assume is naturally the only possibility. Or we can use parametric polymorphism.
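To make the contrast concrete, here's a minimal C++ sketch of the two routes (the comparator types `Less`/`Greater` and the helper `is_sorted_by` are my own illustrative names, not anything from the original discussion): plain functions with the same signature all collapse to one function-pointer type, while distinct comparator types dispatched through a template parameter keep their identity in the type system.

```cpp
#include <iostream>

// Three comparators with the same signature bool(int, int). As plain
// functions they all coerce to the same function-pointer type
// bool(*)(int, int), so the type system cannot tell them apart.
bool less(int a, int b)    { return a < b; }
bool greater(int a, int b) { return a > b; }
bool less_eq(int a, int b) { return a <= b; }

// Giving each comparator its own type (a stateless function object) enables
// the parametric route: Less and Greater are now distinct types, so
// is_sorted_by<Less> and is_sorted_by<Greater> are distinct instantiations
// the compiler can check and inline separately.
struct Less    { bool operator()(int a, int b) const { return a < b; } };
struct Greater { bool operator()(int a, int b) const { return a > b; } };

template <typename Compare>
bool is_sorted_by(const int* xs, int n) {
    Compare cmp;
    for (int i = 1; i < n; ++i)
        if (!cmp(xs[i - 1], xs[i])) return false;
    return true;
}

int main() {
    int xs[] = {1, 2, 3};

    // Route 1: coerce to a function pointer. f and g have the same static type.
    bool (*f)(int, int) = less;
    bool (*g)(int, int) = greater;
    std::cout << f(1, 2) << ' ' << g(1, 2) << '\n';  // prints "1 0"

    // Route 2: parametric polymorphism. The comparator is part of the type.
    std::cout << is_sorted_by<Less>(xs, 3) << ' '
              << is_sorted_by<Greater>(xs, 3) << '\n';  // prints "1 0"
}
```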
It's sad to think that everyone had to work so hard to solve a problem whose solution is 27 years old. One day we'll just be able to run `ip -6 addr show scope global` and get the correct answer without wasting everyone's bandwidth and compute.
While what you're saying is true, I get the feeling that you want to say that this is in contrast to LaTeX3, which isn't "plain old LaTeX 2e". Except... that it is. The project to have a new engine has been scrapped a long time ago. LaTeX3 is just another layer above LaTeX 2e. With all its cruft.
I'm a math professor. I've written articles, books, lecture notes, exams, exercise sheets, presentations... with latex. Hell, even a custom class for a journal I participate in.
I hate this language with a passion. The design choices may have made sense in the 80s when 128 kB of RAM was considered high-end, "tooling" was an unknown term, and modern parser design an academic matter. If I have to read "Runaway argument" and sift through a hundred lines of log to find an error again I will have a stroke.
LaTeX3? I have great admiration for the work they've done. But they have made at least two mistakes.
1. The fundamental mistake of insisting on backwards compatibility. Latex is chock-full of historical cruft. How many times have I read things to the effect of:
"Oh, you want an inline list? And you're using the inline-list package?! You poor fool! You should be using inllst3 with the xtabl option. Holy shit, you're using hyperref too?? (Spoiler alert: everyone uses fucking hyperref.) You cretin. Well, you should load these three other packages in this specific order. Then paste these esoteric commands:
No, I'm not going to explain what the commands do. Figure it out. Documentation? In The TeXbook. It's not online, you expect Knuth to work for free? Go buy it on Amazon, you freeloader."
Don't get me started on how to get arXiv to accept your biblatex files.
Throw all this into the trash and start anew. There's no other good way forward.
2. The superficial focus on creating a theoretically beautiful and consistent syntax that is designed for computers, not for humans.
Oh, I'm with you on a lot of things here, and I've just typeset my PhD manuscript with it, being a longtime (10 years) user.
LaTeX does not feel like a programming language. It is first and foremost designed as a macro language, which is nice for writing text, but not so much for implementing algorithms.
I quite like what LuaLaTeX is doing: including a Lua interpreter so you can add some simple code to your document, and making writing packages easier.
I admit I haven't used it yet, being afraid of compatibility issues. I know a friend who also uses a package to interoperate with Python.
This doesn't completely solve my main gripes with LaTeX though, including the slow compilation speed (when 95% of it could be cached), the bad error messages, and the brittleness of the programming part (I ended up typesetting my manuscript in 9pt because I forgot a comma after the previous documentclass argument).
And adoption is slow; packages stick to the most compatible baseline.
Maybe the future really is to restrict LaTeX use to minor parts of the text, like Markdown (pandoc?). But there's still some value in having a plugin system that doesn't require additional dependencies.
The only thing I really don’t like about LaTeX is the slow compilation, which I guess could be solved by caching or by breaking up my file.
OTOH, I’m not sure to what extent this slow compilation is actually just a symptom of something bad in the language. I’ve included a bunch of packages in my file, which probably slow down compilation because they have to deal with ancient cruft.
I have written three books in LaTeX and I also hate it. I am thinking of a fourth one, but I don't think I can take another of these nights trying to figure out why the lines in a table don't connect.
I would try Lyx. No need to deal with Latex errors and its equation editor is actually worth using (unlike literally every other equation editor).
TeXmacs is another option. No idea if it's any good - I always ignored it based on the name because it sounds like some kind of Emacs based Latex editor but it's actually nothing to do with Emacs and not based on Latex. Terrible name. Might be good though.
I may be crazy, but I would love a functional language that compiles to LaTeX, BibTeX, and pgfplots--like the Elm language, but for typesetting. At least that way, once a document compiles, we could get some guarantees about its behavior, no matter what data we feed to it. Overflows would be handled however we define them, and the document would still produce something.
The reason I say this is because every document I produce in the TeX family needs to be tweaked slightly depending on the text and data I compile with it, or else it either won't compile or something will get visually screwed up. At least with strong typing I can get useful error messages and better control of the behavior throughout the whole document.
There's a modern way of doing things now that's much simpler. Just use markdown, and then convert to PDF with pandoc.
Oh, right, you want to be able to comment on it in a nice way, and not everyone in the group is super familiar with git, so you can just use a self hosted overleaf for the comments. Oh, that's actually for latex, so just use markdown->latex->pdf, and now everyone can comment easily.
Oh shoot, nevermind, the latex comments aren't the original source, so the comments won't sync right. Easy, just make a custom script to make the comments as part of a git commit to the repo with the markdown, in a format that can re-apply the comments when they're pulled and synced with the latex. Sure, that will work :/.
You want easy version control of image placement too? No problem, markdown can work with css as well, so just add that into your markdown file wherever needed.
Later, I wrote KeenWrite, which is both a GUI and command-line application for converting Markdown documents to PDF. KeenWrite separates the content layer from the presentation layer and uses ConTeXt to do so. I've tried to keep my software backwards compatible with pandoc.
Ha, glad to see I'm not the only one. LaTeX math is great, the rest should be thrown out. Something like Asciidoctor is perfectly fine for creating “serious” documents and supports LaTeX.
The appeal of word isn't the WYSIWYG aspect. It's that... it. just. works. You want a table? There's only one way to make a table, and it works. You want to insert an image? It f*cking works. You want to format running headers a certain way? Believe it or not, it works! No weird incompatibilities between packages. No delving through the depth of 1990-era latex library code to figure out where the weird space comes from. Everything just works! And there's documentation online! Written for humans!
Trust me, writing code is not the issue with latex. I've written C++ code for embedded MCUs. I've taught python to undergrads. The pain does not even compare.
The disadvantage of Word is that it has absolutely terrible output, and quite a lot of use cases are hard or impossible to achieve in it.
Don't let the name fool you. TeXmacs won't make you write (La)TeX. You'll get tables that work, images that work, hyperlinks that work, running headers that work, math that works, kerning that works, and hyphenation and justification that work. (Everything from math onward is a major failure point in Word.)
>The disadvantage of Word is that it has absolutely terrible output
This is a moot point if most people are reluctant to pick up latex and those who do are greeted by often unreadable, esoteric code that produces error messages such as "badness". This is the problem with latex and its derivatives, not their output.
TeXmacs isn't a derivative of LaTeX. It has an unfortunate name. It is a WYSIWYG editor that produces comparable output. It does support some subset of (La)TeX backslash commands as convenience, but requires none of them.
Basically you get all the advantages of Word, all the advantages of (La)TeX, and none of the disadvantages of either.
Do you have anything to contribute beyond rehashing memes? I can have a go at it too: better not miss a bracket on line 2000 of your document, otherwise latex will complain about a runaway argument in a different file and you'll have to sift through the whole thing to find where the error is.
But you at least get an error before you print your whole work! Like, the two are absolutely incomparable.
The sad part of Word is that even professionals mess it up constantly, because there are many ways to reach the same goal, and every addition affects the whole document, or in some fortunate cases only the part up to the next page break. But silent breakage is the worst, as is quite clear from any kind of programming background. Imo, Word just doesn’t scale due to this.
You might like Typst. It is a new language and new tooling that actually learned its lessons, and while its community won’t magically replace all the cruft that came from the LaTeX world overnight, I think it might have already reached critical mass, and it can be the next chapter of scientific papers (among other use cases).
There's a variant of arXiv.org called ar5iv.org which renders papers in HTML5 instead of PDF. It is very handy when reading papers in a browser and it is generated from the same LaTeX files. You just need to replace the "x" with "5" in the URL:
Why would I want a completely broken layout instead of an artfully crafted one? PDF does its job absolutely well; sure, it won’t adapt to your screen, but there is no automatic way of laying out the content so that it makes sense both semantically and from a design perspective. That figure 10 pages down is completely useless, I want to see it when it is referenced in the text. I really have a hard time reading anything that requires more formatting than a novel in anything other than properly rendered PDF.
LaTeX doesn't require you to learn how to multiply two lengths! The beauty of LaTeX is that it makes the simple things easy and hard things possible. If you don't like programming in TeX, great, just use a package that does what you want or use the defaults of the language. People who really care about the fine details of how the document looks can count on the TeX engine to do that (after reading Knuth's book). Everybody else can still take advantage of the great quality of the results.
Why would they owe you such detailed explanations? You're asking for a full-on incident report. These take days to write and there's no reason for the public at large to need it.
> there's no reason for the public at large to need it
As a member of said public, I would be curious to know. There's no need for taxpayer-funded agencies to operate in a cloak of darkness.
Most everything done by government should by default be open to the public, with an exceedingly high bar that must be met to be otherwise. Otherwise, you run into nonsensical things like how some details around the assassination of a president 60 years ago are still classified on "national security" grounds.
which of these is 'operating in a cloak of darkness':
- NASA informs the public immediately, and then makes the details available later after they've had time to compile the news and information into a format useful for the public
- NASA waits to inform the public until said report is finished
or perhaps you're after option c:
- NASA's network drives are open to the www in read-only mode, because, you know, 'open by default' entails real-time information (even though you don't actually care 99.9999% of the time; yet someone should deliver this functionality without it costing the taxpayer extra).
NASA routinely makes a LOT of data open to the public. Like, you can get very detailed JWST data directly from NASA. Probably far more detailed than you'd ever care to look at, because NASA does care about exactly your concern.
Actually, many agencies publish very detailed data if you care to look.
"This is light on info but they're making a report later." would be a non-darkness answer.
But do you have reason to believe they're working on a detailed public report?
Because if they're not, then you missed option "NASA informs the public immediately, but never makes the details available" which would be unfortunate.
Also they probably already answered a lot of these questions internally during the last week, so it wouldn't hurt to put some of that information out.
I'm not here demanding an immediate report, but it is a publicly-funded agency with a goal of furthering the world's scientific understanding... and a detailed public writeup is not exactly a huge lift compared to all the other things they accomplish.
I'm also the sort of person who thinks that all code written with public money should be open source.
It's always a good thing for technical information about incidents like this to be made accessible to the public. NASA is a publicly funded organization and as such they do have a responsibility towards us.
Of course there are operational details that we don't need to be made aware of, but for an incident as big as this there's no reason we shouldn't at least know how it happened and what could be changed to prevent it from happening again.
Yeah, the pen dropping is a bit over the top, but as of now the claim is that this situation is planned for and will resolve itself. A report now won't tell us anything of significance. It will get interesting if the realignment fails.
> Oversight and accountability to the citizenry is a foundational principle in a functioning democracy.
Is micromanaging what you're claiming is a strawman in my position? I'm not claiming you are saying the military doesn't need oversight, I'm probing with a concrete example where you draw the line on what constitutes a reasonable threshold of accountability. Note my statements were framed as questions to get clarification; that's not a strawman.
Your micromanaging claim is however another strawman statement. I guess I could use clarification on your point. Your equating this to micromanaging is misapplied, IMO. "Micromanaging" would be a direct democratic vote on most or all issues, IMO. That's not what's being asked for here. What seems to be asked for is transparency. Access to information is not the same as having authority to make all decisions. But it is paramount in a government when people elect representatives who make decisions (or appoint those who do). The big question I'm asking is: where is the reasonable 'trust, no need to verify' stance when it comes to public/govt work? Can we just trust tens of millions of dollars on construction projects, but not when it gets to hundreds of millions? What about aerospace? Do we say it's fine to go ahead with limited accountability when it comes to billion-dollar robotic missions, but not when there's a safety-critical application?
>A report now won't tell us anything of significance.
What makes you so confident? A report can tell us if processes were followed appropriately and, if not, if anyone was held accountable for not following them. I'd say that is pretty significant if you care about governmental fraud, waste, and abuse.
I guess you and I are being downvoted because people on HN can’t tolerate engineers being questioned. Hey guys, everyone makes mistakes and it’s an important part of scientific advancement to understand and share that knowledge.
It takes time and effort to prepare such a document for public release. Government agencies produce all kinds of reports which are of minimal interest to the public. Making the documents available on demand via FOIA is a reasonable way to ensure that time and money isn't wasted.
Normally this makes sense, because you're asking why money was wasted. But, in this case if it's permanently bricked you will actually save money, because if Voyager 2 is bricked the team working on it is now redundant. It's not like they had an incentive to be incompetent and waste money - very much the opposite.
Your calculation only makes sense if you put zero value on operating a probe that far out in the galaxy - in which case you should be asking why there was a team working on it in the first place.
But that value is not zero, and replacing it costs quite a bit - both money and time. Asking how and why this happened is a valid inquiry.
Under the assumption that it is bricked, the value is indeed now zero.
I think where we differ is that you are assuming it will be replaced, but I don't think it will be. It's way past its design life so it was going to expire at some point.
For science, I would want to do an enquiry anyway - I'm just commenting on the financial/accountability aspect.
As a spacecraft navigation engineer, I guarantee you said post-mortem is already being written, and is probably going to be posted "publicly" anyway on some deep corner of the NASA website.
This is the right call; let the people at NASA focus on what is really important, and not waste time on PR.
It's pretty obvious that the people who managed to extend the lifetime of Voyager are very smart, based on all the tricks they had to do.
They are remotely configuring an old-tech device that is billions of kilometers away, with insane lag, and uncertainty that the underlying hardware is even responding properly.
Absolutely anything could have gone wrong at this stage.
They'll investigate internally what happened anyway, in order to hopefully find a solution.
There is no need to spend resources to make the material public, if the goal is mostly to satisfy curiosity (though it's interesting).
There's a difference between a post-mortem and a public post-mortem. NASA is pioneering technology that shouldn't all be public. If you really think the same post-mortem would be published publicly and internally, you should not be commenting on Hacker News, because it's forbidden for anyone under 13.
It definitely got written up internally. Making it public is just a matter of taking that, sticking it into a pdf, and hitting the publish button. A few hours' worth of additional work at most.
Because I’m an annoyingly precocious child of thirteen and this is how you capture my interest and enable my future glittering career in deep space telemetry engineering.
The context is a discussion of what explanations NASA owes in a brief public statement. Saying he'd like to know does not clearly denote that he is changing the parameters of the conversation to talk about something else.
Ah, so you're equating mild dissatisfaction (and truly, it is incredibly mild, that's some beige entitlement alright) with demand and a sense of entitlement. I see what went wrong now! Thank you.
As the ultimate progenitor of this tangent I hereby validate thefurdrake’s interpretation. My remarks were intentionally worded to form an inquiring statement of observations and preferences, not a demand for action on the basis of obligation, and the attempt to derive an unstated and unintended sentiment of vituperative entitlement is, indeed, gross.
The unsubtle misparaphrasing of Mark Twain was included as a comedic flourish to provide a light-hearted framing of the comments, but upon review of the subsequent debate, I concede it’s possible that for some, any allusion to statecraft stimulates the adversarial lobes.
Jesus, I bet you're also one of those people who are fine with mass surveillance because it's ok since you have nothing to hide. It's people like you who set the bar so low that we can't have nice things. Sheesh.
I think we're stretching the definition of "fad" a lot here. An editor that has been one of the most popular for 10 years (not yet there, but it will be for sure) can't be considered a "fad".
>Similar to those who were all-in on Eclipse, Atom, Sublime?
But all of these (except Sublime), including VSCode, are basically the same type of IDE. Yes, implementations change, but their share of the "market" remains about the same.
The two URLs I provided don't convince you that this isn't a fad? Of all the SWEs I know, every single one who previously used Eclipse, Atom, or Sublime is now using VSCode. Some of my Emacs & Vim friends have switched, too. I can't imagine what would change this trend in the next 5-10 years. Perhaps a new editor with an integrated LLM coding assistant that's FAR ahead of all competitors? Except VSCode is currently leading there, too.
VSCode is here to stay. After eight years it is by far the most popular programming editor and only becoming more so. The sun was already setting on Sublime and Atom by this long after their initial release.
Indeed, but VSCode is "sticky" in the way those other editors are not. It now has a critical mass of developer interest such that language and framework developers now put out tooling for it first (if it's not the only editor with official support). Other editors will have to make do with third-party tooling from a dwindling pool of contributors, falling farther and farther behind the state of the art.
I agree that Sublime is a fad, and maybe Atom. Sublime’s biggest mistake imho was their proprietary model combined with a plugin system; it’d probably have been better to either make it free or make it very easy to use. I don’t see Eclipse or VSCode as fads though. To my knowledge Eclipse is the only way to develop Java code on Linux platforms without paying for richer tools (let’s face it, Emacs and Vim don’t have much support for Java). Also, Microsoft seems very invested in VSCode.
VSCode is not a fad - it has huge support from Microsoft, is the most popular code editor by far, and is a very good tool that's beginner friendly and allows for customisation. It is already almost a decade old. Visual Studio has been around since 1997 so there's a good chance VSCode will be around for a good few years too.
Eclipse and Sublime do still exist and receive improvements. And VS Code is at the moment far too big to fade away in the next years, maybe even decade.
I used Emacs for all development work from about 1990 to 2018. About 28 years of development work in a dozen different programming languages. I also used Emacs for reading Usenet News and my email. I also used it for calendaring, planning, tracking work and a few other things.
Gnus made Emacs a great Usenet news reader, but I could never get it to fit with my email regimen. And eventually Usenet died too. For about a decade I used a mail client that someone else originally wrote, but which I modified over the years to do email like I wanted to. But this became really tedious to maintain alone, so I gave up.
Along the way I tried various IDEs like the tools from IntelliJ and Eclipse. None of them took. In fact, I figured it had to be me, so I promised myself to spend a few months every 2-3 years or so trying to get used to IDEs. (I still can't stand the IntelliJ and Eclipse tools).
I think what made me switch to VSC was that I eventually grew tired of the constant annoyance of having to fix my setup so it would be reasonably useful for Go development. Emacs support for the Go language server was slow and shaky, and eventually I got so tired of having to make a patchwork of Emacs packages work that I gave Visual Studio Code a second chance.
And the second time around it stuck. I'm not entirely sure why it stuck, but it did. It's been 5 years and I'm still using it. And I'm still not entirely sure why. I think it is mostly that I'm a programmer - not an editor enthusiast. Emacs was a pain in the neck to keep running when I switched. I liked Emacs, but VSC has better support for just about any language you care to program in.
I ditched Emacs because I'm interested in writing code. I'm not interested in spending a day figuring out how to keep barely working Go support running so I can get work done. It's a tool. And when maintaining the tool starts eating into my productivity, it isn't worth the effort.
The problem with Emacs is that it doesn't have a sufficiently large community. Which in turn means that if you are interested in creating tooling, your efforts will have a much bigger payoff if you choose something millions of other developers use. This is a vicious cycle.
Last I checked, VSC had about 14 million users, out of a global population of 25-ish million developers. That's slightly over half the global developer population. That's a pretty big market. I think a fair guess (given the Stack Overflow surveys) is that less than a million people use Emacs as their primary development environment. Which I find surprisingly high, but it should give people hope.
Is VSC a fad? Who gives a crap? What matters is that, for me, and for a lot of other people, it is a better tool than Emacs. Because it is. If VSC is replaced by something else that works even better, and VSC disappears: who cares. Better tools are a good thing. Getting overly attached to tools that offer less is just weird and unproductive.
Maybe VSC will worsen over time. In which case I'll move to whatever is a better alternative when the cost of moving feels lower than the cost of continuing to use it. Just like what happened when I stopped using Emacs.
I'm assuming that "you'll be back" means I will start using Emacs again for programming.
Well, for that to happen Emacs would have to provide a better programming experience for the environments I care about than VSC does. Right now it doesn't. If or when it does, I might consider it again.
I did the exercise of installing Emacs 29.1 yesterday and trying to set up a Go programming environment from scratch. After about one hour of head scratching I had something that barely worked and I wasn't entirely sure how to get it all the way there. That's not a good user experience. It is going to be an even worse experience for someone who hasn't been an Emacs user for a few decades.
It would make me happy if Emacs was a better alternative. It just isn't. If someone has the time and commitment to fix that I'll certainly consider it, but we know that at best, that's going to be years away.
Configuring vanilla Emacs from scratch is certainly a lot of work. There are various distributions (Doom Emacs, Spacemacs) that, while not perfect, have a pretty decent 'new user' experience without too much work. I don't use Go too much, but I've done some projects in it. I just uncommented (go +lsp) in Doom's init.el, ran doom sync, restarted Emacs, and everything just worked; I didn't feel the need to configure anything further.
There will always be an editor which is an order of magnitude more popular than emacs or vim, because it is an order of magnitude better. It doesn’t much matter which one it is. People will switch to it, and be productive. And when something better comes along, they will move to that.
I strongly disagree that any editor will be an order of magnitude better than Emacs or Vim. Some new editors will be better in certain ways, or add cool new features before Emacs/Vim do, but there's nothing completely better than them. I doubt there ever will be.
Emacs requires fiddling. Once you get past a certain age, you no longer want to fiddle with your tools; they should just work. A tool that just works may well be therefore orders of magnitude better than Emacs.
I think that downplays how much fiddling other tools take. I've never used another similarly powerful editor that didn't require similar amounts of configuration to be useful. And if getting some random plugin working means you have to tweak its code, it's almost certainly going to be faster and easier in Emacs than in the other. With Emacs, you can generally just replace a function after it's been loaded. In other things, you may be forking a repo, editing/compiling it, and figuring out how to get the editor to load your version of it.
> In other things, you may be forking a repo, editing/compiling it, and figuring out how to get the editor to load your version of it.
This is more of a theoretical problem in VSCode. If you just want to start programming using a mainstream language or framework, in VSCode it's literally a matter of saying 'code .', downloading whatever extensions it recommends for the language it autodetected, and starting work with autocomplete, debugging support, and all that ready to go.
Read @bborud's comments in this thread. He had been using Emacs for even longer than I, and he switched to VSCode and is not looking back. Because Emacs requires fiddling to keep running and VSCode does not. And that makes VSCode better.
The experience with distributions like Spacemacs, Doom Emacs, or LazyVim really isn't that much worse than VSCode: you select the language you want to code in, it installs a reasonable set of plugins, and you get to work.
To connect to a remote machine with plain SSH, you need an SSH server running on it and the machine to be reachable (through NAT and firewalls). With tunnels, it's inverted: the remote machine only needs to be able to connect out to the central server, which is typically much easier; then you can connect to it through that server.
That's on screen reader developers. And I don't know when that comment you seem to remember dates back to, but I have had several blind students in my math lectures over the past decade. They seemed to do fine.