
"Five years ago" was 2020. What you're asking is, "Have you, at some point in 2025, needed an email from 2019, 2018, etc. (i.e. from some time before that)?"

The answer: Yes, of course. (And I don't understand why anyone other than, say, a university undergrad or someone younger should find that answer surprising.)


I'm near 50 and I can't say I need any email from 3 years ago; that's conservative, since I'm probably OK with not needing emails from a year ago. Just like I don't need chat history either. What are you storing in your emails that you need to keep them beyond this? Genuinely interested. I genuinely have no clue why you'd keep emails for any length of time.

I'm not "storing" anything in my emails other than email. <https://www.fastmail.com/blog/email-is-your-electronic-memor...>*

It's hard to imagine that at 50 years old a person posting on HN has never stumbled upon a mailing list archive before. Assuming you have and you don't have any issue understanding the value in that, what's the difficulty in understanding the value of the same or similar type of correspondence that happens in private?

You know all those letters of correspondence we have between two individuals in, say, Revolutionary War-era America or Industrial Age Britain? Those all come from collections where a museum has acquired one or the other's "papers": collections that necessarily had to already exist pre-accession—and did exist because those folks didn't just throw those things away as a matter of course. It doesn't exactly take any effort to do the equivalent today. (In fact, the opposite is true—it actually takes effort to get rid of them.) That was the entire rationale and value proposition of Gmail and its 1GB-and-increasing quota at the time of its launch.

* This is not an endorsement of Fastmail as a company or its CEO, the author of the blog post (which was, one should note, authored in 2018).


Stop fucking editorializing the fucking submission titles.

Two oversights in this article:

- Failure to mention Netscape Enterprise Server (NodeJS is not responsible for expanding "the language's scope[…] far beyond the browser"—it was on the server from almost the very beginning; the author cites Brendan's 2011 blog post[1] which namechecks Rhino, but then leaves this out)

- Failure to mention JS running on the James Webb Space Telescope (Brendan's post also namechecks Nombas, but doesn't go into much detail about it; Brent Noorda covered this in an update to the Nombas section of his site[2] in 2022)

1. <https://brendaneich.com/2011/06/>

2. <https://brent-noorda.com/nombas/us/index.htm>


I think the first JavaScript book I bought, circa 1998(?), briefly mentioned server-side JavaScript, and then until Node came out I never saw it again. It's fair to say Node took server-side JavaScript from an obscure curiosity to the behemoth it is now.

JS on the server seemed to me to be a solution looking for a problem. We already had plenty of arguably adequate server side languages. JS was the weird language you were forced to code in for the browser so why on earth would you want to use it elsewhere? Well I suppose the answer was "because there's a zillion people who know how to use it". But that wasn't true until it was.

No, the answer was that Node.js would run circles around Apache, because Apache was built before async was discovered[0]...

0: https://www.youtube.com/watch?v=bzkRVzciAZg


It’s good to know the differences between libuv based event loops, threads, and processes. I think your comment simplifies that too much.

You might enjoy the linked video.

> why on earth would you want to use it elsewhere

Mostly because I don't want to reimplement my data structures (and data-structure manipulation functions) just because they're being operated on by another machine. I want to represent those values and relationships once and be able to manipulate state on both the client and the server with the same operations.

My projects end up having one `src` directory with `client`, `server`, `shared` subdirectories.
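A minimal sketch of what that layout buys you (all names here are hypothetical, not from any real project): one module under `shared/` defines the data shape and its operations, and both the client and server code import it unchanged.

```typescript
// shared/cart.ts (hypothetical): one definition of the data shape and its
// operations, imported verbatim by both client/ and server/ code.
interface CartItem {
  sku: string;
  quantity: number;
  unitPrice: number; // in cents, to avoid floating-point money
}

function cartTotal(items: CartItem[]): number {
  return items.reduce((sum, item) => sum + item.quantity * item.unitPrice, 0);
}

// Both client/checkout.ts and server/orders.ts would simply do:
//   import { CartItem, cartTotal } from "../shared/cart";
const items: CartItem[] = [
  { sku: "A1", quantity: 2, unitPrice: 500 },
  { sku: "B2", quantity: 1, unitPrice: 1250 },
];
console.log(cartTotal(items)); // 2250
```

The same validation and totaling logic runs in the browser (for instant feedback) and on the server (as the source of truth), with no hand-maintained duplicate in a second language.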


I'm sorry, but I implemented Rhino way back in 2005 or earlier on a server.

What?

> The spirit of the GPLv2 was about contributing software improvements back to the community.

It may be the case that when all is settled, the courts determine that the letter of the license means others' obligations are limited to what the judge in the Vizio case wrote. And Linus can speak authoritatively about his intent when he agreed to license the kernel under the GPL.

But I think it's pretty clear from the license text itself (including and especially the very wordy Preamble), not to mention the motivating circumstances that led to the establishment of GNU and the FSF, the type of advocacy they engaged in leading up to the drafting and publication of the license, and everything since, that the spirit of the GPL is very much in line with exactly the sort of activism the SFC has undertaken against vendors restricting the owners of their devices from using them how they want.


Since a company building it themselves hasn't gotten it in the form of a binary from someone else that they're just passing along to you, and since their use is commercial, they don't satisfy either condition of GPLv2 3(c); but they'd need to satisfy both in order to be able to exercise that option.

That's not what they're saying.

On the shelves are three insulin pumps: one with a 5-year warranty, one at a bargain barrel price that comes with no warranty, and one accompanied by a written offer allowing you to obtain the source code (and, subject to the terms of the GPL, prepare your own derivative works) at no additional charge any time within the next three years.

Weighing your options, you go with pump #3. You write to the company asking for the GPL source. They say "nix". They're in breach.


The GPLv2 under which Linux is licensed does not prohibit that insulin pump from bricking itself if you tried to install "your own derivative work" that wasn't signed by the manufacturer.

This is not only possible but also prudent for a device which can also kill you.


Possibly true, but irrelevant to the post to which you are replying.

The argument is over providing you the source code.


> "type extension" (i.e. inheritance) is used to create extensible message hierarchies, which then are polymorphically handled by procedures accepting a VAR parameter of the most general message type. In this handler, the messages are dispatched by the IS operator or the WITH or CASE statement. This has similar effects as e.g achievable by sum types, and interestingly is closer to e.g. Alan Kay's view of OO

That seems off. Kay's comments have always struck me as in line with Brad Cox's views. Cox's book uses this example a lot as a poor/insufficient substitute for dynamic dispatch.


> Kay's comments have always struck me as in line with Brad Cox's views

It is unlikely that the two would have agreed on this. Kay's view is actually based on messages, as implemented in Erlang, for example, and to some extent in Smalltalk-72. Cox, on the other hand, implemented the object and dispatch model of Smalltalk-80, which Ingalls invented and published in 1978, almost exactly, even with the same method-lookup caching.

> Cox's book uses this example a lot as a poor/insufficient substitute for dynamic dispatch.

The Wirth and Smalltalk approaches are both fundamentally "late-bound search" mechanisms, differing mainly in whether the search state is optimizable by a central engine (VM) or fixed in the user's explicit control flow (handler). This is a classic example of dualism in computer science, specifically the Expression Problem (or the data/operation duality). You are simply traversing a 2D matrix of (Types × Operations), just choosing a different axis as primary. The two are mathematically isomorphic: both perform a directed graph traversal to find the code that matches (CurrentType, CurrentMessage). Only the ergonomics and the possibility for caching differ. Smalltalk hides the dispatch loop in the VM, which can do caching, so the dispatch effort goes from O(N) to O(1) in time; Wirth exposes the dispatch loop in the WITH statements, where the dispatch effort remains O(N).
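The two axes of that matrix can be sketched side by side (a toy illustration, not Oberon or Smalltalk code; TypeScript stands in for both):

```typescript
// Axis 1 (Wirth/Oberon style): the operation owns the dispatch loop,
// switching on a type tag -- the WITH/CASE approach.
type Shape = { kind: "circle"; r: number } | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle": return Math.PI * s.r * s.r;
    case "square": return s.side * s.side;
  }
}

// Axis 2 (Smalltalk style): each type owns its operations; the runtime's
// method lookup performs the dispatch, invisibly to the caller.
abstract class ShapeObj { abstract area(): number; }
class Circle extends ShapeObj {
  constructor(private r: number) { super(); }
  area() { return Math.PI * this.r * this.r; }
}
class Square extends ShapeObj {
  constructor(private side: number) { super(); }
  area() { return this.side * this.side; }
}

// Same (Types × Operations) matrix, traversed along different axes:
console.log(area({ kind: "square", side: 3 })); // 9
console.log(new Square(3).area());              // 9
```

Either way the same cell of the matrix is reached; what differs is whether the search over types sits in user code (and can be seen and edited) or in the runtime (and can be cached).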


Perhaps I'm misreading a different (opposite/orthogonal) intent from what you meant when you wrote the quoted passage in your initial comment. Some form of dynamism is required, else it fails the "extreme late binding" criterion that Kay insists is fundamental to his view of OO.

I'm not familiar enough with Smalltalk-72 or what Ingalls did that makes it so different from the Smalltalk-80 that Cox read about in Byte.

> differing mainly in whether the search state is optimizable by a central engine (VM) or fixed in the user's explicit control flow (Handler). This is a classic example of dualism in computer science, specifically the Expression Problem (or the Data/Operation duality). You are simply traversing a 2D matrix of (Types × Operations), just choosing a different axis as primary.

If you are doing whole-system development and have control over the entire thing (a "closed world" system), then it is that simple. But whether it's an open world or a closed world changes things.

Cox is fond of a simple example that he repeats in his book (fairly early on; it's on something like page 9) to demonstrate that dynamic dispatch is fundamentally necessary, because it means you don't have to have panoptic control over, or involvement with, the objects in a system (with all types known at compile time). If you're programming every operation with switch statements that select code paths based on objects' type tags, then not only do you have to visit all N routines where those operations are implemented and update them when introducing a single type, but it also requires a priori knowledge of all types to be baked into the system at the time of release. OO on a live system, by contrast, means that you can introduce new types even after the initial system has shipped to the user.
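Cox's point, sketched with hypothetical names: with dynamic dispatch, a type introduced after the system shipped still works with every already-written operation, and no switch statement anywhere needs revisiting.

```typescript
// Shipped system: operations call methods; there are no type switches.
interface Message { describe(): string; }

class TextMessage implements Message {
  constructor(private body: string) {}
  describe() { return `text: ${this.body}`; }
}

// An operation written before ImageMessage existed -- never modified again.
function logMessage(m: Message): string {
  return `[log] ${m.describe()}`;
}

// Later, a new type is added to the live system. Nothing above changes.
class ImageMessage implements Message {
  constructor(private url: string) {}
  describe() { return `image at ${this.url}`; }
}

console.log(logMessage(new TextMessage("hi")));     // [log] text: hi
console.log(logMessage(new ImageMessage("a.png"))); // [log] image at a.png
```

With the switch-on-tag style, introducing `ImageMessage` would instead mean hunting down every operation's switch and extending it, which is exactly the a priori knowledge Cox objects to.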


Sure. Late binding in Smalltalk-80 means that a bytecode method is selected via a hash table per class and the address of the internalized selector string (atom) as a key. In Oberon, procedures are natively compiled, but each module is dynamically loaded; a module can implement a handler for a message and be separately compiled and loaded by name, so again late binding.
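Roughly, that Smalltalk-80 style lookup amounts to the following (a toy model, not actual VM code): a per-class table keyed by the interned selector, searched up the superclass chain.

```typescript
// Toy model of Smalltalk-80 style method lookup.
type Method = (self: object, ...args: unknown[]) => unknown;

interface Cls {
  name: string;
  superclass: Cls | null;
  methods: Map<string, Method>; // interned selector -> compiled method
}

function lookup(cls: Cls | null, selector: string): Method {
  // Walk the superclass chain; a real VM caches the result per call site.
  for (let c = cls; c; c = c.superclass) {
    const m = c.methods.get(selector);
    if (m) return m;
  }
  throw new Error(`doesNotUnderstand: ${selector}`);
}

const objectCls: Cls = {
  name: "Object", superclass: null,
  methods: new Map([["printString", () => "an Object"]]),
};
const pointCls: Cls = {
  name: "Point", superclass: objectCls,
  methods: new Map(),
};

// An inherited method is found via the superclass chain:
console.log(lookup(pointCls, "printString")({})); // "an Object"
```

The caching Ingalls described is what turns the O(N) chain walk into the effectively O(1) dispatch mentioned above.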

If you're interested in the difference between Smalltalk versions, I recommend Ingalls's most recent paper: https://dl.acm.org/doi/10.1145/3386335. In contrast to Smalltalk >= 76, ST-72 had no bytecode, but sent tokens (synchronously) to object instances for parsing (which was called "message passing" by its authors).

> If you are doing whole-system development and have control over the entire thing, then it is that simple

The dispatch mechanism and the described duality is the same, whether whole-system or not.

> Kay's view of OO is a matter of "open world" versus "closed world"

Smalltalk was always a "closed world" (you are always in the same image, but code can be compiled on the fly at runtime), and all calls were synchronous. Since the Oberon compiler and system treats each module as a dynamic loadable entity and supports loading by name, it actually supports the "open world" approach. Interestingly, Kay's view is likely best represented in Erlang, where there are true messages sent asynchronously.

> If you're programming every operation with switch statements that select code paths based on objects' type tags

As mentioned, Oberon traverses the "2D matrix" from the other side. Each module may or may not handle a message in a WITH (i.e. switch by type) statement, but modules per se are dynamic. So the "a priori knowledge" only applies to the message type.


That's been possible with Moddable/Kinoma's XS engine, which is standards-compliant with ES6 and beyond.

<https://www.moddable.com/faq#comparison>

If you take a look at the MicroQuickJS README, you can see that it's not a full implementation of even ES5, and it's incompatible in several ways.

Just being able to run JS also isn't going to automatically give you any bindings for the environment.


It wouldn't fix the issue of semantics, but "language skins"[1][2] are an underexplored area of programming language development.

People go through all this effort to separate parsing and lexing, but never exploit the ability to just plug in a different lexer that allows for e.g. "{" and "}" tokens instead of "then" and "end", or vice versa.

1. <https://hn.algolia.com/?type=comment&prefix=true&query=cxr%2...>

2. <https://old.reddit.com/r/Oberon/comments/1pcmw8n/is_this_sac...>
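The lexer-swapping idea above can be shown with a toy sketch (entirely hypothetical; no real language works exactly this way): two surface "skins" normalize to the same token kinds, so a single parser never knows which one was typed.

```typescript
// Toy lexer skin: `{`/`}` and `then`/`end` surface syntaxes both
// produce identical token streams for the same downstream parser.
type Token = "BLOCK_OPEN" | "BLOCK_CLOSE" | { word: string };

const skinC = new Map<string, Token>([["{", "BLOCK_OPEN"], ["}", "BLOCK_CLOSE"]]);
const skinWirth = new Map<string, Token>([["then", "BLOCK_OPEN"], ["end", "BLOCK_CLOSE"]]);

function lex(src: string, skin: Map<string, Token>): Token[] {
  return src
    .split(/\s+/)
    .filter(Boolean)
    .map(t => skin.get(t) ?? { word: t });
}

// The same program in two skins yields one token stream:
const a = lex("if x { y }", skinC);
const b = lex("if x then y end", skinWirth);
console.log(JSON.stringify(a) === JSON.stringify(b)); // true
```

Pretty-printing back out through a chosen skin is the inverse map, which is what would let each developer view the same stored program in their preferred surface syntax.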


Not "never exploit"; Reason and BuckleScript are examples of different "language skins" for OCaml.

The problem with "skins" is that they create variety where people strive for uniformity to lower the cognitive load. OTOH transparent switching between skins (about as easy as changing the tab sizes) would alleviate that.


> OTOH transparent switching between skins (about as easy as changing the tab sizes) would alleviate that.

That's one of my hopes for the future of the industry: people will be able to just choose the code style and even syntax family (which you're calling a skin) they prefer when editing code, and it will be saved in whatever is the "default" for the language. Or go even further, like the Unison language: store the AST directly, which allows cool stuff like de-duplicating definitions and content-addressable code (an idea I first encountered in Joe Armstrong's amazing talk, "The mess we're in" [1]).

Rust, in particular, would perhaps benefit a lot given how a lot of people hate its syntax... but also Lua for people who just can't stand the Pascal-like syntax and really need their C-like braces to be happy.

[1] https://www.youtube.com/watch?v=lKXe3HUG2l4


Also consider translation to non-English languages, including different writing and syntax systems (e.g. Arabic or Japanese).

Some languages have tools for more or less straightforward skinning.

Clojure to Tamil: https://github.com/echeran/clj-thamil/blob/master/src/clj_th...

C++ to distorted Russian: https://sizeof.livejournal.com/23169.html


> transparent switching between skins (about as easy as changing the tab sizes)

One of my pet "not today but some day" project ideas. In my case, I wanted to give Python/GDScript syntax to any and all of the curly-brace languages (a potential boon to all users of non-Anglo keyboard layouts), one by one, via a VSCode extension that implements a virtual filesystem over the real one and translates between the two syntaxes during the load/edit/save cycle. The extension would then keep the usual LSP running live in the background against the underlying real source files and resurface its output, with line-number matching etc.

Anyone, please steal this idea and run with it, I'm too short on time for it for now =)


I want to do the opposite: give curly braces to all the indentation-based languages. Explicit is better than implicit, and auto-format is better than guessing why some block of code was executed outside my if statement.

Indentation is just as explicit as braces.

I wanted to give Python/Gdscript syntax to any & all the curly languages (a potential boon to all users of non-Anglo keyboard layouts)

Neo makes it really easy to type those:

https://neo-layout.org


People fight about tab sizes all the time though.

That's precisely the point of using tabs for indentation: you don't need to fight over it, because it's a local display preference that does not affect the source code at all, so everyone can just configure whatever they prefer locally without affecting other people.

The idea of "skins" is apparently to push that even further by abstracting the concrete syntax.


> you don't need to fight over it, because it's a local display preference

This has limits.

Files produced assuming tab=2 and others assuming tab=8 can end up rendering quite differently with regard to nesting.

(pain is still on the menu)


I don't see why? Your window width will presumably be tailored to accommodate common scenarios in your preferred tab width.

More than that, in the general case for common C-like languages, things should almost never be nested more than a few levels deep. That's usually a sign of poorly designed and difficult-to-maintain code.

Lisps are a notable exception here, but due to limitations (arguably poor design) in how the most common editors handle lines that contain a mix of tabs and spaces, you're pretty much forced to use only spaces when writing in that family of languages. If anything, that language family serves as a case in point: code written with an indentation width that isn't to one's preference becomes much more tedious to adapt, due to alternating levels of alignment and indentation all being encoded as spaces (i.e. a loss of information that automated tools could otherwise use).


I find it tends to be a structural thing. Tabs for indenting are fine; hell, I prefer tabs for indenting. But use tabs for spacing and columnar layout, and the format tends to break on tab-width changes. Honestly not a huge deal, but as such I tend to avoid tabs for layout work.

I love the idea of "tabs for indents, spaces for alignment", but I don't even bring it up anymore because it (the combination of the two) sets so many people off. I also like the idea of elastic tabs, but that requires editor buy-in.

All that being said, I'm very much an "as long as everyone working on the code does it the same, I'll be fine" sort of person. We use spaces for everything, with defined indent levels, where I am, and it works just fine.


I completely agree, hence my point about Lisps. In terms of the abstraction a tab communicates a layer of indentation, with blocks at different indentation levels being explicitly decoupled in terms of alignment.

Unfortunately the discussion tends to be somewhat complicated by the occasional (usually automated) code-formatting convention that (IMO mistakenly) attempts to change the level of indentation in scenarios where you might reasonably want to align an element with the preceding line. For example, IDEs for C-like languages will add an additional tab when splitting function arguments across multiple lines. Fortunately such cases are easily resolved, but their mere existence lends itself to objections.


Do you mean that files produced with "wide" tabs might have hard newlines embedded more readily in longer lines? Or that maybe people writing with "narrow" tabs might be comfortable writing 6-deep if/else trees that wrap when somebody with their tabs set to wider opens the same file?

One day Brython (Python with braces, allowing copy-pasted code to autoindent) will be well supported by LSPs, and world peace will ensue.

  SyntaxError: not a chance

What editor are you using that does not have a way to paste code with proper indentation?

VB.Net is mostly a reskin of C# with a few extras to smooth the transition from VB.

Lowering the barrier to creating your own syntax seems like a bad thing, though. Cf. Perl.

Eschewing lockfile-based package-management schemes actually takes less work.

<https://news.ycombinator.com/item?id=46008744>

