I feel like the toy examples in the article might only make sense to people who already know why they want this feature. The examples don't make it at all obvious to me why I would want this feature. The new versions in the examples have more code, more indirection, and more magic (in the sense that they rely directly on a specific property of the runtime). Could anyone here help me understand a more robust example where `try...finally` just won't work (or at least would be patently less readable/etc)?
The verbosity comes because they are demonstrating both how to write the library code to support the feature, and how to consume it. But in reality a lot of the time someone else will have written the library code for you.
In this (still contrived) example, we end up having to do nested try/finally blocks.
Before:
let totalSize = 0;
let fileListHandle;
try {
  fileListHandle = await open("file-list.txt", "r");
  for await (const line of fileListHandle.readLines()) {
    let lineFileHandle;
    try {
      lineFileHandle = await open(line, "r");
      totalSize += (await lineFileHandle.read()).bytesRead;
    } finally {
      await lineFileHandle?.close();
    }
  }
} finally {
  await fileListHandle?.close();
}
console.log(totalSize);
After:
let totalSize = 0;
{
  await using fileListHandle = getFileHandle("file-list.txt", "r");
  for await (const line of fileListHandle.readLines()) {
    await using lineFileHandle = getFileHandle(line, "r");
    totalSize += (await lineFileHandle.read()).bytesRead;
  }
}
console.log(totalSize);
Thank you for this example; it wasn't clear to me reading the article, but this is the main problem I was hoping to see solved. Will make writing tests much smoother.
As a consumer of the library, how will I know which calls need to have the using keyword? Is it a case of having to rely on the documentation stating so, or is there some other tell that something contains a Symbol.dispose function?
Using TypeScript, you can imagine a little squiggle in VS Code under an `open` call that isn't declared with `using` — TypeScript can know that the object has a disposable symbol.
I wonder now if React / Vue / Svelte / SolidJs (and all the others) could use this to cleanup things as a finer grained way of handling unmounting of nodes, for example
Not likely. This is essentially syntactic sugar for `try ... finally`, the resource disposal is scope based. Node unmounting is linked to a different (longer) lifetime system than plain javascript scope.
It’s possible that there are UI systems where this would work but not the ones you listed above.
Yeah, the scope-based disposal jumped out at me from the examples. I suppose that means that anywhere you still have a reference to the resource alive, referenced by an object in the main node loop, that thing would never be automatically destroyed, right? You'd have to manually release it anyway. On the other hand, does this trigger if anything returned from that function goes out of scope? Or not until everything returned from it does?
I think you’re confusing scope and reachability. Maintaining a reference to an object has nothing to do with whether or when it’s disposed in this TC39 language enhancement. Such a system _does_ exist in object finalizers, but it’s hard to use correctly, especially in a language where it’s very easy to inadvertently retain references via closures. Resource disposal of this type needs to be much more predictable and can’t be left to the whims of the runtime. The docs on finalizers and WeakRefs are full of warnings not to expect them to be predictable or reliable.
With this new using syntax, resources are disposed of when the object they are tied to goes out of _lexical scope_, which doesn’t need to worry about the runtime or object lifetimes at all. This example from the TC39 proposal makes it pretty clear:
function* g() {
  using handle = acquireFileHandle(); // block-scoped critical resource
} // cleanup

{
  using obj = g(); // block-scoped declaration
  const r = obj.next();
} // calls finally blocks in `g`
Maybe this is a dumb question, but for something to be reachable - i.e. not marked / swept by a garbage collector - doesn't it need a reference in the active scope? Weak references exist specifically to allow event handlers to be dereferenced at lazy intervals, but that's not what I'm talking about. What I mean is, if the above function returned a database connection to the Nodejs main loop, which stuffed it into a pool array of connections, wouldn't that still remain in scope for the remainder of the program unless it were explicitly deleted?
> if the above function returned a database connection to the Nodejs main loop, which stuffed it into a pool array of connections, wouldn't that still remain in scope for the remainder of the program unless it were explicitly deleted?
No, since these are block-scoped the original variable goes out of scope when the block it was declared in ends. The underlying _value_ that the variable is a reference to certainly can escape the block in a number of ways (assignments to existing variables or properties, closures), but this system doesn’t care about any of that, it’s directly equivalent to using try and finally, and finally blocks execute when you would expect them to.
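A minimal sketch of that equivalence, with a hypothetical resource and a plain symbol standing in for `Symbol.dispose` so it runs on engines without the new syntax:

```javascript
// Stand-in key for older runtimes that lack Symbol.dispose.
const DISPOSE = Symbol.dispose ?? Symbol.for("dispose");

function makeResource(log) {
  log.push("acquired");
  return { [DISPOSE]() { log.push("disposed"); } };
}

function demo(log) {
  // `using res = makeResource(log)` desugars to roughly this:
  const res = makeResource(log);
  try {
    log.push("used");
    return res; // the value itself can escape the block...
  } finally {
    res[DISPOSE](); // ...but disposal still runs when the block exits
  }
}
```

Note the returned value escapes, but is already disposed by the time the caller sees it — disposal tracks the lexical block, not reachability.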
By abstracting away the countless ways a function can handle cleanup, you provide a single, uniform interface that anyone can implement. This concept is akin to the usage of `.then`, where you agree upon an interface that allows people to execute asynchronous work. The syntax sugar of async/await relies on the existence of a `then` key, demonstrating a similar concept.
In the context of the parent question then it's the consistent standard syntax that makes things easier to read - something that indeed is impossible to see with an isolated example that, if anything, may look less clear than the syntax you're used to.
Scoped resources are essential for avoiding global singletons. The JS/Node ecosystem acts like (and often believes) globally-scoped singletons are a recommended pattern for things like database connections.
The reality is, there’s no better option at the moment.
Virtually every other ecosystem has concluded globals are not the best practice. (At least until we return to dependency injection containers where they are suddenly cool again but I digress.)
It can be useful to take a step back and think about where best practices come from. What problems do they solve?
Singletons are bad in complicated, long-running processes, because you're in trouble if you want to have more than one of something, and cleanup can be a problem. A one-to-one relationship with the running process is problematic.
But JavaScript often runs in a disposable runtime environment that forces cleanup when terminated. For example, a web page or a web worker. Memory leaks usually aren't a problem and you can just treat it like arena allocation.
If you want more than one web page, it's very easy to do.
Similarly, if you're writing scripts using disposable Unix processes then a memory leak in a command isn't all that big a deal; you can sometimes get away with never freeing anything because the OS will do it.
NodeJS, however, is not running in a disposable runtime environment. It is also the location where things that need cleanup are likely to occur (database connections, for example).
For the browser context, I don’t have a good use case for ‘using’ (except for browser devs themselves where some code could be in JS now but impossible before).
Then again, people always surprise you with new use cases!
Singletons are considered 'bad' under general advice because they can make testing hard.
As with everything, there is a time and a place, but if you understand the tradeoffs to recognize that time and place you won't be soliciting random advice from the internet, and thus won't hear the 'good'.
I get the impression that the JavaScript world largely doesn't care much for testing, though.
When using singletons for DB connections in TS, I will generally have a singleton function that returns the resource. So getDatabaseConnection will return a databaseConnection.
It’s pretty simple, but it has solved the testing problem for me because within that getResource function I can check if the environment being ran in is a test environment. If so, return a mocked instance. If not, return the real instance.
It’s pretty rudimentary, but it’s solved our issues.
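A rough sketch of that accessor pattern — names like `getDatabaseConnection` and the `NODE_ENV` check are illustrative, not from any particular codebase:

```javascript
// Lazy singleton accessor that hands back a test double under test.
let cached;

function getDatabaseConnection() {
  if (!cached) {
    cached =
      process.env.NODE_ENV === "test"
        ? { query: () => "mocked row" }                 // test double
        : { query: (sql) => `real result for ${sql}` }; // stand-in for a real driver
  }
  return cached;
}
```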
Note, you can also place said singleton as a single export in a module, and test anything using that module with a mock on the loader, which most of the newer testing options for JS support pretty easily.
I think from a threading perspective that’s a reasonable statement. But it’s also about creating manageable abstractions: very limited singleton usage is OK, but it can easily get out of hand and lead to hard-to-reason-about code.
I rarely write Javascript, so I'm not in tune with the community, but when I have I have had no trouble passing database handles and such things around like I'm used to in every other language I work with more frequently.
It appears all this does is avoid needing to manually call close (or equivalent)? While that is a nice addition, helping to avoid the situation where you forget, why do globals become the alternative? Isn't simply calling close manually the best option at the moment?
I've only had one scenario where I actually needed something like this (and I don't know if the `using` proposal helps). I have a library (Prolog interpreter) that uses a Wasm module originally written in C, so I have to manage the memory manually. If users iterate through or close a query (e.g. with `for await`) it's fine and cleans itself up. But, it's possible to create an AsyncGenerator, never call .return() on it (which won't run the `finally` block), and have that reference garbage collected. I used a finalizer on a local variable in the async generator to work around this :)
The proposal[1] explains several cases where try…finally is verbose or even can result in subtle errors. The upshot for me is that this feature adds RAII to JavaScript. It makes resource management convenient and more maintainable. Seems like a no brainer to me.
Based on my reading of the spec, you can't actually return a resource, since the disposal semantics need to map exactly to calling [Symbol.dispose] in a finally block at the end of the current block. Also, unlike C#, you can have multiple using statements in the same block, which will be disposed of in LIFO order.
It's essentially the exact same thing as the using statement in C#, so if you search for that you should be able to find more info.
But as to your question "Could anyone here help me understand a more robust example where `try...finally` just won't work", using is just basically syntactic sugar for try/finally, but I'm very much in favor of syntax that gets rid of lots of verbose boilerplate that can obscure the real purpose of your code.
When writing a game in JS, it’s very important to minimize garbage collection - too much will cause periodic stuttering and freezes. To avoid GC you want to minimize runtime allocations and mostly allocate upfront and reuse that memory.
One way is to use object pooling, but doing so in JS can be brittle because you have to remember to manually call `Pool.release(obj)`, `obj.free()` or whatever method you’ve chosen to return an object to the pool.
If a developer forgets to do this, you could exhaust the object pool, or if it’s growable, cause a memory leak! In a game’s update loop that could happen very quickly.
With this new feature, you could grab a short-lived object from the pool and automatically return it to the pool at the end of the method or loop.
Example - imagine this is inside an update method called 60 times per second:
for (const enemy of enemies) {
  using pos = Pool.getVec3();
  // do stuff with pos
  enemy.setPosition(pos);
} // pos is returned to pool automatically
You asked about try/catch/finally. The downsides for this use-case are:
* Big performance hit when you use it in a hot loop like this - the disposal could be happening ~10,000 times per second.
* Harder to remember to fill all your loops with try…finally, ugly to have double braces anytime you’re using a pooled object.
* It’s an abuse of syntax if you’re not actually catching any error.
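For illustration, here is a hedged sketch of such a pool, with the dispose hook attached per checkout. A plain symbol stands in for `Symbol.dispose` on older runtimes, and in the test the `using` call site is exercised by invoking the dispose key directly:

```javascript
const DISPOSE = Symbol.dispose ?? Symbol.for("dispose");

// Hypothetical fixed-size vector pool; released objects go back on the free list.
const Pool = {
  free: [{ x: 0, y: 0, z: 0 }, { x: 0, y: 0, z: 0 }],
  getVec3() {
    const v = this.free.pop();
    if (!v) throw new Error("pool exhausted");
    v[DISPOSE] = () => { this.free.push(v); }; // `using` calls this on scope exit
    return v;
  },
};
```

With `using pos = Pool.getVec3()` inside the loop body, the release runs automatically at the end of every iteration.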
Consider the case where you want to iterate over an input file, query data from a database with those inputs, and write the outputs as lines to an output file. The function can fail anywhere: between opening the input file, connecting to the database, or opening the output file. With try/finally, you need to keep the variables outside the scope of your try block and check whether each one has been set in the finally (and clean them up). I.e., if the database connection fails, you don't want to try closing the output file because you haven't opened it yet. The resulting code is sloppy.
The "best" way to do this in JS today is to have one function that opens and cleans up the input file (with a try/finally). That function calls another that opens and cleans up the db connection in the same way, and so on. That's verbose and makes your code nonlinear.
The new keyword brings Go's (and other languages') equivalent of 'defer' to JS. You don't need to worry about cleaning up, the 'using' keyword implies that it happens for you.
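The nested-helper pattern described above can be sketched as a generic `withResource` higher-order function — hypothetical names, with acquire/release callbacks standing in for real file and database APIs:

```javascript
// Run `body` with an acquired resource, guaranteeing release afterwards.
async function withResource(acquire, release, body) {
  const res = await acquire();
  try {
    return await body(res);
  } finally {
    await release(res);
  }
}

// Nesting keeps cleanup order correct but makes the code nonlinear:
async function run(log) {
  return withResource(
    async () => (log.push("open input"), "input"),
    async () => log.push("close input"),
    (input) =>
      withResource(
        async () => (log.push("connect db"), "db"),
        async () => log.push("disconnect db"),
        async (db) => `${input}+${db}`
      )
  );
}
```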
JavaScript used to be a little elegant in its simplicity. JavaScript now is getting complicated. Syntactic sugar, I think, is only there to build a moat around a tech to justify a salary premium and reduce the number of newcomers. You end up with a dozen arcane ways to do the same thing, and you have to be fluent with all of them in order to read your teammates' code. It makes the language harder to use, while adding no new functionality.
> Syntactic sugar, I think, is only there to build a moat around a tech to justify a salary premium and reduce the number of newcomers.
Ascribing that kind of motive to something as innocuous as syntactic sugar is silly at best. Most syntactic sugar I've seen is pretty clearly an attempt to make users' lives easier (whether successful or not is irrelevant).
I love how this all went full circle. You know, just a few years ago, there still was the narrative of how bad JS is. Especially on HN. "Designed in only ten days", people made fun of it for being such a half-assed language. Stuff like the "Wat" talk, making fun of left-pad, etc.
Now it's "JavaScript used to be a little elegant". Fantastic.
More seriously:
1. This is typescript. Feel free to use js without it. (EDIT: This is wrong :)
2. This is a relatively easy feature, and there are equivalent features in some other mainstream languages.
You don't have to like it, I'm not sure I do, but this feels so dramatic.
What? I'm the biggest TypeScript apologist today, and even I think that JavaScript used to be terrible! Remember `function`? Remember `var that = this`? Heck, remember `var`? Remember `.bind()` and having to bind all class methods? Remember `parseInt("08")`?
This is really true. I feel somehow the problem came when the functional programming gang hijacked it and forced ES6 on us. Promises were so counterintuitive they had to create async/await shortly after...
Don't get me wrong, there are a lot of great things in ES6, but it was not quite the same language after...
How would you solve the async/await issue then? I think JS handles asynchronicity very elegantly. You just need to create a proper mental model around it. I don't want to go back to the old times of callback hell, that's for sure.
TFA mentions as much: TypeScript 5.2 will introduce a new keyword - 'using' - that you can use to dispose of anything with a Symbol.dispose function when it leaves scope. This is based on the TC39 proposal, which recently reached Stage 3, indicating that it is coming to JavaScript.
No, I am very familiar with JS and wasn't sure reading the article what bits were the new TC39 proposal and what bits were TS-specific. It seems, from having read it once quickly, that it's just TS adding support early to a feature that will come in JS, similar to how you could add support to async/await in your transpiler before they were commonplace by transforming them into promises back in the day. But that's definitely not a "TS new keyword".
Agreed with OP, the title is misleading. Typescript technically doesn't introduce any new runtime feature. This is basically a polyfill for an upcoming Javascript feature.
The overwhelming share of improvements made to Java, Javascript, Typescript etc over the past decade have been inspired by C#, so I can understand why the C# terminology was front of mind.
Can you elaborate more of what combining disposal and destructuring would look like?
This is a good proposal, but I'm worried about the choice of syntax. Simply swapping `const x = ...` for `using x = ...` feels like it's not a "weighty" enough piece of syntax for such a complicated automatic disposal feature. I understand that the idea is to make it very low-effort and easily applicable to multiple resources, but I think intuitively I wouldn't expect `using x = ...` to call any sort of complex code "automatically". Instead, I think the block-scoped proposal in the "withdrawn" section makes a lot more sense:
using (x = ...) {
  console.log(x.whatever)
}
Thoughts? I feel like the current proposal has a lot of the same issues I dislike with C++ (a focus on "terser" syntax obscures what is actually happening e.g. initializer lists don't actually tell you what constructor they're calling).
They took the syntax (almost) straight from C# where it works great. We had only block-scoped using for a long time and that sucked; the variable declaration style is definitely better and we readily adopted it when it became available.
C# had only block-based using for a long time. It made a lot of code unreadable because the using statements introduced a ton of clutter. The newer syntax is way better.
It seems fine, the increased scoping would just be bothersome.
The annoyance to me is that like all scope / defer constructs it gets wonky when paths split between a success and a failure path, and you do want the resource to escape in the former case. That's an area where RAII really shines.
> The annoyance to me is that like all scope / defer constructs it gets wonky when paths split between a success and a failure path, and you do want the resource to escape in the former case. That's an area where RAII really shines.
Is this even supported by the proposal in question though? I don't see anything here that indicates any sort of escape analysis or object tracking is performed—resources are always cleaned up lexically, so `using foo = handle(); return foo` will always cause foo to be cleaned up, just like the equivalent `finally` statement.
That's correct, there is no escape analysis or anything like it. Though there is a way to handle the "I want to return a resource" case: the DisposableStack class. You can call `stack.move()` to "move" ownership of a resource (and rely on the place you're passing it to to do cleanup). E.g.
using stack = new DisposableStack();
stack.use(resource());
if (condition) {
  // the resources will _not_ get disposed automatically by this return
  // it's the caller's responsibility to call `stack.dispose()`
  return stack.move();
}
> Is this even supported by the proposal in question though?
No and that’s exactly the problem.
If you “using” a resource and you leak it, you’ve now handed off a disposed-of and probably useless resource.
If you don’t, and forget to dispose it manually, you have a resource leak.
In trivial cases this is fine-ish, just do whichever is needed, but when there are conditional paths internally (sometimes you signal an error, other times you return normally) you can’t “using” at all; you’re back to try/finally and disposing by hand.
RAII does not have that issue, it does the right thing every time.
A few languages solve this by having additional constructs for this specific scenario (e.g. errdefer), though that requires more direct integration with the language’s error reporting, and thus forbids (or at least does not work with) alternate ones.
The forced nesting of that particular proposal makes it less ergonomic (especially for scopes with multiple such resources), and would bring very little benefit over a simple library-based approach (instead of a language feature), such as a higher-order `using(resource, body)` helper.
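A sketch of the kind of library-based helper that could replace the block form — hypothetical, with `using` here being just an ordinary function name and a plain symbol standing in for `Symbol.dispose` on older runtimes:

```javascript
const DISPOSE = Symbol.dispose ?? Symbol.for("dispose");

// Equivalent of `using (x = resource) { ...body... }` as a plain function.
function using(resource, body) {
  try {
    return body(resource);
  } finally {
    const dispose = resource[DISPOSE];
    if (dispose) dispose.call(resource); // runs even if body throws
  }
}
```

Usage would look like `using(open("file-list.txt"), (f) => f.read())`.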
I don't see anything to suggest "using" bindings can't be closed over just as with any other lexical binding. Indeed, your example does so. Nothing in the proposal would seem to prevent returning such a closure from a function that acquires a resource, or a collection of same, and being able to comfortably use the bound resources without losing the guarantee they'll be properly disposed once no longer in scope.
Granted it still could get screwed up. But if it doesn't, and assuming of course that I've understood it correctly to begin with, I suspect a comfortable way to think about it will be as a means of hinting to the GC how it should handle cases where free() alone doesn't suffice.
I think you two are actually on the same page— their critique is of the nested block version of the proposal that OP prefers, not the actual stage 3 proposal.
So, like context managers in Python, except you only have `__exit__` and not `__enter__`.
What's missing from the article is the semantics when there is an error. In Python, `__exit__` is always called (with the exception as a parameter), and depending on whether you return True or False, the exception is suppressed or re-raised.
Anyway, it's a useful addition.
While you can craft an API with this behavior in JS by accepting two callbacks, it's nice to have a standardized way to do it. Plus, making it an official feature will encourage the pattern, which nurtures a culture of better APIs.
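A hand-rolled version of that two-callback shape might look like this (hypothetical helper, not part of the proposal):

```javascript
// Context-manager-style helper: setup, body, teardown.
// The teardown always runs and is shown the error, if any.
function withContext(enter, exit, body) {
  const value = enter();
  let error;
  try {
    return body(value);
  } catch (e) {
    error = e;
    throw e; // unlike Python's __exit__, this sketch never suppresses
  } finally {
    exit(value, error);
  }
}
```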
{
  await using { connection } = getConnection();
  // Do stuff with connection
} // Automatically closed!
The object returned by getConnection() is destructured to extract the field named connection and assign it to a local variable with the same name. The containing object presumably continues to exist anonymously so its Symbol.asyncDispose function will still be called when it goes out of scope.
Can anyone explain which language rule guarantees this behavior in C# and in TypeScript?
C# doesn't allow you to destructure a single property and as far as I can tell based on a quick test it doesn't allow you to combine a using declaration with destructuring. I agree that this is surprising syntax... as a C# dev my first interpretation of that code would be that connection is the thing being disposed since it's the variable being declared. I wonder if js/ts destructuring has always silently held a reference to the containing object or if they added that behavior to make this use case work?
I think that records don't generate a deconstruct method when they only have one property, but even if you manually define one you'll get an error on `var (varName) = ...`
If I understand your question right, for C# look at the following article, last paragraph regards object which leave the scope of the using (C# does not have the JS "var" but only the JS "let" behavior, so it is slightly different here).
I thought the same and I don't see motivation from it on the TC39 spec page [1] but I do see they have several examples that make use of the feature.
It makes me realize I've never really considered the memory implications of doing something like:
const { someSmallPart } = getSomeMassiveObject();
I don't really know the lifetime of "massive object". Probably bad on my part for not knowing whether JavaScript garbage collection is allowed to notice that the only thing with a reference is `someSmallPart` and could therefore release any other parts of the massive object. If the answer to that is "no" then there is no problem with the above pattern.
If the answer is "yes", then things could get complicated. e.g.
async function getComplicatedObject() {
  const db = await dbDriver.getConnection( ... );
  const f = await file.open( ... );
  return {
    db,
    f,
    [Symbol.asyncDispose]: () => {
      db.close()
      f.close()
    },
  }
}
{
  await using { f } = await getComplicatedObject();
  // ... more stuff with awaits where the GC might decide `db` isn't used so it can clean
} // I would not expect a null ref error on `db` here when `asyncDispose` is called
I mean, you can handle the above even if the GC is allowed to be smart and discard `db` - it just makes it significantly more complicated and requires clever book-keeping.
Isn’t it natural that async dispose property would pin an entire object at `using` scope? If there’s no pin via using, then GC should be able to collect unused contents anytime.
I'm pretty sure that example won't work and is based on an old version.
But assuming a javascript implementation where it works, I don't know what you mean by "language rule". The spec says what "using" means, and it doesn't require any behavior the language doesn't already have. The old spec said "When try using is parsed with an Expression, an implicit block-scoped binding is created for the result of the expression.", and the new one says "When a using declaration is parsed with BindingIdentifier Initializer, the bindings created in the declaration are tracked for disposal at the end of the containing Block or Module". Then the spec has an example implementation written in javascript, with the value being added to a (non-user-accessible) list. https://github.com/tc39/proposal-explicit-resource-managemen...
It's neat, but if I understand it correctly, if you forget to write the "using" keyword the code will still work just fine but the dispose will never be called, resulting in a memory leak. I don't suppose there is a way to mark a function as being un-callable unless you do so with "using"?
That situation already exists today, but is inconsistent: some things need .close() and others .dispose() and yet others .destroy() and then there's things like .end() and .reset() and .disconnect() and .revoke() and .unsubscribe(). All of those and more already exist in JS code today (just off the top of my head) with notes in library READMEs or other documentation to call them or possibly face resource leaks. The `using` keyword suggests a common name for these sorts of functions, [Symbol.dispose](), and provides lovely syntax sugar for try/catch/finally to make sure such things get called even in failure cases, and even provides suggested built-in objects DisposableStack and AsyncDisposableStack to help in managing multiple ones and transitioning older APIs like the many names I've seen, mentioned here.
Even better: right now lint tools have to special-case rules for each library given the variety of names, but lint tools can make a nice global rule for all objects implementing [Symbol.dispose]() (just as they can warn when a Promise is left unawaited).
In C#, the garbage collector will eventually call the destructor that will dispose the object. It just won't happen deterministically with the code.
I assume there would be similar logic for JavaScript but I haven't looked at the specification.
It is not always a mistake to miss the using keyword -- if you're taking ownership of the object, then you will dispose of it manually (potentially in your own dispose method).
I'm not aware of a way to control the GC behavior of an object in JS. The memory will be freed, and that's all. If something else needed to be done, I don't think there's a way to do it. Happy to be proven wrong though.
How? WeakMap (and WeakSet, more directly) can detect if an object has been collected only if you have a reference to pass to it. If you have reference to the object in a variable or data structure, it won't have been eligible for collection in the first place.
There's a conversation I had with Ron Buckton, the proposal champion, mainly on this specific issue. [1]
Short answer: Yes, Disposable can leak if you forget "using" it. And it will leak if the Disposable is not guarded by advanced GC mechanisms like the FinalizationRegistry.
Unlike C# where it's relatively easier to utilize its GC to dispose undisposed resources [2], properly utilizing FinalizationRegistry to do the same thing in JavaScript is not that simple. In response to our conversation, Ron is proposing adding the use of FinalizationRegistry as a best practice note [3], but only for native handles. It's mainly meant for JS engine developers.
Most JS developers wrapping anything inside a Disposable would not go through the complexity of integrating with FinalizationRegistry, thus cannot gain the same level of memory-safety, and will leak if not "using" it.
IMO this design will cause a lot of problems, misuses and abuses. But making JS look more like C# is on Microsoft's agenda, so they are probably not going to change anything.
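For reference, a hedged sketch of the FinalizationRegistry backstop discussed above. Finalization timing is entirely up to the GC, so this can only supplement, never replace, deterministic disposal:

```javascript
// Record resources that get garbage-collected without ever being disposed.
const leaked = [];
const registry = new FinalizationRegistry((label) => {
  leaked.push(`leaked resource: ${label}`);
});

function makeGuardedResource(label) {
  const res = {
    label,
    dispose() {
      registry.unregister(res); // disposed properly: cancel the backstop
    },
  };
  // The resource itself doubles as the unregister token.
  registry.register(res, label, res);
  return res;
}
```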
Memory leaks may not be possible but resource leaks can be.
Many languages have tried to attach "disposers" to garbage collection events. The more common term in this case is "finalizer". As far as I know, no language has ever had a general-purpose, robust solution that works so well that it can just be used. Many have tried, and they all end up either deprecated, determined by the community to be a footgun, or backed down to some specialized use case (like picking up after file handles that are GC'd but the expected thing is still that normal user code closes them and this is explicitly pitched as an undesirable fallback, not the thing you should do).
I think programmers mainly care about memory leaks because of the potential negative consequences of using up too much memory -- programs crash or fail, or services frequently restart, or you have to provision additional costly server resources to keep things running well.
If your program continuously allocates memory that it doesn't free and doesn't serve a useful purpose, it doesn't really matter if it's, e.g., being referenced by some cache or just completely unreferenced -- it's still causing the same problem and has the same solution (that is, release the memory when you're done with it -- whether that's by calling "free" or clearing the reference, or whatever your memory management system has)
You're pawning complexity off to the caller of `getConnection` with `defer` semantics. The only time `defer` is preferable to RAII is when the release of a resource to its corresponding acquire may have a different set of arguments depending on the context. The example you've given doesn't really seem that compelling.
acquireX/releaseX maps nicely to lexical scope and adding a language feature to provide sugar for it is pretty nice, imo/e. Defer doesn't really "get it."
The salient point from the parent comment was that “using” assumes the resource is an Object.
The core value prop of both “using” or “defer” is that a single scope can manage the lifetime of some resource while child scopes interact with that resource.
“Using” assumes that resource is a local object but that’s rarely actually the case - as in the example of a DB connection - it’s just how OOP represents it.
A good counter-example would be if I need to manage a resource via an API. Maybe one request creates the resource and another tears it down. (Eg, begin a transaction and then commit it). “Using” requires me to implement a new class to represent this resource - that’s OOP at its finest - whereas “defer” allows just doing the thing (and works equally well for objects)
There is no need for you to create a new class to represent a resource. Javascript doesn't require objects to be instances of specific classes, so you can just return an object with the dispose method while only using it as a container. Even when using Typescript there is no need to add a new class, a new type declaration is more than sufficient.
Just because it's on objects doesn't mean it's OOP.
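As a sketch of that point: a plain object carrying the dispose key is enough, no class required. The `begin`/`commit` API here is hypothetical, and a plain symbol stands in for `Symbol.dispose` on older runtimes:

```javascript
const DISPOSE = Symbol.dispose ?? Symbol.for("dispose");

// Wrap a begin/commit API pair as a disposable, using an object
// literal purely as a container — no class definition involved.
function beginTransaction(api) {
  api.begin();
  return {
    api,
    [DISPOSE]() { api.commit(); }, // what `using` would call on scope exit
  };
}
```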
> “Using” assumes that resource is a local object but that’s rarely actually the case - as in the example of a DB connection - it’s just how OOP represents it.
Can you elaborate on this more? It would be hard to represent a DB connection without an object somewhere (or a closure that's effectively the same thing), and I don't understand how local or not comes into play.
> Using requires me to implement a new class to represent this resource
It just requires you to set a key. You could also have a function to add it or do it locally.
Locally setting up Symbol.dispose does require more boilerplate than `defer`. But if the dispose is built into the object then `using` is less boilerplate than `defer`. And if you use a function to add the disposal, `using` with an extra function call is also less boilerplate than `defer`.
The nice thing is that the `defer` pattern can be easily built on top of the `using` pattern, and indeed the `DisposableStack` and `AsyncDisposableStack` objects also included in the TC-39 proposal for built-ins directly support it:
let connection = getConnection()
using stack = new DisposableStack()
stack.defer(() => connection.dispose())
The proposal even notes the `defer` method name was borrowed from golang.
Though in practice I think the `adopt` method will be the nicer, more common pattern for transitional things before `[Symbol.dispose]` becomes more widely utilized:
using stack = new DisposableStack()
const connection = stack.adopt(getConnection(), c => c.dispose())
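A rough sketch of how `defer` and `adopt` behave, as a minimal stand-in for the proposal's `DisposableStack` (this mirrors the proposed API surface but is not the spec implementation, and it falls back to a local symbol when the runtime doesn't ship `Symbol.dispose` yet):

```javascript
const DISPOSE = typeof Symbol.dispose === "symbol" ? Symbol.dispose : Symbol("dispose");

// Minimal DisposableStack-like helper: collects cleanup callbacks and runs
// them in reverse order on disposal, like unwinding nested try/finally blocks.
class MiniDisposableStack {
  #callbacks = [];
  defer(fn) { this.#callbacks.push(fn); }
  adopt(value, onDispose) {
    this.#callbacks.push(() => onDispose(value));
    return value;
  }
  [DISPOSE]() {
    while (this.#callbacks.length) this.#callbacks.pop()();
  }
}

// Usage: adopt a connection-like object that only exposes a plain close hook.
const log = [];
const stack = new MiniDisposableStack();
const connection = stack.adopt({ name: "conn" }, c => log.push(`closed ${c.name}`));
stack.defer(() => log.push("deferred"));
stack[DISPOSE]();
console.log(log); // → [ 'deferred', 'closed conn' ]
```

With the real built-in, `using stack = new DisposableStack()` would trigger that final disposal automatically at the end of the block.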
I sort of keep expecting we'll eventually have C++ where it's just:
Connection.
...and it gives you an instance in a local variable with the lowercase initial to match the type.
Like the "Churchill Martini" where he — according to legend — asked the bartender to pour the glass full of gin, and then he'd nod at the vermouth bottle on the shelf and say "Vermouth."
Yep, this merges countless ways to dispose of a resource into a single interface.
Like what promises did to callbacks. (And these days promises have all but taken over plain callbacks.)
Even if you don't use the `using` sugar, you may still benefit from it if you need to implement some sort of resource management someday.
Instead of writing a teardown function for every single type of resource in your code, probably everyone will just agree that `obj[Symbol.dispose]()` frees the resource. And you only need to do it once and for all.
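That single agreed-upon interface can be sketched as one generic helper instead of a bespoke teardown per resource type (`withResource` is a hypothetical name, not part of the proposal; the fallback symbol covers runtimes without `Symbol.dispose`):

```javascript
const DISPOSE = typeof Symbol.dispose === "symbol" ? Symbol.dispose : Symbol("dispose");

// One generic cleanup helper that works for any resource exposing the
// dispose protocol, regardless of what kind of resource it is.
function withResource(resource, fn) {
  try {
    return fn(resource);
  } finally {
    resource[DISPOSE]();
  }
}

const events = [];
const fileLike = {
  read: () => "contents",
  [DISPOSE]() { events.push("closed"); },
};
const result = withResource(fileLike, f => f.read());
console.log(result, events); // → contents [ 'closed' ]
```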
I'm honestly okay with either of these. try-finally has the unfortunate indentation tax. This has even led me to recreate the bracket pattern from haskell in JS/TS.
Really happy that this feature is finally seeing the light of day. When I switched my code from callbacks to promises in node 10 years ago, it seemed like the most natural thing to add that would finally resolve all the resource management fears that node has had with exceptions and error handling.
Well, DI is a standard-lib or even framework thing. I haven't seen anyone except Google (PWA/Chromebook) advancing the standard lib in any structural way.
DI has no place in the standard lib. It is framework territory. It belongs in React, Svelte, or Angular, not in a standard lib.
That was my first thought, but a) it seems that the `using` syntax is not exactly the same as in C# (unless it's the old C# I used to use) and b) even so I wouldn't mind; C# is a great language.
And this is a good thing. I would love to see something like a proper C# instead of a hacky TypeScript. It would be so much more productive.
JavaScript is a cool dynamically-typed language, though. C# is on the other end of the spectrum: it is super cool by being fully statically typed. But TypeScript is something in the middle, and this uncertainty causes pain.
I find that Dart walks that middle line much better than TypeScript does, at least for now. As of Dart 3, you can have sound null safety and sound type safety for 100% of your code, if you want. The recent addition of records also makes working with some of JavaScript's methods that return multiple types better, as union types are one of the things Dart is still missing. Though, I tend to prefer Dart over TypeScript just because of not having to deal with the node ecosystem at all.
Do you use Dart for anything other than Flutter though? I'm a huge fan of Dart, but I don't see it getting much server usage. Like, could I deploy a Dart CloudFlare worker?
Yes, Cloudflare even has a guide in their docs if you go to the Workers > Platform > Languages section. Also, Dart 3 is previewing Wasm support and could be used through a Wasm worker.
For personal projects or if you're a freelancer sure. Once you work in a company/startup you can't go and implement things in whatever you like.
It's way easier to convince any manager that Dart/Flutter is a good choice for a mobile app because of the tech advantages than in anything else because of the pool of available developers.
> "just make it so I don't have to do crazy checks on properties before casting to a class."
You do have 'instanceof' at runtime, so unless you mean some other type of 'class', casting to a class is really a non-issue (and you don't really 'cast'... since dynamic typing and all). If you mean some more advanced concept (e.g., structural type runtime checking), think how you would do it in statically typed languages (is it really better?).
instanceof doesn't work sometimes and you have to write explicit property checks. I forget the details because I'm by no means a TypeScript expert, but I remember being very frustrated that I couldn't just use instanceof.
It doesn't work when objects cross global scope boundary (e.g. you get an object from a frame), there's no fixing that, by design the namespaces are different and unrelated.
(1) imperative style mutation of the prototype is runtime dependent, and typescript can't really rely on runtime behavior.
(2) Foo has two incompatible types in JS, it is both `(s) => void` and `new (s) => Foo`, typescript by default will assume the first signature (which is usually what people want).
In any case, you can specify the types (though in 99% of the cases, just use a class):
While that is a fair statement, I've been reading a lot of Typescript code lately and it seems in 99% of cases the selected solution is to use top level functions and globals to avoid the nesting hell class introduces, albeit introducing the global hell in the process. In practice, 'just use a class' doesn't overcome the human element.
Javascript was on the right track originally, but then veered off into the weeds for some reason. I appreciate you pointing out how it can be done. Although the solution is very much second class citizen, sadly. This certainly isn't going to wean developers away from global hell. Hopefully Javascript/Typescript can make some strides in improving this going forward.
We are talking about runtime class creation with dynamic methods, while still adding proper static types. There aren't many languages that allow this level of flexibility.
You can probably disallow prototype use with existing tools if that's desired, but it is an advanced tool for advanced use cases (which still exist).
We are talking about how poor syntax pushes developers into some strange corners to avoid having to deal with the poor syntax. class ultimately gets you to a decent place, but forces the code into one big giant mess that nobody wants to look at ever again. Prototype function assignment gets you to the same place, with better syntax, but falls short for other reasons.
Given:
class Foo {
constructor(x) {
this.x = x
}
bar() {
return this.x
}
}
maybe it becomes:
object Foo(x) {
this.x = x
}
function Foo.bar() {
return this.x
}
instanceof only works for inherited types, usually expressed with the class keyword by TypeScript users. It's entirely worthless if you're not using them. Class is the only construct in TS that is both a type and a value.
This is exactly what bugs me about TypeScript. The language itself is fine—some annoying things, but generally they’re due to having to run on top of JS. But Microsoft had the chance to fix JS’s terrible standard library. And they didn’t!
What's missing in the standard library though? I hear this complaint often, but when I use a library it's always for a specific use case and not something that I would expect to be in a standard library.
Imports through the browser do, but I'm wanting a single file bundle with all those imports so I'm not relying on a browser feature to deliver the content over N requests.
I know that's less of a concern nowadays with modern http, but guess who still supports ie11
I'm sorry you are stuck still supporting IE11. You could use an AMD loader and use imports in Typescript configured to transpile AMD modules. Most AMD loaders were tiny. The benefit to AMD modules in your case is that they don't need webpack to bundle, the (hideous) module format was designed to be bundled by raw concatenation and the Typescript compiler will still even do that for you with the --outFile parameter (or "outFile" tsconfig.json setting).
It feels like terrible advice to give someone, given ESM is far superior to AMD, and this would be going backwards in time, but in your case you are already blighted by the disease and it is advice that might staunch some of the bleeding.
Nominal types versus structural types is basically the main distinction at this point (if we ignore .net, etc). I like both languages, happy to see them converge while keeping their defining characteristics
They aren't structural because the following doesn't work in C#, but would work (with tweaks to make it valid code) in a structurally typed language such as TypeScript:
var transformed = media.Select(m => {
var parts = m.Split('/');
return new {
addedAtUtc = "2023-05-15T05:59:31.398Z",
originTripUid = parts[4],
path = $"{parts[4]}/{parts[5]}",
rank = "",
size = 103978,
stored = true,
type = "document",
};
}).ToList();
transformed.AddRange(otherMedia.Select(m => {
var parts = m.Split('/');
return new {
addedAtUtc = "2023-05-15T05:59:31.398Z",
originTripUid = parts[4],
path = $"{parts[4]}/{parts[5]}",
rank = "",
size = 103978,
stored = true,
type = "document",
documentType = parts[6]
};
}));
Adding "documentType" breaks in C# but would work in a structurally typed language as a structural type checker can see that the second result fulfills all of the necessary properties. Doing this in C# would require creating an interface and being explicit.
Sure, but that would be opt-in if it were ever implemented. I think both styles of type system have their pros and cons. I have loved the cross-pollination between C# and TS with great ideas flowing in both directions.
I get that; that's why I qualified it as a "functional perspective" because even though the underlying implementation is an auto-generated, auto-named type, the functional usage of it is a structural type (with limitations).
Your list of differences is correct but not really what I meant. The points you listed are mostly due to JS runtime constraints. The choice of nominative vs structural type system is purely a design choice by the typescript team.
Of course this does not change the fact that it is a reserved keyword, but "with" is prohibited in strict mode and as such also in any ES Module. I assume this also applies to any TS module regardless of the compiler.
But since TS is supposed to be a superset of JS, of course this does not change anything
> Symbol is a built-in object whose constructor returns a symbol primitive — also called a Symbol value or just a Symbol — that's guaranteed to be unique. Symbols are often used to add unique property keys to an object that won't collide with keys any other code might add to the object, and which are hidden from any mechanisms other code will typically use to access the object. That enables a form of weak encapsulation, or a weak form of information hiding.
> Prior to well-known Symbols, JavaScript used normal properties to implement certain built-in operations. For example, the JSON.stringify function will attempt to call each object's toJSON() method, and the String function will call the object's toString() and valueOf() methods. However, as more operations are added to the language, designating each operation a "magic property" can break backward compatibility and make the language's behavior harder to reason with. Well-known Symbols allow the customizations to be "invisible" from normal code, which typically only read string properties.
Essentially, if they used dispose() like you mention, what happens when someone already has code with a dispose method? And in the future, it avoids people accidentally calling these.
Kind of the other way round. If you have code that has a function named `dispose`, it will keep on working just fine, nothing needs to be renamed (not now, and not in the future when you want to start using `using`).
The symbol `Symbol.dispose` is special, unlike any normal string or identifier, because it didn't exist until the new standard, so existing code can't have been using it. This guarantees a level of backwards compatibility that wouldn't exist otherwise.
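That guarantee is easy to demonstrate: symbols never collide with each other or with string keys, so a symbol-keyed method can sit next to a pre-existing `dispose` method without touching it (sketch below; the fallback symbol stands in when the runtime lacks `Symbol.dispose`):

```javascript
// Two symbols with the same description are still distinct values.
const a = Symbol("dispose");
const b = Symbol("dispose");
console.log(a === b); // → false

const DISPOSE = typeof Symbol.dispose === "symbol" ? Symbol.dispose : Symbol("dispose");

// A symbol key never shadows the string key "dispose", so legacy code keeps working.
const legacy = {
  dispose() { return "old API, untouched"; }, // pre-existing method
  [DISPOSE]() { return "new protocol"; },     // what `using` would call
};
console.log(legacy.dispose());  // → old API, untouched
console.log(legacy[DISPOSE]()); // → new protocol
```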
That's the case for this particular feature, but the approach of well-known symbols is how they introduce new features into the language.
Also, you are assuming the consumer of a class is the same person as its author. What if the library author has a dispose method and someone wrongly uses `using` with it?
This would break many sites on the web, if they roll this out as a native feature of JS. Importantly, JS generally cannot break backwards compatibility.
Backwards compatibility. This can't conflict with existing types that have `dispose` on them already. Symbols are private, meaning two symbols with the same name are not the same symbol. Symbols use object reference equality, not value equality. So `Symbol.dispose` is the only way to refer to the dispose method they're looking for, without conflicting with any existing fields called "dispose".
Symbols are used for all kinds of "built in" functions, and they were not "introduced" for this feature. As symbols are unique, they don't break or interfere with existing methods/properties. Therefore, if someone already has a function named dispose, the new feature won't interfere with it, and vice-versa.
"Well-known" symbols include things like Symbol.iterator that make "for...of" loops possible, Symbol.asyncIterator for async for loops, Symbol.hasInstance for instanceOf, Symbol.toStringTag for toString, and others.
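Symbol.dispose follows the same pattern as those existing well-known symbols. For example, implementing Symbol.iterator on a plain object is what lets `for...of` and spread work on it:

```javascript
// Symbol.iterator is the same mechanism `using` builds on: a well-known
// symbol key that a language construct looks up on the object.
const countdown = {
  from: 3,
  [Symbol.iterator]() {
    let n = this.from;
    return {
      next: () => (n > 0 ? { value: n--, done: false }
                         : { value: undefined, done: true }),
    };
  },
};
console.log([...countdown]); // → [ 3, 2, 1 ]
```

`using` does for `Symbol.dispose` what `for...of` does for `Symbol.iterator`: the syntax is sugar over a protocol lookup.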
I'd imagine dispose is a common function name and being dynamically typed there is potentially a lot of code out there that already has a "call dispose if it exists on this object" pattern.
My guess is backwards compatibility. There might be applications out there that have a dispose method for different reasons. Symbol.dispose is literally impossible to be already in use, since symbols are guaranteed to be unique.
I've been writing some C# lately and `using` has been very nice to use with file handles, buffers, etc., that might need to be cleaned up. But they only work nicely in C# because the C# stdlib has properly implemented `Dispose()` for File, StreamReader, et. al. Whenever I'm dealing with library types I have no idea if I'm supposed to dispose of them or not. We've got a similar problem here with JS where the node team has to implement `[Symbol.dispose]` in all the relevant classes (ugh) and who knows when that will land. And until then the runtime has to clean up disposable resources without the help of `using`. So what's the point?
These arguments could also be applied to promises and await, and these days they have all but won. Sure, you need to implement the then function properly or it won't play nicely with code others wrote. But once enough people adopted it, there was almost no way to go back.
If you are a big library author and you don't bother to, probably others will just write a wrapper (request-promise, for example) or make a clone with it built in (there are so many it's pointless to count them). It's FOSS anyway.
My experience in C# has been that almost all library types that require resources to be released will implement the same IDisposable interface provided by the System namespace.
I'm sure there are examples of libraries that don't do this but I can't think of any off the top of my head and I'd assume that they're fairly rare.
In my opinion, this is a flawed proposal. The disposer should not be a method on the object, because it means you cannot pass the resource around without the risk that the recipient disposes it themselves. A safer alternative is available: [item, disposer]. See this issue: https://github.com/tc39/proposal-explicit-resource-managemen...
Meh, I don't think the more confusing interface defined in that issue warrants the added complexity. using should only be used for local variables, and the example you give is fundamentally complicated by who should be responsible for disposing of the object:
function inner(defaultResource) {
using resource = defaultResource ?? getResource();
// ...
}
That is, if a defaultResource is passed in, the caller should be responsible for disposing it, while if it wasn't passed in, the inner function should be responsible for disposing it.
IMO cases like this are better handled with linting rules that forbid use of using on anything but an object directly returned by a function on that line.
In another language I think your proposal could be quite reasonable, but it requires using patterns that aren't common in JavaScript, which would complicate the language more than the alternate proposal.
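The ownership rule being described ("dispose only what you created yourself") can be sketched without the `using` syntax at all, in plain try/finally — `getResource` and the log are hypothetical stand-ins:

```javascript
const DISPOSE = typeof Symbol.dispose === "symbol" ? Symbol.dispose : Symbol("dispose");
const log = [];
function getResource() {
  return { [DISPOSE]: () => log.push("disposed") };
}

// Dispose the resource only if this function created it; a caller-supplied
// resource remains the caller's responsibility.
function inner(defaultResource) {
  const created = defaultResource === undefined;
  const resource = defaultResource ?? getResource();
  try {
    log.push("working");
  } finally {
    if (created) resource[DISPOSE]();
  }
}

inner();              // creates and disposes its own resource
console.log(log);     // → [ 'working', 'disposed' ]
inner(getResource()); // caller-owned: not disposed here
console.log(log);     // → [ 'working', 'disposed', 'working' ]
```

`using` only covers the "created it myself" branch, which is why the conditional case needs either this manual pattern or a `DisposableStack`.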
The biggest problem I see is that to use the feature in a class you'd be forced to define a static factory method to produce the object-disposer tuple. To be effective, a static factory needs to be paired with a private constructor, but JS has no first-class support for those, which means you have to go to contortions with exceptions to make sure people don't accidentally instantiate a non-disposable instance of the class.
The existing proposal uses a well-established way to interact with language features—the well-known symbol [0]. It's compatible with any way of defining an object, and doesn't require using any particular pattern that may be unnatural for a subset of the community.
Yes, the requirement that static constructors are used makes this substantially less ergonomic than it would otherwise be for the reasons you point out.
I'm of the opinion that this is a paper cut worth stomaching in favor of an idiom that provides genuine safety. The type for the class and the static constructor (i.e., a free function) could be exposed, while the class itself could be hidden.
Nice, so this is the equivalent of `with` and context mangers in Python. I like it.
I generally avoid new TS features (apart from typing that gets compiled away) until it looks like they are going to make their way into JavaScript, anyone know if thats being considered?
TypeScript only implements JS features that make it to Stage 3, the only exception being decorators in the past which never went beyond Stage 2 in their previous form.
Those first draft decorators never made it past Stage 1. That's also why Typescript required a compile-time flag that began with `--experimental` before you could use them. It's amazing how many projects put an `--experimental` flag into Production code. (Thanks, Angular.)
More importantly, it does not allow distinguishing between successful completion and a thrown exception. Which may or may not be such a great feature in the first place, but still.
Things rarely change at all once they're at stage 3. That's why TS waits until that stage before implementing. There might be a minor tweak in some edge cases but stage 3 is usually "done".
I was really excited about this feature because I recently tweeted about wanting something like `(*testing.T).Cleanup` from Go [1]. I think this is achievable now across test frameworks because of this language/runtime addition! I scribbled something together and I'm looking forward to testing it out when TypeScript 5.2 is out: https://gist.github.com/disintegrator/35b6a6a87dbad54fada430...
I get you, but it's both a TypeScript and JavaScript feature, and more importantly TypeScript is including it in its next official release whereas it is still not part of ECMAScript (though close, at stage 3).
So speaking specifically of a new TypeScript feature is justified here.
That has the same answer. I'm not sure what you're looking for.
The polyfill adds code at the start of the block to make an array, and code at the end of the block to call dispose on everything in the array. Then the transpiler just has to turn "using" into an append.
There's some other details to handle exceptions but that's the basics of how it works.
Edited: The code, polyfill or native, doesn't really know if things can be disposed. It just tries to do it, because you told it to. If the variable is null it skips it, otherwise it asserts that the dispose function exists at using time, and blindly runs it at the end of block.
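A rough sketch of that desugaring — not the actual TypeScript emit, which also wraps disposal errors (e.g. in SuppressedError), but the same shape: record each resource as it is declared, then dispose in reverse order on block exit:

```javascript
const DISPOSE = typeof Symbol.dispose === "symbol" ? Symbol.dispose : Symbol("dispose");

const order = [];
function makeResource(name) {
  return { name, [DISPOSE]() { order.push(`dispose ${name}`); } };
}

{
  const disposables = []; // injected at the start of the block
  const use = r => {      // each `using x = expr` becomes x = use(expr)
    if (r != null) {
      // asserted at `using` time, not at end of block
      if (typeof r[DISPOSE] !== "function") throw new TypeError("not disposable");
      disposables.push(r);
    }
    return r; // null/undefined are recorded as nothing and skipped
  };
  try {
    const a = use(makeResource("a"));
    const b = use(makeResource("b"));
    order.push(`body ${a.name} ${b.name}`);
  } finally {
    while (disposables.length) disposables.pop()[DISPOSE]();
  }
}
console.log(order); // → [ 'body a b', 'dispose b', 'dispose a' ]
```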
What happens if you save a reference to (or return, etc.) the variable you declared with "using"? Then it will end up automatically disposed, but from the programmers point of view it should still be usable (not disposed).
> from the programmers point of view it should still be usable (not disposed)
Then that programmer had their hopes too high. The basic "using" keyword is not for variables you want to move between scopes. Javascript doesn't do reference counting.
For saving/returning, you need to use explicit instances of DisposableStack and .move()
"Using" -- a poor choice of keyword. It has been used for many years in languages like C++, C# (and now Rust) for namespace manipulation. Using the same keyword to declare an object destructor becomes confusing.
For a language like Typescript which borrows so much of its syntax from C/C++ it would be best to either make the same syntax mean the same thing, or instead make up new syntax. Call it "with" or "defer", or something else instead.
Also, the syntax "Symbol.dispose" to denote a symbol seems overly verbose. Why not use a single prefix character like Lisp/Ruby ":symbol"?
It’s not, though the author and champion of the TC39 proposal is a Microsoft employee and has generally championed features that have a close analogue to those from C# (eg. cancellation tokens).
"with" was already taken by a misfeature of JavaScript, so… I too thought they were introducing something related to the "using" of C++, but the keyword does not seem too bad, in the context of a code, it's very readable. What would you suggest?
As for Symbol, it's a JavaScript feature. I bet it was introduced like this so symbols could be polyfilled and adopted more easily, without making the lack of support a syntax error - i.e., no need to transpile this. A bit verbose, but bearable, and probably easier for a human to parse than a colon. Too much terse syntax rapidly makes code harder to read and more difficult to look up for a novice, or possibly even for someone experienced.
Nah I remember back in 2012 when I wrote C# we used `using` to indicate "dispose of this as soon as it's out of scope". Like for filereaders or streamreaders, if I remember right.
It's not really the same as defer. If you're working with a file, you have to "open" and "defer close" separately. The point of "using" is to combine those into one statement -- it's more like RAII.
It's not really like RAII either, because you have to `using` the object.
It's a closer cousin of Python's context manager (aka C#'s using statement — probably where it actually comes from — and Java's try-with-resource), which probably hew closer to Common Lisp's unwind-protect (that is not type-oriented, but usually you'd create your own macro around that).
You’re right, but I don’t see a big difference in practice if the compiler is able to check that you’re using “using” correctly! Hopefully there’s at least a warning if you call something that returns a disposable object, but don’t dispose of it.
Just go ahead and add the same poor design as in C#. The using statement is probably the single construct in C# that has caused the most bugs. I worked with that crap for nearly 20 years; I even hit a bug in production today that most likely has to do with these "hidden" scopes.
Why on earth does someone think it is a good idea to finish some work at some arbitrary time instead of what is expected when reading the code top down. Puzzling.
It's not clear to me how / if this works with closures.
I expected that it would be part of a block initializer. e.g. `using x { /* scope where x exists */ } /* x was disposed when the block ended */`. This would also make it clearer that it can work even without assignment or with destructuring assignment (the outer object is disposed after the block even if it is not used).
C# has that syntax for using (the using block), but it's slowly being phased out in favor of the using statement like this. It's a bit cleaner and closer to the needs of the programmer this way.
It's not much different than let -- you're defining a block-scoped variable that just happens to call the dispose symbol function when the block ends.
I have some reservations about the specification - it looks like there are a lot of caveats to how it can be used. For example, it's allowed in for and for-of loops, but not in for-in loops. Also looks like destructuring will not be allowed.
using res = getResource();
const { x, y } = res; // ok
using { x, y } = getResource(); // error
Also seems like there would be use cases where you don't even want to assign the result, you just want the destructor to run at the end of the scope. Something like:
using someMutex.lock();
With the assignment syntax it's not obvious to me whether that's allowed.
It’s intentional. The grammar production in the proposal is parametrized as ~Pattern (a parameter introduced in the proposal), which means it does not allow destructuring.
What would that even mean? The binding in a `for-in` can only ever hold a string, which can't be disposable. With a `for` or `for-of` the binding can hold an object, which is something you could dispose of.
Right, given that await x is an expression which could already resolve to an object with a dispose function, it seems like ‘using await x’ would just work, without needing special ‘await using’ shenanigans.
Reminds me of Resources from Scala. I keep seeing features being carried over from language to language--I wonder if we'll eventually reach convergence where all languages, perhaps within several broad groups like OOP vs FP, look the same with minor syntactical differences.
You're describing entropy. Properties and values distribute in the space until we reach equilibrium. Making all languages mostly the same. Unfortunately this doesn't mean we've found the ultimate programming language. We simply stopped innovating, believing there's nothing to innovate about (while actually we're in a local maximum).
In the 2000s, all the popular languages had C-like syntax and many had similar semantics. But now we're in a world with more diversity (Rust, Swift), so I think things are less convergent than they used to be.
C# used to require blocks, but as more and more things needed to be disposable you'd often find code bases winding up somewhat unreadable "waterfalls" of blocks, for various reasons. Languages automatically track the scopes for a lot of things, especially in the cases of async/await functions and generator functions which build complex state machines. Automatically tracking the scopes of using statements isn't that much different than tracking let/const through all of a function. Some languages make that explicit, too. (The most common LET in LISP requires explicit "blocks", as an example.)
I thought the same at first, but forcing one additional scope level / layer of indentation for each disposable resource ends up looking a lot like "callback hell", with deeply nested code. I've seen that in Python many times. I like the thoughtfulness of Typescript, not going for the naive solution but trying to see how it will scale.
I just want TS to have some kind of RTTI. It would be really cool if you could generate a list of keys on a type or object using `keyof`. It would make typescript significantly better for backend development so that you could automate input validation rather than having to write some kind of swagger annotation on every single field. Would make generics a lot more useful, too.
I know the reasons that they don't do this, but I think without that kind of metaprogramming, typescript and node remain poor choices for backend development
Doesn't work on types, only instantiated objects with values for those keys:
class Foo {
foo: number | undefined;
bar: string | undefined;
};
console.log(Object.keys(Foo)); // prints []
console.log(Object.keys(new Foo)); // prints []
I have always used class-validator for this. The custom validation stuff always comes up in backend dev eventually so starting with it has always been the right call for me. This includes small toy projects and entire SaaS backends.
Yeah, we use NestJS decorators as well, but there are limitations, since the decorators run at runtime.
For example, we wanted to create a Query DTO for our API. It would be a generic class that allowed you to specify the fields you wanted to return in a query, like:
{
include: ["Field", "Foo", "Bar"]
}
You can construct this in typescript like:
class Foo {
bar: number | undefined;
foo: string | undefined;
};
class Query<T> {
includes: Array<keyof T> | undefined;
};
And use the query like:
let q = new Query<Foo>;
q.includes = ["foo", "baz"]; // fail: "baz" isn't in `keyof Foo`;
But since types are erased at runtime, and typescript classes don't work like ES6 classes (which would actually make a hack to support this kind of runtime validation available), you can't validate through an enum on the field, like:
class Query<T> {
@ApiProperty({
enum: keyof T // meaningless at runtime, won't compile
});
includes: Array<keyof T> | undefined;
}
And since Object.keys doesn't work, you can't validate the keys for the object at runtime unless you ditch the generic and just write out the same boilerplate code over and over again with different values for every type you want to be able to query.
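Since `keyof T` is erased, the usual workaround is to keep the key list as a runtime value and derive the type from it rather than the other way around (in TS you'd write `const fooKeys = ["foo", "bar"] as const;` and use `typeof fooKeys[number]`, so the type and the runtime check can't drift). A plain sketch of the validation half, with hypothetical names:

```javascript
// The single source of truth lives at runtime; the static type is derived
// from it in TS, so this array is all the boilerplate per queryable type.
const fooKeys = ["foo", "bar"];

function validateIncludes(includes, allowedKeys) {
  const bad = includes.filter(k => !allowedKeys.includes(k));
  if (bad.length) throw new Error(`unknown keys: ${bad.join(", ")}`);
  return includes;
}

console.log(validateIncludes(["foo"], fooKeys)); // → [ 'foo' ]
try {
  validateIncludes(["foo", "baz"], fooKeys);
} catch (e) {
  console.log(e.message); // → unknown keys: baz
}
```

It's still per-type boilerplate, just one array instead of a whole duplicated class, which is the best you can do without the compile-time `keyof` expansion the comment asks for.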
Typescript was obviously designed to run in a browser, where it originates data, and not on the backend where it receives data, since input validation is such a chore on the backend.
I don't even want a runtime call for `keyof T` like `typeof`; I just want typescript to compile `keyof T` into an expanded array of strings. I would even settle for a C/C++ style preprocessor macro support. There have been times when I wish I could just log the filename of a function in an error using a preprocessor like in C/C++:
An object called resource, consisting of an anonymous function that logs Horray to the console. This is all fine and I am used to that.
The interesting part that stuck with me is "[Symbol.dispose]:". Why is it in an array? What does this do exactly? It marks the anonymous function as disposable at the end of the code block, correct?
So if we know Symbol.dispose in advance, can we simplify the code? Why did they make it so unwieldy, especially since it is new functionality not requiring backwards-compatibility hacks?
> So if we know Symbol.dispose in advance we can simplify the code?
Symbol.dispose is Symbol.dispose. It's a singleton object and that's the sole reference to it.
> Why did they make it so unwieldy especially since it is a new functionality not requiring backwards compatibility hacks?
1. Bare names in object keys are strings, so there needs to be an "escaping" syntax for non-string keys, that is what the brackets have done since 2015, in order to support the addition of...
2. Non-conflicting keys, which are what symbols are: symbols are guaranteed not to conflict with any string, and even to be unique unless they're created using `Symbol.for` (which accesses and stores into a global registry). This allows adding protocol methods to all objects without the risk of breaking any existing code (by triggering new and unexpected behaviour).
> So if we know Symbol.dispose in advance we can simplify the code?
Symbol.dispose is not a string, but a symbol. Its value cannot be written out in the code, only referred to indirectly (that is basically what symbols are for), nor is there special syntax for properties with symbol keys like there is for properties with string keys, because that would be a nonsensical proposition due to the aforementioned fact. Thus, you cannot just write the property name directly, you have to use the computed property name syntax: [Symbol.dispose].
I wonder how this works with destructuring. The last example extracts `connection` from the returned object:
await using { connection } = getConnection();
If you are used to Rust semantics, you'd expect that the whole thing is disposed, apart from still living objects. Reading this line I am immediately afraid that the whole object is disposed immediately (or right on the next line), the callback is called and your `connection` a couple of lines later will be closed already.
This shit pisses me off. This goes against the language design anti-goals. And there's way more stuff from other languages that is far more valuable than `using`.
They didn't steal it completely since it doesn't happen automatically. You can still forget to clean up and leak files or threads or what have you.
I continue to be disappointed that only Rust and C++ have automatic destruction. People already regularly forget to use Java's try-with-resources, or Python's with, or Go's defer.
Isn't the point of computers that they can remember stuff for us? Everyone agrees automatic cleanup is the way to go for memory but not any other kind of resource?
The issue is that RAII doesn't work with a (non-refcounting) GC: when the scope ends, you've no idea how many references there are and which of them are live.
With static typing you could opt into affine or linear types and those could get special cased by the compiler (to ensure a single reference is possible, and trigger the drop in case of affine types), but first you do need static typing (that's not javascript), and then you need to update the entire language to correctly handle the new affine or linear behaviour.
And that can lead to a lot of weird shit with languages not designed for that (especially linear typing).
Any other good articles on this, other than the proposal spec?
1. Is dispose run when the object is Garbage, or when the nearest scope exits?
2. Is `using` const or let?
3. Is dispose looked up on declaration or before call?
4. Are chained resources disposed, or only the last?
5. What happens if the using resource is not disposable?
4. I don't know what "chained resource" means. If you mean multiple declarations within the same scope, they all get disposed, in LIFO order (even if disposing one of them throws).
5. An error when the `using` statement runs. (Unless the RHS itself is null, rather than an object without a `Symbol.dispose` method; this lets you have conditional declarations.)
5 is an interesting choice. What is the downside to allowing `using` on non-disposable values, treating it like a no-op disposer? Otherwise a function that takes a maybe-disposable object parameter has to check if the symbol is defined, or the caller has to wrap the non-disposable object.
4: I mean `using x = a(b(), c())`. If all three functions return a disposable value, will all three be disposed, or only the value returned by `a` and assigned to `x`?
Will await using actually perform the await at release, i.e. at the end of the scope? Could add an unexpected event loop tick around a curly brace (so if you schedule something after the await but before the end of the scope, it will look like the scheduled thing will always happen after dispose, but it could happen before.)
Nice to have, but normally you wrap file / database / whatever access in a function or a lib that handles errors, file handle closes, etc.
Currently I don't see many use cases for me. But let's see how (crazily?) it will be used.
I also want to point out that instead of fixing the `with` operator[0], which does the same thing (scope management), they introduced a Symbol. Which is fine, for what it is; however it does limit implementations a little bit: using something like a Proxy is a bit ambiguous, for example.
Yeah, there's a number of cases that use the Symbol pattern. They're all relatively new, but that's at least in part due to Symbols themselves being relatively new.
Symbol.iterator comes to mind as a decent example.
The problem with structural/duck typing is that words can mean several things. For example `.run`ning your duck is probably very different from `.run`ning your Thread. I think the Symbol is a way to signal "this is the dispose function whose contract is defined in the JS `using` spec and not some random function that happens to be named that way".
Maybe I am dense, but in the examples given, the code this feature is meant to replace looks much more straightforward and easy to reason about. I get the use case here: automatically manage resources that need to be opened/closed, like files etc., but the syntax is way too convoluted IMO.
It's a new lexical binding form parallel to const and let, and a requirement that any value so bound have a property with a specified name. Where do you see the convolution?
It feels like this sort of syntax convention could be used to bolt on all sorts of interesting features to the language. Like, at this point, this is basically a sort of compiler hack to add features rather than really do any typechecking.
Stage 3 is "implement it and provide feedback based on real implementations" so in some respect Typescript is adding it at the same time as browsers (and is one of the parallel implementations expected to provide feedback ahead of Stage 4).
By coming up with even more special-case syntaxes to replace "const"? e.g. you want to have something like "foo x = something.else" to replicate the syntax convention of "using x = ..."? I'm not sure I understand what you're referring to.
How does this deal with escape analysis etc? I can't see how to implement this without vm changes or extreme code transforms which would be antithetical to the spirit of typescript.
Yeah, I figured as much after taking the time to read the TC doc. Seems like sugar that doesn't really add much. I was excited when I first saw the headline.
While it seems fine and useful, there’s still an aftertaste of c++ness. That’s exactly how `A a;` becomes a statement that does everything and your cat without you noticing.
I wish ts/js hackers would just add proper threads (or green threads) with shared-memory parallelism to the language instead of adding these gimmicky features.
Why is the `using` keyword necessary, isn't the existence of a `[Symbol.dispose]` property enough? I.e. always call `[Symbol.dispose]()` when exiting the scope. Maybe have a keyword to not dispose the resource.
Seems the current design makes it very easy to forget adding the `using` keyword, leaving resources dangling.
It pushes the extra code to the function. If you're consuming a library, that's code you never have to write. Even if you are writing the code, it moves the complexity out from your business logic so it can focus on the important things rather than cluttering it up with tidying up resources.
From what I can tell from that documentation, you do still have to remember to call `client.release()`. There's even a big warning saying "You must release a client when you are finished with it."
Presumably you would want to wrap that in a try/finally so that you don't accidentally not release the client if there is an exception.
If you can't understand why people would want to use typescript instead of javascript then you're not trying very hard. And a lot of people are in situations where they need to use one of those.