Slowly but surely, promises have spread throughout the JavaScript ecosystem, standardized by ES 2015 and embraced by the web platform. But the world of asynchronous programming contains more patterns than the simple single-valued async function call that promises represent. What about things like streams, observables, async iterators—or even just cancelable promises? How do they fit, both in the conceptual landscape and in your day-to-day programming?
For the last year, I've been working to bring an implementation of I/O streams to the browser. Meanwhile, designs for a cancelable promise type (sometimes called "tasks") are starting to form, driven by the needs of web platform APIs. And TC39 has several proposals floating around for more general asynchronous iteration. We'll learn about these efforts and more, as I guide you through the frontiers of popular libraries, language design, and web standards.
12. TAMING COMPLEXITY VIA OPINIONS
▪ Promises are eager, not lazy
▪ Continuables a.k.a. “thunks” are a lazy alternative
▪ Instead, we use functions returning promises for laziness
▪ Promises enforce async-flattening
▪ No promises for promises
▪ Troublesome for type systems, but convenient for developers
▪ Promises bake in exception semantics
▪ Could delegate that to another type, e.g. Either<value, error>
▪ But promise semantics integrate well with JS (see async/await)
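These opinions are easy to see in a few lines of code. A minimal sketch (variable names are illustrative):

```javascript
// Promises are eager: the executor runs immediately.
const eager = new Promise(resolve => {
  console.log("side effect happens now, before any .then()");
  resolve(1);
});

// For laziness, wrap the promise creation in a function:
const lazy = () => new Promise(resolve => resolve(2));
// Nothing happens until you call lazy().

// Promises enforce async-flattening: resolving with a promise adopts
// its state, so you never end up with a promise for a promise.
const inner = Promise.resolve(3);
Promise.resolve(inner).then(v => {
  console.log(v); // 3 — the plain value, not a promise
});
```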
13. RESULT, NOT ACTION
Promises represent the eventual result
of an async action; they are not a
representation of the action itself
14. PROMISES ARE MULTI-CONSUMER
// In one part of your code
awesomeFonts.ready.then(() => {
  revealAwesomeContent();
  scrollToAwesomeContent();
});

// In another part of your code
awesomeFonts.ready.then(() => {
  preloadMoreFonts();
  loadAds();
});
15. CANCELABLE PROMISES (“TASKS”)
▪ Showing up first in the Fetch API
▪ A promise subclass
▪ Consumers can request cancelation
▪ Multi-consumer branching still allowed; ref-counting tracks cancelation requests
16. CANCELABLE PROMISES IN ACTION
const fp = fetch(url);
fp.cancel(); // cancel the fetch

const fp2 = fetch(url);
const jp = fp2.then(res => res.json());
jp.cancel(); // cancel either the fetch or the JSON body parsing

const fp3 = fetch(url);
const jp2 = fp3.then(res => res.json());
const hp = fp3.then(res => res.headers);
jp2.cancel(); // nothing happens yet
hp.cancel(); // *now* the fetch could be canceled
17. CANCELABLE PROMISES IN ACTION
// Every promise gets finally
anyPromise.finally(() => {
  doSomeCleanup();
  console.log("all done, one way or another!");
});

// Canceled promises *only* get finally triggered
canceled
  .then(() => console.log("never happens"))
  .catch(e => console.log("never happens"))
  .finally(() => console.log("always happens"));
18. CANCELABLE PROMISES IN ACTION
// All together now:
startSpinner();
const p = fetch(url)
  .then(r => r.json())
  .then(data => fetch(data.otherUrl))
  .then(r => r.text())
  .then(text => updateUI(text))
  .catch(err => showUIError())
  .finally(stopSpinner);
cancelButton.onclick = () => p.cancel();
19. DESIGN DECISIONS FOR TASKS
▪ Ref-counting, instead of disallowing branching
▪ Cancelation (= third state), instead of abortion (= rejected promise)
▪ Subclass, instead of baking it in to the base Promise class
▪ Evolve promises, instead of creating something incompatible
23. ITERATORS ARE SINGLE-CONSUMER
▪ Once you next() an iterator, that value is gone forever
▪ If you want a reusable source of iterators, have a function that returns fresh iterators
▪ Iterables have [Symbol.iterator]() methods that do exactly that
▪ (… except sometimes they don’t)
▪ Is the iterator or the iterable fundamental?
▪ Is the sequence repeatable?
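The iterable-as-iterator-factory idea from the bullets above can be sketched directly (names are illustrative):

```javascript
// An iterable whose [Symbol.iterator]() hands out a fresh iterator
// each time, making the sequence repeatable:
const counter = {
  [Symbol.iterator]() {
    let i = 0;
    return {
      next() {
        return i < 3
          ? { value: i++, done: false }
          : { value: undefined, done: true };
      }
    };
  }
};

console.log([...counter]); // [0, 1, 2]
console.log([...counter]); // [0, 1, 2] again — a fresh iterator each time

// A bare iterator, by contrast, is single-consumer:
const it = counter[Symbol.iterator]();
it.next(); // { value: 0, done: false } — that 0 is now gone forever
```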
24. WHERE DO WE ADD METHODS?
▪ myArray.map(…).filter(…).reduce(…) is nice
▪ Can we have that for iterators? Iterables?
▪ Iterators conventionally share a common prototype (but not always)
▪ Iterables do not—could we retrofit one? Probably not…
▪ If we add them to iterators, is it OK that mapping “consumes” the sequence?
▪ If we add them to iterables, how do we choose between keys/entries/values?
25. REVERSE ITERATORS?
▪ You can’t in general iterate backward through an iterator
▪ The iterator may be infinite, or generated from a generator function, …
▪ You can reduce, but not reduceRight
▪ https://github.com/leebyron/ecmascript-reverse-iterable
33. THE ASYNC ITERATOR DESIGN
▪ The natural composition of sync iterators and promises; fits well with JS
▪ async for-of
▪ async generators
▪ Single-consumer async iterators, plus async iterables = async iterator factories (just like for sync)
▪ Consumer-controlled rate of data flow
▪ Request queue of next() calls
▪ Automatic backpressure
▪ Implicit buffering allows late subscription
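This design can be sketched with the syntax that eventually standardized: the proposal's "async for-of" shipped as `for await…of`, alongside async generators (names below are illustrative):

```javascript
// An async iterator is just a sync iterator whose next() returns a
// promise for { value, done }. Async generators produce exactly that:
async function* ticks() {
  for (let i = 0; i < 3; i++) {
    await new Promise(r => setTimeout(r, 10)); // simulate async work
    yield i;
  }
}

(async () => {
  // The consumer pulls: each iteration awaits the next() promise, so
  // data flows at the consumer's rate — automatic backpressure.
  for await (const v of ticks()) {
    console.log(v); // 0, then 1, then 2
  }
})();
```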
38. THE READABLE STREAM DESIGN
▪ Focused on wrapping I/O sources into a stream interface
▪ Specifically designed to be easy to construct around raw APIs like kernel read(2)
▪ A variant, ReadableByteStream, with support for zero-copy reads from kernel memory into an ArrayBuffer
▪ Exclusive reader construct, for off-main-thread piping
▪ Integrate with writable streams and transform streams to represent the rest of the I/O ecosystem
▪ Readable streams are to async iterators as arrays are to sync iterators
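The reader-based interface can be sketched with the standard ReadableStream constructor (available in modern browsers and recent Node; the chunks here are illustrative):

```javascript
// Wrap a source with the standard ReadableStream constructor:
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue("hello");
    controller.enqueue("world");
    controller.close();
  }
});

// Consume via the exclusive reader: read() returns a promise for
// { value, done }, just like an async iterator's next().
(async () => {
  const reader = stream.getReader();
  let result;
  while (!(result = await reader.read()).done) {
    console.log(result.value); // "hello", then "world"
  }
})();
```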
44. THE OBSERVABLE DESIGN
▪ An observable is basically a function taking { next, return, throw }
▪ The contract is that the function calls next zero or more times, then one of either return or throw
▪ You wrap this function in an object so that it can have methods like map/filter/etc.
▪ Observables are to async iterators as continuables are to promises
▪ You can unsubscribe, which will trigger a specific action
▪ Push instead of pull; thus not suited for I/O
▪ No backpressure
▪ No buffering
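The wrapping described above can be sketched in a few lines; MiniObservable is a hypothetical toy, not a real library, and its method names follow the slide's { next, return, throw } convention:

```javascript
// An observable wraps a subscriber function so it can carry methods
// like map/filter/etc.:
class MiniObservable {
  constructor(subscriber) {
    this._subscriber = subscriber;
  }
  subscribe(observer) {
    return this._subscriber(observer);
  }
  map(fn) {
    return new MiniObservable(observer =>
      this.subscribe({
        next: v => observer.next(fn(v)),
        return: v => observer.return(v),
        throw: e => observer.throw(e)
      })
    );
  }
}

const numbers = new MiniObservable(observer => {
  [1, 2, 3].forEach(n => observer.next(n)); // push, at the producer's rate
  observer.return();
});

numbers.map(n => n * 2).subscribe({
  next: v => console.log(v), // 2, 4, 6
  return: () => console.log("done"),
  throw: e => console.error(e)
});
```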
45. OBSERVABLE DESIGN CONFUSIONS
▪ Has a return value, but unclear how to propagate that when doing map/filter/etc.
▪ Tacks on cancelation functionality, but unclear how it integrates with return/throw, or really anything
▪ Sometimes is async, but sometimes is sync (for “performance”)
46. PULL TO PUSH IS EASY
try {
  for await (const v of ai) { // the proposed "async for-of"
    observer.next(v);
  }
  observer.return();
} catch (e) {
  observer.throw(e);
}
47. PUSH TO PULL IS HARD
Too much code for a slide, but:
▪ Buffer each value, return, or throw from the pusher
▪ Record each next() call from the puller
▪ Correlate them with each other:
▪ If there is a next() call outstanding and a new value/return/throw comes in, use it to fulfill/reject that next() promise
▪ If there are extra value/return/throws in the buffer and a new next() call comes in, immediately fulfill/reject and shift off the buffer
▪ If both are empty, wait for something to disturb equilibrium in either direction
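The correlation logic above can be sketched as follows; pushToPull is a hypothetical helper, not a standard API:

```javascript
// Turn a push-style producer into a pull-style async iterator by
// correlating pushed items with outstanding next() calls.
function pushToPull() {
  const buffer = [];   // items pushed before anyone asked
  const waiting = [];  // { resolve, reject } for outstanding next() calls

  function push(item) { // item: { value, done } or { error }
    if (waiting.length > 0) {
      const { resolve, reject } = waiting.shift();
      item.error ? reject(item.error) : resolve(item);
    } else {
      buffer.push(item);
    }
  }

  return {
    // Producer side, shaped like an observer:
    next(value) { push({ value, done: false }); },
    return() { push({ value: undefined, done: true }); },
    throw(error) { push({ error }); },

    // Consumer side, shaped like an async iterator:
    iterator: {
      next() {
        if (buffer.length > 0) {
          const item = buffer.shift();
          return item.error ? Promise.reject(item.error) : Promise.resolve(item);
        }
        // Equilibrium: wait for the producer to push something.
        return new Promise((resolve, reject) => waiting.push({ resolve, reject }));
      },
      [Symbol.asyncIterator]() { return this; }
    }
  };
}

const q = pushToPull();
q.next(1); q.next(2); q.return();            // producer pushes eagerly
q.iterator.next().then(r => console.log(r)); // { value: 1, done: false }
```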
50. BEHAVIOR DESIGN SPACE
▪ Continuous, not discrete (in theory)
▪ Polling, not pulling or pushing
▪ The sequence is not the interesting part, unlike other async plurals
▪ Operators like map/filter/etc. are thus not applicable
▪ Maybe plus, minus, other arithmetic?
▪ min(…behaviors), max(…behaviors), …?
▪ What does a uniform API for these even look like?
▪ Could such an API apply to discrete value changes as well?
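One way to explore this space is to model a behavior as a zero-argument function you poll; the combinators below are entirely hypothetical, sketching what a uniform API might look like:

```javascript
// A behavior is just something you can poll for its current value.
const behavior = fn => fn;

// Hypothetical combinators over behaviors:
const min = (...behaviors) => () => Math.min(...behaviors.map(b => b()));
const max = (...behaviors) => () => Math.max(...behaviors.map(b => b()));
const plus = (a, b) => () => a() + b();

let x = 3, y = 7;
const bx = behavior(() => x);
const by = behavior(() => y);

const smallest = min(bx, by);
console.log(smallest()); // 3 — polled now
x = 10;
console.log(smallest()); // 7 — same behavior, new current value
```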
54. THINGS TO THINK ABOUT
▪ Representing the action vs. representing the result
▪ Lazy vs. eager
▪ Push vs. pull vs. poll
▪ Mutable by consumers vs. immutable like a value
▪ Allow late subscription or not
▪ Multi-consumer vs. single-consumer
▪ Automatic backpressure vs. manual pause/resume
▪ Micro-performance vs. macro-performance
58. THIS IS THE
ASYNC FRONTIER,
AND YOU ARE THE
PIONEERS
Thank you!
@domenic
Editor's Notes
I’m here to talk to you about async frontiers. Now why would I be qualified to talk about something like that?
I wrote the spec for ES6 promises, and Promises/A+ before them
I’ve spent the last couple years working to bring I/O streams to the browser, from Node.js
I’m on TC39, the committee responsible for evolving the JavaScript language
And my day job is as an engineer at Google, working on all this stuff
One common question asked about promises is “why standardize them into the language when they could just be a library?” It’s a very good question, that gets to the heart of what standards are about. And the answer is very simple: we standardize promises for the same reasons we standardize HTTP, or HTML, or JSON, or anything else---to give us a shared protocol that everyone can speak, and that future work---both in the browser and in your code---can build on as a basis.
This protocol has spread throughout the platform: we now have a unifying way for the web platform to communicate asynchronous results. And your code can buy into this rich ecosystem as well, by using promises.
But promises are not the only protocol for the platform, of course. Another common protocol is the iterator protocol, which we’ve seen a bit in the preceding talk. This protocol is important for things like JS’s built-in maps and sets, but also for more exotic things on the platform like collections of headers, cookies, query string fragments, and so on. Protocols are everywhere.
From here, having introduced promises and iterators, the standard spiel is to show you this diagram, and declare that I have come to tell you about the amazing unifying abstraction that will save you all from the fires of async-plural hell. Maybe I would even promise you that this magical solution will be standardized in ES2016, and you should all start writing code with it as soon as possible since otherwise you’ll be left behind as all the cool kids rush off into the rapture.
No.
This chart is oversimplified to the extreme. It is the product of lazy thinking. And filling in that fourth box won’t solve all your problems.
What’s wrong with that chart isn’t the headings---async/sync, singular/plural. What’s wrong with the chart is pretending that we have only one answer for each of them. Even singular, synchronous values come in different varieties:
In languages like C++, you have the distinction of pointer-like vs. object-like;
In JavaScript, we have primitives vs. objects vs. some future “value types” proposal;
We can further distinguish how our singular, synchronous values handle errors: do we use bubbling exceptions, or some monad-like Either type, etc.
So instead of thinking about your problem space as divided up into four neat little boxes, with a single paradigm for each …
… I want you to think about the different ways in which your problems can become interesting, and think about these as two of them. With this perspective, it should become clear that each of these lends itself to more and more interesting possibilities for solutions---iterators and promises are not the only game in town. And when you combine the two, then things get really interesting.
And that’s what today’s talk is really about. I want you to leave this with an appreciation for the rich world of possibilities out there, and the efforts being made to push the boundaries of what’s possible today to explore the frontier of where we’ll be tomorrow. I want you to understand the subtleties involved, the tradeoffs, and all of the other interesting things that make us the kind of software engineers who gather together on a Friday for a conference about code.
Let’s begin by dipping back in to the world of single, async values. We’ve seen a lot about these already today, talking about promises. But let’s try to come at this from a fresh perspective, not taking promises as they are, but instead considering this design space as a whole. What choices do promises make, within the design space of single, async values?
We know that any movement away from the already-complicated singular/sync quadrant will add complexity. And one of the ways to tame that complexity is to make opinionated decisions about how things should work. Here I’ve given an overview of the opinions that went into the design of promises---the opinions that helped us narrow down one particular primitive in the async/singular space.
(Read.)
There’s one more design decision I’ve left off this list, and it’s perhaps the most interesting.
In particular, what this means is that you treat them much like you treat a singular synchronous value---you can pass it around, give it to multiple consumers, and they can all observe it and use it and do whatever they want.
Of course, none of these consumers can interfere with the value. And there’s no semantic where, if one consumer looks at the value, others can’t. Thinking of promises as analogous to synchronous values, that’d be crazy. So you can see why we designed promises so they can be used this way. But it turns out that this isn’t always the most useful approach. Sometimes you want something that’s kind of in between---it should represent the value, for most cases, but also give you a representation of the action that produced that value. And in particular, one important thing you could do to that action would be to cancel it.
Introducing cancelable promises. (read)
On this last point, in particular we didn’t split into two objects, one representing the action and one representing the value.
We need them sooner rather than later, and they should be arriving shortly. So although this is part of the frontier, we hope to make it settled territory pretty soon.
---
Enough about promises, and everything adjacent to them in the design space. Let’s take a minute…
… and briefly talk about the third quadrant: plural, synchronous sequences. Again, I don’t want to rehash how iterators work for you: I want us to step back and look at the space with fresh eyes, to see what are all the design choices that’ve been made.
The ES spec contains both interfaces: many iterators are simply read-only, with a next() method to get their next value. But those produced by generator functions allow consumers to impact the sequence, including to terminate it with either a return value or an exception. Adding this kind of flexibility to generator functions was specifically done to allow the more … creative use cases, like the spawn function we saw earlier. But it means that consumers can interfere with the sequence, if they want. That’s probably OK though, because…
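This consumer interference can be shown concretely (the generator below is illustrative):

```javascript
// Generator objects let the consumer interfere with the sequence via
// return() and throw(), unlike a plain read-only iterator.
function* threeValues() {
  try {
    yield 1;
    yield 2;
    yield 3;
  } finally {
    console.log("cleaned up"); // runs even if the consumer bails early
  }
}

const gen = threeValues();
console.log(gen.next());     // { value: 1, done: false }
console.log(gen.return(99)); // { value: 99, done: true } — consumer terminated it
console.log(gen.next());     // { value: undefined, done: true }
```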
On the one hand, the iterator is clearly where all the action happens. On the other hand, we talk about maps and sets and arrays more often than we talk about MapIterators or SetIterators or ArrayIterators.
There’s a proposal for a second protocol, reverse iterators, that would allow this. Is such a protocol worth building in to the language?
OK, so we’ve already seen how quadrants 2 and 3 are not as simple as they might first appear. Let’s take our newfound perspective and apply it to the tougher problem, of quadrant 4.
The problem with this idea is that if you have an iterator, you can call next() on it any time---in this example, we call it four times in a row. But, how does the iterator know what it should put for done? It doesn’t even know how long the sequence is, since it still has async work to do!
The async iterator design merges the two concepts in a manner that works out a bit better: instead of next() returning { value, done }, now next() returns a promise for { value, done }. This allows the iterator to asynchronously decide not only the value, but also whether the sequence has ended or not.
Now, it’s important to remember that the async iterator design is speculative. It’s just a proposal. But one thing that’s already a standard, and is already shipping in Chrome in fact, is I/O streams. The interface is a little bit different, for reasons we’ll see:
But you can notice the main similarity: each chunk of data from the stream is represented by a promise for a { value, done } pair. There’s this extra reader thing that’s going on, and that’s actually pretty important. But if async iterators take off as a proposal, then we’ll definitely add the extra stuff to readable streams such that you can treat them just like async iterators:
… i.e., using async for-of. Which is great, when you want to consume the stream directly. But really, most of the time when doing I/O, what you actually want to do is set up a chain of transforms, and pipe things to some eventual destination:
… and that’s what the readable stream API is designed around, most of the time. That reader thing I mentioned is specifically designed so that you can pipe a readable stream to a writable stream off the main thread of your app, when you use this kind of code. So you can see we’re actually operating at a somewhat different level here.
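Such a pipe chain can be sketched with the standard pipeThrough/pipeTo methods (the chunks and transform here are illustrative):

```javascript
// Set up a chain of transforms and pipe to an eventual destination:
const wordStream = new ReadableStream({
  start(controller) {
    controller.enqueue("hello");
    controller.close();
  }
});

const upperCase = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  }
});

const sink = new WritableStream({
  write(chunk) {
    console.log(chunk); // HELLO
  }
});

wordStream.pipeThrough(upperCase).pipeTo(sink);
```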
To recap some of the highlights of how we designed readable streams, …
Finally, I want to show you …
… OK. So enough about streams. Let’s take a step back.
So far we’ve looked at async iterators, which are what naturally emerges when you combine iterators and promises. What’s interesting about them is that, like sync iterators, they are “pull”: the consumer calls next(), and decides when it’s ready for more data. This gives you all those nice things like buffering and backpressure and so on, automatically.
But is there ever a case where “push” is appropriate? Where the consumer gets data pushed to them, as fast as the producer decides to give it?
Let’s look back to our scenarios where async plural might come in handy. Those first three are pretty clearly driven by the consumer: the consumer gets data from a stream when they are ready for it; the consumer asks for the next directory listing or sensor reading as appropriate. But what about “events in general”? By this I mean things like clicks, or mouse moves, and so on. It kind of makes sense to say that a better model for them is that the producer pushes the events. Instead of a consumer asking for a mouse click, they get pushed a mouse click when it happens. Normally we’d use an event emitter for this kind of thing, and we’ll talk about that later. But there’s another approach that’s being proposed, which is this construct called observables.
The code for using an observable looks like this, in its essence. An observable has a subscribe method, which takes this three-method object and returns you a subscription, which you can later use to unsubscribe. Pretty simple:
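The usage shape described here can be sketched as follows; makeClicks and its emit hook are hypothetical stand-ins for a real event source:

```javascript
// subscribe takes a three-method observer and returns a subscription,
// which you can later use to unsubscribe.
function makeClicks() {
  const observers = new Set();
  return {
    subscribe(observer) {
      observers.add(observer);
      return { unsubscribe() { observers.delete(observer); } };
    },
    // Stand-in for real DOM events, so the sketch is self-contained:
    emit(event) { observers.forEach(o => o.next(event)); }
  };
}

const clicks = makeClicks();
const subscription = clicks.subscribe({
  next: event => console.log("got", event.type),
  return: () => console.log("stream ended"),
  throw: err => console.error("stream error", err)
});

clicks.emit({ type: "click" }); // logs: got click
subscription.unsubscribe();
clicks.emit({ type: "click" }); // nothing — we unsubscribed
```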
The next question I have is, given something like an async iterable, can we convert it to an observable? Well, sure: just drain the async iterable as fast as you can, and push the data elsewhere. The code basically looks like this.
It’s certainly doable, but it also raises a lot of hard questions about e.g. what if my buffer gets too full? Various observable frameworks have offered solutions to this, e.g. Dart has pause and resume methods that you have to call manually, RxJS has approximately 30 different methods that produce different variants of observables and things called “subjects” and so on. But in the end it’s just much harder to go in this direction.
So my overall thoughts on observables …
… are that it’s unclear whether they pull their own weight to contribute to the async-plural problem space. They have a lot of shortcomings and confusing design choices, whereas async iterators are simple and natural, and can easily be converted to a push model for when that’s appropriate. Performance is often claimed as a benefit, since after all they are just wrappers over functions. But demos with even the most high-frequency events, like mousemove, show that this is not borne out in practice. And in reality, most of your performance will be gained by respecting backpressure and otherwise integrating well with the I/O subsystem.
There’s a final category of async-plural that I want to quickly throw out there, for two reasons. First, it’s pretty different from the other stuff we’ve been seeing. And second, because it’s actually showing up a lot in the platform recently. We call it a “behavior,” adopting some FRP terminology.
The basic idea behind a behavior is that you’re measuring a single quantity, repeatedly. So instead of a logical sequence of values, where e.g. if you concatenated them all you’d get something meaningful, it’s the same conceptual value, changing over time.
We have here three examples from current or upcoming specs…
I think this is a really fascinating space. When I wrote my talk description, I wasn’t even aware of this as a pattern. It caught me by surprise to see these popping up everywhere, and IMO we as a community haven’t spent enough time on this. Can we fit them into some other abstraction like observables? Should we?
Well … yes, in the sense that this stuff is fun to think about, and will make you a more fulfilled human being. And no, in that at the end of the day, events will continue to work just fine, and you’ll get your job done even if someone makes a mistake and chooses observables over behaviors or whatever.
What I mean by this is that most of the time it’s really OK that the web platform doesn’t have a beautiful compositional paradigm for reactive programming, and instead we just fall back to events. Except sometimes it isn’t---sometimes we’re stuck without a way to stream data that doesn’t occupy all of our memory, so we work to standardize streams. Right now we don’t have a way to cancel our HTTP requests, so we need to finish standardizing tasks. But the rest of it isn’t that big a deal. As long as we have the streams API, I can do streaming HTTP uploads and downloads. I might have to use a while loop and some promises instead of an async for-of loop, but that’s OK.