Eventing System

What are we going to do as far as events?

UI/Input events, custom user events, low-level concurrency events.

How does our plan to support web workers potentially complicate this?

What’s the event architecture going to look like? Can we or should we use an existing library?

Some food for thought in no particular order:


Extending from my ideas in the web workers thread: the parts of the API that are delegated to workers can work with the user directly, in the UI thread. I think we can avoid sending UI event stuff to a worker and back. There could be a scene graph that lives in the UI thread and exists to manage UI-side events. The worker-side graph lives in at least one worker thread and exists for the purposes of matrix calculations, etc. The UI-side graph would obviously mirror the structure of the worker-side graph.

So, if a UI event happens on a leaf node, it can propagate up the UI-side graph (depending on how it's wired by the user; it shouldn't be exclusively predetermined like it is in Famous Engine), and only if the event handler tells a node to update its position (or size, align, etc.) will a message go to a worker. So, basically, the UI-side graph might exist merely to house event handler functions and other UI-side things. Those functions can call things like node.setPosition, which would cause messages to go to the corresponding parts of the worker graph.
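To make the idea concrete, here is a minimal sketch of a UI-side node that mirrors a worker-side node. Every name here (UINode, send) is hypothetical; `send` stands in for `worker.postMessage`:

```javascript
// Hypothetical UI-side node. Event handlers live entirely in the UI
// thread; only mutations cross the thread boundary, as messages to the
// corresponding worker-side node.
function UINode(id, send) {
  this.id = id;
  this.send = send; // stand-in for worker.postMessage
  this.handlers = {};
}

// Register a UI-side event handler.
UINode.prototype.on = function (event, handler) {
  (this.handlers[event] = this.handlers[event] || []).push(handler);
};

// Dispatch a UI event to the handlers; nothing leaves the UI thread.
UINode.prototype.emit = function (event, payload) {
  (this.handlers[event] || []).forEach(function (h) { h(payload); });
};

// A mutation is what triggers a message to the worker graph.
UINode.prototype.setPosition = function (x, y, z) {
  this.send({ id: this.id, method: 'setPosition', args: [x, y, z] });
};
```

In a real app `send` would be something like `worker.postMessage.bind(worker)`, and the worker would route the message to its node with the matching id.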

@oldschooljarvis I added Famous 0.3 events to your list (edited your post).

I really liked the API in 0.3.5. GenericSync was awesome. I really liked being able to bundle touch and mouse events in one place! We may be able to go a step further and package events for the accelerometer and other mobile-specific data. IMHO half of this framework is about events, the other half about performant animation; we need to get it right!


Sorry for the late reply. I also think most UI events can stay inside the UI layer (@trusktr & @steveblue, reply coming soon to your awesome ideas about the UI / worker separation, thanks!). Disclaimer: I haven't started working on my event model yet.

I guess there are two issues here:

  1. Eventing API
  2. Kinds of events

To focus on the latter: in our model, we just need to consider what kinds of events people need and how they intend to use them, to make sure we have all cases covered. In particular, I mean what needs to leave the UI thread, and latency. e.g.

  1. Swiping, etc.: it's enough to just call setPosition() on the UI thread.
  2. Parallax, or anything that depends on mouse position: probably enough to send just the mouse position to components.

Keeping events on the UI layer greatly simplifies things for us and for developers, assuming it all works.

Also, no one has mentioned HammerJS. I admit I haven't used it, but it seems to be the most popular library for cross-platform touch. It looks like it covers everything we need (including e.g. velocity), and I think if we just put a simple wrapper around it to integrate nicely with nodes, we can benefit from this mature and popular package.

I know everyone loved GenericSync… I didn’t use it much, was the main advantage the single API for click and touch? Does HammerJS cover that? From just the front page, it’s no problem to drag / flick with my desktop mouse, and I briefly saw something about emulating multi-touch too.

@steveblue accommodating the accelerometer etc. would be awesome! Would be one of those wow things on mobile.

Yes!!! :+1: +1 for saving time with 3rd party libs. I think that helps bring communities together. They can write a blog post about our use of it, and give us more exposure at some point, and vice versa. Let’s leverage the community.


Seems like it does, in a similar way. What I really liked about gesture handlers coupled with event handlers was the way you could intertwine events with pipe and subscribe. That was really cool for wiring up custom events. We might be able to couple Hammer with some custom event library to achieve something similar. Hammer seems to be purely like the syncs of oh-three, but doesn't provide an API like oh-three's EventHandler.

I wonder where Famous got their ideas for the pipe/subscribe model in oh-three.

Pipe and subscribe were likely based on Node.js streams, which are themselves specialized EventEmitters.

There are some pretty decent libraries around this stuff, like event-stream and, perhaps especially relevant, dom delegation stream.

Pipe was pretty similar to what Node did, but subscribe was a bit of sugar around that basic functionality. It essentially kept a reference to the upstream event emitter, so that you could do an unpipe on the child, and not the parent.

Streams are pretty much my favorite abstraction when it comes to decoupling the calculation of things from the rendering, and it's also something I'm happy that David's Samsara library is working towards.

For instance, once you have a working stream implementation, you can use things like workerstream to ship off calculations:

inputStream              // take the input,
  .pipe(workerStream)    // send it to a web worker,
  .pipe(renderStream);   // and render the results

That’s pretty much what I was doing with famous 0.3 with the whiteboard app I started building.


@AdrianRossouw I’m not so experienced with this stream approach yet. Can you describe what exactly happens when you run

inputStream              // take the input,
  .pipe(workerStream)    // send it to a web worker,
  .pipe(renderStream);   // and render the results

? For example, does something get piped to a worker (workerStream), then the worker gives back a result which then gets passed to renderStream? In the case that renderStream is associated with Famous 0.3, what would be happening behind the scenes?

Why not use the eventing system of 0.3? It was probably the most stable (and self-contained) part of the code. You could also use the eventing system of Samsara, which I'll likely publish to npm as a standalone module. It's similar to 0.3, except it has no support for pipe; it uses subscribe exclusively. Famous 0.3 EventEmitters with pipe have an API consistent with Node's, which is a nice feature.

I’m not sure what responsibilities you require of the UI thread. If the UI thread gets all the transforms/opacities of everything needing to be committed, and is solely responsible for committing them to the DOM (or GL, etc), then I’m not sure why you would need any eventing logic there. But perhaps there is more to the UI thread that you have in mind.

I’ve definitely considered adopting the patterns from that (if not using the actual source).

What would be the advantage of using subscribe exclusively?

True, no eventing needed there.

The eventing logic I imagine is for user interaction (user clicked a button, now what?), not so much for worker/rendering stuff, which obviously can only happen on the UI thread anyway. In fact, I plan for my prototype to use eventing only for that purpose (if there's eventing for things like rendering and workers, that stuff will be behind the scenes, because I'm aiming to make the API as easy as possible for even beginner programmers, yet flexible enough for advanced users).

For what it’s worth, with my efforts so far mainly focusing on parallelism, I’m only using events where absolutely required (the WebWorker API is literally built on the DOM Event model).

Beyond that, events aren’t required—at least not in a low-level sense. I think it’ll actually be pretty easy to decouple the engine entirely from a user-facing event model, and just let the user incorporate their favorite flavor as desired.


Sorry if I wasn’t more clear originally, and I might not have used the best names for things.

Node.js Streams are a very powerful and flexible abstraction built on top of how EventEmitters work. From a pretty high level, they are event emitters that follow a standard structure, all having a 'data' event. When you pipe one stream into another, you are telling one stream to take the data events of another as its input.

So when you have duplex or transform streams, which take input and generate output, you end up with something like this:

stream1            // sends data from stream1,
  .pipe(stream2)   // through stream2,
  .pipe(stream3);  // into stream3

stream1.write({ msg: 'whatever' }); // emits the data event through the chain
stream1.end();                      // ends the stream and unpipes it from all listeners

Now, how that could work here is something like:

userInputStream // ie: user clicks on something
  .pipe(transformAppState) // ie: button.clicked = true
  .pipe(transformSceneGraph) // apply transforms to the scene graph
  .pipe(calculateGeometry) // ie: box grows by 5% or whatever
  .pipe(renderToScreen); // displays using css3d or webgl or whatever

Or, another way to put it:

var doWorkStream = transformAppState; // bundle the intermediate stages
doWorkStream
  .pipe(transformSceneGraph)
  .pipe(calculateGeometry);

But because everything really only cares about the data events going in and the data events coming out, you could take most of that pipeline and have it running inside a web worker or a WebRTC data channel, or whatever:

// the original example
var workerStream = WorkerStream(/* init options */);

inputStream
  .pipe(workerStream)   // the whole doWork pipeline runs inside the worker
  .pipe(renderStream);

Some of the flexibility you gain from this is being able to generate not just JSON objects, but buffers/strings or binary data like typed arrays. Because you can pipe streams to multiple things, you could attach multiple renderers (i.e. css3d and webgl). You can also handle buffering of calculations (e.g. stacking things between redraws) or throttling them (e.g. to 60fps).

Anyway, that's what I was getting at with my previous response. It was a lot more dense once I actually unpacked it, and kind of off-topic for this thread.

edit: I forgot to mention that the WHATWG is busy working on a standard for streams that will most likely be implemented natively in the browser in the future: the WHATWG stream spec. I think they have diverged quite a bit from Node though, and I don't know of any standard implementation/shim yet.


The choice between pipe and subscribe is one of push versus pull. a.pipe(b) will send all events from a to b, whether or not b needs them. b.subscribe(a) means a will send events to b only if b is explicitly listening for them. This gives a performance advantage to subscribe. I also prefer the philosophy of it, which I wrote about here: gist.github.com/dmvaldman/f957dd9a8ed3f6edf35d
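The push/pull difference can be sketched with a toy emitter. This is a simplification for illustration, not Samsara's or 0.3's actual code, and all names are made up:

```javascript
// Toy emitter illustrating push (pipe) vs pull (subscribe).
function Emitter() {
  this.listeners = {};   // event name -> handler functions
  this.downstream = [];  // emitters we push everything to
  this.subscribers = []; // emitters that pull only what they listen for
  this.deliveries = 0;   // how many events have been delivered to us
}

Emitter.prototype.on = function (event, handler) {
  (this.listeners[event] = this.listeners[event] || []).push(handler);
};

Emitter.prototype.emit = function (event, payload) {
  (this.listeners[event] || []).forEach(function (h) { h(payload); });
  // push: forward every event, whether the destination wants it or not
  this.downstream.forEach(function (d) {
    d.deliveries += 1;
    d.emit(event, payload);
  });
  // pull: forward only the events the subscriber actually listens for
  this.subscribers.forEach(function (s) {
    if (s.listeners[event]) {
      s.deliveries += 1;
      s.emit(event, payload);
    }
  });
};

Emitter.prototype.pipe = function (dest) {
  this.downstream.push(dest);
  return dest;
};

Emitter.prototype.subscribe = function (source) {
  source.subscribers.push(this);
  return this;
};
```

With a source emitting both 'move' and 'resize', a piped emitter is handed both, while a subscriber listening only for 'move' is handed just that one; that skipped delivery is the performance advantage.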


I really like the idea of these FRP techniques, but I feel that in these examples the public API for the engine is too complicated out of the box for new programmers, whom I'd like to cater to. What I think might suit beginner programmers is to completely separate the engine implementation (whether it has FRP or not) from user space (whether that has FRP or not).

So, for example, the pipeline

userInputStream // ie: user clicks on something
  .pipe(transformAppState) // ie: button.clicked = true
  .pipe(transformSceneGraph) // apply transforms to the scene graph
  .pipe(calculateGeometry) // ie: box grows by 5% or whatever
  .pipe(renderToScreen); // displays using css3d or webgl or whatever

would become just

userInputStream // ie: user clicks on something
  .pipe(transformAppState) // ie: button.clicked = true
  .pipe(transformSceneGraph) // apply transforms to the scene graph

where the transformSceneGraph function manipulates state on a Node, which might look like this:

// ...
var node = new Node
// ...
function transformSceneGraph(data) {
  node.position.x = node.position.x + data.deltaX
  node.position.y = node.position.y + data.deltaY
}
and, assuming that in this example node.position.x and node.position.y are setters that cause the engine to modify the scene graph and re-render, the user doesn't have to worry about how rendering happens, or how workers are implemented in the engine. This is one of the goals in my prototype: to make things as easy as possible, to keep the barrier to entry as low as possible.

A new user, who doesn’t know anything about FRP, would at least be able to do something like

var scene = new Scene(document.body)
var node = new Node

var domEl = new DOMElement(node)
// set background color of domEl, border, etc

node.position.x = 10
node.position.y = 10

// make the thing move to a new position after 10 seconds.
setTimeout(function() {
  node.position.x = 100
  node.position.y = 100
}, 10000)

Behind the scenes, that would still use the engine's requestAnimationFrame mechanism, assuming that node.position.x/y are setters that trigger the appropriate calls (they could also be node.setPositionX/Y(...), depending on how we end up preferring our API to be).
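One hedged sketch of how such setters could work: each assignment marks the node dirty and schedules a single render. The `schedule` parameter is injected here so the sketch is testable; in a browser it would be requestAnimationFrame, and the Node shape is purely hypothetical:

```javascript
// Sketch: position components as setters that coalesce into one
// scheduled re-render. All names are made up for illustration.
function Node(schedule) {
  var self = this;
  var state = { x: 0, y: 0 };
  this.dirty = false;
  this.position = {};

  ['x', 'y'].forEach(function (axis) {
    Object.defineProperty(self.position, axis, {
      get: function () { return state[axis]; },
      set: function (value) {
        state[axis] = value;
        if (!self.dirty) {       // coalesce: one render per frame
          self.dirty = true;
          schedule(function () {
            self.dirty = false;
            self.render();       // engine would commit to DOM/GL here
          });
        }
      }
    });
  });
}

Node.prototype.render = function () {};
```

Setting both x and y in the same tick triggers only one render when the frame fires, which is the behavior the beginner-facing API above relies on.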

To me, a simple API like that is really important; forcing FRP on the public API, and making users determine how and when rendering and workers come into play, would substantially raise the barrier to entry.

Plus, based on how we implement our API, I can imagine that node.position.x and node.position.y could be streams themselves (in those examples the getter returns a stream, perhaps?) to satisfy more advanced programmers. Maybe the setter passes a value to the stream creator: if it detects a number it just sets the node state, but if it detects something else (like an array) it can start behaving like a stream (similar to passing an array to Highland.js). I'm just conjecturing, since I only have some basic theoretical knowledge of FRP. :slight_smile:

This article is really interesting, on the topic of Communicating Sequential Processes (CSP for short), an alternative (or perhaps complementary) approach to the asynchronous control flow seen in FRP techniques:

Taming the Asynchronous Beast with CSP in JavaScript

It has links to some other good articles too. I found my way to that article from this other interesting article: The Future of Drag and Drop APIs (thanks @mthart).
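For what it's worth, the core CSP primitive from that article can be sketched as a tiny channel. Real CSP libraries (e.g. js-csp) build this on generators; the callback form below just shows the rendezvous idea, and all names are made up:

```javascript
// Minimal CSP-style channel sketch: put() delivers values to pending
// takers, or buffers them; take() receives a buffered value, or waits.
function channel() {
  var buffer = [];  // values put before anyone asked for them
  var takers = [];  // callbacks waiting for a value
  return {
    put: function (value) {
      if (takers.length) takers.shift()(value); // hand off to a waiter
      else buffer.push(value);                  // otherwise buffer it
    },
    take: function (cb) {
      if (buffer.length) cb(buffer.shift());    // value already here
      else takers.push(cb);                     // otherwise wait
    }
  };
}
```

Two processes sharing such a channel never call each other directly; they only put and take, which is the decoupling CSP is after.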


FRP is also used by Meteor Tracker https://www.meteor.com/tracker
