How to decide what updates?

An “update” here refers to recalculating the data necessary to render something (e.g. position/rotation/scale changes and the resulting transform updates, whose data is sent to the renderer).

Famous 0.3.5 ran every update on every frame. This made for a very easy developer experience but was a bit crazy. Consider 50 nodes with 5 components each: that’s 250 updates per frame, and at 60 fps that’s 15,000 updates (each with its own calculation logic) per second. As the limitations of this became clear, a _dirty property was added to components to decide whether they needed to be updated.
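To make the tradeoff concrete, here’s a rough sketch of that _dirty-flag approach (illustrative names only, not the actual Famous source): every component is still visited each frame, but the expensive recalculation is skipped when nothing changed.

```javascript
// Rough sketch of a _dirty flag (illustrative, not the real Famous source).
// Every component is still visited each frame, but the expensive
// recalculation only runs when something marked the component dirty.

class TransformComponent {
  constructor() {
    this._dirty = false;
    this.recalcs = 0;    // counts how often the expensive path actually runs
    this.matrix = null;
  }
  setPosition(x, y, z) {
    this.pos = [x, y, z];
    this._dirty = true;  // mark for recalculation on the next frame
  }
  update() {             // called on EVERY component, EVERY frame
    if (!this._dirty) return;  // visited, but cheap
    this.recalcs++;
    this.matrix = this.pos;    // stand-in for a real transform rebuild
    this._dirty = false;
  }
}

const c = new TransformComponent();
c.setPosition(1, 2, 3);
c.update();  // recalculates once
c.update();  // skipped: still clean
```

The per-frame visit still happens for all 250 components; only the calculation itself is saved.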

In Famous 0.5+, you need to call this._node.requestUpdateOnNextTick(this._id);. This implies that components must specifically request an update, but from discussion in the channel on github it’s actually more like the older model, see e.g. @andrewreedy’s comments in famous/engine#464 - High CPU usage in Cordova iOS webview.

I know that @trusktr proposed that updates should be opt-out rather than opt-in, since it’s a more positive developer experience.

My own opinion, though, is that I prefer opt-in. In my mind, it’s worth picking a few things that have a big impact on performance and forcing patterns that promote high performance. Obviously nothing that makes it unpleasant to code or creates a barrier to entry (for those kinds of things, we can have a doc about “Performance Optimizations”), but I think it’s ok for a few critical things to be pushed. In famin, from within a component, it’s just this.requestUpdate().
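As a sketch of what opt-in looks like from the inside (all names here are illustrative, not famin’s actual internals): a component only asks the engine for a tick when one of its values actually changes, so an idle scene costs the engine nothing.

```javascript
// Sketch of the opt-in pattern (Engine, Position, requestUpdate are
// illustrative names, not famin's real API). A component explicitly
// schedules exactly one update when a value changes.

const Engine = {
  queue: new Set(),
  requestUpdate(component) { this.queue.add(component); },
  tick() {
    const pending = [...this.queue];
    this.queue.clear();
    pending.forEach(c => c.update());
  }
};

class Position {
  constructor() { this.x = 0; this.updateCount = 0; }
  set(x) {
    if (x === this.x) return;   // unchanged: no update requested at all
    this.x = x;
    Engine.requestUpdate(this); // opt in for a single tick
  }
  update() { this.updateCount++; } // recalculate render data here
}

const pos = new Position();
pos.set(100);  // schedules one update
pos.set(100);  // no-op: value unchanged
Engine.tick(); // runs exactly one update
Engine.tick(); // idle: nothing was requested
```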

Another approach could be immutable.js. If we constructed the node tree with immutable data, any data changed on a child would basically replace the entire branch from the point where it branched off. This allows really quick traversal of the tree, performing object comparison instead of using a _dirty flag. It also allows the user to just set new data without having to register or call anything. It would just work :slight_smile:
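For illustration, a minimal sketch of that idea with plain objects (not immutable.js itself): replacing instead of mutating rebuilds only the path to the root, so detecting whether a branch changed is a single reference comparison.

```javascript
// Minimal path-copying sketch (plain objects, not immutable.js itself).
// Changing a child rebuilds only the nodes on the path to the root;
// untouched branches keep their identity, so a reference comparison is
// enough to know whether a branch changed.

function setChild(node, index, child) {
  const children = node.children.slice(); // shallow copy of this level only
  children[index] = child;
  return { ...node, children };           // new node, shared siblings
}

const a = { value: 1, children: [] };
const b = { value: 2, children: [] };
const root = { value: 0, children: [a, b] };

const root2 = setChild(root, 0, { ...a, value: 10 });

const branchChanged = root2 !== root;                         // true
const siblingShared = root2.children[1] === root.children[1]; // true
```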


This is where I have been focusing a lot of time in design. I mentioned this in the old gitter.

Worse yet is 50 nodes x 5 components x N calculations with rendering in ~16ms (1000ms / 60 = 16.666…). The magic question is how many transactions can we fit into those 16ms on a moderate/poor performing device.

Managing the processing is key and performance has to be our main focus. Throttling might be the best way to handle these issues also.

In another thread someone mentioned giving the user the ability to see how things are performing. I think it’s imperative to expose the developer to tools that track these metrics as they develop, so they can make better decisions about how to apply animations and timing to their application and see where performance bottlenecks are happening.


@dieserikveelo, immutable.js looks pretty cool; it’s an interesting way of programming and I definitely see the advantage. However, I don’t believe it would be a good solution for the scene graph: data can potentially be changing 15,000 times a second, and all the extra copying is a lot of unnecessary overhead.

@talves, if only we had 16.67 ms :slight_smile: The general advice is to stay below 10ms. In famin at least, updating the DOM and redrawing the page, combined, take about 40% of the time the calculations took. And there is still other browser overhead. By the time requestAnimationFrame fires, you don’t get the full 16.67 ms either.

Yeah, we should definitely provide a list of resources on how to evaluate performance, and have some basic stuff built in (e.g. log at the debug level if we’ve missed a few consecutive frames).

Oh and of course if we figure out workers in a good way, we have more time.

@gadicc Exactly! The 16.67ms is the max time to process the whole animation frame, do calculations, etc. I realized this, and that is the point I was trying to make. Also, we need to realize there is not always a perfect split in time allocation. Hence the reason for this discussion.

What gets priority?

How does that priority get split?

Can it be managed by the core library?

Should the API expose some of the priority decisions to the developer?

IMO, telling people to opt in is just as much work as telling them to opt out. Why should we need to call requestUpdate if it can simply be implied (and documented)? Whichever one is the simpler API, that’s the one I’d prefer. :smiley:

Is there some scenario you imagine opt-out would be less performant?


I have to agree with this strongly. There should be a reactive action when a size change is requested on a node. If a developer does not want to size the node, don’t size the node. :wink: The core should be making these performance decisions based on the requirements.

That being said, I still think there should be priority control by the developer over what goes into an animation first, because they may not care whether a background scroll completes before or after a menu opens.

I’m on the fence about how this should be approached and propose the best of both worlds. It would be nice if the scene graph could be subdivided between workers, with a handler figuring out where a node needs to be messaged without developer input, but also to give the developer the option to prioritize animation tasks through an option on the node or component that forces the node to the top of a queue on every frame. That last part sounds kinda weird; I’m more in favor of the scene graph just being subdivided, with a cap. There’s always going to be a cap anyway, because browsers can only handle so many divs animating at once, even with calculations offloaded to workers.

Hmmm, I like that idea. Someone mentioned something about priority numbers, but that could get hard to manage.

I think, in general, there’s usually only 1 or 2 animations that are more important than other animations as far as user interaction is concerned (f.e. the user slides a menu open, or scrolls a view), and which animations those 1 or 2 are varies depending on the user action. The other case is foreground vs background. Of course, this is in terms of UI. There’s also games, which are completely different (mostly foreground vs background, but if your background enemies are freezing for your foreground friendlies, then that makes it too easy to snipe the enemies. :laughing:).

@Steveblue I am also on the fence, but believe it will be decided once we start performance testing and tuning the design.

It is a good idea for us to keep all of this in consideration, though. It should help us toward the correct design, but it cannot be decided 100% until we have the structure in place and start performance tests.


I think every scenario? Either update every component on every frame, or only update those that have changed. If “has changed” is supposed to be transparent to the developer, it means the engine has to figure out which values are “set” (vs. calculated), store a copy of them, and see which have changed, to decide whether or not to call an update.

Just the “opting out” part itself is less performant. It means every component has to be evaluated on every frame in order to opt out, instead of only the components that requested another update being run.
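To spell that out, here’s a sketch (hypothetical names) of what “transparent” opt-out implies: the engine snapshots every component’s settable values and diffs them each frame, so the scan cost is paid even when nothing changed.

```javascript
// Sketch of what "transparent" opt-out implies (names are illustrative):
// the engine must snapshot the settable values of every component and
// diff them each frame, even when nothing has changed.

function makeDirtyChecker(components) {
  const snapshots = components.map(c => ({ ...c.values }));
  return function frame() {
    let checks = 0, updates = 0;
    components.forEach((c, i) => {
      checks++;                                 // paid on EVERY frame
      const prev = snapshots[i];
      const changed = Object.keys(c.values).some(k => c.values[k] !== prev[k]);
      if (changed) {
        updates++;                              // the actual recalculation
        snapshots[i] = { ...c.values };         // re-snapshot the new state
      }
    });
    return { checks, updates };
  };
}

// The 50 nodes x 5 components scenario from earlier in the thread.
const components = Array.from({ length: 250 }, () => ({ values: { x: 0 } }));
const frame = makeDirtyChecker(components);

components[0].values.x = 5;
frame();  // 250 checks, 1 update
frame();  // 250 checks, 0 updates: the per-frame scan cost never goes away
```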

To be clear, I’m only talking about components. setPosition should know if it’s changed a value and request an update. A user calling setPosition shouldn’t need to do anything else. Hope that’s clear.


@gadicc I personally envisioned opt-out as more of an intrinsic design, where it is implied that if a component has no need for updating and no properties to change, then it has opted out by default. The developer using the API would have some control of this based on the settings of a certain property, or by halting an animation for a node, etc.

We definitely do not want to be traversing the scene graph to make these decisions. They should be determined at the time a property’s state changes, ready for the update.

Not the way I was imagining it. It’s equivalent to opt-in. Components would request updates from the Engine internally, and as soon as the property is set to false it stops. Since it’s a getter/setter, when set to true it will have logic to communicate to the Engine that it needs an update, and updates start again. When no components need updates, the Engine will still be completely idle.
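A minimal sketch of that pattern, with illustrative names (Engine, Spinner): the component keeps re-requesting ticks while its active property is true, and the engine goes fully idle once nothing re-requests.

```javascript
// Sketch of the getter/setter-driven pattern described above (Engine and
// Spinner are illustrative names). The component re-requests ticks while
// active; when nothing re-requests, the engine is completely idle.

const Engine = {
  queue: [],
  requestUpdateOnNextTick(c) { this.queue.push(c); },
  tick() {
    const pending = this.queue;
    this.queue = [];
    pending.forEach(c => c.update());
    return pending.length;       // 0 means the engine was idle this tick
  }
};

class Spinner {
  constructor() { this._active = false; this.angle = 0; }
  get active() { return this._active; }
  set active(v) {
    this._active = v;
    if (v) Engine.requestUpdateOnNextTick(this); // kick off the loop
  }
  update() {
    this.angle += 1;
    if (this._active) Engine.requestUpdateOnNextTick(this); // keep going
  }
}

const s = new Spinner();
s.active = true;
Engine.tick();  // angle = 1, spinner re-queues itself
s.active = false;
Engine.tick();  // angle = 2: drains the already-queued update, no re-queue
Engine.tick();  // engine is now completely idle (nothing queued)
```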

Here’s another idea. So, we need updates in two scenarios: user input must change some values, and animation. Are there any more scenarios than those two? So, why don’t we just abstract the need for opting in or out entirely?

For example, suppose there are components on a Node like Rotation and Position. Those components would not contain any animation API, only static set methods. Another component (perhaps called Animation) could be added to the node, and that could be used to animate other specified components. The Animation component would keep track internally of whether it needs to request updates. So, f.e., a contrived example:

animation.go(0, 100, function(node, currentValue) {
  // executed in a worker perhaps?
  node.position.set(currentValue) // call the static method. The Position component attached itself at node.position for convenience perhaps.
}, {duration: 5000, curve: 'expoInOut'})

// or animation.from(0).to(100, function(node, currentValue) { ... })

The animation is its own component, and the 3rd arg can be used for specifying what properties to change with currentValue. The Animation component has internal logic that tells the engine to call its go method repeatedly, and logic to tell the engine to stop. All of our components that need continual updates for periods of time can be like this. I guess it already was like this in Famous, if we established the onUpdate calls inside a component class instead of using literals with onUpdate defined right then and there like Famous commonly suggested. We just need nice docs, and I think opt-in vs opt-out won’t really matter (and won’t affect performance unless users write things that always update and never stop). In my experience, even with opt-in in Famous, I saw too many people just opt in and never stop their updates, even when they were done doing what they needed to do, which might just mean the docs weren’t good enough.
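To illustrate, here’s a contrived sketch of how such an Animation component might track its own need for updates (all names hypothetical, linear curve only): it opts in when go() is called and stops re-requesting once the duration has elapsed, so the engine idles again.

```javascript
// Contrived sketch of an Animation component that manages its own opt-in
// (all names hypothetical, linear interpolation only). It requests updates
// while a job is running and stops once the duration has elapsed.

class Animation {
  constructor(engine) { this.engine = engine; this._job = null; }
  go(from, to, apply, { duration }) {
    this._job = { from, to, apply, duration, elapsed: 0 };
    this.engine.requestUpdate(this);      // opt in once
  }
  update(dt) {
    const job = this._job;
    job.elapsed = Math.min(job.elapsed + dt, job.duration);
    const t = job.elapsed / job.duration; // linear "curve" for brevity
    job.apply(job.from + (job.to - job.from) * t);
    if (job.elapsed < job.duration) {
      this.engine.requestUpdate(this);    // keep opting in while animating
    } else {
      this._job = null;                   // done: stop requesting, engine idles
    }
  }
}

// Minimal engine stand-in driving updates with fixed 16 ms steps.
const engine = {
  queue: [],
  requestUpdate(c) { this.queue.push(c); },
  tick(dt) {
    const pending = this.queue;
    this.queue = [];
    pending.forEach(c => c.update(dt));
  }
};

let current = 0;
const anim = new Animation(engine);
anim.go(0, 100, v => { current = v; }, { duration: 48 });
engine.tick(16);  // current ≈ 33.3, still animating
engine.tick(16);  // current ≈ 66.7
engine.tick(16);  // current = 100, animation stops requesting updates
engine.tick(16);  // queue was empty: nothing ran
```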


The user should update only raw data. This should be the source of change. Then the widgets should be able to listen to data and update themselves if the data has changed.


Have you guys done any research behind the ideas around Redux?

Read this: “What the Flux? Let’s Redux” is a great article explaining the principles.

It is built around the idea of storing your state in a single tree (sound familiar?), but then only replacing that state, not modifying it inline.

From the article:

As it turns out, the simplest/fastest way to check whether an object has changed is to check whether it’s the same object reference or not.

Instead of doing some sort of deep comparison of properties such as:

_.isEqual(object1, object2)

It’d be a lot faster/simpler if we knew that any time an object changed, we replaced it, instead of editing it in place.

Because of this, it allows you to plug it into React really easily, by just making all state checks use exact object comparisons.
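A toy reducer in that spirit (not Redux’s actual API) shows why: state is never mutated, only replaced, so “did anything change?” is a reference check instead of a deep walk.

```javascript
// Toy reducer in the spirit of the article (not Redux's actual API).
// State is never mutated, only replaced, so detecting change is a
// reference comparison instead of a deep _.isEqual-style walk.

function reducer(state, action) {
  switch (action.type) {
    case 'MOVE':
      return { ...state, position: action.position }; // brand-new object
    default:
      return state;                                   // same reference
  }
}

const s0 = { position: 0 };
const s1 = reducer(s0, { type: 'MOVE', position: 50 });
const s2 = reducer(s1, { type: 'NOOP' });

const changed   = s1 !== s0; // true: something moved, rerender
const unchanged = s2 === s1; // true: skip rendering entirely
```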

Honestly, there’s probably a lot we can learn from the React / vdom stuff, not the least of which being that JSX is actually pretty great to work with. We should also take a poke at elm and that clojurescript answer in the same space (i forget the name).


Should also mention that Object.observe was just withdrawn as an ecmascript proposal.

This is kind of how Angular 1 was doing its two-way data binding, and is one of the ways that the whole dirty thing could have been handled.

They say one of the reasons it lost out was that React’s “flux” pattern of unidirectional data flow and immutables became much more prevalent.

On one hand I like how simple this makes working with data reactively, but on the other, if huge parts of the tree are replaced at once, could it have a performance cost?

No, it should actually be a benefit to performance.

You can do a test like:
if (oldState !== newState) { doRender() }

This is much faster than trying to deal with dirty flags, etc.

edit: oh btw, we should ask @dmvaldman about this, because samsara replaces the entire render tree the whole time, apparently.

Where’d you get this idea @AdrianRossouw? This is what did, but Samsara doesn’t do this at all.

Before I get to what Samsara does, I’d like to clarify that there are two types of “performance” at play here: memory and CPU.

if (oldState !== newState) { doRender() }

Is very fast, as it’s just checking a pointer. However, updating newState from oldState by recreating the tree, or large portions of it, is expensive memory-wise. Especially if this is done at 60fps. did this, and it was a boon to battery life and GC performance. React does a lot better by going through a diff step, essentially doing a tree walk, and only updating what’s changed.

Now we come to Samsara, which takes a drastically different approach by embracing FRP.

Everything in Samsara is a stream that ends in the DOM in opacity, size and transform inline styles. So for every DOM element, there is a pipeline created that goes like

data <- function <- function <- function <- DOM

If data isn’t changing, none of these functions are being called. Once the data changes, each function is triggered, until it ends in the DOM. In this approach there is no dirty checking. Dirty checking is a result of an architecture that needs to ask components if they’ve changed, by making the if statement above. With FRP, components don’t ask, they tell. If you imagine an Excel document, you don’t loop through every row and column entry asking if it’s changed, and updating it if it has. You set up dependencies (subscribe streams) and when an entry changes, all entries causally related to it get an “update” event and they recalc themselves. This is how Samsara works.
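A minimal push-based stream sketch (illustrative, not Samsara’s actual API) of that data <- function <- function <- DOM pipeline: nothing runs until the source emits, and then only the subscribed chain fires.

```javascript
// Minimal push-based stream sketch (illustrative, not Samsara's actual
// API). Components don't ask, they tell: nothing runs until the source
// emits, and then only the subscribed chain fires, ending in a DOM write.

function stream() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    emit(value) { subscribers.forEach(fn => fn(value)); },
    map(fn) {
      const out = stream();
      this.subscribe(v => out.emit(fn(v)));
      return out;
    }
  };
}

let domWrites = 0;
let lastTransform = null;

const position = stream();                 // data ...
position
  .map(x => x * 2)                         // ... function ...
  .map(x => `translateX(${x}px)`)          // ... function ...
  .subscribe(t => {                        // ... DOM (stand-in)
    lastTransform = t;
    domWrites++;
  });

// No emissions yet: the whole pipeline is idle (domWrites is still 0).
position.emit(10);  // only now does the chain run, once, end to end
```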

This is also true of the render tree itself (because it too is a stream).

     node
    /    \
child    child

when one node changes, only the subtree underneath updates. Moreover, the only way to “change a node” is to change the layout data associated with it. This is the same picture as the data pipeline above, except drawn vertically, and with branches.

In terms of performance, the FRP approach will outperform the React/Virtual DOM approach any time. The cost is that keeping a stream to every DOM property in memory is quite overkill, which is why React/Virtual DOM is a great compromise. However, in Samsara the only properties we care about are opacity, transform and size, so this approach is more reasonable.