We say that the graph is damaged when it changes in the middle of evaluation. In certain cases, though, this is acceptable, provided the change is not observable.
Imagine the following combinator:
```ocaml
val increment_on_evaluation :
  int Lwd.var -> 'a Lwd.t -> 'a Lwd.t

let increment_on_evaluation counter expr =
  Lwd.map
    (fun result ->
       let count = Lwd.peek counter in
       Lwd.set counter (count + 1);
       result)
    expr
```
It takes an integer variable and an expression and increments the variable each time the expression is evaluated, returning its value unchanged. In general, this combinator is not well-behaved: it changes the graph during its evaluation. The result depends on the evaluation order of `expr` and `counter`.
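As an illustration (a minimal sketch, not taken from the library), the order-dependence appears as soon as the same counter is both read elsewhere in the graph and incremented by the combinator:

```ocaml
(* Sketch: [counter] is read in the left branch and written by
   [increment_on_evaluation] in the right branch, so the value bound to
   [n] depends on which branch is evaluated first. *)
let counter = Lwd.var 0
let expr = Lwd.pure "payload"

let ambiguous =
  Lwd.map2
    (fun n x -> (n, x))
    (Lwd.get counter)
    (increment_on_evaluation counter expr)
```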
To detect these ambiguities, the current implementation maintains a `damaged : bool` field in `Lwd` nodes, recording whether a node has changed in the current update cycle.
However, this was a conservative approximation: if any effect was detected during evaluation, the graph was considered ambiguous. The new implementation accepts the graph if there is a strict ordering between read and write effects.
`join` now allows the outer computation to have effects that impact the inner one. `map2` is now evaluated left-to-right (in `map2 f t1 t2`, `t1` is evaluated before `t2`). Assuming `t2` is not used elsewhere in the graph (there are no aliases outside), `t1` is allowed to perform effects that can invalidate `t2`.
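Here is a minimal sketch (not from the library) of an effect that is now accepted: unlike the ambiguous example above, the write happens in the left branch and the read in the right branch, so the left-to-right order of `map2` makes the ordering strict.

```ocaml
(* Sketch: the left branch of map2 increments a counter that only the
   right branch observes. Since t1 is evaluated before t2 and the counter
   is not read anywhere else, the write is strictly ordered before the
   read and the graph is accepted. *)
let input = Lwd.var 10
let counter = Lwd.var 0

let left =
  Lwd.map
    (fun x ->
       Lwd.set counter (Lwd.peek counter + 1);
       x)
    (Lwd.get input)

let observed = Lwd.map2 (fun x n -> (x, n)) left (Lwd.get counter)
```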
Make reasoned use of this property. Relying on the ordering of effects is a bad practice in general, and not really in the spirit of `Lwd`; yet, it enables efficient implementation of a few specialized operations.
### Example: sharing constraints and cutting-off evaluation
Imagine we want to make a list of widgets that are constrained to all have the same width.
```ocaml
let list_same_width_inefficient (table : Ui.t Lwd.t Lwd_table.t) : Ui.t Lwd.t =
  let get_width ui =
    let {Ui. w; sw; _} = Ui.layout_spec ui in
    (w, sw)
  in
  let max2 (w1, sw1) (w2, sw2) = (max w1 w2, max sw1 sw2) in
  let max_width =
    Lwd.map_reduce
      (fun _row ui -> Lwd.map get_width ui)
      (Lwd.pure (0, 0), Lwd.map2 max2)
      table
  in
  let set_width (w, sw) ui = Ui.resize ~w ~sw ui in
  Lwd.map_reduce
    (fun _row ui -> Lwd.map2 set_width max_width ui)
    (Lwd.pure Ui.empty, Lwd.map2 Ui.join_x)
    table
```
This definition will produce the right `Ui` but has a performance bug. When a single element of the table changes, the width is updated and the final layout is entirely recomputed.
An $O(1)$ change (a single item) turns into an $O(n)$ recomputation (layout of each item).
What we would like is a _cutoff_ operator: if after recomputation the width value is the same, we can skip relaying out the table. However, a performant cutoff operator is tricky to implement and can have surprising runtime characteristics.
The evaluation and damage-resilience of `join` let us implement a good one:
```ocaml
val cutoff :
  'a Lwd.t -> ('a -> 'a -> bool) ->
  ('a Lwd.t -> 'b Lwd.t) -> 'b Lwd.t
```
It can be used as follows:
```ocaml
val often_changing : float Lwd.t
val transform : float -> ui

(* We have a floating point computation that changes often and we would like
   to apply a transformer on it.
   However, the transformation is expensive and cares only about the integral
   part of its input, so we would like to avoid recomputing it when possible. *)

let cross_threshold v1 v2 =
  int_of_float v1 <> int_of_float v2

(* The expensive graph that is always recomputed *)
let unfiltered =
  Lwd.map transform often_changing

(* A cheaper graph that detects integral changes for recomputing the transform *)
let filtered =
  cutoff often_changing cross_threshold (Lwd.map transform)
```
Here is a possible implementation:
```ocaml
let cutoff input threshold continuation =
  (* The variable and continuation built on the first evaluation *)
  let previous = ref None in
  Lwd.map' input (fun input' ->
      match !previous with
      | None ->
        (* First evaluation: seed a variable with the current value and
           build the inner graph once *)
        let var = Lwd.var input' in
        let k = continuation (Lwd.get var) in
        previous := Some (var, k);
        k
      | Some (var, k) ->
        (* Later evaluations: only update the variable (and hence the
           inner graph) when the threshold predicate reports a change *)
        if threshold (Lwd.peek var) input' then
          Lwd.set var input';
        k)
  |> Lwd.join
```
And the corresponding `list_same_width`:
```ocaml
let list_same_width (table : Ui.t Lwd.t Lwd_table.t) : Ui.t Lwd.t =
  let get_width ui =
    let {Ui. w; sw; _} = Ui.layout_spec ui in
    (w, sw)
  in
  let max2 (w1, sw1) (w2, sw2) = (max w1 w2, max sw1 sw2) in
  let max_width =
    Lwd.map_reduce
      (fun _row ui -> Lwd.map get_width ui)
      (Lwd.pure (0, 0), Lwd.map2 max2)
      table
  in
  let set_width (w, sw) ui = Ui.resize ~w ~sw ui in
  cutoff max_width (<>) @@ fun max_width ->
  Lwd.map_reduce
    (fun _row ui -> Lwd.map2 set_width max_width ui)
    (Lwd.pure Ui.empty, Lwd.map2 Ui.join_x)
    table
```
When using `cutoff`, one can observe that there are two continuations:
- the one passed explicitly in the `continuation` argument,
- the implicit one built by `bind`ing on the returned value.
Only computations in `continuation` will be filtered; computations in the return continuation will be recomputed as usual.
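To make the distinction concrete, here is a small sketch reusing the definitions from the example above; `decorate : ui -> ui` is a hypothetical post-processing step, not part of the library:

```ocaml
(* [transform] runs inside the explicit continuation: it is filtered and
   only re-runs when [cross_threshold] reports a change. *)
let filtered =
  cutoff often_changing cross_threshold (Lwd.map transform)

(* [decorate] binds on the returned value: it is re-evaluated on every
   change of [often_changing], even when [filtered]'s value is unchanged. *)
let decorated = Lwd.map decorate filtered
```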
# The new implementation
Damage is safe if it touches a part of the graph that has not been observed yet, as defined by the left-to-right evaluation order of `map2`, `app` and `pair`, and the outer-to-inner evaluation of `join` and `bind`.
If a part of the graph that has already been observed is damaged:
- `sample root` returns normally,
- `is_damaged root` returns `true`,
- a second call to `sample` will reevaluate the damaged part of the graph.
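For example, a driver loop could re-sample until the graph is stable. This is only a sketch using the operations named above (the actual names and signatures in the library may differ):

```ocaml
(* Sketch of a driver loop: sample the root, and if a safe damage was
   produced during evaluation, sample again to bring the damaged part of
   the graph up to date. *)
let rec stabilize root =
  let value = Lwd.sample root in
  if Lwd.is_damaged root then stabilize root
  else value

let run ui =
  stabilize (Lwd.observe ui)
```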