[HN Gopher] Reconcile All the Things
       ___________________________________________________________________
        
       Reconcile All the Things
        
       Author : puzzlingcaptcha
       Score  : 39 points
       Date   : 2021-07-04 22:31 UTC (3 days ago)
        
 (HTM) web link (acko.net)
 (TXT) w3m dump (acko.net)
        
       | thangngoc89 wrote:
        | This is part 2 of a 3-part article series. You can see the
        | HN discussion for part 3 here:
       | https://news.ycombinator.com/item?id=27750864
        
       | tlarkworthy wrote:
       | > At this point I should address the elephant in the room. If
       | we're talking about declared computations with input dependencies
       | and cached results, isn't this what data flow graphs are for?
       | Today DFGs are still only used in certain niches. They are well
       | suited for processing media like video or audio, or for designing
       | shaders and other data processing pipelines. But as a general
       | coding tool, they are barely used. Why?
       | 
        | Sidenote: the programming model of Observablehq is a data flow
        | graph, and it's bloody excellent for hot-developing a running
        | program. You can swap one dataflow node for a completely
        | different piece of code _and leave the rest of the program state
        | in place_. This is finer-grained iteration than a REPL.
       | 
        | The topic is reconciliation on top of that... and those kinds of
        | experiments are possible. Observablehq exposes the "this" value
        | to access the previous value of the node, so you can do
        | traditional reconciliation:
       | https://observablehq.com/@tomlarkworthy/reconcile-nanomorph
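        | 
        | A minimal sketch of that pattern in an Observable cell (the
        | `count` upstream cell is hypothetical):
        | 
        |   view = {
        |     // In Observable, `this` is the cell's previous value,
        |     // or undefined on the first evaluation.
        |     const node = this || document.createElement("div");
        |     // Patch the existing node in place rather than
        |     // rebuilding it, so unrelated state stays put.
        |     node.textContent = `count: ${count}`;
        |     return node;
        |   }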
       | 
       | An individual dataflow at the top level can be seen as a stream
       | of values.
       | 
        | I have not thought much about _general_ reconciliation, but if I
        | were shooting for maximum efficiency I would be thinking about
        | adding delta sync to these streams.
       | 
       | I definitely observe the situation where I want a dataflow node
       | to be an array of values, or an aggregate result over an array,
       | and then I want to add a single datum to the dataset upstream.
        | Currently everything just gets recomputed from scratch, which
        | seems wasteful. I am not sure if generalising reconciliation
        | helps with the issue, though.
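        | 
        | A minimal sketch of the delta idea in plain JavaScript
        | (hypothetical names, not Observablehq's API): rather than
        | re-reducing the whole array, the aggregate folds in only the
        | appended datum.
        | 
        |   // Recompute from scratch: O(n) per new datum.
        |   const sumAll = (data) => data.reduce((a, x) => a + x, 0);
        | 
        |   // Delta sync: keep the previous aggregate and fold in
        |   // just the new value, O(1) per new datum.
        |   const sumDelta = (prevSum, newDatum) => prevSum + newDatum;
        | 
        |   let data = [1, 2, 3];
        |   let total = sumAll(data);    // 6
        |   data.push(4);
        |   total = sumDelta(total, 4);  // 10, without rescanning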
        
       | juliend2 wrote:
       | Off-topic, but that 3D animated header is dope.
        
       | laputan_machine wrote:
       | > Unfortunately, this only works for functions of one argument,
       | because each WeakMap key must be one object. If you wish to
       | memoize (x, y) => {...}, you'd need a WeakMap whose keys are xs,
       | and whose values are WeakMaps whose keys are ys. This would only
       | work well if y changes frequently but x does not.
       | 
       | Can someone explain why we couldn't concat / hash the incoming
       | arguments and use those as the cache key? My understanding is
       | that the cache key needs to be unique for its inputs, e.g.
       | slow(1, 2) and slow(2, 1) would still give unique key values.
       | 
        | Edit: never mind, I think this is due to the core concept of
        | WeakMaps, which I'd never encountered. From the article:
       | 
       | > This is a key/value map that can hold data, but which does not
       | own its keys. This means any record inside will be garbage
       | collected unless another part of the program is also holding on
       | to the same key.
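        | 
        | A minimal sketch of the nesting the article describes
        | (hypothetical `slow` function). Concatenating or hashing the
        | arguments into a string would indeed give a unique key, but
        | WeakMap keys must be objects, so you would have to fall back
        | to a regular Map, whose entries are never garbage collected.
        | 
        |   const slow = (x, y) => ({ result: x.value + y.value });
        | 
        |   // WeakMap of WeakMaps: an entry disappears once x (or
        |   // its inner y key) is garbage collected elsewhere.
        |   const cache = new WeakMap();
        |   const memoized = (x, y) => {
        |     let inner = cache.get(x);
        |     if (!inner) cache.set(x, (inner = new WeakMap()));
        |     if (!inner.has(y)) inner.set(y, slow(x, y));
        |     return inner.get(y);
        |   };
        | 
        |   const a = { value: 1 }, b = { value: 2 };
        |   memoized(a, b);                    // computed once
        |   memoized(a, b) === memoized(a, b); // true, cached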
        
         | [deleted]
        
       ___________________________________________________________________
       (page generated 2021-07-07 23:02 UTC)