[HN Gopher] State Machines II
       ___________________________________________________________________
        
       State Machines II
        
       Author : lukastyrychtr
       Score  : 38 points
       Date   : 2022-08-27 07:51 UTC (1 day ago)
        
 (HTM) web link (blog.yoshuawuyts.com)
 (TXT) w3m dump (blog.yoshuawuyts.com)
        
       | shoo wrote:
       | HMM state-machine tangent:
       | 
       | A certain kind of state machine pops up when trying to model
       | systems with hidden Markov models.
       | 
       | We start by supposing there's a Markov chain: there is some
       | state space, and the chain occupies one state from that space at
       | each time step. The state at the next time step depends only on the
       | state at the current time step, according to a matrix of state
       | transition probabilities.
       | 
       | The next part is making the states "hidden" AKA unobservable --
       | we assume that we cannot observe the states directly, but that
       | we do get some other kind of observation each time step, and
       | that there is some probabilistic model linking these
       | observations to the "hidden" states. Observing a sequence of
       | observations over a few time steps then gives us some
       | information about the trajectory of states the system may be
       | taking through the state space.
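       | 
       | Concretely, a toy Rust sketch of that setup (the names and the
       | numbers are my own, just for illustration):
       | 
       |     // Hidden states of the toy chain.
       |     enum Weather { Sunny, Rainy }
       | 
       |     // What we actually get to see each time step.
       |     enum Obs { Umbrella, NoUmbrella }
       | 
       |     // P(next state | current state), rows indexed by the
       |     // current state: [Sunny, Rainy].
       |     const TRANSITION: [[f64; 2]; 2] = [
       |         [0.8, 0.2], // from Sunny
       |         [0.4, 0.6], // from Rainy
       |     ];
       | 
       |     // P(observation | hidden state): the observation model,
       |     // columns are [Umbrella, NoUmbrella].
       |     const EMISSION: [[f64; 2]; 2] = [
       |         [0.1, 0.9], // Sunny: rarely see an umbrella
       |         [0.7, 0.3], // Rainy: usually see an umbrella
       |     ];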
       | 
       | With this particular niche application of state machines, here
       | are some things we might wish to do:
       | 
       | Suppose we have an HMM state machine A (with states, and state
       | transition probabilities, and an observation model) and another
       | HMM state machine B (with its own states, state transition
       | probabilities, observation model).
       | 
       | We might want to compose these to create a larger model where we
       | assume that the system has a state that is either some state from
       | A, or some state from B. This corresponds to creating a new state
       | space that is the disjoint union of A's state space and B's state
       | space, and creating new state transition probabilities and a new
       | observation model in the natural way.
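       | 
       | In Rust terms the disjoint union of the two state spaces is
       | just a sum type (a rough sketch, my own names):
       | 
       |     // A state of the combined model is either a state of A
       |     // or a state of B.
       |     enum EitherState<A, B> {
       |         FromA(A),
       |         FromB(B),
       |     }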
       | 
       | Another way we might wish to compose two HMM state machines A and
       | B is to assume that our system will be in both a state from A and
       | a state from B at the same time. This corresponds to defining a
       | new state space that is the product of both of A and B's state
       | spaces. Again we can define state transition probabilities over
       | this product space in the natural way. To combine both
       | observation models, the simplest thing to assume is that we get
       | to observe an observation generated by A and an observation
       | generated by B each time step - but perhaps a more realistic
       | model might be that the observations generated by both systems
       | are combined into some composite observation that we see.
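       | 
       | And the product composition is just that, a product type
       | (again only a sketch):
       | 
       |     // A state of the combined model is a state of A *and* a
       |     // state of B at the same time.
       |     struct ProductState<A, B>(A, B);
       | 
       |     // Simplest observation model: we see one observation from
       |     // each component every time step.
       |     struct ProductObs<OA, OB>(OA, OB);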
       | 
       | For HMMs with finite state spaces, there are many common
       | inference tasks where we want to estimate the probability that
       | the system was in some state s at some time step t. We commonly
       | want to compute and store a 64-bit float worth of probability for
       | each state and each time step. If we have a vector of
       | probabilities indexed by the states in our state space and a
       | known matrix of state transition probabilities, we can "advance"
       | our probability vector one time step by multiplying it with the
       | state transition matrix, which may be sparse. The work we want
       | the machine to be doing boils down to sparse linear algebra, or
       | something similar, perhaps with a lot of exponential and
       | logarithm operations thrown in (working with log-probabilities
       | can be much more numerically stable than working with
       | probabilities).
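       | 
       | The per-time-step work is then just a matrix-vector product,
       | roughly like this (dense here for clarity; a sparse
       | representation is the obvious optimisation for big state
       | spaces):
       | 
       |     // Advance a probability vector by one time step:
       |     // next[j] = sum over i of p[i] * transition[i][j],
       |     // where transition[i][j] = P(next = j | current = i).
       |     fn advance(p: &[f64], transition: &[Vec<f64>]) -> Vec<f64> {
       |         let n = p.len();
       |         let mut next = vec![0.0; n];
       |         for i in 0..n {
       |             for j in 0..n {
       |                 next[j] += p[i] * transition[i][j];
       |             }
       |         }
       |         next
       |     }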
       | 
       | So, dragging it back around to relate more to this post:
       | 
       | If we're encoding state spaces and states with the type system,
       | can the type system handle operations where a modeller might want
       | to take the disjoint union or product of two state machines?
       | 
       | For computations with Markov-chain like state space dynamics, we
       | might commonly want to store a vector of some quantity (floating
       | point probability values, say) indexed by state, and then
       | multiply them with sparse matrices, where one or both of the
       | dimensions of the matrix are indexed by states out of some finite
       | state space. Can the type system support this in a natural way,
       | ideally without any runtime overhead? ... vs defining some
       | bijection between the states in the state space and the integers
       | 0, ..., n-1, and using integer indices everywhere in our code for
       | the states, which aligns with how the CPU will perform matrix-
       | vector and vector-vector operations.
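       | 
       | (The boring-but-fast answer to that last point is the usual
       | hand-written bijection, e.g. for the toy enum above:)
       | 
       |     impl Weather {
       |         // Map each state to an index in 0..n-1 so it can be
       |         // used directly to index probability vectors and
       |         // transition matrices.
       |         fn index(self) -> usize {
       |             match self {
       |                 Weather::Sunny => 0,
       |                 Weather::Rainy => 1,
       |             }
       |         }
       |     }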
        
       | oleganza wrote:
       | We wrote about the same idea in 2018 when implementing a multi-
       | party computation protocol in Rust:
       | https://medium.com/@cathieyun/bulletproof-multi-party-comput...
       | 
       | The key idea is to make cryptographically incorrect usage of the
       | API impossible to compile: we provide the library, and you put
       | the networking and your app logic around it.
       | 
       | Specifically, the protocol fails if you allow a counterparty to
       | retry the interaction from some intermediate state instead of
       | starting over. Move semantics in Rust let us make sure you won't
       | be able to misuse our library in your app.
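       | 
       | Very roughly, the shape of the trick looks like this (the names
       | are made up, not the actual API from the post):
       | 
       |     // Each protocol round is its own type, and advancing
       |     // consumes the previous round by value.
       |     struct Round1 { secret: u64 }
       |     struct Round2 { secret: u64, challenge: u64 }
       | 
       |     impl Round1 {
       |         fn receive_challenge(self, challenge: u64) -> Round2 {
       |             // `self` is moved here, so the Round1 value no
       |             // longer exists after this call and cannot be
       |             // replayed from this intermediate state.
       |             Round2 { secret: self.secret, challenge }
       |         }
       |     }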
        
       | [deleted]
        
       | MuffinFlavored wrote:
       | Am I the only person who would prefer `state.red_to_orange()` for
       | like... verbosity?
       | 
       | I guess that opens the door to things the compiler couldn't
       | catch at compile time and that only surface at runtime, maybe?
       | (like... an unexpected progression from one state to another,
       | aka a panic/exception).
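       | 
       | Something like this, I mean (just a rough sketch):
       | 
       |     struct Red;
       |     struct Orange;
       |     struct Green;
       | 
       |     impl Red {
       |         // One explicitly named method per legal transition; a
       |         // transition that isn't defined simply doesn't exist
       |         // at the call site.
       |         fn red_to_orange(self) -> Orange { Orange }
       |     }
       | 
       |     impl Orange {
       |         fn orange_to_green(self) -> Green { Green }
       |     }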
        
         | rad_gruchalski wrote:
         | There's a middle ground. For example, Akka actors have a become
         | method which defines the new state. Erlang gen_statem events
         | simply tell the state machine what the next state should be.
         | Definitely nicer than just "next".
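         | 
         | In Rust you can get a similar feel by having the event
         | handler return the next state explicitly (sketch only):
         | 
         |     enum Light { Red, Orange, Green }
         |     enum Event { TimerExpired }
         | 
         |     // gen_statem-style: the handler names the state to
         |     // become next.
         |     fn handle(light: Light, event: Event) -> Light {
         |         match (light, event) {
         |             (Light::Red, Event::TimerExpired) => Light::Green,
         |             (Light::Green, Event::TimerExpired) => Light::Orange,
         |             (Light::Orange, Event::TimerExpired) => Light::Red,
         |         }
         |     }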
        
       | PaulWaldman wrote:
       | > A lot of programming is about going from one state to another.
       | We can do that using conditional statements, but as the state
       | changes those conditions can be tricky to reason about and in
       | turn refactor. Instead it's often more convenient to use state
       | machines: singular objects which represent all possible states
       | and their transitions.
       | 
       | State machines don't necessarily alleviate the need for
       | conditional logic. They can still require conditional logic to
       | determine which state to transition to.
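       | 
       | For example (a toy sketch), the conditions don't go away, they
       | just get concentrated in the transition function:
       | 
       |     enum Door { Open, Closed, Locked }
       | 
       |     // Deciding the next state still needs conditional logic
       |     // (here a guard on `key_present`); the state machine only
       |     // gathers it into one place.
       |     fn next(state: Door, key_present: bool) -> Door {
       |         match state {
       |             Door::Open => Door::Closed,
       |             Door::Closed if key_present => Door::Locked,
       |             Door::Closed => Door::Open,
       |             Door::Locked if key_present => Door::Closed,
       |             Door::Locked => Door::Locked,
       |         }
       |     }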
        
         | jen729w wrote:
         | I mean almost by definition isn't every exit from a state node
         | a conditional logic test?
         | 
         | If [something], take path A. If [the other thing], take path B.
         | You (the machine) must always decide, given some condition.
         | 
         | This applies even if there's a default path. You only take the
         | default if one of the non-default paths didn't match (or if
         | it's the only path).
         | 
         | So I don't think that's what the author really meant here. I
         | think they meant conditionals that _don't_ transition you to
         | another state. Because in _that_ scenario, you might still
         | need to consider that past conditional later. Its value might
         | still bite you.
         | 
         | Whereas in the tranquility of state machine land, you can now
         | disregard any* decisions that you've already made.
         | 
         | *Obviously you might have stored state to consider, but y'know.
        
       | lloydatkinson wrote:
       | Another article that refuses to call them finite state machines
        
         | criddell wrote:
         | Are infinite state machines very common in software?
        
       | michaelsbradley wrote:
       | Declarative hierarchical state machines (a.k.a. statecharts) for
       | JavaScript / TypeScript:
       | 
       | https://github.com/statelyai/xstate#readme
       | 
       | https://xstate.js.org/
        
       | markjspivey wrote:
       | Parallels to RESTful APIs, Hypermedia as the Engine of
       | Application State ...
        
       ___________________________________________________________________
       (page generated 2022-08-28 23:01 UTC)