[HN Gopher] A non-mathematical introduction to Kalman filters fo...
___________________________________________________________________
A non-mathematical introduction to Kalman filters for programmers
Author : pkoird
Score : 327 points
Date : 2023-08-02 15:11 UTC (7 hours ago)
(HTM) web link (praveshkoirala.com)
(TXT) w3m dump (praveshkoirala.com)
| phkahler wrote:
| There is one aspect missing. When taking the weighted average of
| a prediction and measurement, the weights can be time varying in
| a Kalman filter. Otherwise I believe it goes by a different name.
|
| A good example is a single sensor measuring a slowly changing
| quantity. A fuel gauge, for example. A good estimate from second
| to second is that there is no change, but a measurement may have
| noise (fuel sloshing around in the tank). A Kalman filter in this
| case will look like a first-order low-pass filter with an
| exponentially decaying gain. The cutoff frequency changes so you
| can find the starting level quickly (in seconds) but then ignore
| the noise with a very low frequency cutoff (say, 0.01 Hz).
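| A minimal scalar sketch of that behavior (the variances here
| are made-up numbers; q is the small process noise that keeps
| the gain from decaying all the way to zero):
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     readings = 50.0 + rng.normal(0, 2.0, 600)  # sloshing level
|
|     q, r = 1e-6, 4.0     # process / measurement variance
|     xhat, p = 0.0, 1e3   # poor initial guess, huge uncertainty
|     for z in readings:
|         p += q                   # predict: level nearly constant
|         k = p / (p + r)          # gain starts near 1, then decays
|         xhat += k * (z - xhat)   # correct toward the reading
|         p *= 1 - k
|
| Early on k is near 1, so xhat snaps to the readings within a few
| samples; at steady state k is tiny, which is the very low cutoff.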
| the__alchemist wrote:
| Nice article on an important tool.
|
| My understanding is that linear Kalman filters are an optimal
| solution to linear problems. Linear Kalman filters are relatively
| easy to understand and implement.
|
| Most applications I've found are non-linear. Extended (and
| unscented) Kalman filters are much tougher to grasp and
| implement. Resources and libraries are fewer and less useful.
|
| For example, I have an AHRS/GNSS CAN device intended for small
| UAVs. The extended Kalman filters for these that I've come
| across (like for PX4 and Ardupilot) are very complex and have
| many parameters. I've found it more straightforward to take an
| ab-initio approach using quaternion fundamentals, nudging the
| gyro solution towards the accelerometer "up" and the
| magnetometer inclination vector. If the accelerometer magnitude
| differs much from 1G or the mag vector differs much from the
| local earth mag field strength, downweight or skip the update
| from that sensor, letting the gyro coast.
|
| Likely, an EKF would be the correct play, but I've given up
| trying to understand one, or make or adapt one that would work or
| be easier to tune and diagnose than the ab-initio approach.
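| For the curious, a single-axis toy version of that nudge (the
| gain, threshold, and axis convention are illustrative guesses,
| not what PX4 or Ardupilot do):
|
|     import numpy as np
|
|     GAIN, G = 0.02, 9.81   # nudge strength, gravity (assumed)
|
|     def update(theta, gyro_rate, accel, dt):
|         theta += gyro_rate * dt          # gyro solution coasts
|         if abs(np.linalg.norm(accel) - G) < 0.1 * G:
|             # trust the accelerometer only when it reads ~1G
|             theta_acc = np.arctan2(accel[0], accel[2])
|             theta += GAIN * (theta_acc - theta)
|         return theta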
| tnecniv wrote:
| For what it's worth, the EKF isn't much more complicated than
| the regular KF. You are basically just linearizing your
| dynamics / sensor model then doing the KF updates with that
| linear model.
|
| However, for quadrotors specifically, there is a huge pain
| point: the rotations. A nuance of the linear models of the
| Kalman filter is that everything lives in Euclidean space.
| However, rotations place you on a manifold -- for quaternions
| specifically this manifold is the set of unit quaternions. If
| you naively apply an EKF to estimate your quaternion, you will
| no longer have a unit quaternion and then your estimates will
| be garbage. There are well-known ways to handle this manifold
| constraint but they result in some of the ugliest equations I
| have ever turned into code.
|
| As a simple example of the difficulty, consider a state (x, y)
| that, due to physics or whatever, we know will always be on the
| unit circle, i.e., if f(x, y) is your dynamics, it outputs a
| new point on the circle. However, if you linearize f(x, y),
| these approximate dynamics are not guaranteed to keep you on
| the unit circle and you end up with a non-physical state (or
| state estimate for the extended Kalman filter).
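| A toy numeric version of this, using a first-order-in-dt step
| as a stand-in for the linearized dynamics:
|
|     import numpy as np
|
|     dt = 0.1
|     S = np.array([[0.0, -1.0], [1.0, 0.0]])  # p' = Sp: rotation
|     A = np.eye(2) + dt * S   # first-order linear approximation
|     p = np.array([1.0, 0.0])
|     for _ in range(100):
|         p = A @ p            # each step inflates the norm
|     print(np.linalg.norm(p)) # ~1.64: no longer on the circle
|     p /= np.linalg.norm(p)   # the usual fix: project back
|
| The quaternion analogue is re-normalizing after every update (or
| using an error-state formulation), which is where the ugliness
| starts.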
| cmehdy wrote:
| You just threw me back to the time I spent on EKF
| implementations on early iPads over a decade ago (as an
| exchange student), as part of an M.Sc. in Control Theory...
| before I had learnt about KF. The goal was to compensate for
| the drift observed in the position of those devices, which
| made them useless for the need (art exhibition stuff).
|
| I don't think I even fully completed the project because my
| understanding of both quaternions and KF was at that point
| pretty shaky. This could be a fun project to pull off with
| some old hardware now...
| sheepshear wrote:
| My recommendation if open courseware is the type of resource
| that might interest you:
|
| http://mocha-java.uccs.edu/ECE5550/index.html
| toasted-subs wrote:
| Yea, when you know what you are trying to model, it's extremely
| easy to write a model that represents a traditional linear
| Kalman filter.
|
| But there are many ways to write a Kalman filter; depending on
| where you start, the non-linear transformation becomes extremely
| cumbersome to get right.
| nextos wrote:
| A probabilistic programming language can make it much easier to
| implement a non-linear filter. Essentially, you implement the
| generative model, and you get inference compiled for you.
| Interesting options to look at include ForneyLab.jl, Infer.NET,
| Gen.jl, and Pyro.
| tnecniv wrote:
| This can work if the time scale of inference is smaller than
| the time scale of your measurements. I'm skeptical that this
| would work on a quadrotor that requires a fast control loop.
| However, model predictive control (determining actions given
| a state and dynamic model; the Kalman filter is model
| predictive estimation) first found major use in chemical
| plants because they could crunch the numbers on a big
| computer that didn't need to fit on a robot and the real time
| between each time step was large. For such a situation, you
| might be able to get MCMC to work.
| nextos wrote:
| The options I suggested above are not necessarily MCMC;
| they are mostly message passing algorithms. ForneyLab and
| Gen in particular are designed for online inference.
| tnecniv wrote:
| Good point! Still, a big reason why the KF is a staple is
| that it's really, really fast. When I've used tools like
| the ones you mentioned, it was normally not for inference
| in a feedback loop. I'm less familiar with message passing
| methods than Monte Carlo methods, but I'm going to look
| into them now.
| nextos wrote:
| Message passing can lead to extremely fast maximum
| likelihood or variational inference.
|
| I have used it for realtime control problems. It is a
| very interesting domain!
| tnecniv wrote:
| Do you have a good reference? Preferably one that
| mentions a control application, because I'm curious to see
| what the model assumptions and speed look like in that
| context.
| nextos wrote:
| You can start with a review [1], and then BRML [2]. I
| think applications that are very focused on control tend
| to come from the military and are thus not so well
| documented.
|
| [1] An introduction to factor graphs.
| https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1267047
|
| [2] Bayesian Reasoning and Machine Learning.
| http://web4.cs.ucl.ac.uk/staff/D.Barber/textbook/200620.pdf
| sowbug wrote:
| I'm nearsighted, and I have astigmatism. I've noticed that when I
| close one eye and look at something like a clock on the wall,
| each eye produces its own differently distorted image. But when I
| look at the clock with both eyes, the image is much sharper, and
| better than either eye alone. The article's scenario of 1,000
| ship passengers reporting their individual GPS coordinates
| reminded me of this phenomenon.
|
| I bet that brains use clever algorithms like Kalman filters
| extensively.
| bhattid wrote:
| If you're interested in the term, what you noticed with your
| eyesight is an example of hyperacuity. Coincidentally, I
| learned about it making the same observation with my own eyes.
| :)
| inopinatus wrote:
| Thanks for this! Tiny nitpick: the transparency in the ship
| illustrations becomes problematic when the page renders in dark
| mode.
| askvictor wrote:
| My first real programming job was implementing a Kalman filter
| for GPS car applications in the 90s. GPS still had crippled
| accuracy for civilians back then (about 100m), which made it
| useless for turn-by-turn navigation in your car. But assuming
| that you had an accurate location at the start of the journey,
| that the car stayed on roads, and that the road maps were
| accurate, we could filter it down to a decent reading.
| TekMol wrote:
| Do I understand it correctly that Kalman filters are supposed to
| give a better estimate of a value from noisy observations than
| just averaging the observations?
|
| Say I measured something 3 times and observed 7, 8, and 9. My guess
| would be that the actual value is 8. Would a Kalman filter come
| up with a different estimate?
| sdfghswe wrote:
| KF gives you the optimal estimate (in the sense of maximum
| likelihood) for a (multivariate) Gaussian model.
|
| What KF would give you in your example depends on what exactly
| the model is. But even in the simplest scenario imaginable
| you'd have to specify what expected value you start with, in
| the same way that you would in Bayes.
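| For instance, a scalar sketch assuming a static true value, a
| nearly flat prior, and unit measurement noise (all numbers
| made up):
|
|     xhat, p, r = 0.0, 1e6, 1.0  # prior mean/var, meas. var
|     for z in [7.0, 8.0, 9.0]:
|         k = p / (p + r)         # Kalman gain
|         xhat += k * (z - xhat)
|         p *= 1 - k
|     print(xhat)                 # ~8.0
|
| With a flat prior and no process noise this reduces to the
| running average, so you'd also get ~8; an informative prior or
| nonzero process noise would pull the estimate elsewhere.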
| mordae wrote:
| The way I understand it, in a linear system with discrete
| measurements where you move and measure your position,
| averaging the last 3 measurements leaves you effectively
| lagging a step behind your actual position.
|
| What the Kalman filter does is estimate your position and
| then average the measurement with that estimate, in essence
| bringing the value closer to where you are at the moment.
|
| Having a delay in a feedback loop may cause oscillations. If
| you react far more slowly than you measure, you might not need
| a Kalman filter. The proposed GPS example is relevant here,
| because position updates come in slowly.
| alfanick wrote:
| So my very simple mental model of a Kalman filter is: you feed
| the model some data (measurements, which are noisy and unstable)
| and some noise (usually Gaussian). The Kalman filter actually
| kinda reduces precision by adding this noise, but you gain
| stability and fewer fluctuations, and that's what makes it
| beautiful.
| tnecniv wrote:
| The Kalman filter does not require adding noise. The noise is
| a product of whatever phenomenon you are estimating. Normally
| we use noise in a model to cover up some behavior that is far
| too complicated to include in a more rigorous way. A classic
| example is the origin of Brownian motion. This probabilistic
| model came about to describe the motion of a large particle
| in a fluid of smaller particles (e.g. a speck of dust in
| water). You could do physics and model every particle in the
| fluid and its collisions but that's not tractable. Thus, the
| insight was to just model the large particle and turn the
| effects of collisions with the smaller ones into random
| disturbances.
| JHonaker wrote:
| > Do I understand it correctly that Kalman filters are supposed
| to give a better estimate of a value from noisy observations
| than just averaging the observations?
|
| Yes, but with a big caveat. Your observations have to be
| related by something like a known transition mechanism. You
| can't just apply a Kalman filter to every problem and expect
| to get better results.
|
| A Kalman filter is traditionally used in estimating something
| that moves over time (but can be used for more than just this).
| Think of a person moving in a video, or some other sort of random
| walk. By assuming some sort of relationship between two
| successive timepoints/measurements, like speed and current
| heading, we can blend the information from a model for motion
| with the model from noisy measurements to get a better estimate
| of the position/value or of the entire motion history.
|
| If that motion model is meaningfully incorrect, then you don't
| get improved estimates.
|
| A lot of the ways people have extended them have to do with
| incorporating more sophisticated motion models, like dealing
| with wheel slippage in robotics (so your heading direction
| might now have error), and so on.
| willow8349 wrote:
| [dead]
| ckrapu wrote:
| It would if you had some past history to work with. Let's say
| that you measured the same object a while ago and you want to
| incorporate that prior information in a principled way, while
| also letting the new measurements dominate if they give a very
| different result.
|
| If you encode that logic into a linear Gaussian model, you
| get the KF.
| tnecniv wrote:
| In addition to what others (and I) said elsewhere, where the
| Kalman filter beats the simple average is when your samples are
| not independently and identically distributed. If they were,
| then yes, you can just take an average, as justified by the law
| of large numbers.
|
| The Kalman filter handles the case where your samples are
| correlated due to some (linear) dynamics (also, your
| measurements don't need to be directly of the quantity of
| interest; they can be a (linear) function of that quantity
| plus Gaussian noise). Thus, the probability of you observing 8
| for your second measurement changes given the knowledge that
| you observed a 7 as the first measurement. If you just take the
| sample average like you did above, that will not in general
| converge to the actual mean value.
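| A quick toy check of that last point (arbitrary numbers):
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     x = np.cumsum(rng.normal(size=1000))      # random-walk state
|     y = x + rng.normal(scale=0.5, size=1000)  # noisy measurements
|     print(y.mean(), x[-1])  # running average vs. current state
|
| The average summarizes the whole history, not where the state is
| now.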
| kxyvr wrote:
| No. It's used differently.
|
| Here's another way to understand Kalman filters that doesn't
| require statistics, but does require some knowledge of feedback
| controllers. Consider a model of a system of the form
|
|     x' = Ax + Bu
|     y  = Cx
|
| Here, we have a linear system with a state variable `x`, system
| dynamics `A`, control `u`, control dynamics `B`, and
| observation `y`. This states that we have linear dynamics, but
| we can only observe some of the state variables. For example,
| perhaps we can observe position, but not velocity or
| acceleration. At the same time, we want those other variables
| because we need them for a feedback control or some kind of
| observation. In order to do this, we use machinery similar to a
| feedback controller. Define an observer system:
|
|     xx' = A xx + Bu + L(y - C xx),    xx(0) = xx0
|
| Here, we have a new observer variable `xx` that we want to
| converge to the true state variable `x`. To do this, we have
| introduced a new matrix of gains called `L`, which we call the
| observer gain for a Luenberger observer. The reason that this
| system is useful is that if we consider the error `e=x-xx`, and
| substitute this into the above equations, we get:
|
|     e' = x' - xx' = ... = (A - LC) e
|
| From ODE theory, we know that `e` will converge to 0 if the
| real part of the eigenvalues of `A-LC` is negative. Hence, to
| facilitate this, we focus on trying to find an appropriate `L`
| that forces this condition. In order to do this, we note that
| the eigenvalues of `A-LC` are the same as those of its
| transpose, `At - Ct Lt`. This leads to an optimization
| formulation:
|
|     min  0.5 <Q ee, ee> + 0.5 <R uu, uu>
|     s.t. ee' = At ee + Ct uu,    ee(0) = ee0
|
| The notation `<x,y>` means inner product and is simply `xt y`.
| Here, we're essentially looking at the adjoint equation with a
| new kind of control `uu` and a new adjoint state variable `ee`.
| This is a linear quadratic control on the adjoint of the error
| generated by the Luenberger observer. There is a mostly open
| choice for `Q` and `R` that we discuss below. Through a long
| sequence of derivations that is nearly identical to that of
| linear quadratic control, we find that we eventually solve for
| a matrix P in the system:
|
|     P At + A P - P Ct inv(R) C P = -Q
|
| And then set the observer gain to `L = -P Ct inv(R)`.
|
| Given this observer gain, we go back to our system above with
| `xx`, plug it in, and then solve. That system is constantly
| updated with observations of the original system in `y` and it
| produces an `xx` that converges very rapidly to the original
| state variable `x`. This gives us a way to view all of the
| state variables in the original equation even though we can't
| observe them all directly.
|
| Now, to get to a full Kalman filter, there's a question of how
| to choose the matrices Q and R above. They could be almost
| anything. If you use an estimate of what you believe the
| covariances of the errors in the state and observations of the
| original equations to be, you get a Kalman filter. Really, the
| derivation above is continuous, so it would be a Kalman-Bucy
| filter. Honestly, you don't need any statistics, though.
| Normally, just normalize the weights so that all of the state
| variables are around the same size, and then maybe adjust based
| on what you think is more important. It generally works just
| fine.
|
| In short, a Kalman filter is a little machine, built from the
| same machinery as a feedback controller, that rapidly converges
| to the state of a system of interest. This is useful because
| it's unlikely that all of those states can be observed
| directly. Due to how feedback controllers work, it is robust to
| errors and can handle variation or error in the observations.
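| If you want to try this numerically, SciPy's Riccati solver
| handles the equation above. A sketch for a double integrator
| with a position-only sensor (Q and R chosen ad hoc, and using
| the sign convention in which `A - LC` is stable):
|
|     import numpy as np
|     from scipy.linalg import solve_continuous_are
|
|     A = np.array([[0.0, 1.0], [0.0, 0.0]])  # double integrator
|     C = np.array([[1.0, 0.0]])              # observe position
|     Q, R = np.eye(2), np.array([[1.0]])     # ad hoc weights
|
|     # Passing the transposed (adjoint) system to the standard
|     # CARE solver yields  A P + P At - P Ct inv(R) C P + Q = 0.
|     P = solve_continuous_are(A.T, C.T, Q, R)
|     L = P @ C.T @ np.linalg.inv(R)       # observer gain
|     print(np.linalg.eigvals(A - L @ C))  # negative real parts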
| asimeqi wrote:
| Kalman filters are used for changing entities. Say you have a
| moving object. You could measure (with some errors) its
| position every second and use that as an estimate of where it
| is. Or you could use your position measurements in combination
| with an estimate of the object's speed, obtained from previous
| measurements, to estimate where the object is. The second
| method is the Kalman filter.
| a_wild_dandan wrote:
| To clarify, you mean _completely independent_ speed
| measurements, right? As in speeds not derived from the
| position measurements? Otherwise, that feels like getting
| something from nothing.
| asimeqi wrote:
| I meant speeds derived from previous position measurements.
| Using those speeds is not getting something from nothing.
| It means using previous measurements to estimate where the
| object might be now, then combining that estimate with the
| current measurement to get the "best" estimate of its current
| position, for some definition of best.
| bee_rider wrote:
| I believe part of the cleverness of the Kalman Filter is
| that it works out the degree to which your measurements are
| correlated for you. I haven't looked at it in a while,
| though.
| tnecniv wrote:
| Not your measurements. That correlation must be
| specified. It works out the correlations of your state
| (the thing you are estimating).
|
| In the above example, the measurements are noisy
| mechanical states (position and momentum). However, your
| measurements can be any (linear plus noise) function of
| the state, but you need the covariance of your sensor
| noise.
| trollerator23 wrote:
| Yes, because unlike a simple average you have a model for both
| what you are trying to measure and the measurement.
|
| For example, one basic model for your example could be that
| you are trying to measure a constant with some initial
| uncertainty, say Gaussian with a given standard deviation,
| from noisy measurements whose error is also Gaussian with its
| own standard deviation. You can tune the initial uncertainty
| (a number) around the constant you are trying to estimate,
| and the uncertainty in the measurements (another number, which
| may or may not change).
|
| In this example a Kalman filter won't behave as an average. If
| the measurements are good (low uncertainty), the estimate will
| converge quickly; if they are bad, it will jump around and
| take longer to converge.
|
| Anyway, I made a mess, I'm not good at explaining...
|
| And by the way, it's not true what people are suggesting here,
| that Kalman filters are only for moving things. They are used
| to estimate constants _all the time_, but yes, they're more
| popular for moving things.
| bachmeier wrote:
| Generic warning for anyone implementing the Kalman filter. Read
| the first few pages of
| https://www.stat.berkeley.edu/~brill/Stat248/kalmanfiltering...
| for ways to handle numerical instability.
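| One standard trick in this vein is the Joseph-form covariance
| update, which stays symmetric and positive semi-definite in
| floating point (a generic sketch, not code from the paper):
|
|     import numpy as np
|
|     def joseph_update(P, K, C, R):
|         # Algebraically equal to (I - KC) P, but better behaved
|         # numerically.
|         A = np.eye(P.shape[0]) - K @ C
|         return A @ P @ A.T + K @ R @ K.T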
| tnecniv wrote:
| That paper contains an interesting reference to an exact
| generalization of the KF to a flat initial condition. I hadn't
| seen that before; it solves numerical issues that come from the
| common practice of just picking a large initial covariance. In
| practice, I haven't seen any notable numerical artifacts from
| doing that, but it's quite an appealing fix!
| davidw wrote:
| David's random thoughts: for some reason I pictured Wile E.
| Coyote not handling numerical instability correctly and
| getting, as per usual, blown up.
| frankreyes wrote:
| I've always wanted to write a numerical Kalman filter: just
| plug in a function and get its derivative numerically. It
| would be slow, but it would be easier to use.
| MayeulC wrote:
| This other article has nice visualizations too:
| https://www.bzarg.com/p/how-a-kalman-filter-works-in-picture...
|
| Posted to HN three times already :)
| akhayam wrote:
| Wow... you left me wondering why signal processing teachers in
| college don't teach Kalman Filters with this simplicity. I know
| mathematical concepts are best taught mathematically, but that
| does lead to information loss for those who don't have the
| background requisites.
|
| I used to teach the Discrete Cosine Transform and Wavelet
| Transform through images alone and would always find this
| teaching method of "intuition before rigor" to work better than
| the other way around.
| anthomtb wrote:
| > why signal processing teachers in college don't teach Kalman
| Filters with this simplicity
|
| > those who don't have the background requisites
|
| Any college student studying signal processing _should_ have
| the background prerequisites.
|
| That said, it is easy to forget fundamentals. I have a couple
| theories for why professors don't use the intuition-before-
| rigor approach.
|
| 1) The professors themselves do not have great intuition but
| rather deep expertise in manipulating numbers and equations.
| Unlikely theory, but possible.
|
| 2) Professors do not generally get rewarded for their teaching
| prowess. And breaking mathematical concepts down to an
| intuitive level requires quite a lot of work and time better
| spent writing grant proposals or bugging your doctoral
| students. Cynical theory but probably true in many cases.
|
| 3) Once you understand the math, it is so much easier than the
| intuitive approach that the lazy human brain will not allow you
| to go back to "the hard way". I like to think this is the
| primary driver of skipping an intuitive teaching approach.
|
| I would say #3 applies much more broadly than mathematical
| education. It is the difference between expertise and pedagogy.
| That is to say being an expert in a thing is a completely
| different skill than teaching others to become competent at the
| thing. Say you want to improve your golf game with better
| drives - should you learn from the guy at the range who hits
| the ball the farthest? Probably not. You should instead find
| the guy who has improved his drive the most. E.g., the guy who
| started out driving 100 yards and now consistently hits 300 is
| better to learn from than the guy who is hitting 350. (Credit
| Tim Ferriss for driving this concept into my head).
| sn41 wrote:
| Academic here. In my experience, many experts in any given
| area have massive amounts of intuition about their own area.
| But keep in mind that we do have to end up teaching outside
| the area. So it's safer to teach technicalities rather than a
| (most probably incorrect) intuition.
|
| Some other issues:
|
| - in many subjects, it is dangerous to work with the wrong
| intuition - wrong intuitions not only fail to give a clear
| picture, they actually give you a false picture.
|
| - even though math gets a bad rep, I think the only reason we
| are able to work with high-dimensional matrices and vectors
| is because we let go of geometric imagery at some point. Most
| people, including myself, have a hard time visualizing even 4
| dimensions.
| kccqzy wrote:
| Totally agreed. One of my favorite professors in college would
| introduce a theorem, and instead of jumping into proving it,
| would first show us useful applications of the theorem, and
| then prove it afterwards.
| stjohnswarts wrote:
| This is exactly what I did as a grad student when I taught
| classes for my prof. She preferred bottom-up and I preferred
| top-down. I wanted to give people a reason for learning what
| they were getting ready to learn, rather than the "let's start
| from first principles" approach that almost all my professors
| took.
| p1necone wrote:
| I prefer the exact same approach for learning to play board
| games too. Your 20 minutes of rules explanation will go
| completely over my head unless you start with "the goal/win
| condition of the game is to get more points/reach the end
| faster/collect all the macguffins".
| _a_a_a_ wrote:
| Aw god, yes, yes, yes! I understand applications, not
| abstractions, so show me its value and I'll understand it
| immediately, or give me a massive motivation to understand it
| if I don't. If they'd taken that approach at my schools and
| university I'd have been so much better off.
| FractalHQ wrote:
| I can _only_ learn top-down. Facts without context are much
| less interesting and much more difficult to internalize. The
| incentive to understand, inspired by said context, is a
| prerequisite to mustering the motivation needed to truly
| learn with sustained focus without losing interest. My ADHD
| brain will stop cooperating otherwise, and I'll find myself
| daydreaming against my will!
| [deleted]
| max_ wrote:
| Are Kalman filters useful for applications that involve
| financial markets?
|
| If so, how?
| esafak wrote:
| "Linear and non-linear filtering in mathematical finance: A
| review"
|
| https://bura.brunel.ac.uk/handle/2438/12433
| KRAKRISMOTT wrote:
| That's like asking if moving averages are useful. Of course
| they are, because they are components of basic signal
| processing. What you do with the data however is entirely on
| you. Most of these techniques are to improve the signal-to-
| noise ratio, but they won't help if you don't have an idea of
| what you are trying to look for in the first place.
|
| More broadly, financial markets (in certain cases and depending
| on asset) can generally be modeled by a _random walk_ , and
| that's something Kalman filters can help with as mentioned in
| another comment.
| tnecniv wrote:
| There seems to be a lot of confusion about what Kalman filters
| are for in this thread. Perhaps that's what happens when you
| seek a non-mathematical introduction to a mathematical topic,
| but nevertheless I'm going to try and clear this up.
|
| Specifically, the Kalman filter is a _recursive_ way to estimate
| the state of a dynamical system. That is, the thing you want to
| estimate varies as a function of time. It doesn't
| matter if that thing is the position and momentum of a robot or a
| stock price. What does matter are the following:
|
| 1. The dynamics are _linear_ with additive Gaussian noise. That
| is, the next state is a linear function of the current state plus
| a sample from a Gaussian distribution. Optionally, if your system
| is controlled (i.e., there is a variable at each moment in time
| you can set exactly or at least with very high accuracy), the
| dynamics can include a linear function of that term as well.
|
| 2. The sensor feeding you data at each time step is a linear
| function of the state plus a second Gaussian noise variable
| independent of the first.
|
| 3. You know the dynamics and sensor specification. That is, you
| know the matrices specifying the linear functions as well as the
| mean and covariances of the noise models. For a mechanical
| system, this knowledge could be acquired using some combination
| of physics, controlled experimentation in a lab, reading data
| sheets, and good old fashioned tuning. For other systems, you
| apply a similarly appropriate methodology or guess.
|
| 4. The initial distribution of your state when you start running
| the filter is Gaussian and you know its mean and covariance (if
| you don't know those, you can guess, because if the filter runs
| for long enough they become irrelevant).
|
| The Kalman filter takes in the parameters of the model described
| in (3) and gives you a new linear dynamical system that
| incorporates a new measurement at each time step and outputs the
| distribution of the current state. Since we assumed everything is
| linear and Gaussian, this will be a Gaussian distribution.
|
| From a Bayesian perspective, the state estimate is the posterior
| distribution given your sensor data, model, and initial
| condition. From a frequentist / decision theory perspective, you
| get the least squares estimate of the state subject to the
| constraints imposed by your dynamics.
|
| If your dynamics and sensor are not linear, you either need to
| linearize them, which produces the "extended Kalman filter" that
| gives you a "local" estimate of the state, or you need another
| method. A common choice is a particle filter.
|
| If your model is garbage, then model based methods like the
| Kalman filter will give you garbage. If your sensor is garbage
| (someone mentioned the case where it outputs a fixed number), the
| Kalman filter will just propagate your uncertainty about the
| initial condition through the dynamics.
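| In code, the whole machine is small. A textbook-style sketch of
| one predict/update cycle for the model above (no control input;
| matrix names A, C, Q, R as in points 1-3):
|
|     import numpy as np
|
|     def kf_step(xhat, P, y, A, C, Q, R):
|         # Predict: push estimate and covariance through dynamics.
|         xhat = A @ xhat
|         P = A @ P @ A.T + Q
|         # Update: fold in the measurement via the Kalman gain.
|         S = C @ P @ C.T + R             # innovation covariance
|         K = P @ C.T @ np.linalg.inv(S)  # Kalman gain
|         xhat = xhat + K @ (y - C @ xhat)
|         P = (np.eye(len(xhat)) - K @ C) @ P
|         return xhat, P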
| recursive wrote:
| A faulty sensor that always reads "6.8" is going to have zero
| variance. That would seem to make it the most trustworthy
| estimate.
|
| I'm sure real Kalman filters aren't so naive, but that does seem
| to be a tricky part.
| tnecniv wrote:
| If your only sensor is broken and tells you nothing, then your
| system is unobservable with such a sensor and the only thing
| you can do is propagate your initial uncertainty through the
| dynamics. The Kalman filter will do exactly that in this case.
| sowbug wrote:
| If I'm understanding the article, then an aspect of variance is
| how much a sensor deviates from the wisdom of the crowds. So
| the 6.8 is harmless if it happens to be close to the right
| answer, and downweighted if not.
|
| Which, if correct, means that Kalman filters are useful for
| detecting malicious input to crowdsourced data like surveys,
| reviews, etc.
| thot_experiment wrote:
| I've been wanting to make a deeper study of Kalman filters and
| related sensor fusion techniques for a while now. This looks
| like a decent resource, but perhaps the Hacker News horde has
| some even better suggestions for articles/textbooks/videos. If
| there is a particular piece of media that has really helped you
| grok this sort of thing, please leave a link below!
| sheepshear wrote:
| http://mocha-java.uccs.edu/ECE5550/index.html
| mlsu wrote:
| If you know a bit of Python and you find it sometimes tough to
| grind through a textbook, take a look here:
|
| https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Pyt...
|
| Interactive examples programmed in Jupyter notebooks.
| sdfghswe wrote:
| The HN horde will tell you that math is bad and you should just
| "get your hard dirty".
| antegamisou wrote:
| Nice typo but 100% agree, and if you dare tell anyone that some
| freshman college math is necessary to understand a moderately
| advanced subject, you're "gatekeeping" knowledge.
| tnecniv wrote:
| This is the thing with math and programming. The anti-math
| programmers are right: you don't need much math to have a
| career as a programmer because lots of applications (e.g.
| web pages) don't need much math.
|
| However, the more math you know, the more tools you have
| available to tackle problems, and the more diverse the set
| of projects you can take on. You'll spend less time
| reinventing the wheel, and you won't need to come up with
| shoddy heuristic solutions to problems, because you can
| formally specify the problem and apply the right tool to
| solve it.
| musebox35 wrote:
| For a robotics-oriented, part-theory, part-hands-on learning
| resource on Kalman filtering, I would suggest the Robot Mapping
| course from Cyrill Stachniss:
|
| http://ais.informatik.uni-freiburg.de/teaching/ws22/mapping/
| http://www.ipb.uni-bonn.de/people/cyrill-stachniss/
|
| There are YouTube videos from 2013 here:
| https://www.youtube.com/playlist?list=PLgnQpQtFTOGQrZ4O5QzbI...
|
| The course itself is mostly based on the Probabilistic Robotics
| book from Sebastian Thrun et al:
| https://mitpress.mit.edu/9780262201629/probabilistic-robotic...
| spieswl wrote:
| Probabilistic Robotics should be required reading for most
| robotics engineers, IMO. Great book.
| tnecniv wrote:
| The Thrun book is great and one of the few "essential" texts
| on robotics in my opinion.
| esafak wrote:
| MSR's Christopher Bishop offers a more intuitive, Bayesian
| derivation of the Kalman filter:
|
| https://www.youtube.com/watch?v=QJSEQeH40hM
| http://mlss.tuebingen.mpg.de/2013/2013/bishop_slides.pdf#pag...
| hazrmard wrote:
| Complementary to this, if you are looking for a thorough,
| mathematical introduction to Kalman filters and family, I
| highly recommend this book [1]:
|
| - Written by a software engineer who had to implement Kalman
| filters for his job. How he motivates and conveys concepts may
| resonate with this audience.
|
| - Written as interactive Jupyter notebooks. You can clone the
| repository and run through the lessons.
|
| - Incremental improvements: starting from a simple filter,
| incorporating Bayes' rule, and extending to probability
| distributions, it provides a gentle on-ramp to Kalman filters.
|
| [1]: https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-
| Pyt...
| ozim wrote:
| Was going to ask about applications - but the applications are
| right there in the GitHub description. Thanks, this seems like
| something I might use one day.
___________________________________________________________________
(page generated 2023-08-02 23:00 UTC)