[HN Gopher] Neural-control family: what deep learning and contro...
___________________________________________________________________
Neural-control family: what deep learning and control enables in
the real world
Author : sebg
Score : 81 points
Date : 2021-11-25 16:17 UTC (6 hours ago)
(HTM) web link (www.gshi.me)
(TXT) w3m dump (www.gshi.me)
| mark_l_watson wrote:
| Interesting part about taking advantage of invariances. There is
| more to this article than what I can digest on Thanksgiving.
| Bookmarked for later.
| joe_the_user wrote:
| This depends on which "real world" you're talking about.
|
| Doing what feedback-driven control systems do, but even better,
| is a nice and impressive application. That seems most useful for
| applications like the one being described - swarms of flying
| drones. Flying has generally already yielded to various control
| systems - autopilots work because the skies are mostly empty, so
| a system that behaves according to its predictions is all that
| matters. A drone swarm is much more complicated, but it is still
| under the system's control.
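|
| For concreteness, the classical baseline being compared against
| is roughly a textbook feedback loop; a minimal illustrative
| sketch in Python (not taken from the article), e.g. a PID
| controller:
|
|   class PID:
|       def __init__(self, kp, ki, kd):
|           self.kp, self.ki, self.kd = kp, ki, kd
|           self.integral = 0.0
|           self.prev_error = 0.0
|
|       def update(self, setpoint, measurement, dt):
|           # Act on the measured error, so disturbances the model
|           # never predicted still get corrected.
|           error = setpoint - measurement
|           self.integral += error * dt
|           derivative = (error - self.prev_error) / dt
|           self.prev_error = error
|           return (self.kp * error + self.ki * self.integral
|                   + self.kd * derivative)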
|
| It's worth saying that the "real world" where a lot of robots
| fail has different challenges. Whether you're talking about
| self-driving cars, robot dogs accompanying troops, or wheeled
| delivery robots in hospitals, the problem is figuring out both
| what you're looking at and how to respond to it. The difficulty
| is that nearly anything can show up and require a unique
| response, so progress here never quite seems to be enough.
| Better physics and better cooperation between controlled
| elements don't seem that useful here, so this approach might not
| help in that "real world".
| narrator wrote:
| What bugs me about most sci-fi is that the robots have bad aim.
| Watching these neural-control videos, it becomes pretty clear
| that robots in a sci-fi setting would kill people from miles
| away before our protagonist even knows they're there.
| mdp2021 wrote:
| Surprisingly?! You just set less need for false positives...
| cs702 wrote:
| Incorporating priors from physics into hybrid models - a DNN
| black box combined with a traditional model - makes a lot of
| sense for these kinds of applications. It also makes sense that
| regularizing the DNN black box to keep it "smooth enough" (i.e.,
| ensuring the change in output relative to the change in input
| stays below some threshold) helps make these complicated models
| more stable.
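|
| A minimal sketch of that general pattern - a known physics term
| plus a learned residual whose smoothness is enforced here via
| spectral normalization (names, dimensions, and the gravity-only
| prior below are made up for illustration; the authors' exact
| formulation may differ):
|
|   import torch
|   import torch.nn as nn
|
|   # Learned residual; spectral normalization on each linear
|   # layer bounds how fast the output can change with the input
|   # (a Lipschitz-style smoothness constraint).
|   residual = nn.Sequential(
|       nn.utils.spectral_norm(nn.Linear(6, 64)), nn.ReLU(),
|       nn.utils.spectral_norm(nn.Linear(64, 64)), nn.ReLU(),
|       nn.utils.spectral_norm(nn.Linear(64, 3)),
|   )
|
|   def hybrid_force(position, velocity, mass, g=9.81):
|       # Known physics prior: gravity acting on the vehicle.
|       nominal = torch.tensor([0.0, 0.0, -mass * g])
|       # Learned correction for unmodeled effects (e.g.
|       # aerodynamic interaction forces), kept smooth by the
|       # spectral norm above.
|       return nominal + residual(torch.cat([position, velocity]))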
|
| However, I don't quite understand how the authors are encoding
| "domain invariance" with "a domain adversarially invariant meta-
| learning algorithm." I'm not sure what that means. If any of the
| authors are on HN, a more concrete explanation of such "domain
| invariance encoding" would be greatly appreciated!
|
| Finally, I have to say: The field of deep learning and AI is
| going to benefit enormously from the involvement of more people
| with strong backgrounds in physics, especially the theorists who
| have invested many years or decades of their lives thinking about
| and figuring out how to model complicated physical systems.
___________________________________________________________________
(page generated 2021-11-25 23:00 UTC)