[HN Gopher] Artificial Intelligence in the Cockpit
       ___________________________________________________________________
        
       Artificial Intelligence in the Cockpit
        
       Author : ur-whale
       Score  : 85 points
       Date   : 2021-12-09 10:27 UTC (12 hours ago)
        
 (HTM) web link (daedalean.ai)
 (TXT) w3m dump (daedalean.ai)
        
       | goodpoint wrote:
       | A lot of accidents have been caused by the almost complete lack
       | of automated monitoring and good alerting on airplanes.
       | 
       | I find it very surprising that there isn't at least non-AI
       | software to monitor what the pilots are doing.
        
         | cjrp wrote:
         | On airliners? There's tons of them.. TCAS (you're about to hit
         | someone), GPWS (you're about to hit something), EICAS (your
         | engine isn't doing so well), etc.
        
           | goodpoint wrote:
           | That's exactly the problem. There's a ton of single-purpose
           | and "relatively dumb" systems that need to be enabled and
           | disabled as needed.
           | 
            | Plenty of accidents follow similar patterns: pilots forgot
            | to enable some warning system, or ignored a warning because
            | it came up in the wrong context, or were overwhelmed by the
            | number of checklists they had to do or by multiple alarms
            | coming up at the same time.
           | 
           | There is no monitoring system that is aware of the full
           | context, including the state of the plane, the history of
           | previous warnings or malfunctions and the intentions of the
           | pilots.
        
             | kube-system wrote:
             | > That's exactly the problem. There's a ton of single-
             | purpose and "relatively dumb" systems that need to be
             | enabled and disabled as needed.
             | 
             | Avionics: the original microservice infrastructure
        
             | jimktrains2 wrote:
             | > There is no monitoring system that is aware of the full
             | context, including the state of the plane, the history of
             | previous warnings or malfunctions and the intentions of the
             | pilots.
             | 
              | It's called the pilot. We even include a second system,
             | fully programmed and built entirely by a separate team, to
             | double-check and challenge the primary system: the copilot.
             | 
              | > Plenty of accidents follow similar patterns: pilots
              | forgot to enable some warning system, or ignored a warning
              | because it came up in the wrong context, or were
              | overwhelmed by the number of checklists they had to do or
              | by multiple alarms coming up at the same time.
             | 
             | And yet, commercial aviation is by far safer than driving.
              | General aviation, with fewer of these systems, is roughly
              | as safe as driving.
             | 
              | You also say plenty of accidents, but I'd be willing to
              | bet that just as many if not more were situations beyond
              | the pilots' control.
             | 
             | You're also not comparing it to the number of times that
             | pilots got out of sticky situations by following checklists
             | and/or not having warnings suppressed.
        
               | goodpoint wrote:
                | > It's called the pilot.
               | 
               | Can you please spare the patronizing tone?
               | 
               | > And yet, commercial aviation is by far safer than
               | driving.
               | 
                | So what? It could be safer than it already is.
               | 
               | > but I'd be willing to bet that just as many if not more
               | were situations beyond the pilots control.
               | 
               | Then you should review a good number of accidents.
               | 
               | > You're also not comparing it to the number of times
               | that pilots got out of sticky situations by following
               | checklists and/or not having warnings suppressed
               | 
               | That's an incorrect comparison. I never said that the
               | checklists should not be followed or anything like that.
        
               | carabiner wrote:
               | GA is about as safe as riding a motorcycle, which is what
               | this article and the EAA say:
               | https://inspire.eaa.org/2017/05/11/how-safe-is-it/
               | 
               | This is much less safe than driving a car.
        
         | dfsegoat wrote:
         | > I find it very surprising that there isn't at least non-AI
         | software to monitor what the pilots are doing.
         | 
         | For a multi-crew aircraft, I believe this is the entire point
         | of Crew resource management [1].
         | 
         | > "Specifically, CRM aims to foster a climate or culture where
         | authority may be respectfully questioned. It recognizes that a
         | discrepancy between what is happening and what should be
         | happening is often the first indicator that an error is
         | occurring"
         | 
         | For commercial aviation, I'd rather take my chances with the
         | crew with 60+ years of combined experience vs. an AI model.
         | 
         | 1 - https://en.wikipedia.org/wiki/Crew_resource_management
        
           | goodpoint wrote:
           | > I believe this is the entire point of Crew resource
           | management [1].
           | 
           | Not at all, CRM is about cognitive and interpersonal skills.
           | 
           | It's completely orthogonal to having some software that
           | monitors the plane for (subtle) faults and unexpected
           | behaviors and provides good contextual information.
           | 
           | > I'd rather take my chances with the crew with 60+ years of
           | combined experience vs. an AI model.
           | 
           | That's a false dichotomy and also I did not talk about AI.
           | 
           | I'd rather take my chances with the crew with 60+ years of
            | combined experience TOGETHER with an all-seeing monitoring
           | system.
        
       | cnqyx wrote:
       | Why would one name an aerospace company after Dadedalus, who
       | crashed after flying too close to the sun?
        
         | jollybean wrote:
          | Icarus crashed after flying too close to the sun despite
          | Daedalus' warnings.
        
         | throw0101a wrote:
         | > [...] _Dadedalus, who crashed after flying too close to the
         | sun?_
         | 
         | It wasn't Dadedalus [sic] who flew towards the sun, but his son
         | Icarus:
         | 
          | > _In Greek mythology, Daedalus (/ˈdɛdələs, ˈdiːdələs,
          | ˈdeɪdələs/; Greek: Daidalos; Latin: Daedalus; Etruscan:
          | Taitale) was a skillful architect and craftsman, seen as a
          | symbol of wisdom, knowledge and power. He is the father of
          | Icarus_ [...]
         | 
         | * https://en.wikipedia.org/wiki/Daedalus
        
           | mometsi wrote:
           | In any case, he manufactured some critically flawed aviation
           | equipment, which resulted in a fatality.
        
             | throw0101a wrote:
             | IIRC, it was the operator's (Icarus) fault.
        
               | namdnay wrote:
                | That's what Daedalus says, he's trying to shift the
                | blame onto the operator. But he's the one who made a
                | wing with wax whilst pretending it could be flown under
                | the previous "bird" certification.
        
               | dotancohen wrote:
               | The -MAX? Or are we calling it the -8 now?
        
               | fhd2 wrote:
               | I think I figured out why they called themselves that at
               | this point :P
        
           | zeckalpha wrote:
           | Since weather balloons pop before returning to the earth, at
           | one point I was on a weather ballooning project named Icarus.
        
         | cjrp wrote:
         | There's an (ex-) Navy air base called HMS Daedalus, better than
         | HMS Icarus
        
           | dotancohen wrote:
           | I seem to remember a space station with that name too.
        
       | scarier wrote:
       | I think this lecture (https://youtu.be/5ESJH1NLMLs) should be
       | mandatory for anyone trying to improve flight safety with more
       | automation. I think we're already at diminishing returns, and the
       | only way to eliminate accidents caused by over-reliance on
       | automated systems is to cut the pilot out of the loop entirely.
       | Even in general aviation, the low-hanging fruit is elsewhere.
       | 
       | That said, this is still pretty cool, and I could see something
       | like it being one component of a much larger fully automated
       | flight management system.
       | 
        | Edit: link should be fixed. If it's still broken, the title of
        | the lecture is "Children of the Magenta Line."
        
         | Animats wrote:
         | Wrong link? That's an ad for a book on graph algorithms.
        
           | scarier wrote:
           | Thanks! Should be fixed now.
        
       | mannykannot wrote:
        | This article conflates two very different levels of
        | capability.
       | 
       | On the one hand, the author discusses what they have achieved so
       | far - a machine-vision based system that can fly reasonably
       | competently in good visibility and low traffic density,
       | comparable, they say, to aviation 80 years ago (actually, as I
       | mentioned in another comment, aviation in 1941 had already
       | advanced significantly beyond that.)
       | 
        | A little later, they write this:
       | 
       |  _There is no reason to believe computers will always be worse at
       | that than you are. There is no reason the machine can't reliably
       | make the call to land in the Hudson when all engines are out and
       | to do so in adversarial conditions safely._
       | 
       | True enough, as far as it goes, but there is also no reason to
       | suppose that the technologies the author is discussing here will
       | deliver that level of performance. The good judgement
       | demonstrated by Sullenberger that day (and by many pilots in many
       | other dire situations) depended on an extensive understanding of
        | how the world works, and on the ability to reason about
        | outcomes outside the rules of the game, so to speak (for
        | example, short-cutting the checklist in order to ensure the
        | aircraft continued to have auxiliary power). Current machine-
        | vision systems, on the other
       | hand, lack the ability to reason about how things ought or might
       | be, and so can make utterly bizarre-seeming judgements about what
       | they are "seeing."
       | 
       | Personally, I believe fully-automated aviation will become both
       | feasible and acceptable, but with arguments like the one quoted
       | above, this article is glossing over the challenges that remain.
        
         | GrumpyNl wrote:
          | Is it AI or just a lot of if-then-else?
        
           | throwawaygh wrote:
           | What's a neural network made of?
        
         | hef19898 wrote:
          | Well, in good weather at ILS-equipped airports automated
         | landings are possible. Flight is done with autopilot more often
         | than not.
         | 
         | You have pilots for the situations the available automation
         | doesn't work.
        
         | fho wrote:
         | I am not exactly knowledgeable about airplane safety, but I
         | feel like safety checklists are something that could be, and
         | probably are, handled by a machine more efficiently?
         | 
          | I would guess that enough things on there are constantly
          | monitored these days that the plane could just give you a big
          | green "all systems nominal" light?
         | 
         | Maybe I am just naive :-)
        
           | quest88 wrote:
            | Pilots still miss the light for "all systems not nominal";
            | they only have so many brain cycles to spare for
            | interpretation.
           | 
           | One example that comes to mind is visual and audible alerts
           | when you haven't put the landing gear down and the system
           | thinks you're going to land (speed low, flaps extended,
           | descending). Yet gear-up landings are still common in single-
           | pilot general aviation aircraft.
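            | 
            | A minimal sketch of that kind of gear-warning logic (the
            | thresholds here are made up for illustration; real systems
            | vary by aircraft type):
            | 
            |     # Hypothetical gear-up warning check; thresholds invented.
            |     def gear_warning(airspeed_kt, flaps_deg, vs_fpm, gear_down):
            |         # "Looks like a landing": slow, flaps out, descending.
            |         landing_config = (airspeed_kt < 100
            |                           and flaps_deg >= 20
            |                           and vs_fpm < -300)
            |         # Alert only while the gear is still up.
            |         return landing_config and not gear_down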
        
           | mannykannot wrote:
           | I feel that there are definitely some areas where currently-
           | available technology is not being fully exploited - things
           | like calculating the weight and center-of-gravity position
           | (would it measurably increase any sort of risk to put a load
           | cell in each strut of the undercarriage?), checking the
           | parking brake is off when accelerating above taxiing speed,
           | and calling for an abort, before it is too late, if the
           | acceleration on takeoff is insufficient.
           | 
           | The inverse of your "all systems nominal" light exists, in
           | many if not all commercial airplanes of any size, in the form
           | of the master caution and master warning lights, but you also
           | need to know what, specifically, is going wrong.
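            | 
            | A toy sketch of the load-cell idea (strut names and arms are
            | invented; real weight-and-balance data is type-specific):
            | 
            |     # Weight and CG from a load cell in each undercarriage strut.
            |     STRUT_ARM_M = {"nose": 2.0, "left_main": 10.5,
            |                    "right_main": 10.5}  # metres aft of datum
            | 
            |     def weight_and_cg(loads_kg):
            |         # loads_kg: strut name -> load cell reading in kg.
            |         total = sum(loads_kg.values())
            |         moment = sum(kg * STRUT_ARM_M[s]
            |                      for s, kg in loads_kg.items())
            |         return total, moment / total  # weight, CG aft of datum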
        
         | taneq wrote:
         | > The good judgement demonstrated by Sullenberger that day (and
         | by many pilots in many other dire situations) depended on an
         | extensive understanding of how the world works
         | 
         | Tree bad, river pretty?
        
       | leaded_syrinx wrote:
       | This is really cool and I overwhelmingly agree with the risk
       | stats they mention in their brief intro.
       | 
        | That said, the reason pilots are paid to fly dangerous whirlybird
        | machines is primarily take-off and landing. Those are the
        | operations that demand the most concentration and the most
        | coordination with air-traffic controllers and other aircraft,
        | and, regardless of weather, they carry the most risk. Takeoff and
        | landing are likely to be among the last functions to be
        | automated, take-off especially. Another important aspect here is
        | that the pilot should be able to act independently of multiple
        | systems failing - as the saying goes... "fly the plane until it's
        | on the ground or stopped".
       | 
        | The coolest pilot-safety automation tool I've seen thus far is
        | an iPad app called Xavion [1], developed by Austin Meyer[0], the
        | OG developer of the X-Plane flight sim (definitely check out his
        | blog). Basically, it's an app that, with decent GPS, will
        | calculate a glide path to the nearest airport in seconds (a
        | rough sketch of that kind of computation follows the links
        | below). Austin is an avid pilot and clearly a brilliant guy -
        | I'm eager to see if he starts commenting on autopilot AI
        | initiatives like that of Daedalean.
       | 
       | 0 - https://austinmeyer.com/
       | 
       | 1 - https://xavion.com/
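        | 
        | The rough sketch promised above (the glide ratio and flat-earth
        | distance approximation are illustrative only; the real app does
        | far more, e.g. wind, terrain and energy management):
        | 
        |     import math
        | 
        |     def reachable(airports, lat, lon, alt_agl_m, glide_ratio=9.0):
        |         # Airports within still-air glide range of the position.
        |         max_range_m = alt_agl_m * glide_ratio
        |         out = []
        |         for name, (alat, alon) in airports.items():
        |             dx = (alon - lon) * 111_320 * math.cos(math.radians(lat))
        |             dy = (alat - lat) * 111_320
        |             if math.hypot(dx, dy) <= max_range_m:
        |                 out.append(name)
        |         return out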
        
       | Animats wrote:
       | There's already a system that does the whole job of flying, for
       | emergency use only - Garmin Safe Return.[1] In the plane, there's
       | one big red button. If pushed, the plane finds an airport and
       | lands, all by itself. It picks the nearest suitable destination
       | airport, using info about fuel state, weather, and airport
       | status. It starts squawking with the emergency transponder code.
       | It plays emergency messages to ATC. There's also pilot
       | incapacitation detection. If the pilot doesn't do anything for a
       | long time, the system sounds warnings, then takes over.
       | 
       | Everybody else has to get out of the way, though, when it
       | declares an emergency. It can't really communicate with ATC. So
       | it's strictly an emergency system, for now.
       | 
       | This is really just integrated control of the existing avionics.
       | The existing systems are good enough that you can input waypoints
       | and have them followed, and do an automatic instrument landing on
       | a designated runway. This mostly sets up a flight path. It can't
       | deal with traffic. There are no new sensors. The additional
       | hardware just lets it lower the landing gear, apply the wheel
       | brakes, and shut down the aircraft after landing. After which
       | you're blocking the runway until someone comes out and moves the
       | aircraft. Again, emergency use only.
       | 
       | It's intended for the "sick pilot, healthy airplane" case - the
       | pilot is out of action, but the hardware is fine. It's not
       | helpful in making hard decisions when the aircraft is having
       | problems.
       | 
       | [1] https://youtu.be/d-ruFmgTpqA
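        | 
        | The incapacitation logic is presumably a simple escalation
        | timer; a guess at its shape (timings invented, not Garmin's):
        | 
        |     WARN_AFTER_S = 60.0       # invented timings
        |     TAKEOVER_AFTER_S = 180.0
        | 
        |     def watchdog(idle_s):
        |         # Escalate as the pilot stays unresponsive.
        |         if idle_s >= TAKEOVER_AFTER_S:
        |             return "engage_autoland"
        |         if idle_s >= WARN_AFTER_S:
        |             return "sound_warnings"
        |         return "normal"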
        
         | t0mas88 wrote:
         | It's a great system, but it is very far from replacing pilots
         | in any useful way. As you said it's mostly a nice integration
         | of the existing autopilot functions.
         | 
          | Real-world flying at the 10^7 or 10^9 kind of safety scale
          | isn't in need of a better autopilot. That would be like
          | making a more precise cruise control when what you want is a
          | self-driving car. What it needs is better decision making and
          | problem solving skills. And those are very much lacking from
          | these current systems.
         | 
         | I have an instructor rating. We can teach an average human in 5
         | to 10 hours how to control a plane. Then we spend an additional
         | 40 to 60 hours on how to make the right decisions, solve
         | problems, deal with weather, traffic etc. And humans are really
         | good at that.
         | 
         | What is needed if you want autonomous flight is really good AI
         | decision making, not aircraft control.
        
           | Animats wrote:
           | Right. Good decision making when in trouble is a hard
           | problem. AI is still very bad at "common sense", which I
           | sometimes define, for robotics, as "getting through the next
           | 30 seconds without screwing up".
        
         | coredog64 wrote:
         | Allegedly the B-2 bomber has had "go to war" and "return to
         | base" buttons for 30 years. The latter is for exactly the same
         | reason you note: Plane is good, pilots are not. Not quite sure
         | about the value of the former.
        
           | hotpotamus wrote:
           | I'm not sure I buy the existence of such buttons, but if the
           | pilots are incapacitated in a nuclear bomber with a target in
           | mind, then I can think of a use for the go to war button.
        
         | Buttons840 wrote:
         | So the modern remake of those "pilot incapacitated, help me
         | land" shows can be shortened to "push the big red button, then
         | have a snack"?
        
       | verisimi wrote:
       | I'm genuinely a little bored of hearing about how AI has any
        | actual 'intelligence'. It doesn't. It's not akin to people.
       | 
       | The better analogy, IMO, is how people are being pushed into a
       | computing model and adopting computing attributes as
       | characteristics. We are becoming like "semi-autonomous
       | computers". And worse, it is a client-server model, with
       | corporations and governance taking the executive decisions!
       | 
       | Rather than describing some software as intelligent, we would be
       | better describing the change that this idea (and the idea of
       | collectivisation - as opposed to individuation) has had on
       | people. It is people that are changing, machines are still
       | inanimate.
        
         | IshKebab wrote:
         | I think it has some kind of intelligence. The word isn't well
         | defined enough for you to just declare that it doesn't.
        
         | bobthechef wrote:
         | Right. Every epoch shows a tendency to model intelligence in
         | terms of some favored technology. It's a mistake to take these
         | beyond very casual and loose metaphors. It's a very difficult
         | habit of thought to break for some people because they've
         | tacitly committed to a particular (sloppy and half-baked)
         | metaphysical view of the world and haven't yet learned to
          | examine those presuppositions or seen how they fail
         | spectacularly.
         | 
          | I would prefer we use "automation" instead of "AI". That way,
          | we are forced to speak specifically ("automation of _what?_";
          | it's always specific) and say things like "machine-automated
          | aviation" because that's what this is. It's clear, faithful to
          | the truth, and obvious what we're talking about and what's
          | happening at a general level, and we aren't reifying some vague
          | bullshit fantastical term like "artificial intelligence" which
          | only leads to romanticized mystification and projection. There
          | is no artificial intelligence. Machines doing so-called AI are
          | the same kinds of machines we use to write email and make phone
          | calls. We've just configured them in a way that makes them
          | useful in different situations in different ways _for specific
          | ends_, even if the applicability _appears_ general; _we've_
          | done the generalization which is then represented in concrete
          | ways which are not themselves general.
         | 
          | (N.b. central to intelligence is the ability to _abstract_ (not
          | the lambda calc/CS meaning) from particulars. I have the
          | concept Circularity which is not just an image (there are
          | potentially an infinite number of circular objects), but based
          | on experience of particular circular things, my intellect has
          | abstracted from this experience the universal concept of
          | Circularity. I understand Circularity, apart from any given
          | circle, and yet what is true of it is true of all circles. I
          | can analyze the concept to infer that _any_ circle's
          | circumference is twice its radius times pi or that its area is
          | the square of its radius times pi. Computers don't do this and
          | cannot do this _even in principle_ because all physical things
          | are always concrete. There is no physical Circularity, only
          | physical, concrete circles. And abstraction is not regression.
          | Indeed, all the abstract values in your computer aren't really
          | abstract except in the mind of the observer. Those in your
          | computer are representations only, devoid of denotation except
          | what's in the programmer's head.)
        
         | andreyk wrote:
         | FYI, there are a ton of definitions of 'intelligence', some fit
         | with what AI algorithms can do, some don't.
         | 
         | I found reading 'A Collection of Definitions of Intelligence'
         | quite interesting myself https://arxiv.org/abs/0706.3639
        
       | the-dude wrote:
        | > we have built a system that uses nothing but visual input, just
        | like the human pilot in that almost-uninstrumented aircraft from
        | 80 years ago, to
        | 
        |     see where you are without GPS or radio or inertial navigation
        |     see where you can fly without ADS-B or RADAR or ATC
        |     see where you can land without ILS or PAPI
       | 
       | Isn't this the 'Tesla' approach?
        
         | mannykannot wrote:
         | When I read that second claim, "see where you can fly without
         | ADS-B or RADAR or ATC", my first thought was "Really? this
         | system is reading sectional charts?" as there are quite a few
         | places where you cannot fly without some of the technologies
         | they are doing without, and those regions are not marked out on
         | the ground.
         | 
         | The choice to focus on purely visual systems seems an odd one,
         | given that aviation did not really get going until it advanced
         | beyond that stage. Even by 80 years ago, instrument flying with
         | radio navigation in conditions that did not permit visual
         | flying was a routine practice, and this, together with the
         | increasing number of aircraft in the sky, necessitated the
         | development of air traffic control.
         | 
         | While one can very plausibly argue that we are approaching the
         | point where ATC as we know it could be largely dismantled (not,
         | by the way, through the use of AI), that will not be achieved
         | by reverting to purely visual methods, which can only function
         | safely in fair weather with good visibility, and also only
         | where the traffic density and speed are low.
        
           | ur-whale wrote:
           | > those regions are not marked out on the ground
           | 
           | There isn't anything in the article that suggests that the
           | system does not have sectional charts in a database...
           | 
            | Once you know where you are in the world in VFR mode, it's a
            | fairly simple affair (eg landmark detection) to register that
            | position on a sectional map, however invisible it may be in
            | the real world (a toy version of that kind of check is
            | sketched at the end of this comment).
           | 
           | > that will not be achieved by reverting to purely visual
           | methods
           | 
           | Another extrapolation on your part: they never said "purely",
           | did they?
           | 
           | The visual part of the system is nowhere listed as being the
           | "only" way for the plane to know where it is.
           | 
            | It's very likely their system will gulp in whatever signal
            | it can get its hands on (GPS, etc...) and integrate it into
            | a final solution.
           | 
            | The final system will be much more robust than any of the
            | existing components used on an airplane to "know where you
            | are".
        
             | mannykannot wrote:
             | I should have known better than to just write "my first
             | thought" without further qualification...
             | 
             | While the author may not have written "purely visual",
             | there is this quote, immediately preceding the claims
             | quoted at the root of this thread:
             | 
             | "At Daedalean, we have built a system that uses _nothing
              | but visual input_, just like the human pilot in that
             | almost-uninstrumented aircraft from 80 years ago, to..."
             | [my emphasis.]
             | 
             | So, while the author does not use "purely", he states
             | something equivalent to doing so. Furthermore, the entire
             | tenor here is that they are not using any of the modern
             | technologies that are currently central to commercial
             | aviation (why he wishes to stress this fact is another
             | issue altogether.)
             | 
              | My point here is a bit more subtle than pointing out the
             | obvious: that automating VFR flight will not cut it (with
             | or without GPS and other aids, for that matter.) It is,
             | rather, that there remains a huge gap between what they
             | have achieved so far (or could be achieved just by
             | integrating GPS and similar aids) and what will be required
             | to achieve the level of performance blithely suggested in
             | the comment about landing on the Hudson river.
             | 
             | UPDATE: I made that last point in a different comment; what
             | I was wondering here was why the author seems determined to
             | point out that their current system does not use GPS etc. -
             | perhaps because any of that would make it look less like
             | AI?
        
               | triplelll wrote:
                | Paraphrasing the article: If you connect a GPS to your
                | small or large aircraft's autopilot and you use it to fly
                | from A to B without having a human pilot ready to take
                | over at any point, you are a) breaking the law and b)
                | going to die sooner or later (well, we all do, but you
                | know what I mean). Making GPS-based navigation
                | aerospace-grade safe is not possible today. But everyone
                | with a pair of eyes can legally and safely pull off the
                | same feat. So to replicate that feature you have to
                | somehow bring computer vision to the level of aerospace-
                | grade reliability. I think that's what the author was
                | going for here.
        
           | jcims wrote:
           | It kind of sounds like they are targeting general aviation,
           | where VFR still has a pretty strong presence. If they can
            | create a bolt-on autopilot that can also take off and land,
            | that might be marketable.
        
             | kayodelycaon wrote:
             | An autopilot that fails as soon as you end up in a low-
             | visibility situation sounds like a really, really bad idea,
             | especially with pilots who don't know how to fly by
             | instruments.
             | 
             | We have better technology than that now.
        
               | jcims wrote:
                | Totally agree, we know what happens when pilots lose
                | visual reference and don't trust their instruments. I
                | think I'm biasing my opinion on finding it impossible to
                | believe they won't incorporate basic things like an IMU
                | (or three) into the product, so that simple flight
                | coordination remains possible when there is a loss of
                | vision data.
               | 
               | Strictly speaking that wouldn't be vision only though, so
               | who knows.
        
               | kayodelycaon wrote:
               | If you haven't seen it, Garmin has an emergency automatic
               | landing system that's installed in a few planes already:
               | https://discover.garmin.com/en-US/autonomi/#autoland
               | 
               | Limitations: https://www.garmin.com/en-US/legal/ALuse/
               | 
               | Video: https://www.youtube.com/watch?v=d-ruFmgTpqA
        
               | jcims wrote:
               | Very nice!!! Thanks for the links...that Piper is a sexy
               | beast.
        
         | cjrp wrote:
          | Seems like a weird selling point: "we're as good as the pilots
         | from 80 years ago!"
        
         | [deleted]
        
         | triplelll wrote:
         | Teslas don't land or fly. In what sense do you mean?
        
           | [deleted]
        
           | throw0101a wrote:
           | In the sense of (possibly) overselling what the system is
           | capable of doing.
        
             | DoingIsLearning wrote:
             | I think it was meant in the sense that it has (mostly) a
             | single source of information.
             | 
             | A big part of safety critical systems is having redundancy
             | and often voting systems, or sensor fusion. So having a
             | single source of information means that you can potentially
             | go down with a single-fault condition.
             | 
             | (e.g. If your vision system is in the visible spectrum,
              | things like sun glare, snow, bird poop on your
              | lens/windshield, one of your cameras fails and you lose
             | stereo vision etc)
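              | 
              | A minimal sketch of the 2-out-of-3 voting arrangement
              | described above (the agreement tolerance is invented):
              | 
              |     def vote(a, b, c, tol=1.0):
              |         # Accept a value only if two channels agree.
              |         for x, y in ((a, b), (a, c), (b, c)):
              |             if abs(x - y) <= tol:
              |                 return (x + y) / 2.0
              |         return None  # no agreement: flag the reading bad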
        
           | hoseja wrote:
           | Trying to achieve self-driving with only optical cameras.
        
         | numpad0 wrote:
          | I believe Tesla's problems basically stem from irresponsible
          | decisions made on a dubious technical basis. They're not
          | always architecturally wrong.
        
       | dr_dshiv wrote:
       | Autopilot was invented in 1914. It is clearly a form of
       | artificial intelligence. That AI doesn't require computers should
       | be a conceptual shock-- one that can help us better understand
       | what AI really is.
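        | 
        | Stripped to its essence, that 1914 autopilot is a negative
        | feedback loop; a toy wing leveler, with an invented gain:
        | 
        |     def wing_leveler(bank_deg, k=0.5):
        |         # Aileron command opposes the gyro-sensed bank angle.
        |         return -k * bank_deg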
        
         | throw0101a wrote:
         | I'm not sure a feedback loop is "clearly" AI:
         | 
         | * https://www.electronics-tutorials.ws/systems/negative-
         | feedba...
         | 
         | * https://en.wikipedia.org/wiki/Feedback
         | 
         | There are no heuristics involved:
         | 
         | * https://en.wikipedia.org/wiki/Heuristic_evaluation
         | 
         | * https://en.wikipedia.org/wiki/Heuristic_(computer_science)
        
           | bencoder wrote:
           | Seems like an example of the AI effect:
           | https://en.wikipedia.org/wiki/AI_effect
           | 
           | although I think when autopilot was invented, the concept of
           | "AI" didn't exist, so maybe not a perfect example
        
           | dr_dshiv wrote:
           | Autopilot is clearly AI. If I didn't mention 1914, you
           | wouldn't doubt it.
           | 
           | A feedback loop -- namely between a system and its measure of
           | its own performance-- is _central_ to the idea of AI. At
           | least according to Peter Norvig, the director of research at
           | Google, who defines intelligence as 'the ability to select an
           | action that is expected to maximize a performance measure'
            | (Russell & Norvig, 2016, p. 37).
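            | 
            | In toy form, that definition is just expected-utility action
            | selection (all names here are generic placeholders):
            | 
            |     def select_action(actions, outcomes, prob, perf):
            |         # Pick the action maximizing the expected value of
            |         # the performance measure over its possible outcomes.
            |         def expected(a):
            |             return sum(prob(o, a) * perf(o)
            |                        for o in outcomes(a))
            |         return max(actions, key=expected)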
        
             | HPsquared wrote:
             | That definition sounds more like a definition of control
             | systems.
        
               | dr_dshiv wrote:
               | Note the author alluding to the steam governor as the
               | start of AI. Norvig points to the convergence of control
                | systems, cybernetics and AI.
        
               | fault1 wrote:
               | many formulations of RL systems and optimizing feedback
                | control have their origins in, for example, optimal
               | control theory: https://en.wikipedia.org/wiki/Pontryagin%
               | 27s_maximum_princip...
        
             | tomrod wrote:
             | Respectfully, while the optimization functions and
             | constraint handling may have overlap, they differ in their
             | intended applications. For optimal control there is no
              | adaptation outside the initial ruleset, and that
              | adaptation is the core of AI. Optimal control is "keep
              | things on path".
        
               | fault1 wrote:
               | But in many industrial control systems, there is inherent
               | adaptation required to deal with exogenous shocks or
               | noise: https://en.wikipedia.org/wiki/Data-
               | driven_control_system
               | 
                | see also the last few chapters here:
                | http://lavalle.pl/planning/book.html
               | 
                | I would say that, relative to classical control, fast
                | GPUs have replaced the need for certain closed-form
                | solutions that are easily analyzable.
        
               | tomrod wrote:
               | My working model here: rocket science and mining use
               | integrals, yet are different.
        
             | throw0101a wrote:
             | > _A feedback loop -- namely between a system and its
             | measure of its own performance-- is central to the idea of
             | AI._
             | 
             | A feedback loop may be _necessary_ for AI (or even
             | "natural" intelligence, and sentience (nevermind sapience))
             | to happen, but is simply having it _sufficient_?
        
               | dr_dshiv wrote:
               | This is a well-phrased question. Is a reward feedback
               | loop sufficient to create intelligence? DeepMind claims
               | "yes" in their 2021 paper "Reward is Enough":
               | https://deepmind.com/research/publications/2021/Reward-
               | is-En...
        
             | mikro2nd wrote:
             | I am forced to wonder, though, whether Norvig's definition
             | is a useful one. By this measure we'd have to consider
             | bacteria to be intelligent. Corporations, too, for that
             | matter.
             | 
             | (I do go along with author Charles Stross, occasionally
             | seen here on HN: that Corporations are Old Slow AIs, in
             | which case we've already had AIs around for centuries.)
             | 
             | Personally I'm happy to grant a degree of intelligence to
             | plants (and probably even bacteria, if I squint hard
             | enough) though it's of a quite different nature to our own.
              | Certainly feedback loops are _central_ to the _idea_ of
              | intelligence, but there's a whole lot more wanted/needed
              | than merely feedback. And so, I find Norvig's definition a
              | tad wide of the sort of intentionality and sentience we'd
              | want flying a plane, and certainly far, _far_ short of the
              | sort of thing we'd call AGI.
        
               | fho wrote:
               | Some trivia about plant "intelligence":
               | 
                | 1. Venus fly traps detect if an insect is on the trap and
                | close the trap. So far so known, but fewer people know
                | that the trap will open again after a while if no
                | movement is detected (ie if a stone fell in). Likewise,
                | digestion will only start if movement is detected for a
                | while.
               | 
               | 2. Mast years
               | (https://en.wikipedia.org/wiki/Mast_(botany)) ... somehow
                | trees communicate when to produce seeds en masse ... from
               | what I gathered we have no idea how they do that.
        
               | mikro2nd wrote:
               | It seems that quite a lot of tree-to-tree networking is
               | done via mycorrhizal networks. Without doubt there are
               | mutually beneficial interactions between plant roots and
               | fungi for extracting nutrients, and quite a lot of good
               | evidence that those networks are informational in nature,
               | too. Whether that's related to seed-dispersal patterns...
               | I have no idea.
               | 
                | Alternately, I have also read of trees exuding pheromones
               | via their leaves as a warning to other trees in the
               | vicinity when predators (antelope) come around to munch
               | on the leaves, resulting in surrounding trees rapidly
               | increasing tannin content in their own leaves to make
               | them unpalatable to the browsers.
               | 
               | There's a whole lot of shit going on out there that we're
               | scarcely aware of...
        
               | dr_dshiv wrote:
               | The discussion is not about AGI.
               | 
               | Is it _useful_ to view corporations as old, slow AI? I
               | certainly think so. Otherwise we get really confused
               | about AI. Look at Zillow. That was a deep
                | misunderstanding of AI-- "the product isn't done until
                | we take the humans out of the equation". No. What is
               | intelligent is to have a system that uses its own
               | measures of success to improve. This, by the way, is why
               | cybernetics is so critical to understand in the context
               | of AI design.
        
               | mannykannot wrote:
               | I cannot help but feel that this is just extending the
               | confusion to Zillow. It seems utterly implausible that
                | Zillow's ill-advised zeal for removing people from the
                | process was driven by an overarching desire to develop
                | AI, as opposed to, say, making more money.
        
               | dr_dshiv wrote:
               | I would yield to better evidence, but my suspicion is
               | that it stemmed directly from executive-level confusion
               | about AI. Consider: how many investor pitches have you
               | seen that claim "AI" technology as a mechanism to
               | increase perceived IP value? They were telling their
               | investors that they had AI, and AI means people aren't
               | involved (fallacy).
        
               | mannykannot wrote:
               | No doubt some pitches do claim that AI will increase
               | perceived IP value, but for an investor to go from
               | perceived IP value to the conclusion that people are not
                | involved seems completely unjustified,
               | and I see no evidence that people are actually thinking
               | this way. Furthermore, I have no idea how you think this
                | line of thought justifies calling 1914's very primitive
               | autopilot technology AI.
        
               | dr_dshiv wrote:
               | I have encountered CEO boardroom thinking that, for
               | instance, suggested that data scientists were not
               | necessary because AI would replace them.
               | 
               | I have experience in adaptive education where massively
               | expensive teams of engineers missed the point that the
               | "smartness" of the system needs to be based on improving
               | outcome measures (namely, learning outcomes) and instead
               | focused on massive, complex modeling initiatives with no
               | feedback loops to indicate whether the models were doing
               | anything useful.
               | 
               | If more people understood why a steam governor or a 1914
               | autopilot or a corporate bylaw were primitive forms of
               | AI, they wouldn't be looking for magic. "If I can
               | understand it, it must not be AI"
        
               | mannykannot wrote:
               | > If more people understood why a steam governor or a
               | 1914 autopilot or a corporate bylaw were primitive forms
               | of AI, they wouldn't be looking for magic. "If I can
               | understand it, it must not be AI"
               | 
               | By the same argument, a person could say "AI is like a
               | steam governor. I understand completely how steam
               | governors work, and I know they cannot possibly translate
               | from one language to another or recognize faces, so any
               | claim that AI can do so is nonsense." This, of course,
                | would be a completely fallacious argument - and where it
               | goes wrong is precisely with the assumption that AI is
               | anything like a governor, except in the broadest possible
               | way that gives precedence to a commonplace resemblance
               | over all the substantive ways in which they are almost
               | completely different.
               | 
               | I understand your desire to persuade people to not regard
               | AI as magic, but I do not think this is helping.
        
               | dr_dshiv wrote:
               | See the paper from DeepMind on "reward is enough" and
               | Alfred Russel Wallace on the relationship between steam
               | governors and evolution. From that perspective, systems
               | like steam governors can eventually recognize faces.
        
               | mannykannot wrote:
               | _Eventually_ - after they have evolved to the point that
               | they are more unlike steam governors than they are like
               | them, and become something else in the process (a
               | different species, for example.) To the best of my
               | knowledge, no steam governor has ever recognized a face -
               | or evolved into something that has, for that matter.
               | 
               | I am also curious as to how you reward a steam governor -
               | does it get more sex, with better governors? You might
               | reward the inventor of a particularly effective governor
               | with orders, but that isn't rewarding the governor.
        
         | mannykannot wrote:
         | It does not help to call that technology AI. If we do so, now
         | we have to invent some new term to distinguish between that
         | sort of "AI", and the sort of AI that could possibly replace
         | pilots, as the former sort obviously cannot.
        
           | charcircuit wrote:
           | Some AI are more intelligent than others.
        
           | tomrod wrote:
           | AI-augmented flight.
           | 
            | We should not remove humans from the loop until the system
            | passes the Turing test and solves the trolley problem in a
            | way consistent with acceptable ethical standards.
        
             | mannykannot wrote:
             | While I share your caution, I feel your requirements are a
             | little too stringent. For one thing, no decent trolley
             | problem has a solution that humans are all satisfied with.
             | Also, quite a few commercial flying accidents have resulted
             | from poor and even bizarre decisions made by their
              | presumably Turing-test-passing crew.
        
           | dr_dshiv wrote:
           | "Replacements for people" is a terrible definition of AI, in
           | theory and in practice.
           | 
           | Alexa, Tesla autopilot, Google search, speech to text,
           | recommendations-- any practical example of AI-- these are
           | tools. Not human replacers.
        
             | mannykannot wrote:
             | The author is not defining AI as such; he is proposing it
             | as an application thereof.
             | 
             | Is it your position that commercial flying can never be
             | automated, or only that, if it is automated, it would not,
             | by definition, be AI?
        
               | dr_dshiv wrote:
               | When you say "position that commercial flying can never
               | be automated," I'm assuming this to mean "fully
               | automated."
               | 
               | My position is that "fully automated" is a logically
               | inconsistent human objective. It means that there is no
               | possibility of human control, because if there is human
               | control / supervision, the system is not fully automated.
               | And, so long as there is human control, you are designing
               | tools for people to use. No one wants anything to be
               | fully automated. It's the biggest fallacy of AI in
               | design.
        
               | mannykannot wrote:
               | I wasn't asking about what you think of it as an
               | objective, but whether you think it is possible.
               | Furthermore, when being considered as an objective,
               | whether it is done by anyone's definition of AI seems
               | utterly beside your point. Neither the feasibility of
               | automating the pilot's job, nor whether it is desirable
                | to do so, depends on whether first-generation autopilots
               | are considered to be AI.
        
               | dr_dshiv wrote:
               | Fully automated commercial flight is impossible. There,
               | I've said it.
        
               | mannykannot wrote:
               | I think so too - but it is inconceivable to me that this
               | could be accomplished without feedback mechanisms or
               | processes, which would, by your definition, qualify it as
               | AI.
        
               | mannykannot wrote:
               | I see that I completely misread your post, which was
                | quite a mistake, given that it is just one sentence of
               | only ten words!
               | 
               | If your basis for so believing is the same as you
               | expressed a couple of posts back - that "fully automated"
               | means that there is no possibility of human control,
               | because if there is human control / supervision, the
               | system is not fully automated - then you are simply
               | avoiding the issue by using a pedantic definition of
               | automation. It would seem that, to you, full automation
               | of anything is a logical impossibility as there is no
               | such thing as automation in your dictionary.
               | 
               | If this is your argument then, by your definition, even
               | that task of governing the speed of a steam engine cannot
               | be fully automated, as someone sets the desired speed.
               | This sort of reasoning is not insightful; it merely turns
               | your participation in a discussion into a statement of
               | your private lexicon, and avoids engaging any substantive
               | issue.
        
         | peter303 wrote:
         | The term cybernetics was used for both analog and digital
         | feedback automation systems. There is overlap with AI.
        
       ___________________________________________________________________
       (page generated 2021-12-09 23:01 UTC)