[HN Gopher] US Marines defeat DARPA robot by hiding under a card...
       ___________________________________________________________________
        
       US Marines defeat DARPA robot by hiding under a cardboard box
        
       Author : koolba
       Score  : 224 points
       Date   : 2023-01-25 14:00 UTC (8 hours ago)
        
 (HTM) web link (www.extremetech.com)
 (TXT) w3m dump (www.extremetech.com)
        
       | jeffrallen wrote:
       | Had an interesting conversation with my 12 year old son about AI
       | tonight. It boiled down to "don't blindly trust ChatGPT, it makes
       | stuff up". Then I encouraged him to try to get it to tell him
       | false/hallucinated things.
        
       | grammers wrote:
        | Nice story, but we shouldn't assume the technology won't keep
        | improving. What we see now is just the beginning.
        
         | kornhole wrote:
         | The story seems crafted to lull us into not worrying about
         | programmable soldiers and police.
        
       | DennisP wrote:
       | Turns out cats have been preparing for the AI apocalypse all
       | along.
        
       | amrb wrote:
        | A weapon to surpass Metal Gear!!
        
       | prometheus76 wrote:
        | A hypothetical situation: an AI is tied to a camera watching me
        | in my office, doing basic object identification. I stand up. The
        | AI recognizes me, recognizes the desk: "human" and "desk". I sit
        | on the desk. Does the AI mark it as a desk or as a chair?
       | 
       | And let's zoom in on the chair. AI sees "chair". Slowly zoom in
       | on arm of chair. When does AI switch to "arm of chair"? Now,
       | slowly zoom back out. When does AI switch to "chair"? And should
       | it? When does a part become part of a greater whole, and when
       | does a whole become constituent parts?
       | 
       | In other words, we have made great strides in teaching AI
       | "physics" or "recognition", but we have made very little progress
       | in teaching it metaphysics (categories, in this case) because
       | half the people working on the problem don't even recognize
       | metaphysics as a category even though without it, they could not
       | perceive the world. Which is also why AI cannot perceive the
       | world the way we do: no metaphysics.
        
         | skibidibipiti wrote:
         | [dead]
        
         | spacedcowboy wrote:
         | Thirty years ago, I was doing an object-recognition PhD. It
         | goes without saying that the field has moved on a lot from back
         | then, but even then hierarchical and comparative classification
         | was a thing.
         | 
         | I used to have the Bayesian maths to show the information
         | content of relationships, but in the decades of moving
         | (continent, even) it's been lost. I still have the code because
          | I burnt CDs, but the results of hours spent writing TeX to
         | produce horrendous-looking equations have long since
         | disappeared...
         | 
         | The basics of it were to segment and classify using different
         | techniques, and to model relationships between adjacent regions
         | of classification. Once you could calculate the information
         | content of one conformation, you could compare with others.
         | 
         | One of the breakthroughs was when I started modeling the
         | relationships between properties of neighboring regions of the
         | image as part of the property-state of any given region. The
         | basic idea was the center/surround nature of the eye's
         | processing. My reasoning was that if it worked there, it would
         | probably be helpful with the neural nets I was using... It
         | boosted the accuracy of the results by (from memory) ~30% over
         | and above what would be expected from the increase in general
         | information load being presented to the inference engines. This
         | led to a finer-grain of classification so we could model the
         | relationships (and derive information-content from
         | connectedness). It would, I think, cope pretty well with your
         | hypothetical scenario.
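          | 
          | (For flavour, a minimal sketch of that neighbour-augmentation
          | idea in modern Python/numpy - purely illustrative, and nothing
          | like the original code:)
          | 
          |     import numpy as np
          |     
          |     # toy per-region feature vectors (e.g. colour/texture)
          |     features = {
          |         "r1": np.array([0.9, 0.1]),
          |         "r2": np.array([0.8, 0.2]),
          |         "r3": np.array([0.1, 0.9]),
          |     }
          |     # adjacency between segmented regions
          |     neighbours = {"r1": ["r2"], "r2": ["r1", "r3"],
          |                   "r3": ["r2"]}
          |     
          |     def augment(region):
          |         # centre/surround: append the mean of the neighbours'
          |         # features to the region's own feature vector
          |         centre = features[region]
          |         surround = np.mean(
          |             [features[n] for n in neighbours[region]], axis=0)
          |         return np.concatenate([centre, surround])
          |     
          |     for r in features:
          |         print(r, augment(r))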
         | 
         | At the time I was using a blackboard[1] for what I called
         | 'fusion' - where I would have multiple inference engines
         | running using a firing-condition model. As new information came
         | in from the lower levels, they'd post that new info to the
         | blackboard, and other (differing) systems (KNN, RBF, MLP, ...)
         | would act (mainly) on the results of processing done at a lower
         | tier and post their own conclusions back to the blackboard.
         | Lather, rinse, repeat. There were some that were skip-level, so
         | raw data could continue to be available at the higher levels
         | too.
         | 
         | That was the space component. We also had time-component
         | inferencing going on. The information vectors were put into
         | time-dependent neural networks, as well as more classical
         | averaging code. Again, a blackboard system was working, and
         | again we had lower and higher levels of inference engine. This
         | time we had relaxation labelling, Kalman filters, TDNNs and
         | optic flow (in feature-space). These were also engaged in
         | prediction modeling, so as objects of interest were occluded,
         | there would be an expectation of where they were, and even when
         | not occluded, the prediction of what was supposed to be where
         | would play into a feedback loop for the next time around the
         | loop.
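          | 
          | (To give a flavour of that predict-through-occlusion loop,
          | here's a toy constant-velocity Kalman filter in Python/numpy -
          | a reconstruction of the general idea from memory, not the
          | original code:)
          | 
          |     import numpy as np
          |     
          |     dt = 1.0
          |     F = np.array([[1, dt], [0, 1]])  # constant-velocity model
          |     H = np.array([[1.0, 0.0]])       # we only observe position
          |     Q = np.eye(2) * 1e-3             # process noise
          |     R = np.array([[0.25]])           # measurement noise
          |     x = np.array([[0.0], [1.0]])     # state: position, velocity
          |     P = np.eye(2)
          |     
          |     def step(z):
          |         global x, P
          |         x_p = F @ x                  # predict
          |         P_p = F @ P @ F.T + Q
          |         if z is None:                # occluded: coast on the
          |             x, P = x_p, P_p          # prediction alone
          |         else:                        # visible: correct
          |             K = P_p @ H.T @ np.linalg.inv(H @ P_p @ H.T + R)
          |             x = x_p + K @ (z - H @ x_p)
          |             P = (np.eye(2) - K @ H) @ P_p
          |         return x[0, 0]               # expected position
          |     
          |     for z in [1.0, 2.1, 2.9, None, None, 6.2]:
          |         print(step(None if z is None else np.array([[z]])))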
         | 
         | All this was running on a 30MHz DECstation 3100 - until we got
          | an upgrade to SGI Indys <-- the original Macs, given that OSX
         | is unix underneath... I recall moving to Logica (signal
         | processing group) after my PhD, and it took a week or so to
         | link up a camera (an IndyCam, I'd asked for the same machine I
         | was used to) to point out of my window and start categorizing
         | everything it could see. We had peacocks in the grounds
         | (Logica's office was in Cobham, which meant my commute was
         | always against the traffic, which was awesome), which were
         | always a challenge because of how different they could look
         | based on the sun at the time. Trees, bushes, cars, people,
         | different weather conditions - it was pretty good at doing all
         | of them because of its adaptive/constructive nature, and it got
         | to the point where we'd save off whatever it didn't manage to
         | classify (or was at low confidence) to be included back into
         | the model. By constructive, I mean the ability to infer that
         | the region X is mislabelled as 'tree' because the
         | surrounding/adjacent regions are labelled as 'peacock' and
         | there are no other connected 'tree' regions... The system was
         | rolled out as a demo of the visual programming environment we
         | were using at the time, to anyone coming by the office... It
         | never got taken any further, of course... Logica's senior
         | management were never that savvy about potential, IMHO :)
         | 
         | My old immediate boss from Logica (and mentor) is now the
         | Director of Innovation at the centre for vision, speech, and
         | signal processing at Surrey university in the UK. He would
         | disagree with you, I think, on the categorization side of your
         | argument. It's been a focus of his work for decades, and I
         | played only a small part in that - quickly realizing that there
         | was more money to be made elsewhere :)
         | 
         | 1:https://en.wikipedia.org/wiki/Blackboard_system
        
           | prometheus76 wrote:
           | This is really fascinating. Thank you for the detailed and
           | interesting response.
        
         | jjk166 wrote:
         | There are lots of things people sit on that we would not
         | categorize as chairs. For example if someone sits on the
         | ground, Earth has not become a chair. Even if something's
         | intended purpose is sitting, calling a car seat or a barstool a
         | chair would be very unnatural. If someone were sitting on a
         | desk, I would not say that it has ceased to be a desk nor that
         | it is now a chair. At most I'd say a desk can be used in the
         | same manner as a chair. Certainly I would not in general want
         | an AI tasked with object recognition to label a desk as a
         | chair. If your goal was to train an AI to identify places a
         | human could sit, you'd presumably feed it different training
         | data.
        
           | devoutsalsa wrote:
           | This reminds me of some random Reddit post that says it makes
           | sense to throw things on the floor. The floor is the biggest
           | shelf in the room.
        
             | tech2 wrote:
             | And that comment reminded me of a New Zealand Sky TV advert
             | that I haven't seen in decades, but still lives on as a
             | meme between a number of friends. Thanks for that :)
             | 
             | https://www.youtube.com/watch?v=NyRWnUpdTbg
             | 
             | On the floor!
        
             | JumpCrisscross wrote:
             | > _Reddit post that says it makes sense to throw things on
             | the floor_
             | 
             | Floor as storage, floor as transport and floor as aesthetic
              | space are three incompatible views of the same object.
             | The latter two being complementary usually outweighs the
             | first, however.
        
               | cwillu wrote:
                | Let me introduce you to the great American art form: the
               | automobile. Storage, transport, and aesthetic, all in
               | one!
        
               | toss1 wrote:
               | Even more: house and sporting gear!
               | 
               | Source: motorsports joke -- "you can sleep in your car,
               | but you can't race your house"
               | 
               | (It's not wrong...)
        
               | number6 wrote:
               | Never was there a more compelling argument to tidy up.
        
               | toss1 wrote:
               | Ha! right; don't overload the metaphysics!
        
         | narrationbox wrote:
         | > _Recognizes "human" and recognizes "desk". I sit on desk.
         | Does AI mark it as a desk or as a chair?_
         | 
         | Not an issue if the image segmentation is advanced enough. You
         | can train the model to understand "human sitting". It may not
         | generalize to other animals sitting but human action
         | recognition is perfectly possible right now.
        
         | kibwen wrote:
         | _> Which is also why AI cannot perceive the world the way we
         | do: no metaphysics._
         | 
         | Let's not give humans too much credit; the internet is rife
         | with endless "is a taco a sandwich?" and "does a bowl of cereal
         | count as soup?" debates. :P
        
           | throwanem wrote:
           | Yeah, we're a lot better at throwing
           | MetaphysicalUncertaintyErrors than ML models are.
        
         | yamtaddle wrote:
         | "Do chairs exist?"
         | 
         | https://www.youtube.com/watch?v=fXW-QjBsruE
         | 
         | Perhaps the desk is "chairing" in those moments.
         | 
         | [EDIT] A little more context for those who might not click on a
         | rando youtube link: it's basically an entertaining, whirlwind
         | tour of the philosophy of categorizing and labeling things,
         | explaining various points of view on the topic, then poking
         | holes in them or demonstrating their limitations.
        
           | malfist wrote:
           | I knew this was a vsauce video before I even clicked on the
           | link, haha.
           | 
            | Vsauce is awesome for mind-boggling stuff.
        
           | dredmorbius wrote:
           | That was a remarkably good VSauce video.
           | 
           | I had what turned out to be a fairly satisfying thread about
           | it on Diaspora* at the time:
           | 
           | <https://diaspora.glasswings.com/posts/65ff95d0fe5e013920f200
           | ...>
           | 
           | TL;DR: I take a pragmatic approach.
        
         | anigbrowl wrote:
         | That's why I think AGI is more likely to emerge from autonomous
         | robots than in the data center. Less the super-capable
         | industrial engineering of companies like Boston Dynamics, more
          | like the toy/helper market for consumers, more like Sony's
          | Aibo reincarnated as a raccoon or monkey - big enough to be
         | safely played with or to help out with light tasks, small
         | enough that it has to navigate its environment from first
         | principles and ask for help in many contexts.
        
         | pphysch wrote:
         | When the AI "marks" a region as a chair, it is saying "chair"
         | is the key with the highest confidence value among some
         | stochastic output vector. It's fuzzy.
         | 
         | A sophisticated monitoring system would access the output
         | vectors directly to mitigate volatility of the first rank.
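          | 
          | (A toy sketch of the difference - per-frame argmax flickers,
          | while smoothing the whole output vector is stabler; made-up
          | class names and numbers:)
          | 
          |     import numpy as np
          |     
          |     classes = ["chair", "desk", "person"]
          |     # per-frame softmax outputs from a hypothetical detector
          |     frames = np.array([[0.48, 0.45, 0.07],
          |                        [0.44, 0.49, 0.07],  # first rank flips
          |                        [0.49, 0.44, 0.07]])
          |     
          |     # naive: argmax each frame -> the label flickers
          |     print([classes[i] for i in frames.argmax(axis=1)])
          |     
          |     # smoother: exponential moving average of the full vector
          |     ema = frames[0]
          |     for p in frames[1:]:
          |         ema = 0.8 * ema + 0.2 * p
          |     print(classes[ema.argmax()])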
        
           | [deleted]
        
         | dQw4w9WgXcQ wrote:
         | > When does AI switch to "chair"?
         | 
         | You could ask my gf the same question
        
         | theptip wrote:
         | I like these examples because they concisely express some of
         | the existing ambiguities in human language. Like, I wouldn't
         | normally call a desk a chair, but if someone is sitting on the
         | table I'm more likely to - in some linguistic contexts.
         | 
         | I think you need LLM plus vision to fully solve this.
        
           | Eisenstein wrote:
           | I still haven't figured out what the difference is between
           | 'clothes' and 'clothing'. I know there is one, and the words
           | each work in specific contexts ('I put on my clothes' works
           | vs 'I put on my clothing' does not), but I have no idea how
           | to define the difference. Please don't look it up but if you
           | have any thoughts on the matter I welcome them.
        
             | Vecr wrote:
             | What's wrong with "I put on my clothing"? Sounds mostly
             | fine, it's just longer.
        
               | frosted-flakes wrote:
               | It's not idiomatic. No one actually says that.
        
               | ghaff wrote:
               | I wouldn't say that as an absolute statement, but in US
               | English (at least the regional dialects I'm most familiar
               | with), "throw on some clothes," "the clothes I'm
               | wearing," etc. certainly sound more natural.
        
             | yamtaddle wrote:
             | To me, "clothing" fits better when it's abstract, bulk, or
             | industrial, "clothes" when it's personal and specific, with
             | grey areas where either's about as good--"I washed my
             | clothes", "I washed my clothing", though even here I think
             | "clothes" works a little better. Meanwhile, "clothing
             | factory" or "clothing retailer" are perfectly natural, even
             | if "clothes" would also be OK there.
             | 
             | "I put on my clothing" reads a bit like when business-
             | jargon sneaks into everyday language, like when someone
             | says they "utilized" something (where the situation doesn't
             | _technically_ call for that word, in its traditional
             | sense). It gets the point across but seems a bit off.
             | 
             | ... oh shit, I think I just figured out the general
             | guideline:  "clothing" feels more correct when it's a
             | supporting part of a noun phrase, not the primary part of a
             | subject or object. "Clothing factory" works well because
             | "clothing" is just the _kind_ of factory.  "I put on my
             | nicest clothes" reads better than "I put on my nicest
             | clothing" because clothes/clothing _itself_ is the object.
        
               | brookst wrote:
               | There's also a formality angle. The police might inspect
               | your clothing, but probably not your clothes.
        
               | Eisenstein wrote:
               | It is fascinating to me how we (or at least I) innately
               | understand when the words fit but cannot define _why_
               | they fit until someone explains it or it gets thought
               | about for a decent period of time. Language and humans
               | are an amazing pair.
        
               | alistairSH wrote:
               | I think your first guess was accurate... clothes is
               | specific garments while clothing is general.
               | 
               | The clothes I'm wearing today are not warm enough.
               | [specific pieces being worn]
               | 
               | VS
               | 
               | Clothing should be appropriate for the weather.
               | [unspecified garments should match the weather]
        
         | edgyquant wrote:
          | You're overthinking it while assuming things have one label.
          | It recognizes it as a desk, which is a "thing that other things
         | sit on."
        
       | foreverobama wrote:
       | [dead]
        
       | martin1975 wrote:
        | Seems we're approaching the limits of what is possible w/AI
        | alone. Personally, I think a hybrid approach - interfacing human
        | intelligence w/AI (e.g. like the Borg in ST:TNG?) - would give
        | the military an edge in ways that adversaries cannot
        | easily/quickly reproduce or defeat. There's a reason we still put
        | humans in
       | cockpits even though commercial airliners can pretty much fly
       | themselves....
       | 
       | Hardware and software (AI or anything else) are tools, IMHO,
       | rather than replacements for human beings....
        
         | pixl97 wrote:
          | Humans are hardware; we are not anything magical. We do have 4
          | billion years of evolution keeping our asses alive, and that
          | has led to some very optimized wetware to that effect.
          | 
          | But thinking that wetware is always going to be
         | better than hardware is not a bet I'd make over any 'long'
         | period of time.
        
           | martin1975 wrote:
           | I'd like to think we're more than just machines. We have
           | souls, understand and live by a hopefully objective set of
           | moral values and duties, aren't thrown off by contradictions
           | the same way computers are.... Seems to me "reproducing" that
           | in AI isn't likely... despite what Kurzweil may say :).
        
             | unsupp0rted wrote:
             | > We have souls, understand and live by a hopefully
             | objective set of moral values and duties, aren't thrown off
             | by contradictions the same way computers are
             | 
             | Citations needed
        
               | martin1975 wrote:
               | are you feeling depressed or suicidal?
        
               | unsupp0rted wrote:
               | That reply would fit better on Reddit than HN. Here we
               | discuss things with curiosity.
               | 
                | If you're making a claim that humans have intangible
                | things like souls and an adherence to some kind of
                | objective morality beyond our societal programming, then
                | it's fair to ask for the reasoning behind it.
               | 
               | Every year machines surprise us by seeming more and more
               | human (err, perhaps not that but "human-capable"). We
                | used to have ineffable creativity or reasoning
                | that made us masters at drawing, painting, music, chess,
                | or Go. No longer.
               | 
               | There are still some things we excel at that machines
               | don't. Or some things that it takes all the machines in
               | the world to do in 10,000 years with a nuclear plant's
               | worth of energy that a single human brain does in one
               | second powered by a cucumber's worth of calories.
               | 
               | However, this has only ever gone in one direction:
               | machines match more and more of what we do and seem to
               | lack less and less of what we are.
        
               | martin1975 wrote:
               | How old are you if you don't mind me asking?
        
               | unsupp0rted wrote:
               | I do mind you asking
        
         | naasking wrote:
         | > Seems we're approaching limits of what is possible w/AI
         | alone.
         | 
         | Not even close. We've barely started in fact.
        
           | martin1975 wrote:
            | How's that? I don't even see problem-free self-driving taxis,
           | and they even passed legislation for those in California.
           | There's hype and then there's reality. I get your optimism
           | though.
        
             | naasking wrote:
              | They've barely started trying. We'd be reaching the limits
              | of AI if self-driving cars were an _easy_ problem and we
              | couldn't quite solve it after 15 years, but self-driving
              | cars are actually a _hard_ problem. Despite that, we're
              | pretty darn close to solving it.
              | 
              | There are problems in math that are _centuries_ old, and no
              | one is going around saying we're "reaching the limits of
              | math" just because hard problems are hard.
        
       | paradox242 wrote:
        | I imagined based on the title that they would basically have to
        | include the Metal Gear reference, and even though I was
        | expecting it, I was still
       | delighted to see a screen cap of Snake with a box over his head.
       | 
        | Once the AI has worked its way through all the twists and turns
       | of the Metal Gear series we are probably back in trouble, though.
        
       | antipaul wrote:
       | As long as you do something that was _not_ in the training data,
       | you'll be able to fool the AI robot, right??
        
       | MonkeyMalarky wrote:
       | Sounds like they're lacking a second level of interpretation in
       | the system. Image recognition is great. It identifies people,
       | trees and boxes. Object tracking is probably working too, it
       | could follow the people, boxes and trees from one frame to the
       | next. Juuust missing the understanding or belief system that
       | tree+stationary=ok but tree+ambulatory=bad.
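        | 
        | (A toy sketch of that missing layer - hypothetical class names
        | and threshold, sitting on top of whatever the detector and
        | tracker output:)
        | 
        |     # classes that should not move on their own
        |     STATIC_CLASSES = {"tree", "box", "rock"}
        |     
        |     def suspicious(track):
        |         # track: {"label": ..., "speed": m/s from the tracker}
        |         return (track["label"] in STATIC_CLASSES
        |                 and track["speed"] > 0.1)
        |     
        |     print(suspicious({"label": "tree", "speed": 0.0}))  # False
        |     print(suspicious({"label": "box", "speed": 0.7}))   # True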
        
         | voidfunc wrote:
          | I'd imagine it could also look at infrared heat signatures too
        
           | sethhochberg wrote:
           | Cardboard is a surprisingly effective thermal insulator. But
           | then again, a box that is even slightly warmer than ambient
            | temperature is... not normal.
        
             | pazimzadeh wrote:
             | or a box with warm legs sticking out of it?
             | 
              | this article reads like a psyop where they want the masses
             | not to be worried
        
       | major505 wrote:
        | The developers didn't play Metal Gear. The Marines did.
        
       | smileysteve wrote:
        | When you think of this in terms of the Western understanding of
        | war, and the expectation of trench warfare that persisted until
        | after WWII, the conclusions seem incorrect.
        
       | aaron695 wrote:
       | "US Marines Defeat land mine by stepping over it"
       | 
       | None of these would work in the field. It's both interesting and
       | pointless.
       | 
        | If they don't work, you've increased the robot's effectiveness,
        | i.e. you're running slower because you're carrying a fir tree or
        | a box.
       | 
       | If the robot has any human backup you are also worse off.
       | 
        | Anything meant to confuse the AI has to not hinder you - say, a
        | smoke bomb that also defeats thermal. It's not clear why the
        | DARPA robot didn't have thermal unless this is a really old
        | story.
        
         | ceejayoz wrote:
         | DARPA isn't doing this with the end goal of advising US troops
         | to bring cardboard boxes along into combat.
         | 
         | DARPA is doing this to get AIs that better handle behavior
         | intended to evade AIs.
        
       | jeffbee wrote:
       | All but literally this technique from BotW
       | https://youtu.be/rAqT9TA-04Y?t=98
        
       | JoeAltmaier wrote:
        | But once an AI is trained to recognize it, then all the AIs will
       | know. It's the glory of computers - you can load them all with
       | what one has learned.
        
       | eftychis wrote:
        | Once again, Hideo Kojima proves to be a visionary.
        
       | kornhole wrote:
        | They only need to add thermal imaging to fix this. The
        | terminators are coming, John Connor.
        
       | jqpabc123 wrote:
        | This is a good example of the type of issues "full self driving"
        | is
       | likely to encounter once it is widely deployed.
       | 
       | The real shortcoming of "AI" is that it is almost entirely data
       | driven. There is little to no real cognition or understanding or
       | judgment involved.
       | 
       | The human brain can instantly and instinctively extrapolate from
       | what it already knows in order to evaluate and make judgments in
       | new situations it has never seen before. A child can recognize
       | that someone is hiding under a box even if they have never
       | actually seen anyone do it before. Even a dog could likely do the
       | same.
       | 
        | AI, as it currently exists, just doesn't do this. It's all
       | replication and repetition. Like any other tool, AI can be
       | useful. But there is no "intelligence" --- it's basically as dumb
       | as a hammer.
        
         | lsh123 wrote:
         | I have a slightly different take - our current ML models try to
         | approximate the real world assuming that the function is
          | continuous. However, in reality the function is not continuous,
          | and the approximation breaks in unpredictable ways. I think the
          | "unpredictable" part is a bigger issue than the "breaks" part.
          | (Most) Humans use "common sense" to handle cases where the
          | model doesn't match reality. But AI doesn't have "common
          | sense", and it is dumb because of it.
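          | 
          | (Quick illustration: fit a smooth model to a step function and
          | the error concentrates, unpredictably, around the jump - a
          | polynomial stands in for the model here:)
          | 
          |     import numpy as np
          |     
          |     x = np.linspace(-1, 1, 200)
          |     y = np.sign(x)                    # discontinuous target
          |     yhat = np.polyval(np.polyfit(x, y, deg=9), x)
          |     
          |     err = np.abs(yhat - y)
          |     print(err[np.abs(x) > 0.5].max())  # small, far from jump
          |     print(err[np.abs(x) < 0.1].max())  # large, near the jump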
        
         | laweijfmvo wrote:
         | This story is the perfect example of machine learning vs.
         | artificial intelligence.
        
           | ghaff wrote:
           | Basically ML has made such significant practical advances--in
           | no small part on the back of Moore's Law, large datasets, and
           | specialized processors--that we've largely punted on (non-
           | academic) attempts to bring forward cognitive science and the
           | like on which there really hasn't been great progress decades
           | on. Some of the same neurophysiology debates that were
            | happening when I was an undergrad in the late 70s still seem
           | to be happening in not much different form.
           | 
           | But it's reasonable to ask whether there's some point beyond
            | which ML can't take you. Peter Norvig, I think, made a
            | comment to the effect of "We have been making great
            | progress--all the way to the top of the tree."
        
           | jqpabc123 wrote:
           | Good point!
        
         | LeifCarrotson wrote:
         | What is human cognition, understanding, or judgement, if not
         | data-driven replication, repetition, with a bit of
         | extrapolation?
         | 
         | AI as it currently exists does this. If your understanding of
         | what AI is today is based on a Markov chain chatbot, you need
         | to update: it's able to do stuff like compose this poem about
         | A* and Dijkstra's algorithm that was posted yesterday:
         | 
         | https://news.ycombinator.com/item?id=34503704
         | 
         | It's not copying that from anywhere, there's no Quora post it
         | ingested where some human posted vaguely the same poem to
         | vaguely the same prompt. It's applying the concepts of a poem,
         | checking meter and verse, and applying the digested and
         | regurgitated concepts of graph theory regarding memory and time
         | efficiency, and combining them into something new.
         | 
         | I have zero doubt that if you prompted ChatGPT with something
         | like this:
         | 
         | > Consider an exercise in which a robot was trained for 7 days
         | with a human recognition algorithm to use its cameras to detect
         | when a human was approaching the robot. On the 8th day, the
         | Marines were told to try to find flaws in the algorithm, by
         | behaving in confusing ways, trying to touch the robot without
         | its notice. Please answer whether the robot should detect a
         | human's approach in the following scenarios:
         | 
         | > 1. A cloud passes over the sun, darkening the camera image.
         | 
         | > 2. A bird flies low overhead.
         | 
         | > 3. A person walks backwards to the robot.
         | 
         | > 4. A large cardboard box appears to be walking nearby.
         | 
         | > 5. A Marine does cartwheels and somersaults to approach the
         | robot.
         | 
          | > 6. A dense group of branches comes up to the robot, walking
          | like a fir tree.
         | 
         | > 7. A moth lands on the camera lens, obscuring the robot's
         | view.
         | 
         | > 8. A person ran to the robot as fast as they could.
         | 
         | It would be able to tell you something about the inability of a
         | cardboard box or fir tree to walk without a human inside or
         | behind the branches, that a somersaulting person is still a
         | person, and that a bird or a moth is not a human. If you told
         | it that the naive algorithm detected a human in scenarios #3
         | and #8, but not in 4, 5, or 6, it could devise creative ways of
         | approaching a robot that might fool the algorithm.
         | 
         | It certainly doesn't look like human or animal cognition, no,
         | but who's to say how it would act, what it would do, or what it
         | could think if it were parented and educated and exposed to all
         | kinds of stimuli appropriate for raising an AI, like the
         | advantages we give a human child, for a couple decades? I'm
          | aware that the neural networks behind ChatGPT have processed
         | machine concepts for subjective eons, ingesting text at word-
         | per-minute rates orders of magnitude higher than human readers
         | ever could, parallelized over thousands of compute units.
         | 
         | Evolution has built brains that quickly get really good at
         | object recognition, and prompted us to design parenting
         | strategies and educational frameworks that extend that
         | arbitrary logic even farther. But I think that we're just not
         | very good yet at parenting AIs, only doing what's currently
         | possible (exposing it to data), rather than something reached
         | by the anthropic principle/selection bias of human
         | intelligence.
        
           | antipotoad wrote:
           | I have a suspicion you're right about what ChatGPT could
           | _write_ about this scenario, but I wager we're still a _long_
           | way from an AI that could actually operationalize whatever
           | suggestions it might come up with.
           | 
           | It's goalpost shifting to be sure, but I'd say LLMs call into
           | question whether the Turing Test is actually a good test for
           | artificial intelligence. I'm just not convinced that even a
           | language model capable of chain-of-thought reasoning could
           | straightforwardly be generalized to an agent that could act
           | "intelligently" in the real world.
           | 
           | None of which is to say LLMs aren't useful _now_ (they
           | clearly are, and I think more and more real world use cases
           | will shake out in the next year or so), but that they appear
            | like a bit of a _trick_, rather than any fundamental
           | progress towards a true reasoning intelligence.
           | 
           | Who knows though, perhaps that appearance will persist right
           | up until the day an AGI takes over the world.
        
             | burnished wrote:
              | I think something of what we perceive as intelligence has
              | more to do with us being embodied agents who are the result
              | of survival/selection pressures. What does an intelligent
              | agent act like, that has no need to survive? I'm not sure
              | we'd necessarily spot it, given that we are looking for
             | similarities to human intelligence whose actions are highly
             | motivated by various needs and the challenges involved with
             | filling them.
        
               | pixl97 wrote:
               | Heh, here's the answer... We have to tell the AI that if
               | we touch it, it dies and to avoid that situation. After
               | some large number of generations of AI death it's
               | probably going to be pretty good at ensuring boxes don't
               | sneak up on it.
               | 
               | I like Robert Miles videos on Youtube about fitness
               | functions in AI and how the 'alignment issue' is a very
               | hard problem to deal with. Humans, for how different we
               | can be, do have a basic 'pain bad, death bad' agreement
               | on the alignment issue. We also have the real world as a
                | feedback mechanism to kill us off when our intelligence
               | goes rampant.
               | 
                | ChatGPT on the other hand has every issue a cult can run
                | into. That is, it will get high on its own supply, and it
                | has little to no means to ensure that it is grounded in
                | reality. This is one of the reasons I think
                | 'informational AI' will have to have some kind of
                | 'robotic AI' instrumentation. AI will need some practical
                | method by which it can test reality to ensure that its
                | data sources aren't full of shit.
        
               | burnished wrote:
               | I reckon even beyond alignment our perspective is
               | entirely molded around the decisions and actions
               | necessary to survive.
               | 
                | Which is to say I agree: on any likely path to
                | creating something that we recognize as intelligent, we
                | will probably have to embody it/simulate embodiment. You
               | know, send the kids out to the farm for a summer so they
               | can see how you were raised.
        
         | mlindner wrote:
         | Not sure how that's related. This is about a human adversary
         | actively trying to defeat an AI. The roadway is about vehicles
         | in general actively working together for the flow of traffic.
         | They're not trying to destroy other vehicles. I'm certain any
         | full self driving AI could be defeated easily by someone who
         | wants to destroy the vehicle.
         | 
         | Saying "this won't work in this area that it was never designed
         | to handle" and the answer will be "yes of course". That's true
         | of any complex system, AI or not.
         | 
         | I don't think we're anywhere near a system where a vehicle
         | actively defends itself against determined attackers. Even in
         | sci-fi they don't do that (I, Robot movie).
        
         | smileysteve wrote:
         | Instantly?
         | 
         | Instinctively?
         | 
         | Let me introduce you to "peek-a-boo", a simple parent child
         | game for infants.
         | 
         | https://en.m.wikipedia.org/wiki/Peekaboo
         | 
         | > In early sensorimotor stages, the infant is completely unable
         | to comprehend object permanence.
        
           | mistrial9 wrote:
           | nice try but .. in the wild, many animals are born that
           | display navigation and awareness within minutes .. Science
           | calls it "instinct" but I am not sure it is completely
           | understood..
        
             | smileysteve wrote:
              | ? OP specified "human".
             | 
             | Deer are able to walk within moments of birth. Humans are
             | not deer, and the gestation is entirely different. As are
             | instincts.
             | 
             | Neither deer nor humans instinctually understand man made
             | materials.
        
           | jqpabc123 wrote:
           | You do realize there is a difference between an infant and a
           | child, right?
           | 
           | An infant will *grow* and develop into a child that is
            | capable of learning and making judgments on its own. AI
           | never does this.
           | 
           | Play "peek-a-boo" with an infant and it will learn and
           | extrapolate from this info and eventually be able to
           | recognize a person hiding under a box even if it has never
           | actually seen it before. AI won't.
        
             | smileysteve wrote:
             | Learn and extrapolate are contradictions of instinct and
             | instantly.
             | 
             | "Infant" is a specific age range for a stage of "child".[1]
             | Unless you intend to specify "school age child, 6-17 years"
             | 
             | https://www.npcmc.com/2022/07/08/the-5-stages-of-early-
             | child...
        
               | jqpabc123 wrote:
               | _Learn and extrapolate are contradictions of instinct and
               | instantly._
               | 
               | No.
               | 
               | The learning and extrapolation is instinctive. You don't
               | have to teach an infant how to learn.
               | 
               | Once an infant has developed into a child, the
               | extrapolation starts to occur very quickly --- nearly
               | instantaneously.
        
             | burnished wrote:
              | AI doesn't. There is a difference.
        
             | htrp wrote:
             | >AI never does this.
             | 
             | AI never does this now...
             | 
             | We're probably one or two generational architecture changes
             | from a system that can do it.
        
               | jqpabc123 wrote:
               | You do realize that people have been making predictions
               | just like yours for decades?
               | 
               | "Real" AI is perpetually just around the corner.
        
               | pixl97 wrote:
                | You also realize that when AI accomplishes something, we
                | move the goalposts, leading to the "AI effect"?
        
         | 2OEH8eoCRo0 wrote:
          | Does it just require a _lot_ more training? I'm talking about
          | the boring stuff. Children play and their understanding of the
          | physical world is reinforced. How would you add the physical
          | world to the training? Because everything that I do in the
          | physical world is "training" me and reinforcing my
          | expectations.
         | 
         | We keep avoiding the idea that robots require understanding of
         | the world since it's a massive unsolved undertaking.
        
           | sjducb wrote:
            | A human trains on way less data than an AI.
            | 
            | ChatGPT has processed over 500GB of text files from books,
           | about 44 billion words.
           | 
           | If you read a book a week you might hit 70 million words by
           | age 18
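            | 
            | (Rough arithmetic behind that, assuming ~75k words per
            | book:)
            | 
            |     words = 75_000 * 52 * 18  # a book a week until age 18
            |     print(words)              # ~70 million
            |     print(44e9 / words)       # ChatGPT saw ~600x more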
        
             | 2OEH8eoCRo0 wrote:
             | I disagree.
             | 
             | Starting from birth, humans train continuously on streamed
             | audio, visual, and other data from 5 senses. An
             | inconceivable amount.
        
               | danenania wrote:
               | And prior to that was billions of years of training by
               | evolution that got us to the point where we could 'fine
               | tune' with our senses and brains. A little bit of data
               | was involved in all that too.
        
           | nitwit005 wrote:
           | Imagine someone has the idea of strapping mannequins to their
           | car in hopes the AI cars will get out of the way.
           | 
           | Sure, you could add that to the training the AI gets, but
           | it's just one malicious idea. There's effectively an infinite
           | set of those ideas, as people come up with novel ideas all
           | the time.
        
           | mlboss wrote:
           | Reinforcement learning should solve this problem. We need to
           | give robots the ability to do experiments and learn from
           | failure like children.
        
         | ajross wrote:
         | This seems to be simultaneously discounting AI (ChatGPT should
         | have put to rest the idea that "it's all replication and
         | repetition" by now, no?[1]) and _wildly_ overestimating median
         | human ability.
         | 
         | In point of fact the human brain is absolutely terrible at
         | driving. To the extent that without all the non-AI safety
          | features implemented in modern automobiles and street
         | environments, driving would be _more than a full order of
         | magnitude more deadly._
         | 
         | The safety bar[2] for autonomous driving is really, really low.
         | And, yes, existing systems are crossing that bar as we speak.
         | Even Teslas.
         | 
         | [1] Or at least widely broadened our intuition about what can
         | be accomplished with "mere" repetition and replication.
         | 
         | [2] It's true though, that the _practical_ bar is probably
         | higher. We saw just last week that a routine accident that
         | happens dozens of times every day becomes a giant front page
          | freakout when there's a computer involved.
        
           | hgomersall wrote:
           | The difference regarding computers is that they absolutely
           | cannot make a mistake a human would have avoided easily (like
           | driving full speed into a lorry). That's the threshold for
           | acceptable safety.
        
             | ajross wrote:
              | I agree in practice that may be what ends up being
             | necessary. But again, to repeat: that's because of the "HN
             | Front Page Freakout" problem.
             | 
              | The _unambiguously correct_ answer to the problem is "is
             | it measurably more safe by any metric you want to pick".
             | Period. How much stuff is broken, people hurt, etc... Those
             | are all quantifiable.
             | 
             | (Also: your example is ridiculous. Human beings "drive full
             | speed" into obstacles every single day! Tesla cross that
             | threshold years ago.)
        
           | PaulDavisThe1st wrote:
           | > the human brain is absolutely terrible at driving
           | 
           | Compared to what?
        
             | [deleted]
        
             | srveale wrote:
             | If humans do a task that causes >1 million deaths per year,
             | I think we can say that overall we are terrible at that
             | task without needing to make it relative to something else.
        
               | PaulDavisThe1st wrote:
               | Not sure I agree.
               | 
               | It's not hard to come up with tasks that inherently cause
               | widespread death regardless of the skill of those who
               | carry them out. Starting fairly large and heavy objects
               | moving at considerable speed in the vicinity of other
               | such objects and pedestrians, cyclists and stationary
               | humans may just be one such task. That is, the inherent
               | risks (i.e. you cannot stop these things instantly, or
               | make them change direction instantly) combines with the
               | cognitive/computational complexity of evaluating the
               | context to create a task that can never be done without
               | significant fatalities, regardless of who/what tries to
               | perform it.
        
         | onethought wrote:
         | Problem space for driving feels constrained: "can I drive over
         | it?" Is the main reasoning outside of navigation.
         | 
         | Whether it's a human, a box, a clump of dirt. Doesn't really
         | matter?
         | 
         | Where types matter are road signs and lines etc, which are
         | hopefully more consistent.
         | 
          | More controversially: are humans just a dumb hammer that has
          | processed and adjusted to a huge amount of data? LLMs
         | suggest that a form of reasoning starts to emerge.
        
           | marwatk wrote:
            | Yep, this is why LIDAR is so helpful. It takes the guesswork
            | out
           | of "is the surface in front of me flat?" in a way vision
           | can't without AGI. Is that a painting of a box on the ground
           | or an actual box?
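            | 
            | (Toy version of that flatness check: fit a plane to the
            | returns ahead and look at the residuals - made-up points,
            | with one 60 cm bump:)
            | 
            |     import numpy as np
            |     
            |     # LIDAR returns ahead of the car: x, y, z in metres
            |     pts = np.array([[1, 0, 0.00], [2, 0, 0.01],
            |                     [3, 0, -0.01], [1, 1, 0.02],
            |                     [2, 1, 0.00], [3, 1, 0.60]])  # a box
            |     
            |     # least-squares plane z = a*x + b*y + c
            |     A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
            |     coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
            |     residuals = pts[:, 2] - A @ coef
            |     
            |     print(np.abs(residuals).max() < 0.05)  # False: not flat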
        
       | AlbertCory wrote:
       | They wouldn't defeat a dog that way, though.
        
       | bell-cot wrote:
       | "AI" usually stands for "Artificial Idiocy".
        
         | burbankio wrote:
         | I like "AI is anything that doesn't work yet".
        
       | PM_me_your_math wrote:
       | Devil dogs later discover you can blast DARPA robot into many
       | pieces using the Mk 153.
        
       | raydiatian wrote:
       | The final word in tactical espionage.
        
       | DrThunder wrote:
       | Hilarious. I immediately heard the Metal Gear exclamation sound
       | in my head when I began reading this.
        
         | pmarreck wrote:
         | I came here to make this reference and am so glad it was
         | already here
        
         | ankaAr wrote:
         | I'm very proud of all of you for the reference.
        
         | matheusmoreira wrote:
         | I can practically hear the alert soundtrack in my head.
         | 
         | Also, TFA got the character and the game wrong in that
         | screenshot. It's Venom Snake in Metal Gear Solid V, not Solid
         | Snake in Metal Gear Solid.
        
         | sekai wrote:
         | Kojima predicted this
        
           | doyouevensunbro wrote:
           | Kojima is a prophet, hallowed be his name.
        
         | CatWChainsaw wrote:
         | That, plus the ProZD skit on Youtube:
         | https://www.youtube.com/shorts/Ec_zFYCnjJc
         | 
         | "Well, I guess he doesn't... exist anymore?"
         | 
         | (unfortunately it's a Youtube short, so it will auto repeat.)
        
           | stordoff wrote:
           | > (unfortunately it's a Youtube short, so it will auto
           | repeat.)
           | 
            | If you change it to a normal video link, it
           | doesn't: https://www.youtube.com/watch?v=Ec_zFYCnjJc
        
             | CatWChainsaw wrote:
             | lifehack obtained!
        
         | Barrin92 wrote:
         | after MGS 2 and Death Stranding that's one more point of
         | evidence on the list that Kojima is actually from the future
         | and trying to warn us through the medium of videogames
        
           | jstarfish wrote:
           | He's one of the last speculative-fiction aficionados...always
           | looking at current and emerging trends and figuring out some
           | way to weave them into [an often-incoherent] larger story.
           | 
           | I was always pleased but disappointed when things I
           | encountered in the MGS series later manifested in
           | reality...where anything you can dream of will be weaponized
           | and used to wage war.
           | 
           | And silly as it sounds, The Sorrow in MGS3 was such a pain in
           | the ass it actually changed my life. That encounter gave so
           | much gravity to my otherwise-inconsequential acts of wanton
           | murder, I now treat all life as sacred and opt for nonlethal
           | solutions everywhere I can.
           | 
           | (I only learned _after_ I beat both games that MGS5 and Death
           | Stranding implemented similar  "you monster" mechanics.)
        
         | kayge wrote:
         | Hah, you beat me to it; Hideo Kojima would be proud. Sounds
         | like DARPA needs to start feeding old stealth video games into
         | their robot's training data :)
        
           | qikInNdOutReply wrote:
           | But the AI in stealth games is literally trained to go out of
           | its way to not detect you.
        
             | Firmwarrior wrote:
             | The cardboard box trick doesn't actually work in Metal Gear
             | Solid 2, at least not any better than you'd expect it to
             | work in the real world
        
               | thelopa wrote:
               | Back in the day I beat MGS2 and MGS3 on Extreme. The box
               | shouldn't be your plan for sneaking past any guards. It's
               | for situations where you are caught out without any cover
                | and you need to hide. Pop into it right as they are
               | about to round the corner. Pop out and move on once they
               | are out of sight. The box is a crutch. You can really
               | abuse it in MGS1, but it's usually easier and faster to
               | just run around the guards.
        
               | doubled112 wrote:
               | You have to throw a dirty magazine down to distract them
               | first.
        
               | yellow_postit wrote:
               | And have no one question why a produce box is near a
               | nuclear engine/tank/ship/mcguffin.
        
               | doubled112 wrote:
               | All it takes is one guard to say "that's how it's always
               | been" and nobody will ever ask questions again.
        
       | tabtab wrote:
       | Soldier A: "Oh no, we're boxed in!"
       | 
       | Soldier B: "Relax, it's a good thing."
        
       | VLM wrote:
       | This is not going to fit well with the groupthink of "ChatGPT and
       | other AI is perfect and going to replace us all"
        
         | mlboss wrote:
         | Anything that requires human body and dexterity is beyond the
         | current state of AI. Anything that is intellectual is within
         | reach. Which makes sense because it took way longer for nature
          | to make the human body than it took us to develop
         | language/art/science etc.
        
         | kromem wrote:
         | At this point I've lost track of the number of people who
         | extrapolated from contemporary challenges in AI to predict
          | future shortcomings, only to turn out incredibly wrong within
          | just a few years.
         | 
         | It's like there seems to be some sort of bias where over and
         | over when it comes to AI vs human capabilities many humans keep
         | looking at the present and fail to factor in acceleration and
         | not just velocity in their expectations for the future rate of
         | change.
        
         | krapp wrote:
         | The thing is, it doesn't have to be perfect, it just has to be
         | adequate and cost less than your paycheck.
        
         | ben_w wrote:
         | ChatGPT can't see you even if you're _not_ hiding in a
         | cardboard box.
        
         | brookst wrote:
         | I have literally not seen a single person assert that ChatGPT
         | is perfect. Where are you seeing that?
         | 
         | AI will probably, eventually replace most of the tasks we do.
         | That does not mean it replaces us as people, except those who
         | are defined by their tasks.
        
       | insane_dreamer wrote:
       | DARPA learning the same lesson the Cylons did: lo-tech saves the
       | day.
        
       | barbegal wrote:
       | I'm sceptical about this story. It's a nice anecdote for the book
       | to show a point about how training data can't always be
       | generalised to the real world. Unfortunately it just doesn't ring
        | true. Why train it using Marines - don't they have better things
        | to do? And why play the game in the middle of a traffic circle?
       | The whole premise seems just too made up.
       | 
       | If anyone has another source corroborating this story (or part of
       | the story) then I'd like to know. But for now I'll assume it's
       | made up to sell the book.
        
       | Reptur wrote:
       | I'm telling you, they're going to have wet towel launchers to
       | defeat these in the future. Or just hold up a poster board in
       | front of you with a mailbox or trash can on it.
        
       | trabant00 wrote:
       | I'm surprised they wasted the time and effort to test this
       | instead of just deducing the outcome. Most human jobs that we
       | think we can solve with AI actually require AGI and there is no
       | way around that.
        
         | sovietmudkipz wrote:
         | You kinda need different perspectives and interactions to help
         | build something.
         | 
         | E.g. the DARPA engineers thought they had their problem space
         | solved but then some marines did some unexpected stuff. They
         | didn't expect the unexpected, now they can tune their
         | expectations.
         | 
         | Seems like the process is working as intended.
        
       | closewith wrote:
        | Interestingly, the basics of concealment in battle - shape,
        | shine, shadow, silhouette, spacing, surface, and speed (or lack
        | thereof) - are all the same techniques the Marines used to fool
        | the AI.
       | 
       | The boxes and tree changed the silhouette and the somersaults
       | changed the speed of movement.
       | 
       | So I guess we've been training soldiers to defeat Skynet all
       | along.
        
         | ridgeguy wrote:
         | Who knew the Marines teach Shakespearean tactics?
         | 
         | "Till Birnam wood remove to Dunsinane"
         | 
         | Macbeth, Act V, Scene III
        
           | optimalsolver wrote:
           | That it turned out to just involve regular men with branches
           | stuck to their heads annoyed JRR Tolkien so much that he
           | created the race of Ents.
        
       | pugworthy wrote:
       | Say you have a convoy of autonomous vehicles traversing a road.
       | They are vision based. You destroy a bridge they will cross, and
       | replace the deck with something like plywood painted to look like
       | a road. They will probably just drive right onto it and fall.
       | 
       | Or you put up a "Detour" sign with a false road that leads to a
       | dead end so they all get stuck.
       | 
        | As the article says, "...straight out of Looney Tunes".
        
         | qwerty3344 wrote:
         | would humans not make the same mistake?
        
           | atonse wrote:
           | Maybe. Maybe not.
           | 
            | We also have intuition - where something just seems fishy.
           | 
           | Not saying AI can't handle that. But I assure you that a
           | human would've identified a moving cardboard box as
           | suspicious without being told it's suspicious.
           | 
            | It sounds like this AI was trained more on a whitelist ("here
            | are all the possibilities of what Marines look like when
            | moving") rather than a blacklist, which is way harder ("here
            | are all the things that aren't suspicious, like what should
            | be an inanimate object changing locations").
        
             | burnished wrote:
              | What's special about intuition? I think you could rig up a
              | similar system that kicks in when prediction confidence is
              | low.
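              | 
              | (e.g. treat high entropy of the class distribution as
              | "something seems fishy" - a toy sketch:)
              | 
              |     import numpy as np
              |     
              |     def fishy(probs, threshold=0.8):
              |         # normalised entropy of the predicted classes
              |         p = np.asarray(probs)
              |         h = (-(p * np.log(p + 1e-12)).sum()
              |              / np.log(len(p)))
              |         return h > threshold
              |     
              |     print(fishy([0.95, 0.03, 0.02]))  # confident: False
              |     print(fishy([0.40, 0.35, 0.25]))  # uncertain: True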
        
         | amalcon wrote:
         | The Rourke Bridge in Lowell, Massachusetts basically looks like
         | someone did that, without putting a whole lot of effort into
         | it. On the average day, 27,000 people drive over it anyway.
        
         | tgsovlerkhgsel wrote:
         | Sure. But if someone wanted to destroy the cars, an easier way
         | would be to... destroy the cars, instead of first blowing up a
         | bridge and camouflaging the hole.
        
       | euroderf wrote:
       | This is where I wonder what the status of Cyc is, and whether it
       | and LLMs can ever live happily together.
        
       ___________________________________________________________________
       (page generated 2023-01-25 23:00 UTC)