[HN Gopher] Video of Tesla FSD almost hitting pedestrian receive...
       ___________________________________________________________________
        
       Video of Tesla FSD almost hitting pedestrian receives DMCA takedown
        
       Author : camjohnson26
       Score  : 433 points
       Date   : 2021-09-17 15:11 UTC (7 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | partido3619463 wrote:
       | Is the car turning right (after a no right turn sign) while it
       | was supposed to be going straight according to nav?
        
         | mcjshciejebcu wrote:
          | The no right turn sign appears to be for the other side of the
          | street: at the intersection itself, there's a one-way sign
          | indicating right turns are possible, and no "do not enter"
          | signs.
        
         | [deleted]
        
         | stagger87 wrote:
         | Yes, and that's what makes it interesting in my mind. Where was
         | the car going?
        
           | cmsj wrote:
           | It was hungry ;)
        
           | rvz wrote:
           | The car probably thought the pedestrian was an emergency
           | vehicle, given the person was wearing a bright red coat and
           | Teslas on FSD have a habit of crashing into them.
           | 
            | To Downvoters: Well, it is actually true. [0] And just
           | recently another crash involving an emergency vehicle. [1]
           | 
           | So there is a strange habit with Tesla FSD and red objects in
           | its view. Given those incidents, care to explain why I am
           | wrong?
           | 
           | [0] https://www.autoblog.com/2018/01/23/tesla-autopilot-
           | crash-fi...
           | 
           | [1] https://www.reuters.com/business/autos-transportation/us-
           | ide...
        
             | NickM wrote:
             | _care to explain why I am wrong_
             | 
             | Because even in cases where cars have hit emergency
             | vehicles, it's because the software didn't see the vehicles
             | at all and just continued driving straight in its lane.
             | Whatever flaws the Tesla vision system may have, the idea
             | that it is programmed to deliberately seek out and crash
             | into emergency vehicles seems pretty far-fetched (much less
             | that it would mistake a person wearing a red coat for an
             | emergency vehicle and therefore attempt to crash into it);
             | I assume this is why people are downvoting you.
        
         | comeonseriously wrote:
         | Exactly! What the heck spooked the "AI"?
        
           | pjc50 wrote:
           | Some of the discourse suggested that previously it had had
           | problems with the pillars for the monorail running down the
           | median, and the owner/driver was trying it again to see if it
           | had improved.
           | 
           | One of the big limits of this kind of AI is that it does not
           | provide human-legible explanations for its actions. You
           | cannot put it in front of a tribunal.
        
             | emn13 wrote:
             | Actually, all kinds of data pertaining to the decision
             | making process is recorded (at least for some of Tesla's
             | competitors, not sure about Tesla), and in great detail.
             | The data is specifically designed to make the AI driver
             | "debuggable", i.e. it includes all kinds of details,
             | intermediate representations, etc that an engineer would
             | need to improve a poor decision, and thus certainly to
             | understand a poor decision.
             | 
             | Whether that kind of logging is always on or was
             | specifically on here, I don't know, but I'd expect Tesla
             | _can_ analyze why this happened: the car does have the
              | ability to explain itself; it's just that owners and
             | drivers do not have access to that explanation.
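              | 
              | As a purely hypothetical sketch (every name here is
              | invented for illustration; this is not Tesla's or
              | anyone's actual schema), the kind of per-frame record
              | such a debuggable system might keep:
              | 
              |   # Hypothetical per-frame log record; every field
              |   # name is invented for illustration.
              |   from dataclasses import dataclass, field
              | 
              |   @dataclass
              |   class DecisionRecord:
              |       timestamp_us: int
              |       objects: list = field(default_factory=list)
              |       planned_path: list = field(default_factory=list)
              |       action: str = "keep_lane"
              |       confidence: float = 0.0
              | 
              |   rec = DecisionRecord(
              |       timestamp_us=1_631_891_460_000_000,
              |       objects=[{"type": "pedestrian", "dist_m": 12.4}],
              |       planned_path=[(0.0, 0.0), (0.5, 10.0)],
              |       action="steer_right",
              |       confidence=0.62)
              |   print(rec.action, rec.confidence)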
        
               | mcguire wrote:
               | Does Tesla use a neural network for sensing and scene
               | recognition/the observe-orient steps? For parts of the
               | decide-act steps?
               | 
               | That particular kind of black box is very black; it has
               | hundreds of thousands to millions of inputs feeding a
               | hyperdimensional statistical model.
        
               | mrguyorama wrote:
               | You can log and debug the inputs going into the black
               | box, and the outputs, but how do you debug inside the
               | black box?
        
               | emn13 wrote:
               | The box isn't as black as you might think; they're not
                | training some monolithic AI model; there are separate
               | systems involved. Also, the models aren't entirely
               | freeform; i.e. engineers embed knowledge of how the world
               | is structured into those networks.
               | 
               | They can use those intermediates to project a kind of
               | thought process others can look at - and you've probably
                | seen videos and images of that kind of thing too; i.e. a
                | rendered version of the 3D intermediate world it has
                | perceived, enhanced with labels, enhanced with motion
               | vectors, cut into objects, classified by type of surface,
               | perhaps even including projections of likely future
               | intent of the various actors, etc.
               | 
               | Sure, you can't fully understand how each individual
               | perceptron contributes to the whole, but you _can_
                | understand why the car suddenly veered right, what its
                | planned route was, what it thought other traffic
               | participants were about to do, which obstacles it saw,
               | whether it noticed the pedestrians, which traffic rules
               | it was aware of, whether it noticed the traffic lights
               | (and which ones) and how much time it thought remained
               | etc.
               | 
               | ...at least, sometimes; I don't know anybody working at
               | Tesla specifically.
               | 
                | Here, for example, Waymo has a public PR piece
                | highlighting the kind of stuff they can extract from
               | the black box: https://blog.waymo.com/2021/08/MostExperie
               | ncedUrbanDriver.ht...
               | 
               | And while they emphasize their lidar tech, I bet Tesla's
               | team, while using different sensors, also has somewhat
               | similarly complex - and inspectable - intermediate
               | representations.
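                | 
                | As an equally hypothetical sketch (invented names,
                | not any real AV API), projecting logged
                | intermediates into a legible trace could be as
                | simple as:
                | 
                |   # Render logged intermediates as a readable trace.
                |   def explain(frame):
                |       lines = ["route: " + frame["route"]]
                |       for obj in frame["objects"]:
                |           lines.append(
                |               "saw %s at %sm, %s" % (
                |                   obj["label"], obj["dist_m"],
                |                   obj["motion"]))
                |       lines.append("action: " + frame["action"])
                |       return "\n".join(lines)
                | 
                |   frame = {
                |       "route": "straight through intersection",
                |       "objects": [
                |           {"label": "pillar", "dist_m": 18,
                |            "motion": "static"},
                |           {"label": "pedestrian", "dist_m": 12,
                |            "motion": "crossing"},
                |       ],
                |       "action": "veer right",
                |   }
                |   print(explain(frame))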
        
               | mcguire wrote:
               | " _Also, the models aren 't entirely freeform; i.e.
               | engineers embed knowledge of how the world is structured
               | into those networks._"
               | 
               | In the days when Sussman was a novice, Minsky once came
               | to him as he sat hacking at the PDP-6.
               | 
               | "What are you doing?", asked Minsky.
               | 
               | "I am training a randomly wired neural net to play Tic-
               | tac-toe", Sussman replied.
               | 
               | "Why is the net wired randomly?", asked Minsky.
               | 
               | "I do not want it to have any preconceptions of how to
               | play", Sussman said.
               | 
               | Minsky then shut his eyes.
               | 
               | "Why do you close your eyes?" Sussman asked his teacher.
               | 
               | "So that the room will be empty."
               | 
               | At that moment, Sussman was enlightened.
               | 
               | (https://news.ycombinator.com/item?id=10970937)
               | 
               | IIRC, in the incident where the Tesla [Edit: Uber self
               | driving car] collided with a pedestrian pushing a bicycle
               | in Arizona, the Tesla repeatedly switched between calling
               | the input a pedestrian and a bicycle. And took no evasive
               | actions while it was trying to decide.
        
               | mrguyorama wrote:
               | >the incident where the Tesla collided with a pedestrian
               | pushing a bicycle in Arizona
               | 
               | That was Uber's self driving car program. Notably, the
               | SUV they were using has had pedestrian detecting auto-
               | stopping for several years, though I'm sure it's not 100%
        
               | mcguire wrote:
               | Sorry, Uber, you're right! Whoops!
        
             | mjevans wrote:
              | It takes a jury of peers to interrogate it properly.
              | 
              | So: the same / similar data fed to the same / similar
              | algorithms, and the state of the code examined by
              | qualified experts (programmers).
        
         | mzs wrote:
         | I did not see a no right turn sign.
         | 
         | ed: modeless 1 hour ago
         | 
         | >That sign applies only to the lanes to the left of the
         | pillars. It is legal to turn right there from the right lane.
         | I've done it myself. Yes, it is confusing.
        
           | RandallBrown wrote:
           | It's not an illegal right. That sign is for the left lane.
        
           | CameronNemo wrote:
           | Look again. It is on one of the pillars in the median,
           | shortly before the turn.
        
             | [deleted]
        
             | modeless wrote:
             | That sign applies only to the lanes to the left of the
             | pillars. It is legal to turn right there from the right
             | lane. I've done it myself. Yes, it is confusing.
        
       | ghufran_syed wrote:
       | I think we're missing the point that this is currently designed
       | for a driver to monitor at all times, the driver intervened
       | appropriately, _and_ thereby provided another training example
       | for the network. This is also a beta version that is being tested
       | _by humans_ who have the legal and moral responsibility for
       | control of the car.
        
         | toss1 wrote:
         | And this is the WORST possible combination
         | 
         | One of the attributes of human perception is that it is
         | TERRIBLE at maintaining persistent vigilance without
         | engagement.
         | 
         | Even at a very low level, the nervous system is designed to
         | habituate to constant stimuli; e.g., when you first encounter
         | something that smells (good or bad) it can be overwhelming, but
         | after a few minutes the same smell barely registers. More on
         | point, spend some time looking forward at speed, or rotating
         | (e.g., in a swivel chair), then stop quickly, and watch how
         | your visual system creates the illusion of everything flowing
         | in the opposite direction.
         | 
         | Now, scale that up to higher levels of cognition. The more the
         | car gets right, the worse will be the human's attention. When a
         | car here does _almost_ everything right, people can and will
         | literally read or fall asleep at the wheel. Until that one
         | critical failure.
         | 
         | As a former licensed road racing driver and champion, I find
         | the idea of anything between L2 and L4 to be terrifying. I can
         | and have handled many very tricky and emergency situations at a
         | wide range of speeds on everything from dry roads to wet ice
         | (on and off the track) -- _when my attention was fully focused_
         | on the road, the situation, my grip levels, the balance of the
         | car, etc.
         | 
         | The idea of being largely unfocused while the car does _almost_
         | everything, then having an alert and having to regain, in
         | fractions of a second, full orientation to everything I need to
         | know then take action, is terrifying. 60 mph is 88 feet per
          | second. Even a quick reaction where I've squandered only a
         | half second figuring out what to do is the difference between
         | avoiding or stopping before an obstacle, and blowing ~50' past
         | it, or over it, at speed.
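          | 
          | Spelling out the arithmetic:
          | 
          |   # 60 mph in feet per second, and the distance covered
          |   # during a half-second of hesitation:
          |   ft_per_s = 60 * 5280 / 3600
          |   print(ft_per_s)        # 88.0
          |   print(ft_per_s * 0.5)  # 44.0 ft lost in 0.5 s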
         | 
         | Attempts to say "it's just fine because a human is in the loop
         | (and ultimately responsible)" are just bullsh*t and evading
         | responsibility, even if human beta testing is fantastic for
         | gathering massive amounts of data to analyze.
         | 
         | Among high risk and speed sports, it is almost axiomatic for us
         | to draw distinctions between "smart crazy" vs "dumb crazy", and
         | everyone knows the difference without a definition. The best
         | definition I heard was that it's the difference between [using
         | knowledge, technology, and skill to make a hazardous activity
         | reliably safe] vs [getting away with something]. You can 'get
         | away' with Russian Roulette 5 out of six times, and you'll
         | probably get a great adrenaline rush, but you can't expect to
         | do so for long.
         | 
         | Although this kind of "full self driving" has much better odds
         | vs Russian Roulette, it is still unreliable, and the system of
         | expecting the human to always be able to detect, orient, and
         | respond in time to the car's errors is systematically unsafe.
         | You will 'get away with it' a lot, and there will even be times
         | when the system catches things the humans won't.
         | 
         | But to place the entire "legal and moral responsibility" on the
         | human to 100% reliably operate a system that is specifically
         | designed against human capabilities is wrong, unless you want
          | to say that this is a [no human should ever operate under
         | these conditions], like drunk driving, and outlaw the system
         | and the action of operating it.
        
           | ghufran_syed wrote:
           | If your concerns are correct, shouldn't we see a lot MORE
           | collisions among the millions of current tesla drivers using
           | the existing, less advanced system than we do among
           | comparable vehicles and drivers? Wouldn't we expect to see
           | higher insurance premiums for tesla drivers with FSD than for
           | comparable drivers and comparably expensive cars? That
           | doesn't seem to be the case for the most numerous tesla
            | vehicles[1]. In which case this sounds like an "it works in
           | practice, but does it work in theory?" kind of situation :)
           | 
           | [1] https://www.motortrend.com/features/how-much-are-tesla-
           | insur...
        
             | toss1 wrote:
             | Indeed - one thing insurers are good at is gathering good
             | and relevant data! In this case, a quick skim shows the
              | Tesla often costs more to insure than a regular car, but
              | not a ton more. What I'd want to see is the data for
              | _only_ the models with the "Full Self Drive" option.
             | 
              | Not necessarily more collisions, but we do see some really
              | horrifying ones that humans would rarely produce. E.g.,
              | the car in FL that just full-self-drove at full speed
              | straight under a semi-trailer turning across the road,
              | decapitating the driver, or the Apple exec who died piling
              | into the construction divider on the highway because the
              | Tesla failed to understand the temporary markings on the
              | road.
             | 
             | I'm fine with Tesla or other companies using and even
             | testing automated driving systems on public roads (within
             | reason). Ultimately, it should be better, and probably is
             | already better than the average human.
             | 
             | My objection is _ONLY_ to the idea that the human driver
             | should be considered 100% morally  & legally responsible
             | for any action of the car.
             | 
             | Aside from the fact that the code is secret and
              | proprietary, and even its authors often cannot immediately
             | see why the car took some action, the considerations of
             | actual human performance make such responsibility a
             | preposterous proposition.
             | 
             | The maker of the autonomous system, and its user, must
             | share responsibility for the actions of the car. When there
             | is a disaster, it will, and should come down to a case-by-
             | case examination of the actual details of the incident. Did
              | the driver ask the car/system to do something beyond its
             | capabilities, or was s/he unreasonably negligent? Or, did
             | the system do something surprising and unexpected?
             | 
             | In the present situation, where the car started to make a
             | sharp & surprising turn and almost ran over a pedestrian,
             | it was Pure Dumb Luck that the driver was so attentive and
             | caught it in time. If he'd been just a bit slower and the
             | pedestrian was hit, I would place this blame 90%++ on
             | Tesla, not the driver (given only the video). OTOH, there
             | are many other cases where the driver tries to falsely
             | blame Tesla.
             | 
              | We just can't a priori declare one or the other _always_ at
             | fault.
        
         | adflux wrote:
         | >designed for a driver to monitor at all times
         | 
         | >calls it full self driving
         | 
         | What a joke & GREAT way to mislead customers.
        
       | valine wrote:
       | This is near the infamous monorail where previous versions of FSD
       | would kamikaze into the giant concrete pillars. Presumably in
       | response to the monorail failure Tesla updated their obstacle
       | avoidance to handle unclassified objects.
       | 
       | https://twitter.com/elonmusk/status/1437322712339423237?s=21
       | 
       | From the nav you can see the car is trying to go straight but
       | swerves around some invisible obstacle. I wouldn't be surprised
       | if this was a failure in their new "voxel based" collision
       | avoidance.
        
         | miken123 wrote:
         | Funny, you see the same mistake at exactly the same spot,
         | albeit less spectacular, in the video linked from the tweet:
         | https://www.youtube.com/watch?v=xWc-r0InwVk&t=148s
        
           | kzrdude wrote:
           | It seems likely that navigation is momentarily seeing the
           | straight-ahead as obstructed and trying to route around using
           | the street on the right? But the car really needs to respond
           | more calmly when doing navigation changes (see especially the
           | car in the OP video - no safe driver changes course in a
           | split second in the middle of an intersection, not at that
           | speed).
        
         | phkahler wrote:
          | Voxels? Wow. I was recently contemplating conscious driving vs
          | one's own autopilot (when you get to work and don't remember
          | any of the drive because it was uneventful). I realised that I
         | sometimes drive largely in image space and other times in world
         | space. It was an odd thing to deliberately flip between them. I
         | don't really want to think about that...
        
       | [deleted]
        
       | ummonk wrote:
       | Seeing this video, I imagine that driving with FSD in a busy city
       | is significantly more taxing and stressful than driving without.
        
       | fnord77 wrote:
       | welcome to our dystopian corporate future.
       | 
        | video of a company dumping toxic waste into the ocean? DMCA
        | takedown!
        
       | sorokod wrote:
       | If a pedestrian got hurt, what would be the legal liability of
       | the driver who didn't have his hands on the steering wheel?
        
         | sidibe wrote:
         | 100% on the driver. "Full self driving" is for marketing only
         | and they supposedly make it very clear to the customers.
        
           | paxys wrote:
           | This will have to be settled in court eventually (I'm
           | guessing very soon, looking at all these videos). It's very
           | unlikely that car manufacturers will be able to avoid all
           | responsibility just because of fine print in the terms of
           | service.
        
       | jmcguckin wrote:
       | At the time the car was going to make a right hand turn, it looks
       | like the pedestrian hadn't even stepped off the curb. I don't see
       | the problem.
        
       | joshribakoff wrote:
       | I have FSD but not the beta. When cars cut across my lane
       | autopilot will not react for a solid second or so, then proceeds
       | to slam on the brakes after the car is clear. It does not instill
       | confidence to say the least. From my POV it feels like someone
        | put a debounce on the reaction to prevent phantom braking, to
        | the point the car doesn't brake when it needs to.
        
         | no_butterscotch wrote:
         | I don't have FSD, but autosteer + cruise-control in my Tesla
         | does this as well.
         | 
         | Additionally I've experienced cases of "phantom braking"
         | (https://www.cbsnews.com/news/automatic-emergency-braking-
         | in-...) which doesn't instill confidence in those features.
        
         | rootusrootus wrote:
         | The part that always bothered me most about AP on the highway
         | was how willing it is to drive full speed into an obvious
         | problem. Obvious for me, that is. Dense merging traffic, for
         | example -- I can see the conflict coming, I can read the way
         | people are driving on that on-ramp, and I take defensive
         | action. My Tesla would just drive right into the center of that
          | mess and then panic when things got crazy.
         | 
         | I no longer have my Tesla, but I stopped using AP before I sold
         | the car, partly as a result of this behavior. And partly
         | because phantom braking became pretty common for a number of
         | overpasses we go under on our way to grandma's house, and it
         | really scares the crap out of my wife & kids.
         | 
         | Aside from phantom braking, the best use case for AP, in my
         | opinion, is to be a jerk and cruise in the middle lane of the
         | freeway. It deals okay with that most of the time.
        
           | cryptoegorophy wrote:
            | Well, you don't just "trust" it; if you are smart enough
            | you quickly learn its good and bad sides. When I use it
            | (95% of the time) I know what to expect, and I have a
            | habit of resting my foot over the accelerator pedal
            | instead of the brake pedal for "phantom" braking, which
            | has gotten a lot better than, say, in 2018. You also learn
            | the limits: if the lines look strange - no lines at all,
            | imperfect lines, or extra painted lines - then you keep
            | your hand on your knee with a finger holding the steering
            | wheel, ready to quickly grab it and take over. A pro tip
            | you also learn is to let the car push its limits and do
            | what it wants - it may do something that looks dangerous,
            | then correct itself just past your "comfort" zone. People
            | expect "human"-like behaviour from FSD/AP, but it is far
            | from that. It is a good assistant but not a full
            | replacement. I wish Tesla did some training for everyone
            | on Autopilot to show them what to expect and how to
            | handle it.
        
             | gugagore wrote:
             | > You also learn as a pro tip to let the car push its
             | limits and let it do what it wants to do like doing
             | something dangerous then realizing that it would correct
             | itself after your "comfortable" zone.
             | 
             | That's literally "trusting it".
             | 
             | As a human, you are attempting to learn the situations
             | under which the car operates safely. You could be very
             | conservative and say "it never works". The big problem is
             | that the kinds of errors that occur in these systems are
             | very different from the kinds of errors that humans expect.
             | Even designers of autonomous systems can be surprised by
             | emergent behavior. You say you can quickly learn the good
             | and bad sides of it, however, that's not true. How much
             | reliability do we expect from a system that, when it fails,
              | leads to severe injury or death? ASIL-D is something like
             | one failure in 10^9 hours [1]. That's 100,000 years.
             | 
             | You do not have that much experience with your Tesla,
             | sorry. In principle, you might drive every waking hour for
              | 10 years, never witness an accident, and still not have
             | an argument that passes even the most basic scrutiny for
             | safety for critical systems. It could make a fatal mistake
             | in the 11th year and that would not be a big surprise.
             | 
             | No one has that much experience with the ASIL-D components
              | of any car. That's why we must rely on regulation and
              | standardization to help keep us safe. Safety isn't just
              | "well I tried it a bunch _under these conditions_ and
              | didn't get hurt". And that's why Tesla's strategy is
              | reckless.
             | 
             | [1] https://en.wikipedia.org/wiki/Automotive_Safety_Integri
             | ty_Le...
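              | 
              | Putting the scale of that number next to a human
              | lifetime (taking the 10^9 figure quoted above as an
              | order of magnitude):
              | 
              |   hours_per_year = 24 * 365      # 8760
              |   print(1e9 / hours_per_year)    # ~114,000 years
              |   # 16 waking hours/day, 10 years of driving:
              |   print(16 * 365 * 10)           # 58,400 hours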
        
         | ptidhomme wrote:
         | IIRC there was also a 1 second delay before reaction on the
         | Uber car that crashed into a pedestrian.
         | 
         | There must be quite a lot of false alarms to filter out...
        
         | LeoPanthera wrote:
         | I also have FSD but not the beta and I just wanted to add that
         | my car (a 2020 model X) does not do this. Actually I've always
         | been very impressed with how it smoothly slows to avoid merging
         | cars.
         | 
         | Is it possible you have older autopilot hardware that runs
         | different/older software?
        
         | mumblemumble wrote:
         | I worry a little bit about things like this. If they were just
          | using debouncing as a band-aid rather than dealing directly
          | with the bad input that led to the phantom braking, then that
         | would seem to imply that the project is falling into the
         | classic death march project trap. It turns out that the last
         | 10% of the project is going to be 90% of the work. Meaning that
         | the original schedule was off by an order of magnitude. But the
          | business has made huge promises that mean that the team is
         | not at liberty to adjust the schedule accordingly. And so
         | they're under ever increasing pressure to cut corners and pile
         | on quick fixes.
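          | 
          | To spell out what I mean by a debounce band-aid (purely
          | speculative; nobody outside Tesla knows what their code
          | actually does): requiring N consecutive positive
          | detections suppresses one-frame false positives, but it
          | also delays every real reaction by N frames.
          | 
          |   # Illustrative sketch only, not Tesla's code.
          |   def should_brake(frames, n_consecutive=10):
          |       streak = 0
          |       for seen in frames:  # one bool per camera frame
          |           streak = streak + 1 if seen else 0
          |           if streak >= n_consecutive:
          |               return True
          |       return False
          | 
          |   # real obstacle visible for 9 frames, then occluded:
          |   print(should_brake([True] * 9 + [False]))  # False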
        
           | zaptrem wrote:
           | Production AP is a completely different software stack than
           | FSD beta. I think the only things that carry over are the car
           | detection, VRU, and traffic light CV NNs.
        
       | traveler01 wrote:
        | Almost hitting a pedestrian is a big stretch. The car had
        | enough time to stop before he took hold. He probably just
        | grabbed the wheel because he didn't want the car going that
        | way.
        
         | staticassertion wrote:
         | It's barely a stretch. The car veers pretty significantly. The
         | driver and the pedestrian both notice this and wave as a
         | 'sorry'. The car may have stopped, of course we won't ever
         | know, but it's absolutely wrong to say that it was just the
         | wrong direction - again, both the driver and pedestrian
         | acknowledge the aggressive turn.
        
           | traveler01 wrote:
            | The car wasn't at a very high speed, and if the brakes were
            | working normally it would have had plenty of time to brake
            | before hitting the person. "Almost hitting a pedestrian" is
            | a big stretch of a title; we don't really know what the car
            | was going to do. There are probably logs on the car that
            | will tell Tesla whether the car would have stopped or not,
            | but as viewers of the video we don't really know.
        
         | sfblah wrote:
         | I run a lot on streets. You do you, but this video is going to
         | make me much more careful around Tesla cars from now on.
        
         | bjtitus wrote:
         | This would have been reckless from a human driver and was just
         | as reckless for FSD.
        
       | microtherion wrote:
       | I don't think a DMCA notice is what "take ownership of your
       | mistakes" means.
        
       | gutino wrote:
        | I do not see how the pedestrian would have been hit. To me, the
        | car was turning very sharply, away from them.
        
       | giantrobot wrote:
        | FSD _without_ LIDAR/radar is a fool's errand. Camera-only
       | systems just do not pull in enough data to drive on real roads
       | around real hazards.
       | 
        | For one, they do not actually approximate human eyes, since our
        | eyes are gimbaled in our skulls and our skulls are gimbaled
       | on our necks. Our eyes can switch their focus very quickly and
       | return to scanning very easily.
       | 
       | Not only are we processing the visual information but also
       | feedback from all of our body positioning and other sensory
       | inputs. Driving heavily relies on our proprioception.
       | 
       | Fully autonomous cars need a lot of different sensors and
       | processing power. Even driver _assist_ should be using a lot of
       | sensors. Autonomous driving needs more and better sensors than
       | humans because it needs to deal with humans that are both smarter
       | and more irrational than the automation. Besides conscious
       | abilities humans have a lot of inherent self-preservation
       | instincts that might be locally optimum for survival but globally
       | poor choices.
       | 
       | Tesla scrimping on sensors because their boss is an egomaniac is
       | the height of hubris. A web service moving fast and breaking
       | things and choosing cheap but redundant options is fine. Usually
       | the worst that happens is people lose money or some data. Moving
       | fast and breaking things in a car is literally life and death.
       | Musk's hubris around FSD is sociopathic.
        
         | tsimionescu wrote:
         | Even worse, we actually rely heavily on our knowledge of the
          | world, common sense, and physical intuition to make sense of
          | our vision.
         | 
         | Put a human in a room with no shadows, unrecognizable objects,
         | and misleadingly sized objects (tiny cars, huge animals), and
         | then watch them fail to judge distances or other vision tasks.
        
       | WA wrote:
        | Reminder that Tesla will kill critical videos of FSD, but to
        | this day has not taken down the fabricated "full self driving
        | demo" that has been on their own website since 2016:
       | https://www.tesla.com/videos/autopilot-self-driving-hardware...
       | 
       | Tesla (and fanboys) say that they are clear about the limited
       | capabilities of Autopilot and FSD when in reality, they
       | indirectly support all fan-videos by letting them live on YouTube
       | while DMCAing critical videos.
       | 
       | They want you to believe that FSD is more capable than it truly
       | is.
        
         | notJim wrote:
         | > Tesla will kill critical videos of FSD
         | 
         | Examples?
        
           | bhelkey wrote:
           | The implication is that Tesla DMCAed the video in TFA.
           | 
            | Obviously, I don't know if that is true: (1) that the video
            | was DMCAed, and (2) that Tesla sent the DMCA takedown
            | request.
        
             | notJim wrote:
             | I think most likely the person who created the video must
             | have done it. I don't see how Tesla would own the
             | copyright. This also isn't the first time one of these
             | videos has gone viral, and the other ones were not taken
             | down.
        
       | koonsolo wrote:
       | Maybe it's just me, but it seems the car wanted to make a right
       | turn on the right part of that side street. There are no
       | pedestrians there.
       | 
        | It's actually the intervention of the driver that stops the
        | turn halfway and steers towards the people on the left starting
        | to cross that street.
       | 
        | I wonder what would have happened if the driver hadn't
        | intervened. I guess that the right turn would have been
        | completed without any problem.
       | 
       | But maybe I'm missing some extra info here?
        
         | kzrdude wrote:
         | There are a few problems:
         | 
         | Erratic navigation and driving. 1 second earlier the car was
         | heading straight. The car placement was firmly to the left edge
         | of its lane. A safe driver would slow down towards the turn and
         | use the right hand side of the road - pedestrians will
         | recognize this "body language" of the car and understand the
         | situation better.
         | 
         | In our peripheral vision, the car placement and "body language"
         | is probably more important than ostensible signals such as the
         | turn signal (!)
         | 
         | This kind of "body language" is something we learn when driving
         | and automatic drivers should adhere to it too, so that they can
         | be predictable and interpretable drivers.
        
         | TheSpiceIsLife wrote:
          | I don't know where you live, but here in the civilised
         | world you're not legally supposed to turn on to a road when
         | there are freaking pedestrians _already crossing it_.
         | 
         | You're supposed to wait for them to fully complete their
         | crossing, and yes: the overwhelming majority of drivers here
         | abide by that requirement.
        
           | koonsolo wrote:
            | The car didn't cross the zebra crossing, so it still had
            | time to stop and be compliant.
           | 
            | I have driven in the US, and I know how slow and relaxed
            | the traffic is. Come drive in Brussels or any of the big
            | European cities (like Paris) and see for yourself.
           | 
            | Good to know the US is civilized and the EU is not. The
            | difference is that you just have way more space than us.
        
         | FartyMcFarter wrote:
         | > I guess that the right turn would have been completed without
         | any problem.
         | 
         | The right turn would (at best) be completed by making
         | pedestrians wait for the car, which is backwards.
        
         | crote wrote:
         | I neither see nor hear any indication of a turn signal, so I do
         | not think it is actually trying to make a right turn.
         | 
         | Besides, the pedestrians were already on the road when the car
         | initiated the turn. Note the "one way" sign: that street only
         | has a single lane, and that lane was already occupied. It
         | should _definitely_ have yielded.
        
           | mcdoogal wrote:
           | Agreed that I don't think it was trying to turn. Crazy how
           | the system just went for it even with pedestrians. However, I
           | know this area well; that street it's turning on to is
           | actually 3 lanes with no parking on the right except for
           | buses.
        
       | josefresco wrote:
       | "almost hitting pedestrian"
       | 
       | It wasn't even close. Look at where the ped is at 12 seconds,
       | compared to the car. It's because he stopped the turn, and
       | straightened the vehicle that the pedestrian stopped and looked
       | up with concern (at 13 seconds). Even if the pedestrian broke out
       | into a full run, they wouldn't have been in any real danger.
       | 
       | This sort of close interplay between pedestrians and cars is very
       | common in cities.
       | 
       | The concern I have is that the "eye contact" element is lost.
        | When in doubt of another motorist's intentions, we make eye
       | contact and look for acknowledgement or an indication of intent.
       | This doesn't exist with FSD.
       | 
       | Edit: Some people have pointed out the car wasn't supposed to be
       | turning right which I missed. If that's the case, the situation
       | and my opinion is completely different.
        
         | mcguire wrote:
          | How close are Teslas programmed to come to pedestrians? An
         | obviously safe distance, or an "I didn't actually touch you"
         | distance?
         | 
         | I've been mostly looking at the OSD---it doesn't look like the
         | car noticed the pedestrians until after the driver had taken
         | manual control.
        
         | dgudkov wrote:
         | >It wasn't even close.
         | 
         | Apparently, it's not what the pedestrian thought.
        
         | kzrdude wrote:
         | The car is going way too fast towards the crossing. No safe
          | driver pretends to drive straight and then suddenly tries to
          | turn and sneak in front of pedestrians on a crossing. A person
         | driving that way is a jerk :)
        
         | mzs wrote:
          | You must yield to a pedestrian in a crosswalk; how far away
          | the pedestrian is does not matter.
        
           | SilasX wrote:
           | Ehhh at most that would be one of those things where, "yeah,
           | that is the law but it's pretty impractical and unnecessary
           | for people to follow to the letter and so everyone breaks it
           | when safe to do so and it's a dick move to actually write
           | tickets in those cases."
           | 
           | If a driver is trying to turn right, and the pedestrian has
            | just entered the crosswalk on the other side of the street,
            | 40 ft away, it seems stupid to wait for them to clear the
            | whole 40 ft when your right turn doesn't put them at risk.
            | I say that _even when I'm the pedestrian_ in that
            | situation.[1]
           | 
            | If the worst thing about SDCs is that they do _these_ kinds
            | of violations (which would also include going 62 in a 60
            | mph zone), then I would say SDCs are a solved problem.
            | Though to be clear, Teslas are not at that point!
           | 
           | [1] Not just idle talk -- I cross East Riverside Drive in
           | Austin a lot on foot, where this exact situation is common.
        
           | throwawayboise wrote:
           | But what does "yield" mean? It means slow down, and proceed
           | when it's safe to do so. If a pedestrian starts crossing a
           | 4-lane road and I have time to turn across his path without
           | making him stop or deviate, then I can proceed safely and I
            | think I've met the definition of "yield".
        
             | tzs wrote:
             | In Washington, where the video was recorded, the rule is
             | that you must stop and remain stopped as long as there are
              | pedestrians in the crosswalk in your half of the roadway
             | or within one lane of your half of the roadway.
             | 
             | "Half of the roadway" is defined as all traffic lanes
             | carrying traffic in one direction. In the case of a one-way
             | road such as the one in the video "half of the roadway" is
             | the whole road.
             | 
             | In your 4-lane hypothetical, if there are two lanes in each
             | direction you can drive through the crosswalk if the
             | pedestrian is in the farthest away opposite direction lane
             | from the lane you are in. In a 4-lane road with 1 lane in
             | one direction and 3 in the other, you can drive through if
             | you are in that 1 and the pedestrian is in the farthest 2
             | away from you. If you are in one of the 3 going the other
             | way, you have to stop no matter which lane they are in,
             | because they are either in your half (the 3 lanes going
             | your direction), or within 1 lane of your half.
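              | 
              | A rough encoding of that rule (the lane numbering and
              | the modeling are mine, not the statute's):
              | 
              |   def must_stop(ped_lane, lanes_my_dir):
              |       # Lanes 0..lanes_my_dir-1 carry my direction
              |       # ("my half"); I must also stop if the
              |       # pedestrian is within one lane of that half,
              |       # i.e. in lane index lanes_my_dir.
              |       return ped_lane <= lanes_my_dir
              | 
              |   # 2+2 road, pedestrian in farthest opposing lane:
              |   print(must_stop(ped_lane=3, lanes_my_dir=2)) # False
              |   # one-way street: every lane is "my half":
              |   print(must_stop(ped_lane=0, lanes_my_dir=1)) # True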
        
               | renewiltord wrote:
               | It is interesting how different the law is from behavior.
               | Walking through Seattle, I would not expect that this was
               | the law considering how often people will cross in front
               | of and behind you (both of which I'm fine with).
        
               | sleepybrett wrote:
               | It's true that there are laws and enforcement, but an
               | automated system needs to obey the laws. Anything else is
               | a judgement and machines shouldn't be making judgements
               | when lives are on the line.
        
               | SilasX wrote:
                | Fair point, but that's a less restrictive standard than
               | (your long lost sibling?) mzs was saying applies here.
        
             | mzs wrote:
             | In the state I live in it is illegal to enter a crosswalk
             | when there is a pedestrian in it when the road is undivided
             | as in this example. I find it unlikely that there is any
             | state where it's legal to do so for a one-way street such
             | as this. (There are states that treat the crosswalk as two
             | halves when there is two-way traffic.)
        
         | mikestew wrote:
         | _The concern I have is that the "eye contact" element is lost._
         | 
         | As a motorcyclist, runner, and cyclist, I can tell you that
         | drivers will look you right in the eye as they pull out in
         | front of you.
         | 
         |  _we make eye contact and look for acknowledgement or an
         | indication of intent._
         | 
         | That behavior stands a good chance of getting one killed or
          | hurt, _especially_ in Seattle, the land of "oh, no, _you_ go
          | first..." Look at the wheels, not the eyes. The wheels don't
         | lie.
        
           | sleepybrett wrote:
            | As an occasional cyclist I've seen this as well. But it's
            | got to be a lot less frequent to see a driver make eye
            | contact and then proceed to almost kill you versus one that
            | yields. Even as an occasional cyclist (and even as a
            | driver), if I'm approaching a four-way yield and can't get
            | eye contact with someone else approaching the intersection,
            | I'll just yield to them by default.
        
         | ModernMech wrote:
         | Agreed it wasn't close, but it's still not the kind of behavior
         | you'd want to see in a robot car. If I were a passenger in a
         | car where a human driver did this, I would wonder if they were
         | impaired, and I would no longer trust their driving abilities.
         | Tesla's whole selling point is that a) autonomous cars are
         | safer than human drivers b) we are almost there. This clip
         | shows both claims are questionable at the moment.
        
           | emn13 wrote:
            | It wasn't close because the driver was paying absurdly close
           | attention. We can't say if it would have been close or not,
           | but there's certainly no hint the car had a handle on the
           | situation, and it's clear to see it was making _some_ kind of
            | mistake, even if possibly a non-fatal one.
        
             | ceejayoz wrote:
             | Yeah. "It _probably_ can 't kill anyone if you watch it
             | like a hawk at all times..." wasn't the FSD selling point.
        
         | cogman10 wrote:
         | I agree, but this is probably the worst behavior I've seen of
         | the FSD software.
         | 
         | It abruptly tried to do a right turn on a no-right intersection
         | with no warning (map said it was supposed to go straight, it
         | looked like it was going to do that right up to the last
         | second).
         | 
         | Putting aside the "almost hitting a pedestrian", this is really
         | dangerous behavior for an autonomous vehicle.
        
           | antattack wrote:
           | Tesla on Autopilot is not autonomous.
           | 
           | I look at current driver assistance systems as human-robot
           | hybrid. Any driver assistance system should be required to be
            | clear in its intentions to the driver and give time to
           | react, otherwise it's like a wild horse.
        
             | sleepybrett wrote:
              | It's sold as "autopilot", which means "automatic pilot".
              | That does not imply hybridization; it implies full
              | automation.
        
               | antattack wrote:
               | I don't know where you get the idea. Even plane autopilot
                | will not land the plane without a pilot's supervision.
               | 
               | People who don't read manuals put themselves at risk.
               | 
               | Even simple systems like Cruise Control carry plenty of
               | warnings that one should be familiar with when operating
               | (from Ford's manual):
               | 
                | WARNING: Always pay close attention to changing road
                | conditions when using adaptive cruise control. The
                | system does not replace attentive driving. Failing to
                | pay attention to the road may result in a crash,
                | serious injury or death.
                | 
                | WARNING: Adaptive cruise control may not detect
                | stationary or slow moving vehicles below 6 mph (10
                | km/h).
                | 
                | WARNING: Do not use adaptive cruise control on winding
                | roads, in heavy traffic or when the road surface is
                | slippery. This could result in loss of vehicle control,
                | serious injury or death.
                | 
                | [and a few more, shortened for brevity]
        
               | rootusrootus wrote:
               | So pilots will get the distinction, but non-pilots think
               | it means self-driving.
        
               | weeblewobble wrote:
               | I don't think so. Most people are aware that autopilot
               | exists for airplanes, and they are also aware that there
               | are always human pilots on board the plane.
        
               | ahahahahah wrote:
               | It doesn't really matter what you think. Studies have
               | shown that a significant number of people believe that
               | ADAS branded as "Autopilot" means that the driver does
               | not need to pay attention.
        
               | frumper wrote:
                | I'm also not sure where you get that idea; it's clear
               | when purchasing that Autopilot is a fancy cruise control
               | to assist drivers and that full attention is required at
               | all times.
        
           | aeternum wrote:
           | It's not like the car is going rogue and just randomly making
           | a right turn. It's trying to find the right side of the road
           | in a rare intersection. A monorail in the middle of a city
            | street is quite rare, so this is an understandable
            | failure mode.
        
             | cogman10 wrote:
              | Look at the screen. It is going rogue: the line on the
              | screen shows where the car is supposed to go, and it
              | abruptly decided it was going to take a right turn.
        
           | banana_giraffe wrote:
           | As a human driver, in this area, I've made a right turn
           | there. (Well, not this intersection, the next right turn from
           | it)
           | 
           | https://www.google.com/maps/@47.6175579,-122.3456341,3a,90y,.
           | ..
           | 
           | Even Google Maps has a car turning right here. I've always
            | read those signs as meaning you can't turn right from the
            | left lane. It's confusing, for sure.
           | 
           | https://mynorthwest.com/1186128/slalom-seattle-monorail-
           | colu...
           | 
           | This article seems to agree (and bonus, you can slalom
           | through the columns). You can turn so long as you're not
           | crossing lanes.
        
         | wlesieutre wrote:
         | Why did the car want to swerve right in the first place? You
         | can see the route map on the display, it was going straight
         | through this intersection.
         | 
         | EDIT - other people are pointing out the "no right turn" sign
         | but I believe that's for cars to the left of the elevated
         | tracks. The sign hanging by the traffic lights indicates the
         | crossing road is one-way to the right, so that turn should be
         | legal from where the Tesla is.
         | 
         | But it isn't planning a turn here so I don't think you can say
         | "it would've made a tight turn and cleared the pedestrians." It
         | looks like it has no idea where it's going and could have
         | wanted to make a wide turn into the left lane, or even be
         | swerving far out of its lane and then back in because it
         | interpreted the intersection as something like a lane shift.
        
         | brown9-2 wrote:
         | People aren't bothered by how close the car came to the
         | pedestrians - it's the fact that the autopilot was programmed
         | to go in a straight line and at an intersection decided to
         | steer into a crosswalk.
         | 
         | Not to mention the fact that it seems entirely legal for a
         | company like Tesla to test such dangerous functionality on
         | public streets with other drivers and pedestrians, with zero
         | permission or regulation.
        
           | rjp0008 wrote:
           | > the autopilot was programmed to go in a straight line and
           | at an intersection decided to steer into a crosswalk.
           | 
           | Agree this is a weird and very concerning bug.
           | 
           | > Not to mention the fact that it seems entirely legal for a
           | company like Tesla to test such dangerous functionality on
           | public streets with other drivers and pedestrians, with zero
           | permission or regulation.
           | 
            | This driver seems to be someone I would be OK with testing
            | this. He's got his hand an inch from the steering wheel and
            | is obviously paying attention. I would rather this scenario
            | happen than have Tesla restricted from public roads and
            | just unleash an untested system.
        
             | sleepybrett wrote:
              | The problem, as I see it, is that the longer a driver uses
             | the autopilot feature the less attention they will pay to
             | it as long as it doesn't do something nuts like this. My
             | understanding is that this occurred soon after a new 'beta'
             | update to the autopilot system. I'm not sure how this is
             | surfaced to the user, if you need to opt into this new
             | version etc.
             | 
             | My fear is that a new version of the autopilot system could
             | have new bugs introduced that could disrupt the navigation
             | on a route that the user has confidence that the previous
             | version of the autopilot could handle. They commute on
             | route x every day and over time they've gained confidence
             | in the autopilot on that route. This new update however is
             | going to run them into some obstruction due to a bug. That
             | obstruction might be those monorail pylons we see in the
             | seattle video or it might be a crosswalk. A driver who had
             | confidence in the autopilot might be well distracted having
             | confidence in his automation and not be able to correct the
             | situation before tragedy.
             | 
             | IMO autopilot should be banned until it can be proved to be
             | 100% safe. I don't think we can get there until roads are
             | outfitted with some kind of beaconing system that sensors
             | in the car can read and cars on the road are potentially
             | networked together cooperatively... and only then to be
             | enabled on roads with those markers/beacons.
             | 
             | People in this thread deciding that the system is safe
             | because it's no worse than a drunk driver or student driver
             | are missing the point. We absorb those hazards into the
             | system because they present only a handful agents in the
             | system. Out of 100,000 drivers in a rush hour flow, how
              | many are students and/or drunk? Probably very few. However,
              | as Teslas keep selling with this feature, our new class of
             | hazard agents keeps going up and up and up.
             | 
             | Hell we wouldn't even have to mandate it to be illegal at
             | the political level. Perhaps the insurance industry will do
             | it for us.
        
               | frumper wrote:
               | Just curious, how does one prove 100% safe? Even if
                | Teslas were shown to be safe on every road in existence
               | there are always too many variables. Weather,
               | construction, road condition, other drivers, objects
               | falling into the roadway, on and on.
               | 
               | There is some level of reasonable safety that should be
               | expected, but proving 100% safe isn't a realistic goal,
               | nor is it even a standard for existing automobiles.
        
         | staticassertion wrote:
         | The car veers pretty aggressively and suddenly. "Almost
         | hitting" or not, this looks dangerous.
         | 
         | > This sort of close interplay between pedestrians and cars is
         | very common in cities.
         | 
         | Yeah and I'm usually yelling at the idiot that wasn't looking
         | where they were driving.
        
           | antattack wrote:
            | In this case the pedestrians, who were far away, waved back
            | to the driver who apologized. If they had been closer, the
            | car would probably have stopped, as AP slows down and stops
            | near pedestrians.
        
             | tsimionescu wrote:
             | When it's not driving straight into them, that is. Which it
             | has done quite a few times.
        
         | spullara wrote:
         | Early on I was thinking that self-driving cars would have to
         | have some kind of "face" for just this kind of interaction to
         | occur.
        
         | paxys wrote:
         | Almost hitting is an exaggeration, but making a right turn
         | while pedestrians have started crossing at an intersection is a
         | 100% incorrect (and likely illegal) action. It's concerning to
         | me that the "full self driving" car cannot successfully handle
         | this very common everyday occurrence.
        
           | parineum wrote:
           | That's definitely illegal and definitely happens all the
           | time.
           | 
           | Except it usually happens when the pedestrian has already
           | passed the area where you'd turn through or the intersection
           | is enormous and the pedestrian just started crossing the
           | other side.
           | 
           | I'd expect FSD to follow the law in this case though.
        
             | aidenn0 wrote:
             | Going behind pedestrians in a crosswalk is legal in many
             | states. It's universal that you must yield to pedestrians
             | in a crosswalk, but not universal that you must wait for
             | them to clear the crosswalk.
        
               | mikestew wrote:
               | _Going behind pedestrians in a crosswalk is legal in many
               | states._
               | 
               | Not in the state in which this particular Tesla was being
               | driven:
               | 
               | "The operator of an approaching vehicle shall stop and
               | remain stopped to allow a pedestrian, bicycle, or
               | personal delivery device to cross the roadway..."
               | 
               | https://apps.leg.wa.gov/RCW/default.aspx?cite=46.61.235
        
               | rootusrootus wrote:
               | You left out the rest.
               | 
               | "when the pedestrian, bicycle, or personal delivery
               | device is upon or within one lane of the half of the
               | roadway upon which the vehicle is traveling or onto which
               | it is turning"
        
               | aidenn0 wrote:
               | It further defines "half" to essentially mean "all lanes
               | going in the same direction as you" which makes it
               | illegal to cross if the pedestrian is anywhere in the
               | crosswalk on a one-way street.
               | 
               | Interestingly enough that should make it legal to pass in
               | _front_ of pedestrians who are crossing traffic going the
               | opposite direction, provided that you do not interfere
               | with their right-of-way.
        
           | antattack wrote:
           | We don't know, however, if the Tesla would have stopped in
           | front of the crosswalk, as AP was disengaged (which was the
           | correct thing to do).
           | 
           | We know that the pedestrians were far enough away not to
           | feel threatened, as they waved back in response to the
           | driver's wave. Police were at the intersection too, and they
           | did not look concerned either.
        
           | sleepybrett wrote:
           | It is illegal in Seattle (and all of Washington state,
           | where this video was filmed) to enter a crosswalk when there
           | are pedestrians present in the crosswalk.
        
         | emn13 wrote:
         | The pedestrian stopped as soon as they saw the car suddenly
         | veering towards them; not just once the driver intervened, as
         | far as I can tell.
         | 
         | Whatever the hypothetical (it certainly might have missed the
         | pedestrian, or made a last second emergency stop) - it suddenly
         | veered towards a pedestrian that was already walking on the
         | road, and that alone is absolutely not OK. Scaring the living
         | daylights out of people isn't acceptable, even if you might not
         | have killed them without an intervention. And let's be fair, if
         | the pedestrian had not paid attention, and the driver neither,
         | this certainly could have lead to an accident. Even if it were
         | "just" an unexpected emergency stop; that itself isn't without
         | risk.
        
       | cma wrote:
       | There was a popular Hacker News thread on this video too that was
       | removed somehow without being marked dead:
       | 
       | https://news.ycombinator.com/item?id=28545010
       | 
       | It was on the front page and then a few minutes later not on the
       | first 5+ pages.
        
       | ehz wrote:
       | Totally naive guess, but could there be some misdetection of a
       | road feature (stoplight)? The car swerves very close to the
       | moment when two people in the background, wearing a very bright
       | neon jacket and a very bright red jacket, cross paths.
        
       | plausibledeny wrote:
       | This is Fifth Ave and Lenora in Seattle (about a block away from
       | Amazon Spheres). Lenora is a one-way and you can take a right off
       | of 5th onto Lenora (basically heading toward the water). The
       | signage (and road in general) is confusing because of the
       | monorail pylons.
        
       | tgsovlerkhgsel wrote:
       | Is there any evidence of the video being DMCA'd (and if so, who
       | sent the takedown - IIRC YouTube shows that), instead of being
       | taken private by the uploader as the link in the tweet
       | indicates?
       | 
       | Edit: It's about the copy on Twitter,
       | https://twitter.com/TaylorOgan/status/1438141148816609285
        
       | Jyaif wrote:
       | Not to pile on Tesla, but a yoke steering wheel will make
       | taking over from FSD mode much harder.
        
       | supperburg wrote:
       | Recently I have been wondering why there are no videos of FSD v10
       | on any of the mainstream platforms which apparently includes HN
       | now. There are tons and tons of videos of FSD doing amazing,
       | amazing things. The situations it handles are unbelievable. There
       | are tons of videos of it making end to end trips with few or no
       | interventions all while handling insane environments that would
       | break every other self driving system. If you showed these videos
       | to someone in 1990 they would exclaim that the car "drives
       | itself," regardless of the knowledge that a person has to
       | supervise and that it makes mistakes. We have arrived. And there
       | isn't any sign of it on CNN, Reddit, or Hacker News. But what you
       | do see on these platforms are the handful of cases where FSD made
       | a serious mistake. The overall picture painted by these platforms
       | is incorrect.
        
       | bananapub wrote:
       | for those unaware (like me), "FSD" is the extremely misleading
       | term Tesla uses to describe their cars as "Full Self Driving".
        
         | Jyaif wrote:
         | Technically it is fully self driving, it's "just" that you have
         | a (much) lower chance of arriving safely at your destination
         | than with a human driver.
        
           | kzrdude wrote:
           | It's fully self driving until it cops out and gives it back
           | to you
        
         | rvz wrote:
         | The correct term is 'Fools Self Driving' given its users think
         | that they are driving a 'fully self driving' car yet they have
         | to be behind the wheel and pay attention to the road at all
         | times.
         | 
         | Not exactly the FSD robot-taxi experience the Tesla fans were
         | promised. Instead, they got happily misled with beta software.
        
         | kzrdude wrote:
         | Even more incredibly, "FSD" is an addon you pay for. So
         | customers are paying for the lie.
        
       | perihelions wrote:
       | Good thing evidentiary videos of car accidents are copyrightable.
       | We wouldn't want to discourage auto-collision-footage-creators
       | from creating content by failing to protect their exclusive
       | monetization rights.
        
       | tibiahurried wrote:
       | This tech is clearly not ready for mass adoption. I am
       | surprised, to say the least, that they are allowed to sell cars
       | with that clearly unsafe tech on board. Where are the
       | regulators? Pretending that it is all good?
        
       | jjj3j3j3j3 wrote:
       | This reminds me of how the film/music industry tried to DMCA
       | the youtube-dl project.
        
       | jancsika wrote:
       | I love reading about self-driving cars and cryptocurrencies on
       | HN. Suddenly, all the orders-of-magnitude improvements in
       | efficiency/performance/scaling/production/integrity/etc. go out
       | the window. Apparently they get replaced with conversations that
       | devolve into how bad humans have been at the tasks that the
       | fanboys are hoping get replaced by marginal-at-best tech.
       | 
       | E.g.,
       | 
       | git discussion on HN: of course we all know how git leverages
       | Merkle trees to make it workable to produce millions-of-line
       | ambitious projects across timezones with people we've never even
       | met before (much less are willing to fool around giving repo
       | permissions to). But goddammit it takes _more than five minutes_
       | to learn the feature branch flow. That's unacceptable!
       | 
       | self-driving car discussion on HN: I've seen plenty of human
       | psychopaths nearly mow down a pedestrian at an intersection so
       | how is software doing the same thing any worse?
        
       | loceng wrote:
       | "Almost hitting pedestrians" isn't how I'd describe what I saw in
       | the video.
        
         | kevinmgranger wrote:
         | Which makes it even more confusing as to why they'd Streisand
         | effect themselves.
        
           | aquadrop wrote:
           | I doubt the DMCA came from Tesla; they've let worse videos
           | rack up thousands of views on YouTube and never done
           | anything.
        
         | devb wrote:
         | Ok, I'll ask. How would you describe it?
        
           | aquadrop wrote:
           | "Car doesn't yield to pedestrians". For "almost hit" I would
           | expect car to run by pedestrian in less than half a meter or
           | for pedestrian having to react quickly to avoid car.
        
             | stagger87 wrote:
             | It makes more sense when you realize the car wasn't even
             | supposed to be turning. Look at the nav screen. If this guy
             | didn't take over, where exactly was the car going? I think
             | it's very easy to speculate it wouldn't have ended well.
        
               | kbenson wrote:
               | It looks like it re-routed down the side street at
               | exactly that moment. You can see the projected path line
               | shift to that street immediately before.
               | 
               | It doesn't look like it would have hit anyone to me (it
               | was only partway through turning, so was facing people
               | when it or he aborted it, but that's not the path shown
               | on the display), but it was definitely an illegal move.
        
               | mwint wrote:
               | Step through the video frame by frame; you can see the
               | car begin turning back to the left before the driver
               | takes over. He actually _stopped_ the left turn; the car
               | would have recovered quicker without his actions.
               | 
               | Not that the driver was wrong to do what he did; we have
               | the advantage of frame-by-frame replay. But the frame-by-
               | frame does show that this is not what it appears at first
               | glance.
        
               | aquadrop wrote:
               | Yes, it's weird that the car decided to turn right. But
               | I don't agree that it's easy to speculate that it would
               | have hit the pedestrians; they were pretty far away. I
               | would think some emergency braking would take over or
               | something. We have many thousands of Tesla cars out
               | there, and lots of people are making videos about
               | autopilot, but I don't think we have an example of a car
               | actually hitting a pedestrian?
        
               | devb wrote:
               | Two seconds of googling:
               | 
               | Here's one: https://www.reuters.com/business/autos-
               | transportation/us-pro...
               | 
               | Here's another:
               | https://www.nytimes.com/2021/08/17/business/tesla-
               | autopilot-...
               | 
               | Oh, another one: https://www.forbes.com/sites/lanceeliot/
               | 2020/05/16/lawsuit-a...
               | 
               | This appears to be the first one, in 2018:
               | https://www.theguardian.com/technology/2018/mar/19/uber-
               | self...
               | 
               | I'd keep going but maybe you get the point.
        
             | sleepybrett wrote:
             | "Car decides to turn for no reason and in the process fails
             | to yield to pedestrians in a crosswalk."
        
           | soheil wrote:
           | "Car slightly spooks a couple of people including the driver
           | who safely took over and resumed driving"
        
         | hnthrow917 wrote:
         | Watch again. Those pedestrians were in the crosswalk before the
         | car turns. That's illegal _at best_, and definitely dangerous
         | for those pedestrians. The car wasn't taking a tight right turn
         | either.
        
           | aquadrop wrote:
           | Yep, that's all true, but it was several meters away from
           | them, far from hitting.
        
           | soheil wrote:
           | Haha how is that illegal? The driver took over way before the
           | car entered the crosswalk section.
        
             | hasperdi wrote:
             | For one, before the car enters the intersection, there is a
             | pedestrian crossing. There is a pedestrian on the right
             | side about to cross. I think it is safe to say in most
             | countries, the car has to yield and stop.
        
             | jacquesm wrote:
             | I sincerely hope you are not in the possession of a drivers
             | license.
        
               | soheil wrote:
               | How is this comment helpful?
        
             | munchbunny wrote:
             | I suppose we'll never know if the Tesla was going to follow
             | through on that right turn.
             | 
             | But if you were an observer on the corner and could not
             | see whether a human or an AI was driving the car, you
             | would almost certainly consider whatever the car did
             | (starting the turn) while pedestrians were actively
             | crossing dangerous.
        
             | maybeOneDay wrote:
             | "Entered the crosswalk section" is a rather euphemistic way
             | of describing "drove directly through where the people were
             | going to be"
        
               | kbenson wrote:
               | That's not what I saw. I saw the car abort its turn
               | that would have taken it through the crosswalk in front
               | of the people without impeding them. Still illegal, but
               | a far cry from almost hitting them.
               | 
               | The car aborted (or was aborted by the driver) and
               | stopped before finishing its turn on the road from what
               | I saw, leaving it at about a 45 degree angle instead of
               | the 90 degrees it would eventually reach if it continued
               | turning.
        
               | soheil wrote:
               | Was going to drive != drove
        
             | hnthrow917 wrote:
             | RCW 46.61.245 RCW 46.61.235
             | 
             | You are required, by law, to stop for pedestrians in
             | crosswalks. Every intersection in Washington state is a
             | crosswalk, marked or not.
        
               | TheHypnotist wrote:
               | In this particular case they had the walk sign too.
        
       | literallyaduck wrote:
       | Not familiar with how the autopilot works, but the guy is
       | touching the wheel a lot; is it the car driving, both, or just
       | manual control?
       | 
       | If it is just the car, it looks like it was trying to turn and
       | didn't consider the people in the crosswalk.
       | 
       | Edit: thanks for the explanation.
        
         | jofaz wrote:
         | In the video there is a steering wheel symbol near the speed
         | that is highlighted in blue. That indicates autopilot is
         | engaged; when the driver turns the wheel to correct the
         | mistake, the symbol turns gray, indicating autopilot is
         | disengaged.
        
         | hnthrow917 wrote:
         | Just the car. The guy can override by taking hold of the wheel
         | and overriding what the car wants.
        
         | ashafer wrote:
         | iirc the car does all the driving but you have to have a hand
         | "on the wheel" so they don't technically have to legally
         | classify it as a self driving car or something.
         | 
         | I'm guessing if he is videoing it and hovering over the wheel
         | that much this has happened before and he is nervously
         | expecting it?
        
           | kube-system wrote:
           | Tesla has caused a lot of confusion by calling this system
           | "full self driving", but in reality, this system is an SAE
           | level 2 driver assistance system where the human is required
           | to be in charge of driving at all times.
           | 
           | https://www.sae.org/binaries/content/gallery/cm/articles/pre...
        
         | kalleboo wrote:
         | This isn't the standard autopilot but the "full self driving"
         | option that is currently in a limited beta (around 1000 users
         | that have gone through extra approval, not just 100% random
         | owners)
        
         | modeless wrote:
         | The guy in the video is stress testing the system by repeatedly
         | driving next to the pillars which he knows are handled poorly
         | (previous versions of FSD would actually drive right into
         | them). So he is being extra cautious with his hands on the
         | wheel.
        
       | acd wrote:
       | Self-driving cars almost hitting a pedestrian is terrible.
       | Social media platforms are an attention economy, where the most
       | spectacular posts get attention. I think it's diverting
       | attention to the wrong causes.
        
       | bogwog wrote:
       | Does anyone know if there have been efforts to connect a city's
       | traffic lights/data with a self driving system like this? That
       | crosswalk had a timer on it, and if that status could be accessed
       | by the Tesla in real time, the autopilot would have known not to
       | make that turn even if the cameras were telling it something
       | else. Idk if those things are networked, but it seems like a good
       | investment to do that if it means safer self-driving cars.
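       | 
       | For what it's worth, standards for this exist: SAE J2735
       | defines "signal phase and timing" (SPaT) messages that
       | intersections can broadcast to vehicles. As a minimal sketch of
       | what the car-side check might look like (the transport, port,
       | and message fields below are invented for illustration, not any
       | real deployment's API):
       | 
       |   import json, socket
       | 
       |   # Hypothetical UDP broadcast of intersection state
       |   # (SPaT-like); all field names are made up here.
       |   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
       |   sock.bind(("", 37020))
       | 
       |   def turn_is_clear(planned_turn: str) -> bool:
       |       # Refuse a turn while a conflicting walk phase is on.
       |       msg = json.loads(sock.recv(4096))
       |       for phase in msg["phases"]:
       |           if (phase["kind"] == "pedestrian"
       |                   and phase["state"] == "walk"
       |                   and planned_turn in phase["conflicts"]):
       |               return False  # someone may be in the crosswalk
       |       return True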
        
       | dang wrote:
       | Recent and related:
       | 
       |  _Tesla FSD Beta 10 veers into pedestrians with illegal right
       | turn_ - https://news.ycombinator.com/item?id=28545010 - Sept 2021
       | (118 comments)
       | 
       | Less recent and less related:
       | 
       |  _Watch a Tesla Fail an Automatic Braking Test and Vaporize a
       | Robot Pedestrian_ - https://news.ycombinator.com/item?id=24588795
       | - Sept 2020 (111 comments)
        
         | supperburg wrote:
         | Vaporize. What an objective and sensible term to describe
         | something that I'm sure you are totally unbiased about...
        
           | dang wrote:
           | That's just the title of the submitted article. If we'd seen
           | it at the time, we would have vaporized it. No one bats a
           | hundred!
        
       | yawaworht1978 wrote:
       | Wait, why did the steering wheel and the car swerve right here?
       | Because of the pedestrian that was passed, or because of the
       | one waiting to cross? And wasn't the driver's intent to drive
       | straight ahead?
        
       | blendergeek wrote:
       | Can we find out who sent the takedown notice?
        
         | beeboop wrote:
         | This is the part that matters. Anyone can DMCA anything
        
           | closetnerd wrote:
           | Are we thinking that Elon super fans might do this?
        
             | soheil wrote:
             | Or Elon haters to make it look like Elon is super paranoid?
        
               | addicted wrote:
               | Wow. We're at the crisis actors stage of discourse.
        
             | LewisVerstappen wrote:
             | Edit:
             | 
             | I was being dumb and am wrong. This is YouTube's internal
             | copyright system, not the DMCA.
             | 
             | Apologies.
             | 
             | Old comment below...
             | 
             | You can read up on how to submit DMCA takedowns for
             | YouTube here ->
             | https://support.google.com/youtube/answer/2807622?hl=en
             | 
             | The original uploader of the video submitted the DMCA
             | takedown request.
        
               | Someone1234 wrote:
               | Your YouTube support link isn't about the DMCA, and in
               | fact is an attempt at trying to get people to use their
               | system _instead_ of the DMCA.
               | 
               | YouTube's internal copyright takedown system is just
               | that: Internal. DMCA is a legal avenue that you can use
               | against YouTube and their users, but it wouldn't be via
               | their internal copyright take down system.
               | 
               | It is in YouTube's best interests for you NOT to use the
               | DMCA as there are avenues where it can leave YouTube
               | themselves civilly liable (OCILLA[0]), so they work hard
               | to funnel people into their internal copyright system
               | that doesn't expose YT.
               | 
               | [0] https://en.wikipedia.org/wiki/Online_Copyright_Infrin
               | gement_...
        
               | LewisVerstappen wrote:
               | My mistake, apologies.
               | 
               | Edited the comment to note my error.
        
               | judge2020 wrote:
               | OP's point is that anyone can lie in the DMCA takedown
               | form and say they own the content. It's illegal but I'm
               | not aware of it ever coming back to the perpetrator in
               | the form of jail time/lawsuit.
        
           | lathiat wrote:
           | From the comments in this thread, the Twitter copy was not
           | posted by the original creator of the video. The original
           | creator posted it on YouTube but has since made the video
           | private. And they DMCA'd the person who reposted it on
           | Twitter.
           | 
           | I imagine they probably started getting a lot of negative
           | attention.
           | 
           | https://twitter.com/xythar/status/1438539880045309953?s=21
        
           | advisedwang wrote:
           | Anyone is physically capable of sending a DMCA takedown.
           | 
           | However it is supposed to be from the copyright owner [17 USC
           | 512(c)(3)(A)(i)] and if it isn't, it can be ignored [17 USC
           | 512(c)(3)(B)] and makes the person who does it liable for
           | damages [17 USC 512(f)(1)]. Although that's probably pretty
           | hard to use in practice.
        
         | riffic wrote:
         | Lumen will have a copy
        
       | brown9-2 wrote:
       | The tweet that went viral was a clip of a video posted by a
       | YouTube channel owned by someone else. Seems pretty obvious the
       | copyright owner who would make a claim against the tweet was
       | the YouTube channel that owned the video.
        
       | ra7 wrote:
       | Interestingly, the original creator of the video has since made
       | it private. FSD beta testers are carefully handpicked -- a lot of
       | the Tesla "influencers" are given access for some free marketing.
       | I wonder how many FSD bad driving videos are not uploaded to
       | YouTube at all because they don't want to say anything negative
       | about Tesla and possibly lose influence/ad revenue.
       | 
       | This is on top of Tesla classifying FSD as a level 2 system
       | (while promoting it as "level 5 soon") so they don't have to
       | share data with the CA DMV. The only reason not to be
       | transparent like the others is if you're not confident in your
       | system.
        
         | Shank wrote:
         | There's a mirror of the original video here:
         | https://streamable.com/grihhc. I consider it fairly valuable to
         | study academically, as it shows both how the Tesla autopilot
         | system performs and how the driver reacts to the system, with
         | the knowledge that it's a beta, and their manner of movement,
         | operation, and knowledge of the system and what they let it
         | do.
         | 
         | I think the biggest problem I have is that the driver doesn't
         | intervene/even really stop the car. The good autopilot 'test
         | channels' like 'AI DRIVR' will routinely disengage if any
         | safety concern is present.
         | 
         | The monorail test driver, in contrast, knows the monorail is a
         | problem and was criticized for not correcting the car driving
         | under the monorail, which is already illegal behavior.
        
         | ballenf wrote:
         | Do you know if the terms of those arrangements give Tesla
         | copyright ownership of the videos? Just wondering whether the
         | driver or Tesla issued the DMCA.
        
           | sidibe wrote:
           | The driver is a big fan of Tesla (even in that video he was
           | very impressed with FSD 10). I think he probably deleted it
           | himself because of the backlash
        
       | sandos wrote:
       | Haha, I commented on this exact sort of video, one titled
       | something with "success" where the car somehow managed to get
       | past that monorail, but in an extremely sketchy way, way worse
       | than any learner driver would ever do.
       | 
       | This FSD beta seems way too cocky and at the same time confused
       | underneath. I think we will see quite a few incidents with this
       | one unless everyone's a good boy and is really in the loop,
       | taking over quickly.
        
       | KingMachiavelli wrote:
       | Based on the audio in the video it seems like they were testing a
       | Tesla FSD update? It sounded like they deliberately drove it in
       | an area that it did not handle well and even commented that it
       | was 'better'.
       | 
       | It seems like it thought the lane shifted to the right, because
       | it had no reason to do a right turn.
       | 
       | Anyone have more insight into what occurred?
        
       | cronix wrote:
       | I think the whole dream/goal of building a self driving car on
       | top of a system completely designed for manual human
       | interpretation and input is asinine, and that's before you even
       | consider the complexity of adding other humans manually driving
       | or suddenly walking in front of your car from a blind spot.
       | Instead of building a digital system from the ground up, we are
       | forcing manufacturers to come up with tech to read speed signs
       | meant for human eyes and accurately interpret them under a myriad
       | of conditions. What if it's covered in mud, or snow? Wouldn't
       | coming up with national/international standards and a digital
       | system work better for digital cars, like broadcasting speed
       | limits instead of only relying on interpreting visual inputs,
       | etc? Or standardizing the charging interface so you don't end up
       | in a mac-like dongle hell and can just know no matter what car
       | charging station you pull up to it will just work, like gas
       | stations and standardized fueling systems work for all brand of
       | ICE vehicles? I can go to any branded gas station and there isn't
       | a station specific to just Ford cars. It just seems like a
       | mishmash of kludges cobbled together instead of a solid, well
       | thought out system built from the ground up. Due to that, we
       | are making it 10x harder (and more dangerous) than it really
       | needs to be. It's like saying you have to build all modern OSes
       | on top of DOS, made for 8-bit CPUs.
        
         | phkahler wrote:
         | >> What if it's covered in mud, or snow?
         | 
         | That's the entire problem. These systems don't understand
         | anything about the world. If you see a sign covered in snow
         | you're going to recognize it as such and make your best guess
         | as to what it says and whether you care. You probably don't
         | need that information to drive safely because you have a lot
         | of knowledge to fall back on. These things don't, so some
         | folks want to improve GPS accuracy and maps and signage.
         | That's not a viable solution on so many levels.
         | 
         | General AI is the only way forward for fully autonomous cars.
        
           | misiti3780 wrote:
           | Right now, the TSLAs with FSD will disable it in snow+rain,
           | so I assume that even if TSLA achieves L4, it will be only
           | under ideal conditions until they have enough data to resolve
           | these edge cases.
        
             | joshuahaglund wrote:
             | I keep seeing people incorrectly call less than perfect
             | driving conditions "edge cases." That term refers to rare
             | conditions. Rain, snow, pedestrians, and support pillars
             | are not rare.
        
               | misiti3780 wrote:
               | You're right, they are not really edge cases, but the
               | original poster mentioned a sign covered in snow. In that
               | case, a human would probably just do the same thing the
               | TSLA would: blow the stop sign or continue at the current
               | speed limit until they get pulled over.
        
             | cma wrote:
             | Musk said after winter 2019 they would have all the data
             | they need for all human drivable ice and snow conditions:
             | 
             | https://www.youtube.com/watch?v=Ucp0TTmvqOE
        
             | itsoktocry wrote:
             | > _Right now, the TSLAs with FSD will disable it in
             | snow+rain_
             | 
             | What about that instant before it figures out it's bad
             | enough weather to shut itself off? What is the threshold?
        
               | misiti3780 wrote:
               | It seems like currently the minute it detects rain it
               | shuts it down. (wipers are automatic also)
               | 
               | On the other hand, there are a lot of videos of FSD Beta
               | on youtube that show the car driving into the sun where
               | it's almost impossible for a human to see, but the
               | cameras detect all of it.
        
         | sorokod wrote:
         | > we are forcing manufacturers to come up with tech ...
         | 
         | No one is forcing manufacturers to do anything. They perceive
         | a business opportunity and take it.
        
         | josuepeq wrote:
         | Agree with your points, but one thing to note: the electric
         | car world has indeed settled on a universal fast charging
         | standard. It's also just as ubiquitous, thankfully.
         | 
         | Every new EV from Rivian to GM to BMW uses the "Combined
         | Charging System." Tesla chooses to continue using their own
         | proprietary design in the U.S. despite having CCS technology
         | available (and the implementation already completed: European
         | Tesla models ship with CCS, as required by law).
         | 
         | Tesla will sell you $10K beta software that wants to steer
         | into pedestrians, which most choose to pay for because after
         | delivery the price doubles (so act fast!). Most who fork over
         | that kind of cash are not able to access all parts of this
         | beta software, and the license cannot be sold/transferred to
         | the next owner regardless of what the Monroney sticker says,
         | but don't worry, FSD is being launched next month.
         | 
         | If that's excusable, then surely everyone's OK with vendor
         | lock-in for DC fast charging (unless one wants to shell out
         | $450 for a slower "old school" CHAdeMO adapter), the fact
         | that one cannot buy replacement parts for their car, and that
         | one cannot repair their own vehicle.
         | 
         | Tesla - America's Beta Test Success Story.
        
         | tsimionescu wrote:
         | When your technology starts working only when the entire world
         | changes dramatically, you probably don't have a great
         | technology idea.
         | 
         | In particular, road maintenance is already expensive and often
         | left to rot. Still, current driving is relatively robust - even
         | if a few signs are damaged or go missing, no major issues will
         | occur. Road signs keep working even during storms, even if
         | there is no power, even if something has hacked national
         | electronic systems, even if the GPS is not accessible.
         | 
         | Replacing all this with brittle, finicky, expensive, easy to
         | attack en masse digital systems is not a great idea.
        
         | TheSpiceIsLife wrote:
         | Couldn't agree more.
         | 
         | There are way too many edge cases in real world driving
         | conditions because road traffic and signage changes from
         | instant to instant.
         | 
         | As a driver I don't know what's round the next bend, could be
         | road works or a vehicle collision.
         | 
         | There's no technical reason a computer based driving system
         | shouldn't be _more_ aware; it's an implementation failure.
        
       | mwint wrote:
       | This looks like a weird navigation failure. It certainly
       | shouldn't have made this maneuver at all, but it's provably false
       | to say it would have run over the pedestrians... this frame shows
       | it:
       | 
       | https://i.stack.imgur.com/2KymN.png
       | 
       | Context - this is after the sudden sharp turn, as the driver is
       | taking over.
       | 
       | - We can see the system is still driving itself; the driver has
       | not taken over and is not a factor. The blue steering wheel shows
       | this.
       | 
       | - The dotted line coming off the car shows its intended path. If
       | you step through the video, you can see it decide it's going to
       | make the turn (for some unknown reason).
       | 
       | - Before the driver takes over, as this frame shows, we see the
       | car decide to turn back to the left and resume its correct
       | course. In fact, if you step through the video, you can see the
       | _car_ is turning the wheel to the left, rotating under the
       | driver's hands before the driver applies enough force to
       | disengage.
       | 
       | - The car was aware of the pedestrian at the time it decided to
       | make the turn (you can see them on the screen), and it's clear
       | the path would not have collided with the person _even if_ the
       | driver had not taken over, _and_ the car had not corrected
       | itself.
       | 
       | - But the car did correct itself, and intended to resume its
       | straight course before the driver took over, as that frame above
       | shows.
       | 
       | This is not "Tesla almost hits pedestrian!", this is "Tesla makes
       | bad navigation decision, driver takes over before car would have
       | safely corrected"
        
         | renewiltord wrote:
         | Yeah, looks like it. Software could use some smoothing on its
         | actions, though.
        
         | yellow_lead wrote:
         | I love that your defense is that the car would've stopped
         | closer to the pedestrian than the driver did.
        
         | stagger87 wrote:
         | > but it's provably false to say it would have run over the
         | pedestrians...
         | 
         | You can't prove something like that, I mean consider the irony
         | that you typed this statement right after
         | 
         | > It certainly shouldn't have made this maneuver at all
         | 
         | How can you prove that it wouldn't have made another unexpected
         | maneuver?
         | 
         | It's all speculation, and that's OK. I wish OP used a different
         | title, because everyone seems to be getting hung up on it, when
         | the lede is certainly the FSD being unable to maintain a
         | straight line.
        
       | antattack wrote:
       | How about changing the title to be more accurate:
       | 
       | "Video accusing Tesla FSD beta of almost hitting pedestrian
       | receives DMCA take-down."
        
       | [deleted]
        
       | josephcsible wrote:
       | Is there anywhere that says who issued the DMCA takedown? A lot
       | of people are assuming it was Tesla, but I don't see any evidence
       | of that.
        
         | diebeforei485 wrote:
         | This is not clear. I assume the original uploader of the video
         | (the person who runs the Hyperchange YouTube channel).
        
       | tdrdt wrote:
       | According to Tesla's own legal documents FSD beta is still at
       | level 2.
       | 
       |  _Level 2 automated driving is defined as systems that provide
       | steering and brake /acceleration support, as well as lane
       | centering and adaptive cruise control._
       | 
       | So it is the same as in most modern cars (other brands also
       | have 'environment awareness'), but Tesla still markets it as
       | 'sit back and close your eyes' driving.
        
       | diebeforei485 wrote:
       | This was a private beta, to be clear.
        
         | FartyMcFarter wrote:
         | Except for the people walking / driving on the road.
        
       | GrimRe wrote:
       | Are we just ignoring the fact that Russ Mitchell is an infamous
       | Elon hater and that the DMCA has _nothing_ to do with the content
       | of the video? The video was re-uploaded by a 3rd party, and the
       | original video owner, who makes a living off YouTube videos,
       | rightfully asked YouTube to take it down.
       | 
       | Onto the ostensible issue of the car "almost hitting a
       | pedestrian" ... This is a beta software that is equivalent to
       | cruise control from a liability perspective. Its an advanced
       | driving assistance feature. The driver must be ready to take over
       | control at any moment as was the case here.
        
       | jorpal wrote:
       | Basic game theory dictates that if automated driving is seen as
       | too safe, pedestrians will feel comfortable walking anywhere,
       | anytime. Dense downtowns will become completely gridlocked and
       | unusable for cars. Which maybe isn't a bad thing. But if you want
       | cars in cities you need them to be perceived as dangerous. Yeah I
       | was convinced by Gladwell.
        
         | paulkrush wrote:
         | I agree, but also think we will adapt in many different ways.
         | New types of tickets. There would be no-go zones where
         | pedestrians don't compile (or comply!). Mackinac Island will
         | not have this issue! And for fun I have to note that basic
         | game theory also dictates that if we replace airbags with a
         | sharpened steel spike that comes out during accidents, people
         | will drive more safely!
        
         | [deleted]
        
         | sprafa wrote:
         | You solve this with face recognition and ticketing of
         | offenders. The cars have cameras everywhere anyway.
        
           | paulkrush wrote:
           | China has it so easy! This is not fair...
        
           | pidge wrote:
           | Yeah Los Angeles already fines pedestrians pretty
           | aggressively for jaywalking, even on urban pedestrian-heavy
           | streets.
           | 
           | This maintains a different norm for the city about jaywalking
           | than say Boston, so in principle the issue seems quite
           | solvable.
        
       | gremloni wrote:
       | People are going to die. I'm okay with it. These people are not
       | hand-selected to be sacrifices. If single deaths can stall
       | progress, the West is going to get nowhere.
        
       | ChrisKnott wrote:
       | Maximise the video and watch the left hand HUD pane from 0:10 to
       | 0:11.
       | 
       | The dotted black line coming from the front of the car (which I
       | am assuming is the intended route) quickly snaps from straight
       | ahead, to a hairpin right, to a normal right turn.
       | 
       | Ignoring the fact that the right turn happened to be into a
       | pedestrian crossing with people on it - what was the car even
       | trying to do? The sat-nav shows it should have just continued
       | forwards.
       | 
       | I am astounded that software capable of these outputs is
       | allowed on the roads. When could crossing a junction and then
       | taking a hairpin right back across it ever be the correct thing
       | to do?
        
         | seniorgarcia wrote:
         | Just to point to another possibility: since the driver's
         | right hand is not in shot, even though his left hand is
         | carefully poised over the steering wheel for the whole shot,
         | it also looks like he pulled hard on the steering wheel with
         | his right hand.
         | 
         | That might explain the car trying to re-route on the display,
         | although I'm not even sure it does. To me it looks like the
         | black line disappears into compression artefacts at just the
         | right time.
         | 
         | Mostly I think we don't have enough evidence either way and
         | speculating just expresses our desire for the technology to
         | work or not.
        
           | beprogrammed wrote:
           | You can see his right hand reflected in the display, it was
           | just idle during the event.
        
             | seniorgarcia wrote:
             | Gotta give you that. He still might have moved the steering
             | wheel with his thigh though ;-)
             | 
             | Crazy how it handled a shitty situation like around 6:55
             | in https://streamable.com/grihhc but failed there. Guess
             | that shows what most ADAS engineers already know: the
             | really hard part is those last few percentage points to
             | make it actually reliable.
        
         | mwint wrote:
         | Keep watching the HUD pane - before the driver takes over, it
         | corrects and decides to go straight. In fact, it's turning the
         | wheel to the left before the driver stops the correction.
         | 
         | I think this is a navigation issue. This is exactly what I
         | would have done if I had a passenger yell "WAIT TURN RIGHT TURN
         | RIGHT oh nevermind GO STRAIGHT"
        
           | crote wrote:
           | So blindly follow random instructions without making sure
           | it's the safe thing to do?
           | 
           | The car was driving 11 mph. The obvious thing to do is very
           | simple: hit the brakes! Stop, re-evaluate, and _then_
           | continue.
        
             | mwint wrote:
             | Hitting the brakes in the middle of an intersection is very
             | safe.
             | 
             | What exactly would the machine evaluate in a couple seconds
             | that it can't do instantly?
        
               | Arrath wrote:
               | I dare say it's safer than turning into a crosswalk of
               | pedestrians.
        
           | fabatka wrote:
           | What do you mean by "navigation issue"? I don't see the
           | navigation system changing momentarily and then back (that
           | would be analogous to your example). If in your phrasing
           | the navigation system includes object recognition, then if
           | it instructs the car to suddenly steer right without any
           | real reason, how could we trust it to stop before hitting
           | pedestrians? Even if in this situation it would've
           | corrected itself, I wouldn't say that this is a minor
           | issue.
        
           | rwcarlsen wrote:
           | So you're saying the Tesla self-driving capability is on
           | par with humans driving at near their worst in a panicked
           | sort of circumstance. Great!
        
             | mwint wrote:
             | Right, nothing should ever be built unless it can be 100%
             | correct immediately.
             | 
             | I'm stunned that it hasn't, so far as we know, actually
             | _hit_ anything yet. I'm not sure how, but clearly they're
             | doing something right between choosing drivers and writing
             | software.
        
               | ctvo wrote:
               | > Right, nothing should ever be built unless it can be
               | 100% correct immediately.
               | 
               | Right, a strawman.
               | 
               | How about this compromise: Let's call it "Poorly
               | Implemented Self Driving" until it improves, and we won't
               | try to sell it years early too.
        
               | TheSpiceIsLife wrote:
               | What hasn't hit anything yet? A Tesla on autopilot?
               | 
               | Surely.
        
               | freeone3000 wrote:
               | There were _several_ high publicity collisions with a
               | Tesla colliding into stationary objects -- a highway
               | barricade [1], a blue truck [2], a parked police car[3],
               | two instances of a parked ambulance[4]...
               | 
               | Teslas driving themselves hit objects all the time.
               | 
               | 1. https://www.google.com/amp/s/abcnews.go.com/amp/Busine
               | ss/tes... 2. Ibid 3. https://www.google.com/amp/s/www.mer
               | curynews.com/2020/07/14/... 4. https://www.google.com/amp
               | /s/amp.cnn.com/cnn/2021/08/30/busi...
        
               | mwint wrote:
               | This was all not FSD beta; those were all on the
               | completely separate public stack.
        
               | mwint wrote:
               | This is kinda fascinating - factually true statements
               | about FSD are blanket downvoted. Why is that?
        
               | freeone3000 wrote:
               | "autopilot" versus "full self driving* capable" versus
               | "full self driving" versus "autonomous mode" seems like
               | marketing hype instead of actual improvements. After all,
               | "autopilot" was supposed to drive itself, so what's the
               | new one do differently?
        
               | toast0 wrote:
               | So, devil's advocate. We know the Autopilot ignores
               | stationary objects, so a lane with a parked vehicle
               | (emergency or otherwise) is therefore a reasonable place
               | to travel at the set speed on the cruise control, etc.
               | 
               | But, I believe this discussion is about the new FSD
               | software which is supposed to be more capable. Have we
               | had reports about the new one doing the old tricks?
        
         | Geee wrote:
         | Tesla's system is real time, which means that it makes
         | routing decisions on every frame, and the decisions are
         | independent of previous decisions. It doesn't have memory. It
         | just makes decisions based on what it actually sees with its
         | cameras. It doesn't trust maps the way most other systems do.
         | For some reason, it thinks that it must turn right.
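         | 
         | As a toy illustration of why a memoryless, per-frame policy
         | can "snap" between plans (purely illustrative, not Tesla's
         | actual code): when noisy per-frame scores sit near a decision
         | boundary, the chosen plan flickers from frame to frame,
         | because nothing ties frame N+1 to frame N.
         | 
         |   import random
         | 
         |   def plan(score: float) -> str:
         |       # Stateless: depends only on this frame's score.
         |       if score > 0.5:
         |           return "turn_right"
         |       return "straight"
         | 
         |   random.seed(0)
         |   # Simulate noisy per-frame scores near the boundary.
         |   frames = [0.5 + random.uniform(-0.1, 0.1)
         |             for _ in range(10)]
         |   print([plan(f) for f in frames])
         |   # The output flickers between "turn_right" and
         |   # "straight", much like a path line snapping around.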
         | 
         | Situations like these are sent to Tesla to be analyzed, the
         | corresponding data is added to the training data, and the error
         | is corrected in the next versions. This is how the system
         | improves.
         | 
         | After this situation is fixed, there will be another edge
         | case that the HN crowd panics over.
        
           | tsimionescu wrote:
           | You're probably right, but from your tone you seem to be
           | implying that this is in any way an acceptable way to develop
           | safety critical software, which is a bit baffling.
        
             | Geee wrote:
             | It is safe, because there's a driver who is ready to
             | correct any mistakes. There isn't a single case where FSD
             | Beta actually hits something or causes an accident. So,
             | based on actual data, the current testing procedure seems
             | to be safe.
             | 
             | It isn't possible to learn edge cases without a lot of
             | training data, so I don't see any other way.
        
               | tsimionescu wrote:
               | This person is not a Tesla employee being trained and
               | paid to test this extremely dangerous piece of software
               | that has already killed at least 11 people. This is
               | obviously unacceptable, and no other self-driving company
               | has taken the insane step of letting beta testers try out
               | their barely functional software.
        
               | Geee wrote:
               | FSD Beta has never killed anyone. Maybe you're
               | confusing it with Autopilot, which is different
               | software, but which has also saved many more people
               | than it has killed.
               | 
               | I'm not saying that safety couldn't be improved, for
               | example by disengaging more easily in situations where it
               | isn't confident. One heuristic would be when it changes
               | route suddenly, like in this scenario.
        
               | kaba0 wrote:
               | Source on it saving anyone?
        
               | Geee wrote:
               | "In the 1st quarter, we registered one accident for every
               | 4.19 million miles driven in which drivers had Autopilot
               | engaged. For those driving without Autopilot but with our
               | active safety features, we registered one accident for
               | every 2.05 million miles driven. For those driving
               | without Autopilot and without our active safety features,
               | we registered one accident for every 978 thousand miles
               | driven. By comparison, NHTSA's most recent data shows
               | that in the United States there is an automobile crash
               | every 484,000 miles."
               | 
               | Source: https://www.tesla.com/VehicleSafetyReport
        
               | ahahahahah wrote:
               | Apples to oranges (or more like apples to oceans).
        
           | ra7 wrote:
           | > It doesn't have memory. It just makes decisions based on
           | what it actually sees with its cameras. It doesn't trust
           | maps, like most other systems.
           | 
           | This is incorrect. Tesla also relies on maps for traffic
           | signs, intersections, stop signs etc. They just don't have
           | additional details like the others do.
           | 
           | > After this situation is fixed, there will be an another
           | edge case that the HN crowd panics over.
           | 
           | Is anything Tesla FSD can't handle an "edge case" now? It's
           | literally an everyday driving scenario.
        
             | Geee wrote:
             | It's a specific edge case that the driver was testing. It
             | has issues around those monorail pillars.
        
               | ra7 wrote:
               | Monorail pillars are also not an edge case. Plenty of
               | cities have monorails. Just because FSD doesn't work
               | there doesn't mean it's an edge case.
        
         | antattack wrote:
         | Looking at the video, the Tesla was probably no closer than
         | 25 feet to any pedestrian. If the pedestrians had been
         | closer, I think their proximity would have made the car stop,
         | as in other videos.
         | 
         | As to the abrupt maneuver, the EU limits the rate of change
         | of steering control, and I think it would be wise for the US
         | to adopt something similar for driver assist systems.
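         | 
         | For the unfamiliar, such a limit is essentially a slew-rate
         | clamp on the steering command. A generic sketch (the numbers
         | are illustrative, not any regulation's actual limits):
         | 
         |   def rate_limit(target, current, max_rate, dt):
         |       # Clamp how far the command may move per tick.
         |       max_step = max_rate * dt
         |       step = max(-max_step, min(max_step, target - current))
         |       return current + step
         | 
         |   # At 50 deg/s and a 100 Hz control loop, a sudden
         |   # 30-degree swerve command is spread over ~0.6 s
         |   # instead of being applied in one jerk:
         |   angle = 0.0
         |   for _ in range(5):
         |       angle = rate_limit(30.0, angle, 50.0, 0.01)
         |   print(angle)  # 2.5 degrees after 50 ms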
        
           | theslurmmustflo wrote:
           | like this?
           | https://twitter.com/tripgore/status/1311661887533314048?s=21
        
             | antattack wrote:
             | That was when Tesla AP relied on radar, which has a hard
             | time detecting styrofoam.
        
               | theslurmmustflo wrote:
               | this is a model 3, it doesn't have radar.
        
               | vmladenov wrote:
               | Why do you say this? Literally every Model 3 delivered
               | before May 2021 has radar.
        
               | antattack wrote:
               | It did rely on radar when the video was taken. Tesla
               | stopped relying on radar on the Model 3 and Model Y
               | with the April 2021 release.
        
           | zaptrem wrote:
           | This makes evasive maneuvers impossible. EU regulations make
           | it impossible for ADAS to even change lanes half the time.
           | 
           | Let's not go down the rabbit hole of government specifying
           | how machines we haven't even built yet should work.
        
             | antattack wrote:
             | Because they are sudden and unpredictable, evasive
             | maneuvers cannot be supervised. It would be unfair to
             | rely on the driver to supervise such scenarios.
             | 
             | If an evasive maneuver is required, it would have to be
             | done in a fully autonomous way. For the duration of the
             | maneuver, the system would have to be responsible for
             | driving.
        
           | phkahler wrote:
           | >> As to abrupt maneuver, EU limits rate of change for
           | steering control and I think it would be wise for US to adopt
           | something similar for driver assist systems.
           | 
           | That's a terrible idea. First it's a bandaid over an
           | underlying problem. Second it's a legislative answer to a
           | technical problem, which is IMHO never a good idea.
        
             | antattack wrote:
             | Rate limiting is a very valid engineering solution. It's
             | implemented in all kinds of controls.
             | 
              | Above everything, the car's driver assist should not be
              | making moves that a human does not understand if the
              | system requires a human to supervise it.
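              | 
              | As a toy sketch of what that could look like (the 5 deg/s
              | figure and all names are invented for illustration, not
              | taken from any actual regulation):
              | 
              |   MAX_RATE_DEG_S = 5.0  # assumed slew limit (invented)
              | 
              |   def limit_steering(target_deg, current_deg, dt_s):
              |       # Move the steering command toward the target, but
              |       # never faster than MAX_RATE_DEG_S degrees/second.
              |       max_step = MAX_RATE_DEG_S * dt_s
              |       delta = target_deg - current_deg
              |       # clamp the per-tick change to the allowed step
              |       delta = max(-max_step, min(max_step, delta))
              |       return current_deg + delta
              | 
              |   # At 100 Hz (dt = 0.01 s), a sudden 30-degree swerve
              |   # request is spread over 6 seconds rather than being
              |   # applied all at once:
              |   cmd = limit_steering(30.0, 0.0, 0.01)  # -> 0.05 deg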
        
         | GaylordTuring wrote:
         | > I am astounded that software capable of these outputs is
         | allowed on the roads.
         | 
          | Me too. However, in my encounters with software like this, it
          | usually runs on meat brains.
        
           | belter wrote:
           | I know ...its hard to believe... "They're Made Out of Meat"
           | 
           | https://youtu.be/7tScAyNaRdQ
        
         | masklinn wrote:
         | > I am astounded that software capable of these outputs is
         | allowed on the roads.
         | 
         | 'murica
         | 
          | No, seriously, when California decided to increase reporting
          | requirements a few years back, Arizona reminded everyone you
          | could run any shitbox there with no requirements.
          | 
          | That same year, Elaine Herzberg would be run over by an Uber
          | self-driving shitbox, in Arizona.
        
         | oefrha wrote:
         | From the admittedly not much footage of Tesla FSD (1-2 hours
         | total maybe) I've watched, it seems to be roughly on par with a
         | student driver who occasionally panics for no apparent reason.
        
           | aaroninsf wrote:
            | ...and it is not obvious that it will actually ever get more
            | than incrementally technically better, absent AGI.
           | 
           | The problem is too hard for existing tools. Good job on the
           | 80% case; let's look for problems for which the 80% case is
           | good enough and back away from unsupportable claims about
           | "just a little more time" bringing FSD in environments
           | defined by anomaly and noise, in which the cost of error is
           | measured in human life and property damage.
        
           | yawaworht1978 wrote:
            | The difference is that the student driver will learn within
            | a few dozen more hours and be fit for the road; with Tesla,
            | there's no silver lining on the horizon.
        
             | gpm wrote:
             | You have it backwards. With humans driving there is a
             | constant and unending stream of student drivers, because
             | every new human must learn from scratch. There is no silver
             | lining.
             | 
             | With self driving it takes longer for the one individual to
             | learn, but the difference is that there is only the one
             | individual, and it need not ever die. The learning is
             | transferable between cars, regressions between generations
             | need not be accepted. The problem of bad driving _can_
             | finally be solved.
        
               | FartyMcFarter wrote:
               | That sounds great in theory, but in practice the
               | technology is not there for reliable and demonstrable
               | learning of that type.
        
               | gpm wrote:
                | Given the number of different people pouring millions to
                | billions of dollars into the problem, I think it's pretty
               | incredible for you to be so certain that that is the
               | case.
        
             | kaba0 wrote:
              | And the student driver has a professional instructor
              | paying close attention to everything they do. Overseeing
              | something is mentally taxing, and "FSD" car drivers will
              | not be able to keep it up for long.
        
           | CameronNemo wrote:
           | Petition to label all Teslas with FSD as student drivers.
           | They need a little LED panel that tells you when the driver
           | is using the AI.
        
             | dylan604 wrote:
              | While most people see the student vehicle and give it a
              | wide berth, there are those who see it as a "teachable"
              | moment. The asshats who screw with cars labeled Student
              | Driver just to "give 'em a lesson" are already screwing
              | with self-driving cars for the same reason: they're
              | assholes.
        
               | [deleted]
        
             | misiti3780 wrote:
              | I agree - but do you think that if they did that, the
              | haters would stop hating?
        
               | refulgentis wrote:
               | Assuming by hater you mean "someone who is scared by Full
               | Self Driving including cars veering into pedestrians",
               | it'd help me: I'm in an urban area and don't have a car,
               | and I'd be lying if I claimed I wasn't always scared when
               | I see a Tesla approach when I'm in a crosswalk, I'm never
               | sure if Full Self Driving is on or not.
        
               | misiti3780 wrote:
               | "I'd be lying if I claimed I wasn't always scared when I
               | see a Tesla approach when I'm in a crosswalk, I'm never
               | sure if Full Self Driving is on or not."
               | 
               | Sure - which is why I said I agreed there should be some
               | indication of it being in FSD or not, originally.
        
               | refulgentis wrote:
                | Sure - which is why I'm answering your question about how
                | haters would feel under this proposed intervention that,
                | yes, you've agreed with.
        
               | Draiken wrote:
               | The problem is that you could apply that argument for a
               | drunk driver, a distressed driver, a driver without a
               | license, a driver that hasn't slept enough, etc.
               | 
               | Any of those could mean a car could randomly turn into
               | the crosswalk.
               | 
                | Humans are horrible at assessing risk. We worry about
                | Teslas because they're in the headlines now, but by sheer
                | statistics you're far more likely to get run over by one
                | of the cited examples than by a Tesla.
        
               | ragingrobot wrote:
                | Never mind the driver who can't wait the two to five
                | seconds for the pedestrian to cross, which is probably
                | far more common than the above, and rather intentional.
               | 
               | TBH I don't think it's ever crossed my mind whether a
               | Tesla was in FSD or not, even when driving alongside one.
               | As long as it doesn't enter my space, I'm fine.
               | 
               | Walking through Manhattan, I'm more worried about the
               | human driver nowadays than the non-human.
        
               | criley2 wrote:
                | Here's my hater take: "Full Self Driving" is an
                | absolutely dangerous and egregious marketing lie. The
                | vehicles are not full self driving and require by law a
                | driver capable of taking over at a moment's notice.
               | 
               | I do not believe that Tesla's technology will _ever_
               | achieve  "Full self driving" (levels 4 or 5, where a
               | driver isn't required https://www.nhtsa.gov/technology-
               | innovation/automated-vehicl...), and the labeling of
               | their "conditional automation" system as "full self
               | driving" is an absolute travesty.
               | 
               | People do honestly think they have bought a self driving
               | car. The marketing lie absolutely tricks people.
        
           | Draiken wrote:
           | I mean... 5 minutes on /r/IdiotsInCars and it's very easy for
           | me to understand how even with bizarre bugs like this, it's
           | better than people.
           | 
           | People say "how could it happen?!" and get angry for a
           | software bug - that can be fixed - and seem to accept how
           | drunk drivers do this (and a lot worse) literally every
           | single day. But since it's a human, that's fine!
           | 
           | Edit: wording
        
             | oefrha wrote:
             | Drunk drivers on the road are... "fine"? Not sure where you
             | live.
        
               | Draiken wrote:
               | Of course it's not fine in that sense.
               | 
                | What I mean is that you don't see headlines on news
                | outlets or Twitter hashtags trending for every drunk
                | driver fatality or crash that happens. Which tells me
                | that collectively, we are fine with it. We accept it as
                | ordinary, as crazy as it may be.
        
               | oefrha wrote:
               | A DUI, in California for instance, means you're suspended
               | for months (years if you're a repeat offender). Since
               | it's basically the same "driver" driving all FSD Teslas,
               | are you arguing that they should be suspended under the
               | same rules for drunken behavior? If so, they will
               | basically be out of commission indefinitely, and we won't
               | see headlines anymore.
        
               | Draiken wrote:
               | No, my point is that this is overblown just because it's
               | Tesla and people love drama, so this makes headlines and
               | trending topics.
               | 
                | My comparison with drunk drivers is just with regard to
                | the odd behavior. If you look at a drunk driver (or even
                | someone who fell asleep at the wheel) having a near
                | crash, many times it would resemble this video. But the
                | public outrage differs vastly.
        
               | [deleted]
        
               | _jal wrote:
               | Drunks are ordinary because alcohol use predates
               | agriculture.
               | 
               | Unpredictable death-robots roaming the streets are pretty
               | novel.
               | 
               | I don't think that's very complicated, from a news
               | perspective.
        
               | majormajor wrote:
               | I regularly see "traffic collision kills [person or
               | people]" stories in my local newspaper. I don't know if
               | it's every one, it might be restricted to the extremely
               | stupid (wrong-way on the highway, drunk, or excessively-
               | high-speed) but "someone got killed" is certainly a
               | common story in the news here.
               | 
               | I've certainly seen many more local "person dies in car
               | accident" non-brand-related stories than "person dies in
               | Tesla accident" stories.
        
               | josephcsible wrote:
               | The point is those stories remain local news, but
               | whenever a Tesla is involved, it becomes national news.
        
             | roywiggins wrote:
             | There are pretty stringent laws against drunk driving in
             | most places... if Tesla's "self driving" modes are that bad
             | then they should be equally illegal
        
               | oneplane wrote:
               | A law doesn't prevent anything, it only applies after the
               | fact. You could argue that the prospect of being
               | prosecuted might scare people into not doing the thing
               | that they are not allowed to be doing, but with all the
               | people doing the things they are not allowed to be doing
               | anyway, I doubt a comparative legal argument helps here.
               | 
               | You could make a law that states that your FSD has to be
               | at least as good as humans. That means you have the same
               | post-problem verification but now the parallel with drunk
               | drivers can be made.
        
               | oefrha wrote:
               | The key difference is human drivers have independent
               | software, whereas the same software powers all FSD
               | Teslas. One human driver getting drunk/otherwise impaired
               | doesn't affect the software of any other human driver;
               | but if your FSD software is as good as a drunk human,
               | then every single one of your FSD vehicles is a road
               | hazard.
        
               | oneplane wrote:
               | That difference is also a benefit, fix one problem, and
               | it's fixed for every instance.
               | 
               | On the other hand, it's not like the software is always
               | bad and always in the same situation. That is a big
               | difference with a human analogy; a drunk driver taking a
                | trip in the car is drunk for the entire trip (unless it's
                | a very long trip, etc.), and so would be impaired for the
               | entire duration.
               | 
               | There are plenty of people and vehicles that are road
               | hazards and are allowed to drive (or be driven) anyway,
               | so if we really cared about that aspect on its own we
               | could probably do with better tests and rules in general.
        
               | dundarious wrote:
               | Tesla's "FSD" is only a little bit better than drunk
               | drivers, whom we punish severely whenever caught, even
               | before any accident occurs.
               | 
               | The fact that enforcement is patchy is irrelevant --
               | drunk driving is deemed serious enough to be an automatic
               | infraction.
               | 
               | Also, most of the time drunk drivers are not actually
               | that bad at moment-to-moment driving. That's why almost
               | everyone worldwide used to do it! You can still do the
               | basics even when reasonably drunk. That doesn't make you
               | safe to drive. It's still incredibly dangerous to do.
        
               | oneplane wrote:
                | This assumes the FSD-to-drunk-driver analogy means FSD
                | has to be a drunk driver (or a student driver, as
                | commented elsewhere) all the time, always making the bad
                | judgements and slow reactions a drunk driver would.
               | 
               | I think that some form of responsibility has to be
               | assigned to the FSD in some way (the manufacturer? the
               | user? some other entity or a combination?) regardless but
               | I haven't found any clear case of how that would work.
               | 
                | It also makes me wonder how we would measure or verify a
                | human driver with intermittent drunkenness. Imagine that
                | for 5 seconds out of every minute you temporarily turn
                | into a drunk driver. That's plenty of time to cause a
                | major accident and kill people, but on the other hand a
                | dangerous situation would have to coincide with exactly
                | one of those impaired windows. We do of course have the
                | luxury of not having humans constantly swapping between
                | drunk and normal driving, so it isn't a realistic
                | scenario, but it would make for a better computer
                | analogy.
               | 
               | Besides drunk drivers we also have just generally crappy
               | drivers that just happened to get lucky when doing their
               | driving test (although there are places where no
               | meaningful test is required so that's a problem in
               | itself).
        
               | dundarious wrote:
               | I think you've missed my point, while adding some
               | additional information.
               | 
               | - drunk drivers are also not uniformly awful drivers:
               | they can drive OK for the most part
               | 
               | - they still drive unacceptably poorly
               | 
               | - we strictly punish them on detection, before any
               | potential accident
               | 
               | - drunk drivers and FSD have more in common with each
               | other than competent drivers and FSD
               | 
               | - why is FSD not held to such a preventative standard?
               | 
               | One can argue that FSD is like a drunk driver driving a
               | student training car with two sets of pedals and two
               | steering wheels, and the Tesla owner/driver is like a
               | driving instructor. But driving instructors are trained
                | and _paid_ to be quite vigilant at all times. Tesla plays
                | a sleight of hand and says it's a labor-saving
                | technology, while also requiring you to behave like a
                | trained and paid driving instructor... that is a
                | conspicuous contradiction.
               | 
               | And I'm ignoring future FSD capabilities because while
               | I'd be happy for it to come about, we should discuss the
               | present situation first, and I don't believe it's a good
               | example where sacrificing lives now is acceptable in
               | order to potentially save lives in the future.
        
               | oneplane wrote:
               | Perhaps it is lost in translation; I'm not saying the
               | fact that someone is driving drunk only matters when an
               | accident happens, I'm saying that right until the moment
               | a driver decides to get drunk, the law doesn't do
                | anything. If at the beginning of the day someone decides
                | to start drinking, and when they are drunk they get into
                | a car and start driving, that's when the violation
                | occurs. Not before that, like PreCrime would.
               | 
               | The same can't apply to FSD because it isn't consistently
               | 'driving drunk'. That analogy doesn't hold because it is
               | not fixed software like a GPS-based navigation aid would
                | be. Just like humans, it does have more or less fixed
                | parameters, like the number of arms and legs you have,
                | which doesn't tend to change depending on your
                | intoxication.
               | 
                | One could make the argument that it's not so much the
                | "haha, it's just like a drunk driver zig-zagging" but the
                | uncertainty about the reliability. If a car with some
                | autonomous driving aid drives across an intersection just
                | fine 99 times out of 100, and that one time it doesn't,
                | that doesn't mean the car's software was drunk 100% of
                | the time.
               | 
               | Why FSD is not held to some standard, I don't know. I
               | suppose that depends on how it is defined by local law
               | and how the country it is in allows or disallows its use.
               | 
               | The problem with prevention and detection here is that
               | like humans, the system is not in a static state. The
               | trained neural network might be largely the same with
               | every release, but the context in which it operates
               | isn't, unless the world around it stops completely in
               | which case two trips can be identical and because the
               | input is identical the output can also be identical.
               | Humans do the same, even if well-rested and completely
               | attentive, knee-jerk reactions happen.
               | 
               | Holding FSD to a standard of a drunk driver isn't a valid
               | comparison due to the non-static nature of the state it
               | is in. This isn't even FSD-specific, even lane
               | guidance/keeping assistance and adaptive cruise control
               | isn't static, and those are based on pretty static
               | algorithms. Even the PID-loops used on those will deliver
               | different results on seemingly similar scenarios.
               | 
               | Perhaps we should stop comparing technology to humans
               | since they are simply not the same. The static kind isn't
               | and neither is a NN-based one. We can still explore
               | results or outcomes because those are the ones that have
               | real impact. And let's not fool ourselves, humans are far
               | less reliable in pretty much every man-machine
               | combination. But in human-to-human contexts we factor in
               | those unreliabilities, and with machine-to-human or
               | machine-to-machine we seemingly don't, which is pretty
               | much the same problem you're describing.
               | 
               | This will be an interesting field of development, and if
               | we simply take death toll into account, keep in mind that
               | for some reason seatbelts were thought to have 'two sides
               | of the story' as well when they were first introduced and
               | later required. As with car seats for children and
               | infants, a good idea might start out one way and over
               | time (with the accompanying bodycount) it gets shaped
               | into whatever we expect of it today. Same goes for
               | aerospace, boats and trains, and that's even without
               | taking software into account.
        
               | Draiken wrote:
               | Genuinely curious: how would you even go about advancing
               | autonomous driving without testing it in the streets?
               | 
               | I'm absolutely certain that they've attempted to simulate
               | millions of hours of driving, yet this odd behavior
               | happened in real life, which now can be studied and
               | fixed.
               | 
                | If we never had the software out in the wild, how would
                | you ever really test it?
        
               | simion314 wrote:
               | >Genuinely curious: how would you even go about advancing
               | autonomous driving without testing it in the streets?
               | 
               | Genuine response:
               | 
                | 1. Don't let customers do the testing, especially if you
                | don't train them (I mean real training about failure
                | modes, not PR videos, tweets, and a fine-print manual
                | with disclaimers).
                | 
                | 2. Use employees; train drivers to test; have cameras
                | watching the driver to make sure they pay attention.
                | 
                | 3. Postpone testing until the hardware and software are
                | good enough to not ignore static objects.
                | 
                | 4. Make sure you don't ship monthly updates that
                | invalidate all your previous tests.
        
               | Draiken wrote:
               | Yeah, I guess you could always be safer about it, but I'm
               | really not sure it would be enough. If we substitute FSD
               | for any software, you have code tests, QA, the developers
               | test it, and bugs still go through. It's inevitable.
               | 
                | Unfortunately it's always about the incentives, and in a
                | capitalist society the only incentive is money. So even
                | if they could be safer, they wouldn't do it unless it's
                | more profitable, especially being a publicly traded
                | company.
        
               | emn13 wrote:
               | In a sense, a self-driving car might actually be easier
               | to test for than complex software - at least parts of it.
               | 
                | After all, normal (complex) software tends to have lots
                | of in-depth details you need to test for, and a surface
                | area that's pretty irregular, in the sense that it's hard
               | to do generalized testing. Some bits can be fuzz tested,
               | but usually that's pretty hard. It's also quite hard for
               | a generalized test to recognize failure, which is why
               | generalized test systems need lots of clever stuff like
               | property testing and approval testing, and even then
               | you're likely having low coverage.
               | 
                | However, a self-driving car is amenable to testing in a
                | sim. And the sim might be end-to-end, but it needn't be
                | the only sim you use; the FSD system almost certainly has
                | many separate components, and some of those might be easy
                | to sim for too; e.g. if you have a perception layer you
                | could sim just that; if you have a prediction system you
                | might sim just that; etc.
                | 
                | And those sims needn't be full-sim runs either; if you
                | have actual data feeds, you _might_ even be able to take
                | existing runs and then extend them with sims, just to
                | test various scenarios while remaining fairly close to
                | the real world.
                | 
                | I'm sure there are tons of complexities involved; I don't
                | mean to imply it's easy - but it's probably tractable
                | enough that, given the overall challenge, it's worth
                | creating an absolutely _excellent_ sim - and that's the
                | kind of challenge we actually have tons of software
                | experience for.
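                | 
                | As a toy illustration of the per-component idea (the
                | planner, the scene format, and the 2 m threshold are all
                | hypothetical stand-ins, not anyone's real stack):
                | 
                |   import math
                | 
                |   class Scene:
                |       # stand-in for a recorded or synthesized scene
                |       def __init__(self, pedestrians):
                |           self.pedestrians = pedestrians  # (x, y) list
                | 
                |   def plan(scene):
                |       # hypothetical planner under test; here it just
                |       # returns waypoints going straight ahead
                |       return [(0.0, float(i)) for i in range(10)]
                | 
                |   def min_distance(path, point):
                |       return min(math.dist(p, point) for p in path)
                | 
                |   def test_keeps_clear_of_pedestrians():
                |       scene = Scene(pedestrians=[(3.0, 5.0)])
                |       path = plan(scene)
                |       # property-style check, valid for any scenario
                |       assert all(min_distance(path, p) > 2.0
                |                  for p in scene.pedestrians)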
        
               | sleepybrett wrote:
                | IMO, with so much 'machine learning' in the Tesla self-
                | driving system, is there any way to know a bug is 'fixed'
                | other than just running it through a probably totally
                | boring set of tests that doesn't even approach covering
                | all scenarios?
        
               | jjav wrote:
               | > Genuinely curious: how would you even go about
               | advancing autonomous driving without testing it in the
               | streets?
               | 
               | The onus is on the company trying to do this to figure
               | out a safe way.
               | 
               | They don't (should not) get to test in production with
               | real innocent lives on the line just because they can't
               | come up with a better answer.
        
         | DrammBA wrote:
          | At the end of the video (0:32 to 0:33) you can see it quickly
          | snap to a right turn again. Why is the car attempting multiple
          | right turns while the map indicates a straight line?
        
         | modeless wrote:
         | The car almost certainly decided that the road ahead was
         | blocked. For context, this release of FSD is the first to use a
         | new obstacle detection technique, and as a result it is the
         | first one that doesn't drive directly into the pillars on that
         | street without perceiving them at all. So it's very likely that
         | this new obstacle detection system glitched out here.
        
           | TheJoeMan wrote:
            | Yeah, I'd have to say just stop using FSD underneath a
            | monorail. I haven't seen as many failures in the suburbs.
        
             | modeless wrote:
             | The guy in the video is intentionally stress testing the
             | system by repeatedly driving in an area he knows is poorly
             | handled. But it's totally fair IMO, this is a real road and
             | while it confuses humans too (as evidenced by all the
             | people here claiming that it's illegal to turn right here),
             | FSD _must_ handle it better than this.
        
               | cptskippy wrote:
               | Considering this data is probably fed back to Tesla, it's
               | probably safe to say this guy is actually helping.
        
         | eloff wrote:
         | It seems like basic sanity checking on the proposed actions is
         | not being done. Why not?
         | 
         | With safety systems you always want multiple levels of
         | overlapping checks.
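          | 
          | Even a dumb final gate between planner and actuators would
          | catch some of this. A sketch (all names and thresholds are
          | invented for illustration):
          | 
          |   from dataclasses import dataclass
          | 
          |   @dataclass
          |   class Command:
          |       steer_deg: float
          |       brake: bool = False
          |       reason: str = ""
          | 
          |   @dataclass
          |   class Obstacle:
          |       distance_m: float
          |       in_path: bool
          | 
          |   MAX_STEER_DEG = 45.0   # plausibility bound (invented)
          |   MIN_CLEARANCE_M = 1.5  # clearance envelope (invented)
          | 
          |   def safe_stop(reason):
          |       return Command(steer_deg=0.0, brake=True, reason=reason)
          | 
          |   def sanity_check(cmd, obstacles):
          |       # independent veto layer: reject implausible commands
          |       if abs(cmd.steer_deg) > MAX_STEER_DEG:
          |           return safe_stop("implausible steering angle")
          |       if any(o.in_path and o.distance_m < MIN_CLEARANCE_M
          |              for o in obstacles):
          |           return safe_stop("obstacle inside clearance zone")
          |       return cmd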
        
         | andreyk wrote:
         | Good catch! That's definitely quite concerning...
        
         | ASalazarMX wrote:
          | I don't own or plan to own a Tesla, but if I did, I'm sure I
          | wouldn't use the self-driving feature, because it seems more
          | inconvenient than simply driving yourself. It's not even a
          | driving assistant; you become the car's assistant.
          | 
          | Edit: to be clear, I plan to buy an EV in the future, but I'll
          | drive it myself. Most of my driving is urban, within a 15 km
          | radius, and babysitting the self-drive seems like more trouble
          | than it's worth.
        
         | 300bps wrote:
         | Speaking as someone who believed all the hype about having
         | fully autonomous cars by 2020, I think the truth is that it is
         | orders of magnitude harder than we thought it was and that we
         | are orders of magnitude less capable than we thought we were.
        
           | sidlls wrote:
           | Something every software engineer should learn before their
           | insufferable egos solidify.
        
         | ChicagoBoy11 wrote:
          | I've made this argument here a few times and am always shot
          | down, but I think it's important to highlight that the airline
          | industry has an extremely robust history of automation and HCI
          | in critical transportation scenarios, and it seems to me that
          | all the lessons that we have learned have been chucked out the
          | window with self-driving cars. Being able to effectively reason
          | about what the automation is doing is such an important part of
          | why these technologies have been so successful in flight, and
          | examples like this illustrate how far we are from something
          | like that in cars. The issue of response time, too, is one we
          | can't ignore, and it is certainly a far greater challenge in
          | automobiles.
          | 
          | I don't have answers, but it does seem to me like we are not
          | placing enough of a premium on structuring this tech to
          | optimize driver supervision of the driving behavior. Granted,
          | the whole point is to one day NOT HAVE to supervise it at all,
          | but at this rate we're going to kill a lot of people until we
          | get there.
        
           | ReidZB wrote:
           | > Being able to effectively reason about what the automation
           | is doing is such an important part of why these technologies
           | have been so successful in flight, and examples like this
            | illustrate how far we are from something like that in cars.
           | 
           | Is that actually the case, though?
           | 
           | I would hope, although perhaps I'm mistaken, that the
           | developers of the actual self-driving systems _would_ be able
            | to effectively reason about what's happening. For example,
           | would a senior dev on Tesla's FSD team look at the video from
           | the article and have an immediate intuitive guess for why the
           | car did what it did? Or better yet, know of an existing issue
           | that triggered the wacky behavior?
           | 
           | Even if not, I'd hope that vehicle logs and metrics would be
           | enough to shed light on the issue.
           | 
           | I don't think I've ever seen a true expert, with access to
           | the full suite of analytic tools and log data, publish a full
           | post-mortem of an issue like this. I'm certain these happen
           | internally at companies, but given how competitive and hyper-
           | secretive the industry is, the public at large never sees
           | them.
        
             | ChicagoBoy11 wrote:
             | They certainly are trying very hard, as far as I can tell.
             | Tesla's efforts on data collection and simulation of their
             | algorithm are incredibly impressive. But part of why it is
             | so necessary is that there is an opaqueness to the ML
             | decision-making that I don't think anyone has quite
             | effectively cracked. I do wonder, for instance, if the
             | decision to go solely with the cameras and no LIDAR will
             | prove to ultimately be a failure. The camera-only solution
             | requires the ML model to accurately account for all
              | obstacles, for example. As crude, and certainly non-human,
              | as it is, a LIDAR with super crude rules for "don't hit an
              | actual object" would even at this point have prevented some
              | of their more widely publicized fatal accidents, which
              | relied on the algorithm alone.
        
           | aaroninsf wrote:
           | Something I do not understand:
           | 
            | there are key differences between automation in e.g.
            | aircraft and what Tesla et al. are failing at,
            | 
            | e.g., how constrained the environment is; what the exposure
            | to anomalous conditions is; and what the opportunity window
            | usually is to turn control back over to a human.
           | 
           | The thing I don't understand is, we have a much more
           | comparable environment in ground travel: federal highways.
           | 
           | Innumerable regressions and bugs and lapses aside, I do not
           | understand why so much effort is being wasted on a problem
           | which IMO obviously requires AGI to reach a threshold of
           | safety we are collectively liable to consider reasonable;
           | when we could be putting the same effort into the (also IMO)
           | much more valuable and impactful case of optimizing automated
           | traffic flow in highway travel.
           | 
            | Not only is the problem domain much more constrained, there
            | is a single regulatory body, which could e.g. support and
            | mandate coordination, federated data sharing, and emergent
            | networking, to promote collective behavior that optimizes
            | flow in ways that limited-information, self-interested human
            | drivers cannot.
           | 
           | The benefits are legion.
           | 
           | And,
           | 
            | I would pay 10x as much to be able to cede control at the
            | start of a 3-hour trip to LA as to be able to get to work
            | each morning. Though for a lot of Americans, that commute is
            | also highway travel.
           | 
           | Not just this, why not start with the low-hanging case of
           | highway travel, and work out from there onto low-density
           | high-speed multi-lane well-maintained roads? Yes that means
           | Tesla techs who live in Dublin don't get it first. Oh well...
           | 
            | IMO there will never be true, safe FSD in areas like my city
            | (SF) absent something much more comparable to AGI. The
            | problem is literally too hard, and the last 20% is not
            | amenable to brute-forcing with semantically vacuous ML.
           | 
           | I just don't get it.
           | 
           | Unless we take Occam's razor, and assume it's just grift and
           | snake oil to drive valuation and investment.
           | 
            | Maybe the quiet part, and the reason for the engineering
            | musical chairs, is just what you'd think: everyone knows this
            | is not happening; but shhhh, the VCs.
        
             | nemothekid wrote:
              | Highway self driving has been around for decades[1] - and
              | Tesla's general release autopilot can already do all that.
              | As I understand it, Tesla can provide an automated
              | experience from ramp-on to ramp-off in production vehicles.
              | I'm not sure how much "better" it can get.
             | 
             | [1] https://www.youtube.com/watch?v=wMl97OsH1FQ
        
               | null_shift wrote:
               | better would be that i can legally go to sleep and let
               | the car drive the highway portion of my trip. then i wake
               | up and complete the final leg of the trip.
               | 
               | as you say, i don't think this is entirely out of reach
               | (even if it required specialized highway infrastructure
               | or car to car communication). seems like lower hanging
               | fruit than trying to get full self driving working on
               | local/city roads.
               | 
               | i would pay a ton for the highway only capability...
        
               | dev_tty01 wrote:
               | >Highway self driving has been around for decades[1]
               | 
                | Driver-assisted highway driving has been around for
                | years... Level 5 driving requires no human attention.
                | Huge difference.
               | 
               | I think what is wanted by many is level 5 on the
               | highways. I want to sleep, watch a movie, whatever. That
               | is much, much "better" than what we have now. Like many
               | others, I would be most interested in full level 5 on
               | highways and me driving in the city. That is also much
               | easier to implement and test. The scope of the problem is
               | greatly reduced. I think Tesla and others are wasting
               | tremendous resources trying to get the in-city stuff
               | working. It makes cool videos, but being able to do other
               | activities during a 5 hour highway drive has much more
               | value (to me at least) than riding inside a high risk
               | video game on the way to work.
               | 
               | (edit) I get that I am misusing the definition of "level
               | 5" a bit, but I think my meaning is clear. Rated for no
               | human attention for the long highway portion of a trip.
        
             | kempbellt wrote:
             | Even lower-level automation for highway driving would be
             | super useful.
             | 
              | I would appreciate a simple "keep-distance-wrt-speed"
              | function for bumper-to-bumper situations, where the worst-
              | case scenario is rear-ending a car at relatively low speed.
              | 
              | I'd happily keep control over steering in this situation
              | and just keep my foot over the brake, though lane-keep
              | assist would probably be handy here as well. A couple of
              | radar sensors, cameras, or lidar sensors would probably be
              | enough for basic functionality.
              | 
              | Disable if the steering wheel is turned more than X degrees
              | - maybe 20 or 25? Disable if speed goes over X - maybe
              | 15 mph? Most cruise controls require a minimum speed (like
              | 25 mph) to activate.
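              | 
              | The core loop really is simple; something like this sketch
              | (gains and limits invented, and a real system would add
              | filtering, sensor fusion, and fault handling):
              | 
              |   MAX_SPEED_MS = 6.7    # ~15 mph engagement ceiling
              |   MAX_STEER_DEG = 20.0  # hand back past this angle
              |   TIME_GAP_S = 1.5      # desired headway
              |   MIN_GAP_M = 2.0       # standstill gap
              | 
              |   def gap_keeper(own_speed, gap_m, steer_deg):
              |       # returns throttle/brake in [-1, 1], or None to
              |       # disengage and hand control back to the driver
              |       if (own_speed > MAX_SPEED_MS
              |               or abs(steer_deg) > MAX_STEER_DEG):
              |           return None  # outside the envelope
              |       desired = MIN_GAP_M + TIME_GAP_S * own_speed
              |       error = gap_m - desired
              |       # simple proportional control, clamped
              |       return max(-1.0, min(1.0, 0.3 * error))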
             | 
             | Trying to do _full_ driving automation, especially in a
             | city like Seattle, is like diving into the ocean to learn
             | how to swim.
             | 
             | As cool as that sounds, I'd trust incremental automation
             | advancements much more.
        
               | kaba0 wrote:
               | Also, what could possibly save the most lives: simply ML
               | the hell out of people's faces to notice when they are
               | getting sleepy. That's almost trivial and should be
               | mandatory in a few years.
               | 
                | A more advanced problem would be safely stopping in case
                | the driver falls asleep/loses consciousness, e.g. on a
                | highway. That short stretch of self-driving is less
                | error-prone than the alternative.
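                | 
                | It really is close to trivial: the usual trick is the
                | "eye aspect ratio" over facial landmarks (Soukupova &
                | Cech, 2016). A rough sketch; the threshold and frame
                | count are just the usual illustrative values:
                | 
                |   import math
                | 
                |   def eye_aspect_ratio(eye):
                |       # eye: six (x, y) landmarks around one eye, as
                |       # in the common 68-point face model; EAR drops
                |       # toward 0 as the eye closes
                |       a = math.dist(eye[1], eye[5])  # vertical
                |       b = math.dist(eye[2], eye[4])  # vertical
                |       c = math.dist(eye[0], eye[3])  # horizontal
                |       return (a + b) / (2.0 * c)
                | 
                |   EAR_THRESHOLD = 0.2  # "eyes closed" cutoff
                |   CLOSED_FRAMES = 48   # ~1.6 s at 30 fps
                | 
                |   def is_drowsy(ear_history):
                |       recent = ear_history[-CLOSED_FRAMES:]
                |       return (len(recent) == CLOSED_FRAMES and
                |               all(e < EAR_THRESHOLD for e in recent))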
        
               | liber8 wrote:
               | _I would appreciate a simple "keep-distance-wrt-speed"
               | function for bumper-to-bumper situations._
               | 
               | This has been widespread for at least a decade. I'm not
                | even aware of a mainstream auto brand that _doesn't_
               | offer adaptive cruise control at this point. Every one
               | I've used works in stop and go traffic.
               | 
               | The other features you want are exactly what Tesla has
               | basically perfected in their autopilot system, and work
               | almost exactly as you describe (not the FSD, just the
               | standard autopilot).
        
               | kempbellt wrote:
               | > This has been widespread for at least a decade.
               | 
               | I can't say that my experience agrees with this. Maybe
               | some higher-end vehicles had it a decade ago, but it
               | seems to be getting more popular over the past 5 years or
                | so. I still don't see it offered on lower-priced vehicles
                | where basic cruise functionality is there, and I doubt it
                | ever will be part of "entry level", considering the tech
                | required.
               | 
               | None of the vehicles my family owns have a "stop-and-go"
               | cruise control function - all newer than 10 years. ACC at
               | higher speeds is available on one, but it will not auto-
               | resume if the vehicle stops.
        
               | sleepybrett wrote:
               | In this same vein cars with automated parallel/back-in-
               | angle parking.
               | 
               | I think an even more 'sensor fusion' approach needs to be
               | adopted. I think the roads need to be 'smart' or at least
               | 'vocal'. Markers of some kind placed in the roadway to
               | hint cars about things like speed limit, lane edge, etc.
               | Anything that would be put on a sign that matters should
               | be broadcast by the road locally.
               | 
               | Combine that with cameras/lidar/whatever for distance
               | keeping and transient obstacle detection. Then network
               | all the cars cooperatively to minimize further the
               | impacts of things like traffic jams or accident re-
               | routing. Perfect zipper merges around debris in the
               | roadway.
               | 
                | Once a road is fully outfitted with the marker system,
                | then and only then would I be comfortable with a 'full
                | autopilot' style system. Start with the freeway/highway
                | system, get the logistics industry on board with special
                | lanes dedicated to them, and it becomes essentially a
                | train where any given car can just detach itself to make
                | its drop-off.
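                | 
                | The broadcast payload could be almost trivially small; a
                | made-up schema just to show the scale of the data (a real
                | standard would need signing, versioning, etc.):
                | 
                |   from dataclasses import dataclass
                | 
                |   @dataclass
                |   class RoadBeacon:
                |       marker_id: int
                |       speed_limit_kph: int
                |       lane_count: int
                |       lane_width_m: float
                |       construction_ahead: bool
                |       merge_direction: str  # "left", "right", "none"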
        
             | liber8 wrote:
             | I'm on my third Tesla. FSD on highways has improved so much
             | in the last 6 years. On my first Tesla, autopilot would
             | regularly try to kill you by running you into a gore point
             | or median (literally once per trip on my usual commute). I
             | now can't even remember the last time I had an issue on the
             | highway.
             | 
             | Anywhere else is basically a parlor trick. Yes, it sorta
             | works, a lot of the time, but you have to monitor it so
             | closely that it isn't really beneficial. As you point out,
              | it's going to take some serious advances (which in all
             | likelihood are 30+ years away) for FSD to reliably work in
             | city centers.
             | 
             | I think the issue you've highlighted is one of governance.
             | There's only so much Tesla can do regarding highways. You
             | really need the government to step in to mandate
             | coordination of the type I think you're envisioning. And
             | the government is pretty much guaranteed to fuck it up and
             | adopt some dumb standard that kills all innovation after
             | about 6 months, so it never actually becomes usable.
             | 
             | I think automakers will eventually figure this out
             | themselves. As you say, there are too many benefits for
             | this not to happen organically. Once vehicles can talk to
             | each other, everything will change.
        
               | ChuckNorris89 wrote:
               | _> On my first Tesla, autopilot would regularly try to
               | kill you by running you into a gore point or median
               | (literally once per trip on my usual commute)_
               | 
               | And people paid money for this privilege?
        
               | liber8 wrote:
               | To be fair, it still felt like magic. My car would drive
               | me 20 miles without me really having to do anything,
               | other than make sure it didn't kill me at an interchange.
               | 
               | And I'm now trying to remember, but I think autopilot was
               | initially free (or at least was included with every car I
               | looked at, so it didn't seem like an extra fee).
                | Autopilot is now standard on all Teslas, but FSD is an
               | extra $10k, which IMO is a joke.
        
               | kaba0 wrote:
               | Humans are ridiculously bad at overseeing something that
               | mostly works. That's why it is insanely more dangerous.
               | 
               | Also, the problem is "easy" for the general case, but the
               | edge cases are almost singularity-requiring. The former
               | is robot vacuum level, the latter is out of our reach for
               | now.
        
               | ChuckNorris89 wrote:
               | I bet it felt magic, but if my car would actively try to
               | kill me, it would go back to the dealer ASAP.
               | 
               | I'm not paying with money and my life to be a
               | corporation's guinea pig.
        
               | aaroninsf wrote:
               | Part of what I don't get so to speak,
               | 
                | is why we haven't seen the feds stepping in via the
               | transportation agency to develop and regulate exactly
               | this, with appropriate attention paid to commercial,
               | personal, and emergency vehicle travel.
               | 
               | The opportunities there appear boundless and the
               | mechanisms for stimulating development equally so...
               | 
               | I really don't get it. Then I think about DiFi and I kind
               | of do.
        
             | ghaff wrote:
             | >Unless we take Occam's razor, and assume it's just grift
             | and snake oil to drive valuation and investment.
             | 
             | I think there's some of that. Some overconfidence because
             | of the advances that have been made. General techno-
             | optimism. And certainly a degree of their jobs depending on
             | a belief.
             | 
             | I know there is a crowd of mostly young urbanites who don't
             | want to own a car and want to be driven around. But I
             | agree. Driving to the grocery store is not a big deal for
             | me. Driving for hours on a highway is a pain. I would
             | totally shell out $10K or more for a full self-driving
             | system even if it only worked on interstates in decent
             | weather.
        
           | hiddencost wrote:
           | These lessons have been chucked out the window by second tier
           | (e.g., GM/Cruise) and third tier (e.g. Tesla and Uber)
           | competitors, who have recognized that the only way they can
           | hope to catch up is by gambling that what happened to Uber
           | won't happen to them.
        
           | rtkwe wrote:
            | Cars are vastly more complex to do navigation for than
            | planes, too, so we need to be even more careful when making
            | auto autos. Plane autopilots basically deal with just the
            | physical mechanics of flying the plane, which, while complex,
            | are quite predictable and modellable. All of the obstacle
            | avoidance and collision avoidance takes place outside the
            | autopilot, through ATC, and the routes are known and for most
            | purposes completely devoid of obstacles.
           | 
           | Cars have a vastly harder job because they're navigating
           | through an environment that is orders of magnitude more
           | complex because there are other moving objects to deal with.
        
             | arcticbull wrote:
              | Sounds more comparable to ATTOL, does it not? Planes in
              | development now are capable of automatic taxi, take-off,
              | and landing.
        
               | rtkwe wrote:
               | They're still in a vastly more controlled environment
               | than cars and moving at much lower speeds as well. If an
               | auto taxiing plane has to avoid another airport vehicle
               | something has gone massively wrong. Judging by this video
               | [0] I'm not sure they're even worrying about collisions
               | and are counting on the combination of pilots and ATC
               | ground controllers to avoid issues while taxiing. It
               | looks like the cameras are entirely focused on line
               | following.
               | 
               | [0] https://www.youtube.com/watch?v=9TIBeso4abU
        
             | toxik wrote:
             | Interesting point of view: autonomous cars as a form of
             | contact-rich manipulation
        
               | rtkwe wrote:
               | I'm not sure what you mean by that. Care to expand?
        
             | ghaff wrote:
             | Which is one reason I come back to thinking that you may
             | see full automation in environments such as limited access
             | highways in good weather but likely not in, say, Manhattan
             | into the indefinite future.
             | 
             | Unexpected things can happen on highways but a lot fewer of
             | them and it's not like humans driving 70 mph are great at
             | avoiding that unexpected deer or erratic driver either.
             | 
             | ADDED: You'd actually think the manufacturers would prefer
             | this from a liability perspective as well. In a busy city,
             | pedestrians and cyclists do crazy stuff all the time (as do
             | drivers) and it's a near certainty that FSD vehicles _will_
              | get into accidents and kill people in ways that aren't
              | really their fault. That sort of thing is less common on a
             | highway.
        
               | rtkwe wrote:
               | Highways are probably the best case scenario for full
               | automation but we've seen scenarios where even that
               | idealized environment has deadly failures in a handful of
               | Tesla crashes.
        
               | ra7 wrote:
               | > Which is one reason I come back to thinking that you
               | may see full automation in environments such as limited
               | access highways in good weather but likely not in, say,
               | Manhattan into the indefinite future.
               | 
               | This is why there are SAE levels of automation and
               | specifically Level 4 is what you're describing. Anyone
               | claiming their system will be Level 5 is flat out lying.
        
               | ghaff wrote:
               | Even just in the US, there are some cities that can be
               | fairly challenging for an experienced human driver,
               | especially one unfamiliar with them. And there are plenty
               | of even paved mountain roads in the West which can be a
               | bit stressful as well. And that's in good weather.
        
               | mkr-hn wrote:
               | This is what I'd be happy with. Something to get me the
               | 20-50 miles between cities (Atlanta<->Winder<->Athens in
               | particular), or through the closed highway loops around
               | them. Driving within them isn't so boring that my focus
               | wanders before I notice it's wandering.
               | 
               | We could just expand MARTA, but the NIMBY crowd won't
               | allow it. People are still hopped up on 1980s
               | fearmongering about the sorts of people who live in
               | cities and don't want them infesting their nice, quiet
                | suburbs that still have the Sheriff posting about huge
                | drug and gun busts.
        
               | ghaff wrote:
               | Public transit isn't a panacea for suburbs/small cities.
               | I'm about a 7 minute drive from the commuter rail into
               | Boston but because of both schedule and time to take a
               | non-express train, it's pretty much impractical to take
               | into the city except for a 9-5 workday schedule,
               | especially if I need to take a subway once I get into
               | town.
               | 
               | For me, it's more the 3-5 hour drive, mostly on highways,
               | to get up to northern New England.
        
           | sam0x17 wrote:
           | To echo this, as someone who has done some work with formal
           | specifications, I have to say it seems like the self-driving
           | car folks are taking a "move fast and break things" approach
           | across the board, which is horrifying.
        
           | Lendal wrote:
            | The mechanism by which those lessons were learned involved
            | many years of tragedy and many fatalities, including famous
            | celebrities dying in plane crashes. Obviously,
           | we do not want to follow that same path, but at the moment
           | that's exactly the path we're on.
           | 
           | The US govt isn't going to do anything until there's a public
           | outcry, and historically there won't be a public outcry until
           | there's a bunch of famous victims to point to.
        
             | wolverine876 wrote:
             | > The US govt isn't going to do anything until ...
             | 
             | I think this attitude is defeatist and absolves us of doing
             | anything. It's a democracy; things happen because citizens
             | act. 'The US government isn't going to do anything' as long
             | as citizens keep saying that to each other.
        
           | Someone wrote:
           | > it seems to me that all the lessons that we have learned
           | have been chucked out the window with self-driving cars.
           | 
           | I think it's unfair to lump all self driving car
           | manufacturers together.
           | 
           | The traditional car companies have been doing research for
           | decades (see for example https://en.wikipedia.org/wiki/VaMP),
            | but have only slowly brought self-driving features to the
            | market, in part because they are aware of the human
            | factors involved. That's why there are decades of research on
           | ways to keep drivers paying attention and/or detecting that
           | they don't.
           | 
           | "Move fast and break things" isn't their way of working.
        
           | dotancohen wrote:
           | Why didn't you post this as a top-level comment? What does
           | this have to do with the post you are replying to?
        
           | darkmarmot wrote:
           | I recently avoided a startup prospect because they were
           | looking to build a car OS that wasn't hard real-time. The
           | very idea that they're trying to develop such things that
           | might intermittently pause to do some garbage collection is
           | freaking terrifying.
        
             | errantspark wrote:
             | That's horrifying on such a deep level. There should be
             | mandatory civil service for programmers, but you just get
             | sent somewhere cold and you gotta write scene demos and
             | motion control software for a year to get your head on
             | straight. :P
        
             | xxs wrote:
              | What does hard real time have to do with garbage
              | collection? You can have concurrent GCs (with
              | [sub]millisecond pauses, or no stop-the-world at all),
              | but you also need a 'hard real-time' OS to begin with.
              | Heck, even opening files is far from real-time.
              | 
              | Then you need non-blocking data structures, not just
              | lock-free ones (which are much easier to develop).
              | Pretty much you need forward-progress guarantees on any
              | data structure you'd use.
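              | 
              | To make the distinction concrete, a minimal Java sketch
              | (whether the second version is truly wait-free depends
              | on the hardware providing fetch-and-add, so treat it as
              | illustrative):
              | 
              |     import java.util.concurrent.atomic.AtomicLong;
              | 
              |     class Counters {
              |         static final AtomicLong c = new AtomicLong();
              | 
              |         // Lock-free: *some* thread always progresses,
              |         // but this one can lose the CAS race forever
              |         // under contention - no per-thread bound.
              |         static long lockFreeIncrement() {
              |             long cur;
              |             do {
              |                 cur = c.get();
              |             } while (!c.compareAndSet(cur, cur + 1));
              |             return cur + 1;
              |         }
              | 
              |         // Wait-free where the JIT emits fetch-and-add
              |         // (e.g. LOCK XADD on x86): bounded steps for
              |         // every caller.
              |         static long waitFreeIncrement() {
              |             return c.incrementAndGet();
              |         }
              |     }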
        
               | Matthias247 wrote:
               | You usually need garbage collection because you are
               | allocating in the first place. And allocating and
               | releasing adds some non-determinism. You apparently don't
                | know exactly how much needs to be allocated - otherwise
                | you wouldn't have opted for the allocations and GC. That
                | non-determinism translates to non-determinism in CPU
                | load, as well as to "I'm not sure whether my program can
                | fulfill the necessary timing constraints anymore".
               | 
               | So I kind of would agree that producing any garbage
               | during runtime makes it much harder to prove that a
               | program can fulfill hard realtime guarantees.
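                | 
                | The standard dodge in GC'd languages is to do all the
                | allocation up front and recycle objects, so the steady
                | state produces no garbage at all. A minimal sketch
                | (names invented):
                | 
                |     // All allocation happens at startup; the hot
                |     // path never calls 'new', so the GC is idle.
                |     final class TrackPool {
                |         static final class Track {
                |             double x, y, vx, vy;   // reused fields
                |         }
                | 
                |         private final Track[] pool;
                |         private int next;
                | 
                |         TrackPool(int capacity) {
                |             pool = new Track[capacity];
                |             for (int i = 0; i < capacity; i++)
                |                 pool[i] = new Track();
                |         }
                | 
                |         Track acquire() {  // bounded, allocation-free
                |             if (next == pool.length)
                |                 throw new IllegalStateException();
                |             return pool[next++];
                |         }
                | 
                |         void releaseAll() { next = 0; }  // per frame
                |     }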
        
             | spywaregorilla wrote:
             | That doesn't seem especially bad. The car could, for
             | instance, predict whether or not it was safe to do garbage
             | collection. Humans do the same when they decide to look
             | away while driving.
        
               | toxik wrote:
               | Heck, some humans even do literal garbage collection
               | while driving!
        
               | giantrobot wrote:
               | A human that looks around while driving is still taking
               | in a lot of real-time input from the environment.
               | Assuming they're not a terrible driver they examined the
               | upcoming environment and made a prediction that at their
               | current speed there were no turns, obstacles, or apparent
               | dangers before looking away. If they didn't fully turn
               | their head they can quickly bring their eyes back to the
               | road in the middle of their task to update their
               | situation.
               | 
               | If a GC has to pause the world to do its job there's none
               | of that background processing happening while it's
               | "looking away".
        
             | silisili wrote:
             | To play devil's advocate...and because I'm just not that
              | educated in the space, is this really a huge deal? If
              | we're talking a couple ms of delay at a time, isn't that
              | still vastly superior to what a human could achieve?
        
               | gameswithgo wrote:
                | GC pauses can be hundreds of milliseconds. You could
                | perhaps use a particular GC scheme that guarantees you
                | never have more than a couple milliseconds of pause, but
                | then you have lots of pauses. That might have unintended
                | consequences as well. I'm also not sure that such GCs,
                | like Go's, can really mathematically guarantee a
                | maximum pause time.
        
               | quotemstr wrote:
                | Hard real-time garbage collectors have existed for
                | decades. Of course you can mathematically guarantee a
                | maximum pause time given a cap on the allocation rate.
                | What's stopping you?
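                | 
                | The usual trick is work-based pacing: every allocation
                | funds a small, bounded slice of collection work, so no
                | single pause can exceed that slice. A toy model of the
                | idea (not any real collector's code):
                | 
                |     import java.util.ArrayDeque;
                | 
                |     final class PacedHeap {
                |         // Pause bound: at most this many O(1) GC
                |         // steps run per allocation.
                |         private static final int WORK_PER_ALLOC = 4;
                |         private final ArrayDeque<Runnable> gcWork =
                |                 new ArrayDeque<>();
                | 
                |         byte[] allocate(int bytes) {
                |             for (int i = 0; i < WORK_PER_ALLOC
                |                     && !gcWork.isEmpty(); i++) {
                |                 gcWork.poll().run(); // one scan step
                |             }
                |             byte[] obj = new byte[bytes];
                |             gcWork.add(() -> {
                |                 /* trace step for obj goes here */
                |             });
                |             return obj;
                |         }
                |     }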
        
               | xxs wrote:
                | You don't even need a cap on the allocation rate: the
                | GC can throttle during allocation without fully
                | blocking, 'gracefully' degrading allocation itself.
                | It'd be limited only by CPU/memory latency and
                | throughput.
        
               | xxs wrote:
               | Fully concurrent GCs exist with read-barriers and no
               | stop-the-world phase. The issues with "hard" real-time
                | are not GC-related.
        
               | AlotOfReading wrote:
               | If you can set a guaranteed maximum on the delays
               | (regardless of what those limits are), you're hard real-
               | time by definition. The horror is that they weren't
               | building a system that could support those guarantees.
        
               | silisili wrote:
               | I see, thanks.
               | 
                | What if, say, a system is written against an
                | indeterminately timed GC like Azul's or Go's, but the
                | code is written in a way that shows GC times never
                | exceed X, whether by theory or by stress testing. Is
                | this still seen as 'unguaranteed'?
        
               | kaba0 wrote:
                | I think that would at most be soft real-time.
        
               | AlotOfReading wrote:
               | It depends on your system model (e.g. do you consider
               | memory corruption to be a valid threat), but it could be.
               | In practice, actually establishing that proof purely in
               | code is almost always impractical or doesn't address the
               | full problem space. You use hardware to help out and
               | limit the amount of code you actually have to validate.
        
               | sam0x17 wrote:
                | If Ernie the intern decides to use a hashmap to store a
                | class of interesting objects as we drive, you could end
                | up with seconds of garbage collection + resizing if it
                | grows big enough.
        
           | _moof wrote:
           | Hi, aerospace software engineer and flight instructor here. I
           | think you get shot down because the problems just aren't
           | comparable. While I agree that there may be some
           | philosophical transfer from aircraft automation, the
           | environments are so radically different that it's difficult
           | to imagine any substantial technological transfer.
           | 
           | Aircraft operate in an extremely controlled environment that
           | is almost embarrassingly simple from an automation
           | perspective. Almost everything is a straight line and the
           | algorithms are intro control theory stuff. Lateral nav gets
           | no more complicated than the difference between a great
           | circle and a rhumb line.
           | 
           | The collision avoidance systems are cooperative and punt
           | altogether on anything that isn't the ground or another
           | transponder-equipped airplane. The software amounts to little
           | more than "extract reported altitude from transponder reply,
           | if abs(other altitude - my altitude) < threshold, warn pilot
           | and/or set vertical speed." It's a very long way from a
           | machine learning system that has to identify literally any
           | object in a scene filled with potentially thousands of
           | targets. There's very little to worry about running into in
           | the sky, and minimum safe altitudes are already mapped out
           | for pretty much the entire world.
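            | 
            | In code, that advisory check really is about this small (a
            | sketch; the threshold value is invented for illustration):
            | 
            |     final class AltitudeAdvisor {
            |         static final int THRESHOLD_FT = 700;
            | 
            |         // Altitudes as extracted from transponder replies.
            |         static boolean shouldWarn(int mineFt, int theirsFt) {
            |             return Math.abs(theirsFt - mineFt) < THRESHOLD_FT;
            |         }
            |     }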
           | 
           | Any remaining risk is managed by centralized control and
           | certification, which just isn't going to happen for cars. We
           | aren't going to live in a world where every street has to be
           | (the equivalent of) an FAA certified airport with controls to
           | remove any uncertainty about what the vehicle will encounter
           | when it gets there. Nor are we going to create a centralized
           | traffic control system that provides guarantees you won't
           | collide with other vehicles on a predetermined route.
           | 
           | So it's just a completely different world with completely
           | different requirements. Are there things the aerospace world
           | could teach other fields? Yeah, absolutely. Aerospace is
           | pretty darn good at quality control. But the applications
           | themselves are worlds apart.
        
             | jet_32951 wrote:
              | Also, it is regrettable that cars aren't held to anything
              | like the FAA's required electronics, software, or
              | integration processes. When I read that a Jeep's braking
              | system was compromised through its entertainment system,
              | it was apparent that the aircraft lessons had not been
              | taken aboard by the auto industry.
        
             | ChicagoBoy11 wrote:
             | I'm actually in complete agreement! What sticks out to me
             | is your assessment that the flight environment is
             | "embarrassingly simple from an automation perspective",
              | which I agree with (as compared to cars). And yet
              | despite that simplicity and decades at it, we still run
              | it with an incredibly robust infrastructure to have a
              | human oversee the tech. We have super robust procedures
              | for checking and cross-checking the automation, defined
              | minimums and tolerances for when the automation needs to
              | cease operating the aircraft, training solely focused on
              | operating the automation, etc. But with cars, we are
              | somehow super comfortable with the car severely altering
              | its behavior in a split second, super poor driver
              | insight or feedback on the automation, no training at
              | all, and a human behind the wheel who in every marketing
              | material known to man has been encouraged to trust the
              | system far more than the tech (or the law) would ever
              | prudently allow.
             | 
             | I'm with you that they are super different, and that the
             | auto case is likely much, much harder. But I see that and
             | can't help but think that the path we should be following
             | here is one with a much greater and healthy skepticism (and
             | far greater human agency) in this automation journey than
             | we are currently thinking is needed.
        
               | _moof wrote:
               | I agree completely. It's a very difficult problem from a
               | technical perspective, and from a systems perspective,
               | we've got untrained operators who can't even stay off
               | their phones in a _non-_ self-driving car. (Not high-
               | horsing it here; I'm as guilty of this as anyone.)
               | Frankly I'll be amazed if anyone can get this to actually
               | work without significant changes to the total system.
               | Right now self-driving car folks are working in isolation
               | - they're only working on the car - and I just don't
               | think it's going to happen until everyone else in the
               | system gets involved.
        
               | yunohn wrote:
               | > we still run it with an incredible robust
               | infrastructure to have a human oversee the tech
               | 
               | Airplanes are responsible for 200-300+ lives at a time,
               | so it's quite incomparable to road vehicles. Of course it
               | makes sense to have human oversight in case something
               | goes wrong.
               | 
               | On the flip side, the average car driver is not very
               | skilled nor equipped to deal with most surprises - hence
               | the ever present danger of road traffic.
               | 
               | I'm not sure why AI drivers are held to such insanely
               | high standards.
        
               | joe_the_user wrote:
                | The claim that self-driving cars are being held to a
                | higher standard than human drivers is simply false.
                | Self-driving cars so far have a far worse record than
                | the average human driver, which is remarkably good.
                | Human accidents are measured in terms of "per _million
                | miles driven_". Self-driving cars have driven a tiny
                | total distance compared to all the miles human drivers
                | have driven.
               | 
               | See: https://en.wikipedia.org/wiki/Motor_vehicle_fatality
               | _rate_in...
        
             | kryogen1c wrote:
             | > We aren't going to live in a world where every street has
             | to be (the equivalent of) an FAA certified airport with
             | controls to remove any uncertainty
             | 
              | Actually, I've been thinking this is exactly the win
              | self-driving vehicles have been looking for: upgrade,
              | certify, and maintain cross-country interstates like
              | I-70 for fully autonomous, no-driver vehicles - freight,
              | mail, hell, even passengers. Maybe that means one lane
              | with high-vis paint and predefined gas station stops
              | and/or some other requirements. I bet the government
              | could even subsidize it with an infrastructure spending
              | bill, politics notwithstanding.
              | 
              | There can't possibly be a problem with _predefined_
              | highways that is harder to solve than neighborhood and
              | city driving with unknown configurations and changing
              | obstacles. I feel like everyone's so rabid for fully
              | autonomous Uber that the easier wins and use cases are
              | being overlooked.
        
               | joe_the_user wrote:
               | Well, to control things, you'd have to have a highway
               | that's only for self-driving vehicles. And then you'd
               | need to get them there - with what, human drivers?
               | (losing the cost savings) Maybe you could use this for
               | self-driving trucks between freight depots.
               | 
                | The problem with this is: why not just use trains at
                | that point? Trains are already an economical solution
                | for point-to-point transportation.
        
           | dexen wrote:
            | The car-FSD / aircraft-autopilot analogy is deeply
            | flawed, and far from instructive. Let's consider some
            | details:
           | 
            | What an aircraft autopilot does is follow a pre-planned
            | route to a T, with any changes being input by humans. The
            | autopilot doesn't do its own detection of obstacles, nor
            | of route markings; it follows the flight plan and reacts
            | to the condition of the aircraft. Even when executing an
            | automatic take-off or landing, the autopilot doesn't try
            | to detect other vehicles or obstacles - it just executes
            | the plan, safe in the knowledge that humans are actively
            | monitoring for safety. There are always at least two
            | humans in the loop: the pilot in command, who prepared
            | and input the original flight plan and also inputs any
            | route changes when needed (collision and weather
            | avoidance), and an air traffic controller, who
            | continuously observes the flight paths of several
            | aircraft and is responsible for ensuring safe separation
            | between aircraft in his zone of responsibility. Beyond
            | that, the controller has equal influence on all aircraft
            | in his zone, and in case one does something unexpected,
            | he can equally well redirect that one or any other one in
            | the vicinity. Lastly, due to much less dense traffic, the
            | separation between aircraft is significantly larger than
            | between cars [1], providing time for pilots to perform
            | evasive maneuvers - and that's in 3D space, where there
            | are effectively two axes to evade along.
           | 
            | Conversely, with car FSD the system is tasked both with
            | following the route and with continuously updating it
            | according to markings, traffic, obstacles, and any
            | contingencies encountered. This is a significant
            | difference from the above - the law and the technology
            | demand _one_ human in the loop, and that human can really
            | only influence his own car. Even worse, due to the
            | density of traffic, the separation between cars is quite
            | often on the order of seconds of travel time, making
            | hand-over to the driver a much more rapid process.
           | 
           | I am moderately hopeful for FSD "getting there" eventually,
            | but at the same time I'm wary of narratives making
            | unwarranted parallels between FSD and aircraft autopilot.
           | 
           | [1] https://www.airservicesaustralia.com/about-us/our-
           | services/h...
        
         | taneq wrote:
         | On the one hand I applaud Tesla for being so open about what
         | their system is thinking with their visualisations. That could
         | be interpreted to show a deep belief in their system's
         | capabilities.
         | 
         | On the other hand, it's always terrified me how jittery any
         | version of AutoPilot's perception of the world is. Would you
         | let your car be driven by someone with zero object permanence,
         | 10/20 vision and only a vague idea of the existence or
         | properties of any object other than lane markings and number
         | plates?
        
           | wolverine876 wrote:
           | > I applaud Tesla for being so open about what their system
           | is thinking with their visualisations
           | 
           | How do you know that what is on the screen matches what the
           | system is 'thinking'? What reason or obligation would Tesla
           | have to engineer accurate visualizations for the real thing
            | and show them to you (remember the definition of
            | _accuracy_: correct, complete, consistent)? Would Tesla
            | show its
           | customers something that would make them uncomfortable or
           | show Tesla in a bad light, or even question the excitement
           | Tesla hopes to generate?
           | 
           | I think it's likely that the display is marketing, not
           | engineering.
        
           | bhauer wrote:
           | Presumably you are aware that the visualization you see in
           | any retail car is old software, and several iterations behind
           | what the linked video is about (FSD Beta v10). Plus the
           | visualization is quite a bit different (and in many ways
           | simplified) versus what's used for the actual piloting in
           | retail vehicles.
        
           | macNchz wrote:
           | I test drove a Model 3 yesterday and this was something that
           | really jumped out at me. I didn't try any of the automatic
           | driving features, but driving around Brooklyn watching the
           | way the car was perceiving the world around it did not
           | inspire confidence at all.
           | 
            | Tesla's over-the-top marketing and hype seems at once to
            | have been a key ingredient in their success and also so
            | frustrating, because their product is genuinely awesome.
            | I've long been kind of a skeptic, but I could not have
            | been more
           | impressed with the test drive I took. It had me ready to buy
           | in about 90 seconds. I wish there were actually-competitive
           | competitors with similar range and power that aren't weird
           | looking SUVs, from brands that don't lean into pure hype.
        
             | zaptrem wrote:
             | I think the guy above meant to reply to you:
             | 
             | > Presumably you are aware that the visualization you see
             | in any retail car is old software, and several iterations
             | behind what the linked video is about (FSD Beta v10). Plus
             | the visualization is quite a bit different (and in many
             | ways simplified) versus what's used for the actual piloting
             | in retail vehicles.
        
           | epistasis wrote:
            | The extremely poor performance of the visualizations is
            | disturbing to me. It's also completely wasted space on the
           | display that I wish were devoted to driving directions
           | instead of telling me what I can already see outside of the
           | car.
        
             | taneq wrote:
             | I think whether it's wasted space or not depends entirely
             | on the reliability of the system. For a many-9's system
             | which for all intents and purposes isn't going to fail
             | dangerously, I agree, the visualisation is just peacocking.
             | For a beta-quality system, knowing what the car is thinking
              | is important driver feedback that gives additional
              | warning before it does something really stupid.
        
         | tommymachine wrote:
         | Seems like the hairpin portion may have been an animation
         | between the two paths, to smooth the UX? Also, it would give
         | the driver a chance to correct the motion, as happened in the
         | video.
        
         | TrainedMonkey wrote:
         | Maybe it thought it was a roundabout. Aside from getting a
         | scare, pedestrian risk was likely quite low - there is a
         | separate emergency stop circuit.
        
         | belter wrote:
         | The problem shown here:
         | https://twitter.com/robinivski/status/1438580718813261833/ph...
         | 
          | is the reason why applying Silicon Valley hubris to self-
          | driving-type problems is not going to work.
         | 
          | It's not an exaggeration to say that in 2050... we'll still
          | be beta testing FSD on Mars roads :-)
        
         | Philip-J-Fry wrote:
         | I've watched quite a few FSD videos and in almost every single
         | one the car barely knows what it wants. The route jumps all
          | over the place. I'm pretty sure it's just the nature of
          | their camera-based system and the noisy data it can
          | generate.
         | 
         | The sat nav didn't update the route to go that way. The FSD
         | system decided to. It probably completely lost track of the
         | road ahead because of the unmarked section of the road and just
         | locked on to the nearest road it could find, which was that
         | right turn.
         | 
          | I've seen videos where it crosses an unmarked stretch of
          | road and then wants to go into the wrong lane on the other
          | side because it saw that lane first. It seems like it will
          | just panic-search for a road to lock onto, because the
          | greatest contributor to their navigation algorithm is just
          | the white lines it can follow.
        
           | cptskippy wrote:
            | If you watch the Tesla AI Day presentation, they explain a
            | separation of duties that accounts for what's happening.
           | 
           | The actual routing comes from something like Google Maps that
           | is constantly evaluating driving conditions and finding the
           | optimal route to the destination. It does this based on the
           | vehicle's precise location but irrespective of the vehicle's
           | velocity or time to next turn.
           | 
            | The actual AI driving the car is trying to find a path
            | along a route that can change suddenly and without
            | warning. It's like when your passenger is navigating and
            | says "Turn here now!" instead of giving you advance
            | notice.
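            | 
            | Purely as an illustration of that split (invented names,
            | not Tesla's actual code): the router updates on its own
            | schedule, and the planner just re-reads the latest route
            | every cycle, so a change can land with zero notice.
            | 
            |     import java.util.List;
            | 
            |     record Waypoint(double lat, double lon) {}
            | 
            |     interface Router {
            |         // May return a different route on every call.
            |         List<Waypoint> currentRoute();
            |     }
            | 
            |     final class Planner {
            |         private final Router router;
            |         Planner(Router router) { this.router = router; }
            | 
            |         // Called every planning cycle; gets no advance
            |         // warning that the route has changed.
            |         Waypoint nextTarget(Waypoint pos) {
            |             List<Waypoint> r = router.currentRoute();
            |             return r.isEmpty() ? pos : r.get(0);
            |         }
            |     }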
        
             | mike_d wrote:
             | But the navigation guidance didn't change in this case.
             | 
             | If such a trivial and obvious edge case as navigation
             | changing during the route isn't handled, it just shows how
             | hopelessly behind Tesla is.
        
           | angelzen wrote:
            | That could be selection bias. The [edit, was '99%'] vast
            | majority of the time the car does the boring thing, and
            | boring isn't click-worthy. Have you driven a Tesla for a
            | reasonably long period of time?
           | 
           | There is a bigger lesson in there: click-driven video
           | selection creates a very warped view of the world. The video
           | stream is dominated by 'interesting' events, ignoring the
           | boringly mundane that overwhelmingly dominates real life. A
           | few recent panics come to mind.
        
             | lanstin wrote:
             | I have an intersection near where I live where the Tesla
             | cannot go thru in the right lane without wanting to end up
             | in the left lane past the intersection (I guess it's a bit
              | of a twisty local road). At first, it would be going in
              | a straight line, then when it hit the moment of
              | confusion it would snap into the left lane very quickly,
              | in 200 ms or something. I never tried it with a car in
              | that spot, fortunately. After a nav update, it now
              | panics and drops
             | out of auto-pilot there and has you steer into the correct
             | lane. Nothing to do with poor visibility or anything, just
             | a smooth input that neatly divides its "where is the
             | straight line of this lane" ML model, or whatever.
             | 
             | It's actually fascinating to watch - it just clearly has no
              | semantics of "I'm in a lane, and the lane will keep
              | going forward, and if the paint fades for a bit or I'm
              | going through a big intersection, the lane and the line
              | of the lane are still there."
             | 
              | It also doesn't seem to have any very long view of the
              | road. I got a Tesla at the same time as I was teaching
              | my boys to drive, and with them I'm always emphasizing
              | when far-away things happen that indicate you have
              | increased uncertainty about what's going on down the
              | road. (Why did that car 3 cars ahead brake? Is that car
              | edging toward the road going to cut in front? Is there a
              | slowdown past the hill?) The Tesla has quick braking
              | reflexes but no sign of explicitly reasoning about
              | semantic-layer uncertainty.
        
               | antattack wrote:
                | FSD beta does have a mechanism for object permanence,
                | as explained on Tesla AI Day.
        
             | margalabargala wrote:
             | 99% perfect is not good enough, it's horrifyingly bad for a
             | car on public roads.
             | 
             | When I drive, I don't spend 1 minute plowing into
             | pedestrians for every 1 hour and 39 minutes I spend driving
             | normally.
             | 
             | If a FSD Tesla spends just 99% of its time doing boring,
             | non-click-worthy things, that is itself interesting in how
             | dangerous it is.
             | 
             | To your point, I'm definitely interested in knowing how
             | many minutes of boring driving tend to elapse between these
             | events. The quantity of these sorts of videos that have
             | been publicized recently gives me the impression that one
             | of these cars would not be able to spend 100 minutes of
             | intervention-free driving around a complex urban
             | environment with pedestrians and construction without
             | probable damage to either a human or an inanimate object.
        
               | angelzen wrote:
                | 99% is a very rough colloquial estimate meaning 'the
                | vast majority of the time', to drive the point home.
                | It could well be 99.999999%. What really matters is how
                | it compares with human performance, and I don't have
                | the data to do that comparison. The only ones who can
                | make the comparison are Tesla, modulo believing data
                | from a megacorp in the wake of the VW scandal.
        
               | flavius29663 wrote:
                | FYI, it's worth doing the arithmetic on that number.
                | For a person driving 2 hours a day, 99.999999% works
                | out to well under a second per year - but a mere 99.9%
                | would be around 44 minutes a year of the machine
                | actively trying to kill you or others on public roads,
                | and nobody has shown which figure FSD actually hits.
               | 
               | FSD should not be allowed on the road, or if it is it
               | should be labeled as what it really is: lane assist.
        
               | angelzen wrote:
               | I'm not 'quoting' any numbers. I don't own a Tesla. I
                | don't trust lane-assist technology; the probability of
                | catastrophic failure is much larger than even with
                | dynamic cruise control. I'll steer the wheel, thank you
               | very much. I'm not a techno-optimist, rather the
               | contrary. I would like independent verification of the
               | safety claims Tesla makes, or any other claims made by
               | vendors of safety-critical devices.
               | 
               | What I am saying is that selection bias has exploded in
               | the age of viral videos, and this phenomenon doesn't
                | receive anywhere near the attention it deserves. We
                | can't make sound judgments based on online videos; we
                | need quality data.
        
               | itsoktocry wrote:
               | > _Could well be 99.999999%_
               | 
               | That's up to you (or Tesla) to prove, isn't it? Taking
               | your (or Tesla's) word that it's good most of the time is
               | utterly meaningless.
        
             | kaba0 wrote:
              | 1% is ridiculously high for something that endangers the
              | people both inside and outside the car.
        
             | ddoolin wrote:
             | Yes, for 4 years I did, and what they're saying is
             | absolutely true. It desperately tries to lock on to
             | something when it loses whatever it's focused on. Diligent
             | owners just learn those scenarios so you know when they're
             | coming. Others may not be so lucky.
             | 
              | For example, short, wavy hills in the road would often
              | crest _just_ high enough that once it couldn't see over
              | to the other side, it would immediately start veering
              | into the oncoming lane. I have no idea why it happened,
              | but it did, and it still wasn't fixed when I turned in
              | my car in 2019.
              | I drove on these roads constantly, so I learned to just
              | turn off AP around them, and it helped that traffic on
              | the other side was sparse - but if those things weren't
              | true, I'd only have a literal second to respond.
             | 
              | EDIT: IMO the best thing it could do in that scenario is
              | just continue, for some length of time, on whatever
              | track it was on before its focus was lost. It "sees" these
             | lines it's following go off to the right/left (such as when
             | you crest a hill, visually the lines curve, or when the
             | lines disappear into a R/L turn) but only in the split
             | second before they disappear. Maybe that idea doesn't fit
             | into their model but that was always my thought about it.
        
         | joakleaf wrote:
          | As I understood the Tesla AI presentations, the path is
          | determined using a Monte Carlo beam search, which looks for
          | a feasible path while optimizing an objective that includes
          | minimizing sideways g-forces and keeping the path's
          | derivatives low (a smooth path).
          | 
          | From the videos I have seen, this often fails (in that the
          | path doesn't go straight even when it could). Knowing a bit
          | about randomized metaheuristics myself, I am not surprised.
          | 
          | I think they need to perform some post-processing on these
          | paths (a low-iteration local search).
          | 
          | I think they should also start with a guess (like "just go
          | straight here" or "do a standard turn"), and then check if
          | the guess is OK. I think that could help with a lot of the
          | problems they have.
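          | 
          | A toy of what I mean by "seed a sane guess, then refine
          | locally" - a path here is just a heading per timestep, and
          | the cost is curvature (a proxy for sideways g) plus missing
          | the goal heading. Obviously nothing like Tesla's real
          | planner:
          | 
          |     import java.util.Random;
          | 
          |     final class PathSearch {
          |         static double cost(double[] h, double goal) {
          |             double c = 0;
          |             for (int i = 1; i < h.length; i++) {
          |                 double turn = h[i] - h[i - 1];
          |                 c += turn * turn;         // smoothness term
          |             }
          |             double miss = h[h.length - 1] - goal;
          |             return c + 10 * miss * miss;  // reach the goal
          |         }
          | 
          |         static double[] plan(int steps, double goal,
          |                              int iters, Random rng) {
          |             // Seed: a constant-rate "standard turn".
          |             double[] best = new double[steps];
          |             for (int i = 0; i < steps; i++)
          |                 best[i] = goal * (i + 1) / steps;
          |             double bestCost = cost(best, goal);
          | 
          |             // Low-iteration local search around the seed.
          |             for (int k = 0; k < iters; k++) {
          |                 double[] cand = best.clone();
          |                 cand[rng.nextInt(steps)] +=
          |                         0.1 * rng.nextGaussian();
          |                 double c = cost(cand, goal);
          |                 if (c < bestCost) {
          |                     bestCost = c;
          |                     best = cand;
          |                 }
          |             }
          |             return best;
          |         }
          |     }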
        
           | joakleaf wrote:
            | For anyone interested, the Monte Carlo tree search used
            | for the planning (the one the tentacle visualizes) is
            | described here (from the AI Day video, at 1h21m50s):
           | 
           | https://youtu.be/j0z4FweCy4M?t=4910
        
         | notJim wrote:
         | > I am astounded that software capable of these outputs is
         | allowed on the roads.
         | 
         | Well, great news for you then! Elon Musk just announced they're
         | adding a button to allow a lot more people to get access to
         | this beta software.
         | https://twitter.com/elonmusk/status/1438751064765906945
        
         | knicholes wrote:
         | Maybe their reinforcement learning algorithm allows for a bit
         | of exploration, and that was one of the very unlikely actions
         | for it to take.
        
           | josefresco wrote:
           | Google Maps will update (prompt first) a route in progress
           | depending on traffic. If you don't notice right away it can
           | be very surprising! At least it's not actually driving the
           | car.
        
           | ohgodplsno wrote:
           | You do not run reinforcement learning algorithms with a two
           | ton car on the road, unless you are an absolute psychopath.
        
             | zaptrem wrote:
             | Or your name is Wayve https://wayve.ai/
        
         | sh1mmer wrote:
         | My Audi e-tron has this habit of switching the adaptive cruise
         | control to the on-ramp speed limit even when I'm in the middle
         | of the freeway.
         | 
          | It's something I've learned to deal with, but the sudden
          | attempt to brake from 70 to 55 is pretty bad, especially as
          | it's unexpected for the other drivers around you.
         | 
          | While I'm sure Audi is much worse than Tesla at updating its
          | software to fix known issues, I find myself skeptical that
          | the mix of hacks cars use to implement these features scales
          | well, especially in construction zones. Hence I'm pretty
          | content with radar-based cruise control and some basic lane
          | maintenance, and then doing the rest of the driving myself.
         | 
         | I can imagine if my car were slightly better at some of these
         | things I'd be a significantly worse safety driver as I'd start
         | to be lulled into a false sense of security.
        
         | ggrrhh_ta wrote:
         | There is a "no right turn" sign just before it decides to turn
         | right. Could that have anything to do with it?
        
           | Imnimo wrote:
           | There's also a "One way ->" sign pointing to the right, maybe
           | it thinks it's only allowed to go that way?
        
             | ggrrhh_ta wrote:
             | I see.
        
             | CyanBird wrote:
              | There are actually two of these, one beside the traffic
              | lights and the other one on the traffic light post where
              | the people were crossing the street.
              | 
              | I'd say that yeah, it seems the car detected these signs
              | and read them as "you can only go that way" even though
              | the street was open.
        
           | notJim wrote:
           | That sign only applies to the left lane. It's to stop people
           | from turning right from the left lane.
        
         | Arnavion wrote:
         | Computer-vision-based self-driving is a mistake. Self-driving
         | cars should drive on their own roads with no non-self-driving
         | vehicles. Other vehicles, traffic lights, road hazards, route
         | changes, etc should be signals that are directly sent to the
         | vehicles rather than the vehicles relying on seeing and
         | identifying cars and traffic lights and road signs.
         | 
         | We know how to make computers navigate maps perfectly. We don't
         | know how to make computers see perfectly. The other
         | uncontrollable humans on the map just make it worse.
         | 
          | Yes, this won't work in existing cities. That's fine. That's
         | the point. An inscrutable black box "ML model" that can be
         | fooled by things that humans aren't fooled by should not be in
         | a situation where it can harm humans. I as a pedestrian did not
         | consent to being in danger of being run over by an algorithm
         | that nobody understands and can only tweak. Build new cities
         | that are self-driving first or even self-driving only, where
         | signaling is set up as I wrote in the first paragraph so that a
         | car reliably knows where to drive and how to not drive over
         | people. Take the opportunity to fix the other problems that old
          | cities have, like narrow roads and not enough walking paths.
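          | 
          | As a sketch of the kind of signaling I mean (an invented
          | message shape, nothing standardized):
          | 
          |     // Broadcast by infrastructure, consumed by vehicles.
          |     // No vision involved: the light *tells* you its state.
          |     record SignalState(int intersectionId,
          |                        boolean northSouthGreen,
          |                        long validUntilEpochMs) {}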
        
         | sydd wrote:
         | > what was the car even trying to do?
         | 
          | I guess it thought that the road ahead was too narrow to
          | continue. But if that was the case, why didn't it simply
          | stop?
        
         | ncr100 wrote:
         | I wonder if it's an upgrade bug related to calibration?
         | 
         | There was a video by one Tesla user over the past week that
         | talked about how their Tesla v10 FSD software would attempt to
         | turn into driveways, repeatedly, when driving down a straight
         | road.
         | 
         | The user did a recalibration of their cameras, rebooted the
         | car, and the problem went away.
         | 
         | https://www.youtube.com/watch?v=A5sbargRd3g
        
       | GDC7 wrote:
       | Can anybody update on who Tesla fans perceive as their mortal
       | enemy right now?
       | 
       | First it was big oil, then it was shortsellers, but I found those
       | narratives sort of died out.
       | 
       | Who are they beefing with right now?
        
         | [deleted]
        
         | cool_dude85 wrote:
         | Pedestrians.
        
         | jeffbee wrote:
         | The NTSB.
        
       | klik99 wrote:
       | On the video - "almost hitting" is a VERY misleading way to
       | phrase it, I see worse driving on a daily basis from human
       | beings. However, I'm glad for the high visibility and scrutiny
       | because we should hold FSD to a higher standard and the pressure
       | will create a better product. (I drive one and use self-driving)
       | 
       | On the DMCA takedown - that's pretty sketchy.
        
         | sandos wrote:
          | I agree this was not even a close call, and maybe it's all
          | due to the driver interrupting the autopilot, but this video
          | still shows too many problems with FSD.
        
       ___________________________________________________________________
       (page generated 2021-09-17 23:02 UTC)