[HN Gopher] US probes Tesla's Full Self-Driving software after f...
       ___________________________________________________________________
        
       US probes Tesla's Full Self-Driving software after fatal crash
        
       Author : jjulius
       Score  : 364 points
       Date   : 2024-10-18 16:01 UTC (2 days ago)
        
 (HTM) web link (www.reuters.com)
 (TXT) w3m dump (www.reuters.com)
        
       | jqpabc123 wrote:
       | By now, most people have probably heard that Tesla's attempt at
       | "Full Self Driving" is really anything but --- after a decade of
        | promises. The vehicle owner's manual spells this out.
       | 
       | As I understand it, the contentious issue is the fact that unlike
       | most others, their attempt works mostly from visual feedback.
       | 
       | In low visibility situations, their FSD has limited feedback and
       | is essentially driving blind.
       | 
       | It appears that Musk may be seeking a political solution to this
       | technical problem.
        
         | whamlastxmas wrote:
         | It's really weird how much you comment about FSD being fake. My
         | Tesla drives me 10+ miles daily and the only time I touch any
         | controls is pulling in and out of my garage. Literally daily. I
          | maybe disengage once every couple of days just to be on the
          | safe side in uncertain situations, but I'm sure it'd likely do
          | fine there too.
         | 
         | FSD works. It drives itself fine 99.99% of the time. It is
         | better than most human drivers. I don't know how you keep
         | claiming it doesn't or doesn't exist.
        
           | jqpabc123 wrote:
           | So you agree with Musk, the main problem with FSD is
           | political?
           | 
           |  _Tesla says on its website its "Full Self-Driving" software
           | in on-road vehicles requires active driver supervision and
           | does not make vehicles autonomous._
           | 
           | https://www.reuters.com/business/autos-
           | transportation/nhtsa-...
        
           | sottol wrote:
            | The claim was about _full_ self-driving being anything but,
            | i.e. not _fully_ self-driving, not about it being completely
            | fake. Disengaging every 10-110 miles is just not "full",
            | it's partial.
           | 
            | And then the GP went into detail about the specific
            | situations in which FSD is especially problematic.
        
           | peutetre wrote:
           | The problem is Tesla and Musk have been lying about full
           | self-driving for years. They have made specific claims of
           | full autonomy with specific timelines and it's been a lie
           | every time: https://motherfrunker.ca/fsd/
           | 
           | In 2016 a video purporting to show full self-driving with the
           | driver there purely "for legal reasons" was staged and faked:
           | https://www.reuters.com/technology/tesla-video-promoting-
           | sel...
           | 
           | In 2016 Tesla said that "as of today, all Tesla vehicles
           | produced in our factory - including Model 3 - will have the
           | hardware needed for full self-driving capability at a safety
           | level substantially greater than that of a human driver."
           | That was a lie: https://electrek.co/2024/08/24/tesla-deletes-
           | its-blog-post-s...
           | 
           | Musk claimed there would be 1 million Tesla robotaxis on the
           | road in 2020. That was a lie:
           | https://www.thedrive.com/news/38129/elon-musk-
           | promised-1-mil...
           | 
           | Tesla claimed Hardware 3 would be capable of full self-
           | driving. When asked about Hardware 3 at Tesla's recent
           | robotaxi event, Musk didn't want to "get nuanced". That's
           | starting to look like fraud:
           | https://electrek.co/2024/10/15/tesla-needs-to-come-clean-
           | abo...
           | 
           | Had Tesla simply called it "driver assistance" that wouldn't
           | be a lie. But they didn't do that. They doubled, tripled,
           | quadrupled down on the claim that it is "full self-driving"
           | making the car "an appreciating asset" that it would be
           | "financially insane" not to buy:
           | 
           | https://www.cnbc.com/2019/04/23/elon-musk-any-other-car-
           | than...
           | 
           | https://edition.cnn.com/2024/03/03/cars/musk-tesla-cars-
           | valu...
           | 
           | It's not even bullshit artistry. It's just bullshit.
           | 
           | Lying is part of the company culture at Tesla. Musk keeps
           | lying because the lies keep working.
        
             | whamlastxmas wrote:
              | Most of this is extreme hyperbole and it's really hard to
              | believe this is a genuine good-faith attempt at
              | conversation instead of weird astroturfing, bc these tired,
              | inaccurate talking points come up in literally every single
              | thread even remotely associated with Elon. It's like
              | there's a dossier of talking points everyone is sharing.
             | 
             | The car drives itself. This is literally undeniable. You
             | can test it today for free. Yeah it doesn't have the last
             | 0.01% done yet and yeah that's probably a lot of work. But
             | commenting like the GP is exhausting and just not
             | reflective of reality
        
               | jqpabc123 wrote:
               | _... not reflective of reality_
               | 
               | Kinda like repeated claims of "Full Self Driving" for
               | over a decade.
        
               | peutetre wrote:
                | > _bc these tired, inaccurate talking points come up in
                | literally every single thread even remotely associated
                | with Elon_
               | 
                | You understand that the false claims, the inaccuracies,
                | and the lies come _from_ Elon, right? They're associated
                | with him because he is the source of them.
               | 
               | They're only tired because he's been telling the same lie
               | year after year.
        
           | dham wrote:
           | It's similar to when DHH said they were not bundling code in
           | production and all the Javascript bros said "No you can't do
           | that it won't work". DHH was like "yes but I'm doing it"
           | 
           | That's how it feels in FSD land right now. Everyone's saying
           | FSD doesn't work and it'll never be here, but I'm literally
           | using it every day lol.
        
         | enslavedrobot wrote:
          | Here's a video of FSD driving the same route as a Waymo 42%
          | faster with zero interventions: 23 min vs 33. This is my
          | everyday. Enjoy.
         | 
         | https://youtu.be/Kswp1DwUAAI?si=rX4L5FhMrPXpGx4V
        
           | ck2 wrote:
            | There are also endless videos of Teslas driving into
            | pedestrians, plowing full speed into emergency vehicles
            | parked with flashing lights, veering wildly from strange
            | markings on the road, etc., etc.
           | 
           | "works for me" is a very strange response for someone on
           | Hacker News if you have any coding background - you should
           | realize you are a beta tester unwittingly if not a full blown
           | alpha tester in some cases
           | 
           | All it will take is a non-standard event happening on your
           | daily drive. Most certainly not wishing it on you, quite the
           | opposite, trying to get you to accept that a perfect drive 99
           | times out of 100 is not enough.
        
             | enslavedrobot wrote:
              | Those are Autopilot videos; this discussion is about FSD.
              | FSD has driven ~2 billion miles at this point and has had
              | potentially 2 fatal accidents.
             | 
             | The US average is 1.33 deaths/100 million miles. Tesla on
             | FSD is easily 10x safer.
             | 
             | Every day it gets safer.
        
               | diggernet wrote:
               | How many miles does it have on the latest software?
               | Because any miles driven on previous software are no
               | longer relevant. Especially with that big change in v12.
        
               | enslavedrobot wrote:
               | The miles driven are rising exponentially as the versions
               | improve according to company filings. If the miles driven
               | on previous versions are no longer relevant how can the
               | NHTSA investigation of previous versions impact FSD
               | regulation today?
               | 
                | Given that the performance has improved dramatically over
                | the last 6 months, it is very reasonable to assume that
                | the miles-driven-to-fatality ratio is also improving.
               | 
               | Using the value of 1.33 deaths per 100 million miles
               | driven vs 2 deaths in 2 billion miles driven, FSD has
               | saved approximately 24 lives so far.
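                | 
                | Spelling out that arithmetic (a back-of-the-envelope
                | sketch that assumes the two figures above, ~2 billion
                | FSD miles and 2 fatalities, and ignores the
                | comparability caveats raised elsewhere in this thread):
                | 
                |     us_rate = 1.33 / 100e6          # deaths per mile
                |     fsd_miles = 2e9                 # claimed FSD miles
                |     expected = us_rate * fsd_miles  # ~26.6 expected deaths
                |     saved = expected - 2            # ~24.6, i.e. "about 24"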
        
               | hilux wrote:
               | Considering HN is mostly technologists, the extent of
               | Tesla-hate in here surprises me. My best guess is that it
               | is sublimated Elon-hate. (Not a fan of my former neighbor
               | myself, but let's separate the man from his creations.)
               | 
               | People seem to be comparing Tesla FSD to perfection, when
               | the more fair and relevant comparison is to real-world
               | American drivers. Who are, on average, pretty bad.
               | 
               | Sure, I wouldn't trust data coming from Tesla. But we
               | have government data.
        
               | lowbloodsugar wrote:
               | That seems an odd take. This is a technologist website,
               | and a good number of technologists believe in building
               | robust systems that don't fail in production. We don't
               | stand for demos, and we have to fight off consultants
               | peddling crapware that demos well but dies in production.
               | I own a Tesla, despite my dislike of Musk, because it is
               | an insanely fun car. I will never enable FSD, did not
               | even do so when it was free. I see even the best teams
                | have production outages. Until Tesla accepts legal
                | responsibility (and the law allows them to), and until
                | it's good enough that it doesn't disengage, ever, I'm
                | never using it and nobody else should.
        
           | jqpabc123 wrote:
           | Can it drive the same route without a human behind the wheel?
           | 
           | Not legally and not according to Tesla either --- because
           | Tesla's FSD is not "Fully Self Driving" --- unlike Waymo.
        
       | knob wrote:
       | Didn't Uber have something similar happen? Ran over a woman in
       | Phoenix?
        
         | BugsJustFindMe wrote:
          | Yes. And Uber immediately shut down the program in the entire
          | state of Arizona, halted all road testing for months, and not
          | long after eliminated their self-driving unit entirely.
        
           | leoh wrote:
           | Elon is taking the SBF approach of double or nothing -- he
           | hopes Trump will win, in which case Elon can continue to do
           | as much harm as he likes.
        
       | daghamm wrote:
        | While at it, please also investigate why it is sometimes
        | impossible to leave a damaged vehicle. This has more than once
        | resulted in people dying:
       | 
       | https://apnews.com/article/car-crash-tesla-france-fire-be8ec...
        
         | MadnessASAP wrote:
         | The why is pretty well understood, no investigation needed. I
         | don't like the design but it's because the doors are electronic
         | and people don't know where the manual release is.
         | 
          | In a panic, people go on muscle memory, which is to push the
          | useless button. They don't remember to pull the unmarked,
          | unobtrusive handle that they may not even know exists.
         | 
         | If it was up to me, sure have your electronic release, but make
         | the manual release a big handle that looks like the ejection
         | handle on a jet (yellow with black stripes, can't miss it).
         | 
          | * Or even better, have the standard door handle mechanically
          | connected to the latch through a spring-loaded solenoid that
          | normally holds the linkage disengaged. Under normal conditions
          | the handle does the thing electronically, but the moment power
          | fails the spring re-engages the linkage and the handle opens
          | the latch directly.
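          | 
          | A sketch of that fail-safe logic (illustrative code, not any
          | real vehicle's firmware; all names invented):
          | 
          |     def pull_handle(power_ok: bool) -> str:
          |         # solenoid is energized only while power is up,
          |         # holding the mechanical linkage disengaged
          |         if power_ok:
          |             return "electronic release"
          |         # power lost: spring re-engages the linkage, so the
          |         # same handle now pulls the latch cable directly
          |         return "mechanical release"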
        
           | daghamm wrote:
            | There are situations where the manual release has not worked:
           | 
           | https://www.businessinsider.com/how-to-manually-open-
           | tesla-d...
        
             | willy_k wrote:
             | The article you provided does not say that. The only
             | failure related to the manual release it mentions is that
             | using it breaks the window.
             | 
             | > Exton said he followed the instructions for the manual
             | release to open the door, but that this "somehow broke the
             | driver's window."
        
           | Clamchop wrote:
           | Or just use normal handles, inside and outside, like other
           | cars. What they've done is made things worse by any objective
           | metric in exchange for a "huh, nifty" that wears off after a
           | few weeks.
        
             | nomel wrote:
             | I think this is the way. Light pull does the electronic
             | thing. Hard pull does the mechanical thing. They could have
             | done this with the mechanical handle that's there already
              | (which I have pulled almost every time I've used a Tesla,
              | earning the owner's anger and a weather-stripping
              | inspection).
        
           | carimura wrote:
            | it's worse than that: at least in ours, the backseat latches
            | are under some mat, literally hidden. i had no idea they
            | were there for the first 6 months.
        
             | Schiendelman wrote:
              | That's at the behest of the federal government. Child-lock
              | rules require that they not be accessible.
        
           | Zigurd wrote:
            | The inside trunk release on most cars has a glow-in-the-dark
            | fluorescent-colored handle.
        
           | amluto wrote:
           | I've seen an innovative car with a single door release. As
           | you pull it, it first triggers the electronic mechanism
           | (which lowers the window a bit, which is useful in a door
           | with no frame above the window) and then, as you pull it
           | farther, it mechanically unlatches the door.
           | 
           | Tesla should build their doors like this. Oh, wait, the car
           | I'm talking about is an older Tesla. Maybe Tesla should
           | _remember_ how to build doors like this.
        
             | crooked-v wrote:
             | It's not very 'innovative' these days. My 2012 Mini Cooper
             | has it.
        
           | leoh wrote:
           | Crazy that with all of NHTSA's regulations, this is still
           | legal.
        
           | greenie_beans wrote:
           | that's just bad ux
        
       | aanet wrote:
       | About damn time NHTSA opened this full scale investigation.
       | Tesla's "autonowashing" has gone on for far too long.
       | 
       | Per Reuters [1] "The probe covers 2016-2024 Model S and X
       | vehicles with the optional system as well as 2017-2024 Model 3,
       | 2020-2024 Model Y, and 2023-2024 Cybertruck vehicles. The
       | preliminary evaluation is the first step before the agency could
       | seek to demand a recall of the vehicles if it believes they pose
       | an unreasonable risk to safety."
       | 
        | Roughly 2.4 million Teslas with "Full Self Driving" software are
        | in question, after 4 reported collisions and one fatality.
       | 
       | NHTSA is reviewing the ability of FSD's engineering controls to
       | "detect and respond appropriately to reduced roadway visibility
       | conditions."
       | 
        | Tesla has, of course, rather two-facedly classified its FSD as
        | SAE Level 2 for regulatory purposes, while selling it as "full
        | self driving" that nonetheless requires supervision.
        | ¯\_(ツ)_/¯
       | 
        | No other company has been so irresponsible toward its users, or
        | so careless about the negative externalities imposed on non-
        | consenting road users.
       | 
        | I treat every Tesla driver as a drunk driver, steering clear
        | whenever I see them on highways.
       | 
       | [FWIW, yes, I work in automated driving and know a thing or two
       | about automotive safety.]
       | 
       | [1]
       | https://archive.is/20241018151106/https://www.reuters.com/bu...
        
         | ivewonyoung wrote:
         | > Roughly 2.4 million Teslas in question, with "Full Self
         | Driving" software after 4 reported collisions and one fatality.
         | 
          | 45,000 people die yearly in the US alone in auto accidents.
          | The numbers and timeline you quoted seem insignificant at
          | first glance, but they get magnified by people with an axe to
          | grind, like that guy running anti-Tesla Super Bowl ads, who
          | makes self-driving software like you do.
        
         | buzzert wrote:
         | > I treat every Tesla driver as a drunk driver, steering away
         | whenever I see them on highways.
         | 
         | Would you rather drive near a drunk driver using Tesla's FSD,
         | or one without FSD?
        
       | dietsche wrote:
        | I would like more details. There are definitely situations where
        | neither a car nor a human could respond quickly enough to a
        | hazard on the road.
       | 
        | For example, I recently hit a deer. The dashcam shows that,
        | driving at 60 mph, I had less than 100 feet from when the deer
        | became visible (due to terrain) to impact. Keep in mind that
        | stopping a car from 60 mph in 100 feet is impossible; most
        | vehicles need roughly triple that once you account for human
        | reaction time.
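        | 
        | A quick kinematics check (a sketch assuming ~0.7g of braking on
        | dry pavement and a 1.5 s perception-reaction time; real numbers
        | vary by car and driver):
        | 
        |     v = 88.0                           # 60 mph in ft/s
        |     braking = v**2 / (2 * 0.7 * 32.2)  # ~172 ft of braking alone
        |     reaction = 1.5 * v                 # ~132 ft before braking starts
        |     total = braking + reaction         # ~304 ft, roughly triple 100 ft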
        
         | ra7 wrote:
          | Unfortunately, Tesla asks NHTSA to redact almost all useful
          | information from its crash reports, so it's impossible to get
          | more details.
         | 
         | Here is the public database of all ADAS crashes:
         | https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_In...
        
         | Log_out_ wrote:
          | just have a drone fly ahead and put the lidar pointcloud on
          | the hud. These are very bio-logic excuses :)
        
         | nomel wrote:
         | I've had a person, high on drugs, walk out from between bushes
         | that were along the road. I screeched to a halt in front of
          | them, but had it been 1 second later, physics would have made
          | stopping impossible, regardless of reaction time (at any
          | non-negligible speed).
        
         | arcanemachiner wrote:
         | This is called "overdriving your vision", and it's so common
         | that it boggles my mind. (This opinion might have something to
         | do with the deer I hit when I first started driving...)
         | 
         | Drive according to the conditions, folks.
        
           | Kirby64 wrote:
           | On many roads if a deer jumps across the road at the wrong
           | time there's literally nothing you can do. You can't always
           | drive at 30mph on back country roads just because a deer
           | might hop out at you.
        
             | seadan83 wrote:
              | World of difference between 30, 40, 50 and 60. Feels like
              | something I have noticed between west and east coast
              | drivers. The latter really send it on country turns and
              | just trust the road. West coast, particularly montana,
              | when vision is reduced, speed slows down. Just too many
              | animals or road obstacles (eg: rocks, planks of wood) to
              | just trust the road.
        
               | dragonwriter wrote:
               | > West coast, particularly montana
               | 
               | Montana is not "West coast".
        
               | seadan83 wrote:
               | Yeah, I was a bit glib. My impression is more
               | specifically of the greater northwest vs rest. Perhaps
               | just "the west" vs "the east".
               | 
               | Indiana drivers for example really do send it (in my
               | experience). Which is not east coast of course.
               | 
               | There is a good bit of nuance... I would perhaps say more
               | simply east of Mississippi vs west, but Texas varies by
               | region and so-Cal drivers vary a lot as well,
               | particularly compared to nor-Cal and central+eastern
                | California. (I don't have an impression of Nevada and
                | New Mexico drivers - I don't have any experience on
                | country roads in those states.)
        
               | Kirby64 wrote:
               | Road obstacles are static and can be seen by not "out
               | driving your headlights". Animals flinging themselves
               | into the road cannot, in many instances.
        
               | amenhotep wrote:
               | You are responding in a thread about a person saying they
               | were driving at 60 when the deer only became visible "due
               | to terrain" at 100 feet away, and therefore hitting it is
               | no reflection on their skill or choices as a driver.
               | 
               | I suppose we're meant to interpret charitably here, but
               | it really seems to me like there is a big difference
               | between the scenario described and the one you're talking
               | about, where the deer really does fling itself out in
               | front of you.
        
               | dietsche wrote:
                | op here. you nailed it. also, the car started braking
                | before i could!
                | 
                | incidentally, i've also had the tesla dodge a deer
                | successfully!
                | 
                | autopilot has improved in BIG ways over the past 2 years.
                | went 700 miles in one day on autopilot thru the
                | mountains. no issues at all.
                | 
                | that said, expecting perfection from a machine or a human
                | is a fool's errand.
        
           | Zigurd wrote:
           | We will inevitably see "AVs are too cautious! Let me go
           | faster!" complaints as AVs drive in more places. But, really
           | humans just suck at risk assessment. And at driving. Driving
           | like a human is comforting in some contexts, but that should
           | not be a goal when it trades away too much safety.
        
           | thebruce87m wrote:
           | There is a difference between driving too fast around a
           | corner to stop for something stationary on the road and
           | driving through countryside where something might jump out.
           | 
            | I live in a country with deer, but the number of incidents
            | of them interacting with road users is so low that it does
            | not factor into my risk tolerance.
        
             | Zigurd wrote:
             | The risks vary with speed. At 30mph a deer will be injured
             | and damage your car, and you might have to call animal
             | control to find the deer if it was able to get away. At
             | 45mph there is a good chance the deer will impact your
             | windshield. If it breaks through, that's how people die in
             | animal collisions. They get kicked to death by a frantic,
             | panicked, injured animal.
        
         | freejazz wrote:
          | The article explains the investigation is based upon
          | visibility issues... what is your point? I don't think any
          | reasonable person doubts there are circumstances where nothing
          | could respond adequately to prevent a crash. But it seems
          | rather odd to assume that these crashes were in one of those
          | scenarios, all the more so when the report on its face
          | explains that they were not.
        
       | ivewonyoung wrote:
       | > NHTSA said it was opening the inquiry after four reports of
       | crashes where FSD was engaged during reduced roadway visibility
       | like sun glare, fog, or airborne dust. A pedestrian was killed in
       | Rimrock, Arizona, in November 2023 after being struck by a 2021
       | Tesla Model Y, NHTSA said. Another crash under investigation
       | involved a reported injury
       | 
       | > The probe covers 2016-2024 Model S and X vehicles with the
       | optional system as well as 2017-2024 Model 3, 2020-2024 Model Y,
       | and 2023-2024 Cybertruck vehicles.
       | 
        | This is good, but also for context: 45 thousand people are
        | killed in auto accidents in just the US every year, making 4
        | reported crashes and 1 reported fatality across 2.4 million
        | vehicles over 8 years look minuscule by comparison, or even
        | better than many human drivers.
        
         | whiplash451 wrote:
          | Did you scale your numbers in proportion to the miles driven
          | autonomously vs manually?
        
           | josephg wrote:
           | Yeah, that'd be the interesting figure: How many deaths per
           | million miles driven? How does Tesla's full self driving
           | stack up against human drivers?
        
             | gostsamo wrote:
              | Even that is not good enough, because the "autopilot"
              | usually is not engaged in challenging conditions, making
              | any direct comparison not really reliable. You need
              | similar roads in similar weather at a similar time of day
              | to approximate a good comparison.
        
               | ivewonyoung wrote:
               | How many of the 45,000 deaths on US roads( and an order
               | of magnitude more injuries) occur due to 'challenging
               | conditions' ?
        
         | dekhn wrote:
          | Those numbers aren't all the fatalities associated with Tesla
          | cars; i.e., you can't compare the 45K/year (roughly 1 per 100M
          | miles driven) to the limited number of reports.
         | 
         | What they are looking for is whether there are systematic
         | issues with the design and implementation that make it unsafe.
        
           | moduspol wrote:
           | Unsafe relative to what?
           | 
           | Certainly not to normal human drivers in normal cars. Those
           | are killing people left and right.
        
             | dekhn wrote:
              | I don't think the intent is to compare it to normal human
              | drivers, although having some estimate of the accident,
              | injury, and death rates (for the driver, passengers, and
              | people outside the car) with FSD enabled vs disabled would
              | be very interesting.
        
               | moduspol wrote:
               | > I don't think the intent is to compare it to normal
               | human drivers
               | 
               | I think our intent should be focused on where the
               | fatalities are happening. To keep things comparable, we
               | could maybe do 40,000 studies on distracted driving in
               | normal cars for every one or two caused by Autopilot /
               | FSD.
               | 
               | Alas, that's not where our priorities are.
        
             | llamaimperative wrote:
             | Those are good questions. We should investigate to find
             | out. (It'd be different from this one but it raises a good
             | question. What _is_ FSD safe compared to?)
        
             | AlexandrB wrote:
             | No they're not. And if you do look at human drivers you're
             | likely to see a Pareto distribution where 20% of drivers
             | cause most of the accidents. This is completely unlike
             | something like FSD where accidents would be more evenly
             | distributed. It's entirely possible that FSD would make 20%
             | of the drivers safer and ~80% less safe even if the overall
             | accident rate was lower.
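              | 
              | A toy illustration of that point (all rates invented for
              | the example):
              | 
              |     risky, safe = 40.0, 2.5  # crashes per million miles
              |     human_avg = 0.2 * risky + 0.8 * safe  # = 10.0
              |     fsd = 8.0  # a uniform rate below the human average...
              |     # ...yet the 80% of drivers at 2.5 would be ~3x worse
              |     # off with FSD; only the risky 20% improve.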
        
             | Veserv wrote:
             | What? Humans are excellent drivers. Humans go ~70 years
             | between injury-causing accidents and ~5,000 years between
             | fatal accidents even if we count the drunk drivers. If you
             | started driving when the Pyramids were still new, you would
             | still have half a millennium until you reach the expected
             | value between fatalities.
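              | 
              | Roughly where those orders of magnitude come from (a
              | sketch assuming ~1.33 fatalities per 100M miles and
              | ~13,500 miles per driver-year, typical US aggregates):
              | 
              |     miles_per_fatality = 100e6 / 1.33    # ~75M miles
              |     years = miles_per_fatality / 13_500  # ~5,600 years
              |     # injury crashes are ~50-100x more common than fatal
              |     # ones, which lands near the ~70-year figure above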
             | 
             | The only people pumping the line that human drivers are bad
             | are the people trying to sell a dream that they can make a
             | self-driving car in a weekend, or "next year", if you just
             | give them a pile of money and ignore all the red flags and
             | warning signs that they are clueless. The problem is
             | shockingly hard and underestimating it is the first step to
             | failure. Reckless development will not get you there safely
             | with known technology.
        
         | throwup238 wrote:
         | _> The agency is asking if other similar FSD crashes have
         | occurred in reduced roadway visibility conditions, and if Tesla
         | has updated or modified the FSD system in a way that may affect
         | it in such conditions._
         | 
         | Those four crashes are just the ones that sparked the
         | investigation.
        
         | tapoxi wrote:
         | I don't agree with this comparison. The drivers are licensed,
         | they have met a specific set of criteria to drive on public
         | roads. The software is not.
         | 
          | We are not sure when FSD is engaged across all of these miles
          | driven, or whether FSD is making mistakes a licensed human
          | driver would not. I would at the very least expect radical
          | transparency.
        
           | fallingknife wrote:
           | I too care more about bureaucratic compliance than what the
           | actual chances of something killing me are. When I am on that
           | ambulance I will be thinking "at least that guy met the
           | specific set of criteria to be licensed to drive on public
           | roads."
        
             | tapoxi wrote:
              | Are we really relegating driver's licenses to
              | "bureaucratic compliance"?
             | 
             | If FSD is being used in a public road, it impacts everyone
             | on that road, not just the person who opted-in to using
             | FSD. I absolutely want an independent agency to ensure it's
             | safe and armed with the data that proves it.
        
               | fallingknife wrote:
               | What else are they? You jump through hoops to get a piece
               | of plastic from the government that declares you "safe."
               | And then holders of those licenses go out and kill 40,000
               | people every year just in the US.
        
               | tapoxi wrote:
               | And you're comparing that against what? That's 40,000
               | with regulation in place. Imagine if we let anyone drive
               | without training.
        
               | fallingknife wrote:
                | We do. Nobody crazy enough to drive without knowing how
                | is going to be deterred by the lack of a piece of
                | plastic from the government.
        
         | insane_dreamer wrote:
          | > making 4 reported crashes and 1 reported fatality across 2.4
          | million vehicles over 8 years look minuscule by comparison,
          | 
          | that's the wrong comparison
          | 
          | the correct comparison is the number of reported crashes and
          | fatalities for __unsupervised FSD__ miles driven (not counting
          | Tesla pilot tests, but actual customers)
        
           | jandrese wrote:
           | That seems like a bit of a chicken and egg problem where the
           | software is not allowed to go unsupervised until it racks up
           | a few million miles of successful unsupervised driving.
        
             | AlotOfReading wrote:
             | There's a number of state programs to solve this problem
             | with testing permits. The manufacturer puts up a bond and
             | does testing in a limited area, sending reports on any
             | incidents to the state regulator. The largest of these,
             | California's, has several dozen companies with testing
             | permits.
             | 
             | Tesla currently does not participate in any of these
             | programs.
        
             | insane_dreamer wrote:
             | Similar to a Phase 3 clinical trial (and for similar
             | reasons).
        
         | enragedcacti wrote:
          | > making 4 reported crashes and 1 reported fatality across 2.4
          | million vehicles over 8 years look minuscule by comparison, or
          | even better than many human drivers.
         | 
         | This is exactly what people were saying about the NHTSA
         | Autopilot investigation when it started back in 2021 with 11
          | reported incidents. When that investigation wrapped up earlier
          | this year, it had identified 956 Autopilot-related crashes
          | between early 2018 and August 2023, 467 of which were
          | confirmed to be the fault of Autopilot plus an inattentive
          | driver.
        
           | fallingknife wrote:
           | So what? How many miles were driven and what is the record vs
            | human drivers? Also, Autopilot is a standard feature that is
            | much less sophisticated than FSD and has nothing to do with
            | it.
        
       | graeme wrote:
       | Will the review assess overall mortality of the vehicles compared
       | to similar cars, and overall mortality while FSD is in use?
        
         | dekhn wrote:
          | No, that is not part of the review. They may use some
          | reference aggregated industry data, but it's out of scope to
          | answer the question I think you're trying to imply.
        
         | infamouscow wrote:
         | Lawyers are not known for their prowess in mathematics, let
         | alone statistics.
         | 
         | Making these arguments from the standpoint of an engineer is
         | counterproductive.
        
           | fallingknife wrote:
           | Which is why they are the wrong people to run the country
        
             | paulryanrogers wrote:
             | Whom? Because math is important and so is law, among a
             | variety of other things.
             | 
             | s/ Thankfully the US presidential choices are at least
             | rational, of sound mind, and well rounded people. Certainly
             | no spoiled man children among them. /s
        
         | johnthebaptist wrote:
          | Yes, if Tesla complies and provides that data.
        
         | bbor wrote:
         | I get where you're coming from and would also be interested to
         | see, but based on the clips I've seen that wouldn't be enough
         | in this case. Of course the bias is inherent in what people
          | choose to post (not normal _and_ not terrible/litigable), but
         | I think there's enough at this point to perceive a stable
         | pattern.
         | 
          | Long story short, my argument is this: it doesn't matter if
          | you reduce serious crashes from 100PPM to 50PPM if 25PPM of
          | those are _new_ crash sources, speaking from a psychological
          | and sociological perspective. Everyone knows that driving
          | drunk, driving distracted, driving in bad weather, or driving
          | in rural areas at dawn or dusk is dangerous, and takes
          | appropriate precautions. But what do you do if your car might
          | crash because someone ahead flashed their high beams, or
          | because the sun was reflecting off another car in an unusual
          | way? Could you really load up your kids and take your hands
          | off the wheel knowing that at any moment you might hit an
          | unexpected edge condition?
         | 
         | Self driving cars are (presumably!) hard enough to trust
         | already, since you're giving away so much control. There's a
         | reason planes have to be way more than "better, statistically
         | speaking" -- we expect them to be nearly flawless, safety-wise.
        
           | dragonwriter wrote:
           | > But what do you do if your car might crash because someone
           | ahead flashed their high beams, or because the sun was
           | reflecting off another car in an unusual way?
           | 
            | These are -- like drunk driving, driving distracted, and
            | driving in bad weather -- things that actually do cause
            | accidents with human drivers.
        
             | hunter-gatherer wrote:
              | The point is the taking-precautions part that you left out
              | of the quote. The other day I was taking my kid to
             | school, and when we turned east the sun was in my eyes and
             | I couldn't see anything, so I pulled over as fast as I
             | could and changed my route. Had I chosen to press forward
             | and been in an accident, it is explainable (albeit still
             | unfortunate and often unnecessary!). However, if I'm under
             | the impression that my robot car can handle such
             | circumstances because it does most of the time and then it
             | glitches, that is harder to explain.
        
             | dfxm12 wrote:
             | This language is a bit of a sticking point for me. If
             | you're drunk driving or driving distracted, there's no
             | "accident". You're _intentionally_ doing something wrong
             | and _committing a crime_.
        
             | paulryanrogers wrote:
              | Indeed, yet humans can anticipate such things and rely on
              | their experience to reason about what's happening and how
              | to react: slow down, shift lanes, or just move one's head
              | for a different perspective. A Tesla with only two cameras
              | ("because that's all humans need") is unlikely to provably
              | match that performance for a long time.
             | 
              | Tesla could also change its software at any point without
              | telling the driver.
        
         | FireBeyond wrote:
         | If you're trying to hint at Tesla's own stats, then at this
         | point those are hopelessly, and knowingly, misleading.
         | 
         | All they compare is "On the subsets of driving on only the
         | roads where FSD is available, active, and has not or did not
         | turn itself off because of weather, road, traffic or any other
         | conditions" versus "all drivers, all vehicles, all roads, all
         | weather, all traffic, all conditions".
         | 
         | There's a reason Tesla doesn't release the raw data.
        
           | rblatz wrote:
           | I have to disengage FSD multiple times a day and I'm only
           | driving 16 miles round trip. And routinely have to stop it
            | from doing dumb things like stopping at green traffic
            | lights, attempting a U-turn from the wrong turn lane, or
            | switching to the wrong lane right before a turn.
        
             | rad_gruchalski wrote:
             | Why would you even turn it on at this point...
        
         | akira2501 wrote:
         | Fatalities per passenger mile driven is the only statistic that
         | would matter. I actually doubt this figure differs much, either
         | way, from the overall fleet of vehicles.
         | 
         | This is because "inattentive driving" is _rarely_ the cause of
         | fatalities on the road. The winner there is, and probably
         | always will be, Alcohol.
        
           | dylan604 wrote:
           | > The winner there is, and probably always will be, Alcohol.
           | 
           | I'd imagine mobile device use will overtake alcohol soon
           | enough
        
             | akira2501 wrote:
             | Mobile devices have been here for 40 years. The volume of
             | alcohol sold every year suggests this overtake point will
             | never occur.
        
           | porphyra wrote:
            | Distracted driving cost 3,308 lives in 2022 [1].
            | 
            | Alcohol was at 13,384 in 2021 [2].
           | 
           | Although you're right that alcohol does claim more lives,
           | distracted driving is still highly dangerous and isn't all
           | that rare.
           | 
           | [1] https://www.nhtsa.gov/risky-driving/distracted-driving
           | 
           | [2] https://www.nhtsa.gov/book/countermeasures-that-
           | work/alcohol...
        
             | akira2501 wrote:
              | They do a disservice by not further breaking down
              | distracted driving by age. Once you see it that way, it's
              | hard to accept that distracted driving on its own is the
              | appropriate target.
              | 
              | Anyway... NHTSA publishes the FARS. This is the definitive
             | source if you want to understand the demographics of
             | fatalities in the USA.
             | 
             | https://www.nhtsa.gov/research-data/fatality-analysis-
             | report...
        
       | xvector wrote:
       | My Tesla routinely tries to kill me on absolutely normal
       | California roads in normal sunny conditions, especially when
       | there are cars parked on the side of the road (it often brakes
       | thinking I'm about to crash into them, or even swerves into them
       | thinking that's the "real" lane).
       | 
       | Elon's Unsupervised FSD dreams are a good bit off. I do hope they
       | happen though.
        
         | delichon wrote:
         | Why do you drive a car that routinely tries to kill you? That
         | would put me right off. Can't you just turn off the autopilot?
        
           | ddingus wrote:
           | My guess is the driver tests it regularly.
           | 
           | How does it do X, Y, ooh Z works, etc...
        
           | xvector wrote:
           | It's a pretty nice car when it's not trying to kill me
        
         | jrflowers wrote:
         | > My Tesla routinely tries to kill me
         | 
         | > Elon's Unsupervised FSD dreams are a good bit off. I do hope
         | they happen though.
         | 
         | It is very generous that you would selflessly sacrifice your
         | own life so that others might one day enjoy Elon's dream of
         | robot taxis without steering wheels
        
           | massysett wrote:
           | Even more generous to selflessly sacrifice the lives and
           | property of others that the vehicle "self-drives" itself
           | into.
        
           | judge2020 wrote:
           | If the data sharing checkboxes are clicked, OP can still help
           | send in training data while driving on his own.
        
         | Renaud wrote:
         | And what if the car swerves, and you aren't able to correct in
         | time and end up killing someone?
         | 
         | Is that your fault or the car's?
         | 
          | I would bet that since it's your car, and you're knowingly
          | using an unproven technology, it would be your fault?
        
           | ra7 wrote:
           | The driver's fault. Tesla never accepts liability.
        
             | LunicLynx wrote:
             | And they have been very clear about that
        
         | bogantech wrote:
         | > My Tesla routinely tries to kill me
         | 
         | Why on earth would you continue to use it? If it does succeed
         | someday that's on you
        
           | newdee wrote:
           | > that's on you
           | 
           | They'd be dead, doubt it's a concern at that point.
        
         | left-struck wrote:
          | That's hilariously ironic, because I have a pretty standard
          | newish Japanese petrol car (I'm not mentioning the brand
          | because my point isn't that brand x is better than brand y),
          | and it has no AI self-driving functions, just pretty basic
          | radar adaptive cruise control and emergency brake assist that
          | will stop if a car brakes hard in front of you... and it does
          | a remarkable job of rejecting cars that are slowing down or
          | stopped in other lanes, even when you're going around a corner
          | and the car is pointing straight towards the other cars but
          | not actually heading towards them since it's turning. I assume
          | they are using the steering input to help reject other
          | vehicles and the Doppler effect to detect differences in
          | speed, but it's remarkable how accurate it is at matching the
          | speed of the car in front of you, and only the car in front of
          | you, even when that car is over 15 seconds ahead. If Teslas
          | can't beat that, it's sad.
        
         | gitaarik wrote:
          | I wonder, how are you "driving"? Are you sitting behind the
          | wheel doing nothing except watching everything the car does
          | really closely, so you can take over when needed? Isn't that a
          | stressful experience? Wouldn't it be more comfortable to just
          | do everything yourself so you know nothing weird can happen?
         | 
         | Also, if the car does something crazy, how much time do you
         | have to react? I can imagine in some situations you might have
         | too little time to prevent the accident the car is creating.
        
           | xvector wrote:
           | > Isn't that a stressful experience?
           | 
           | It's actually really easy and kind of relaxing. For long
           | drives, it dramatically reduces cognitive load leading to
           | less fatigue and more alertness on the road.
           | 
           | My hand is always on the wheel so I can react as soon as I
           | feel the car doing something weird.
        
       | botanical wrote:
       | Only the US government can allow corporations to beta test
       | unproven technology on the public.
       | 
        | Governments should carry out comprehensive tests on a self-
        | driving car's claimed capabilities. This is the same as how cars
        | without proven passenger safety (Euro NCAP) aren't allowed to be
        | on roads carrying passengers.
        
         | krasin wrote:
         | > Only the US government can allow corporations to beta test
         | unproven technology on the public.
         | 
         | China and Russia do it too. It's not an excuse, but definitely
         | not just the US.
        
         | CTDOCodebases wrote:
         | Meh. Happens all around the world. Even if the product works
         | there is no guarantee that it will be safe.
         | 
         | Asbestos products are a good example of this. A more recent one
         | is Teflon made with PFOAs or engineered stone like Caesarstone.
        
         | dzhiurgis wrote:
          | If it takes 3 months to approve where a steel rocket falls,
          | you might as well give up iterating on something as complex
          | as FSD.
        
           | bckr wrote:
           | Drive it in larger and larger closed courses. Expand to
           | neighboring areas with consent of the communities involved.
           | Agree on limited conditions until enough data has been
           | gathered to expand those conditions.
        
             | romon wrote:
             | While controlled conditions promote safety, they do not
             | yield effective training data.
        
               | AlotOfReading wrote:
               | That's how all autonomous testing programs currently work
               | around the world. That is, every driverless vehicle
               | system on roads today was developed this way. You're
               | going to have to be more specific when you say that it
               | doesn't work.
        
           | AlotOfReading wrote:
           | There _are_ industry standards for this stuff. ISO 21448,
           | UL-4600, UNECE R157 for example, and even commercial
           | certification programs like the one run by TUV Sud for
            | European homologation. It's a deliberate series of decisions
            | on Tesla's part to make their regulatory life as difficult
            | as possible.
        
         | akira2501 wrote:
         | > Only the US government
         | 
          | Any legislative body can do so. There's no reason to limit
          | this strictly to the federal government. States and
          | municipalities should have a say in this as well. The
          | _citizens_ are the ones who _decide_ whether beta technology
          | can be used or not.
         | 
         | > comprehensive tests on a self-driving car's claimed
         | capabilities.
         | 
         | This presupposes the government is naturally capable of
         | performing an adequate job at this task or that the automakers
          | won't sue the government to interfere with the testing regime
          | and the efficacy of its standards.
         | 
         | > aren't allowed to be on roads carrying passengers.
         | 
          | According to Wikipedia, Euro NCAP is a _voluntary_
          | organization, and it describes the situation thus: "legislation
          | sets a minimum compulsory standard whilst Euro NCAP is
          | concerned with best possible current practice." Which
          | effectively highlights the above problems perfectly.
        
         | dham wrote:
          | Uhh, have you heard of the FDA? It's approved hundreds of
          | chemicals that are put into all kinds of food. And we're not
          | talking about a few deaths; we're talking hundreds of
          | thousands, if not millions.
        
       | dzhiurgis wrote:
        | What is the FSD uptake rate? I bet it's less than 1%, since in
        | most countries it's not even available...
        
       | massysett wrote:
       | "Tesla says on its website its FSD software in on-road vehicles
       | requires active driver supervision and does not make vehicles
       | autonomous."
       | 
       | Despite it being called "Full Self-Driving."
       | 
       | Tesla should be sued out of existence.
        
         | bagels wrote:
          | It didn't always say that. It used to be more misleading,
          | claiming that the cars had "Full Self Driving Hardware", with
          | an exercise for the reader to deduce that they didn't come
          | with "Full Self Driving Software" too.
        
           | peutetre wrote:
           | And Musk doesn't want to "get nuanced" about the hardware:
           | 
           | https://electrek.co/2024/10/15/tesla-needs-to-come-clean-
           | abo...
        
         | fhdsgbbcaA wrote:
         | "Sixty percent of the time, it works every time"
        
         | hedora wrote:
         | Our non-Tesla has steering assist. In my 500 miles of driving
         | before I found the buried setting that let me completely
         | disable it, the active safety systems never made it more than
         | 10-20 miles without attempting to actively steer the car left-
         | of-center or into another vehicle, even when it was "turned
         | off" via the steering wheel controls.
         | 
          | When it was turned on according to the dashboard UI, things
          | were even worse. It'd disengage every ten miles or less.
          | However, there wasn't an alarm when it disengaged, just a tiny
          | gray blinking icon on the dash. A second or so after the
          | blinking, it'd beep once and then pull crap like attempting a
          | sharp left on an exit ramp that curved to the right.
         | 
         | I can't imagine this model kills fewer people per mile than
         | Tesla FSD.
         | 
         | I think there should be a recall, but it should hit pretty much
         | all manufacturers shipping stuff in this space.
        
           | shepherdjerred wrote:
           | My Hyundai has a similar feature and it's excellent. I don't
           | think you should be painting with such a broad brush.
        
           | noapologies wrote:
           | I'm not sure how any of this is related to the article. Does
           | this non-Tesla manufacturer claim that their steering assist
           | is "full self driving"?
           | 
           | If you believe their steering assist kills more people than
           | Tesla FSD then you're welcome, encouraged even, to file a
           | report with the NHTSA here [1].
           | 
           | [1] https://www.nhtsa.gov/report-a-safety-problem
        
           | gamblor956 wrote:
           | If what you say is true, name the car model and file a report
           | with the NHTSA.
        
           | HeadsUpHigh wrote:
            | I've had a similar experience with a Hyundai with steering
            | assist. It would get confused by messed-up road lining all
            | the time. Meanwhile, it had no problem climbing an unmarked
            | road curb. And it would constantly try to nudge the steering
            | wheel, meaning I had to put force into holding it in place
            | all the time, which was extra fatigue.
            | 
            | Oh, and it was on by default, meaning I had to disable it
            | every time I turned the car on.
        
             | shepherdjerred wrote:
             | What model year? I'm guessing it's an older one?
             | 
             | My Hyundai is a 2021 and I have to turn on the steering
             | assist every time which I find annoying. My guess is that
             | you had an earlier model where the steering assist was more
             | liability than asset.
             | 
             | It's understandable that earlier versions of this kind of
             | thing wouldn't function as well, but it is very strange
             | that they would have it on by default.
        
         | m463 wrote:
         | I believe it's called "Full Self Driving (Supervised)"
        
           | maeil wrote:
           | The part in parentheses has only recently been added.
        
             | rsynnott wrote:
             | And is, well, entirely contradictory. An absolute
             | absurdity; what happens when the irresistible force of the
             | legal department meets the immovable object of marketing.
        
             | tharant wrote:
             | Prior to that, FSD was labeled 'Full Self Driving (Beta)'
             | and enabling it triggered a modal that required two
             | confirmations explaining that the human driver must always
             | pay attention and is ultimately responsible for the
             | vehicle. The feature also had/has active driver monitoring
             | (via both vision and steering-torque sensors) that would
             | disengage FSD if the driver ignored the loud audible alarm
             | to "Pay attention". Since changing the label to
             | '(Supervised)', the audible nag is significantly reduced.
        
               | tsimionescu wrote:
                | The problem is not so much the lack of disclaimers, it
                | is the advertising. Tesla is asking for something like
                | $15,000 for access to this "beta", and you don't get
                | two modal dialogs before you sign up for that.
                | 
                | This is called "false advertising", and even worse -
                | recognizing revenue on a feature you are not delivering
                | (a beta is not a delivered feature) is not
                | GAAP-compliant.
        
               | rty32 wrote:
               | Do they have warnings as big as "full self driving" texts
               | in advertisements? And if it is NOT actually full self
               | driving, why call it full self driving?
               | 
               | That's just false advertising. You can't get around that.
               | 
                | I can't believe our current laws let Tesla get away
                | with that.
        
           | tsimionescu wrote:
           | The correct name would be "Not Self Driving". Or, at least,
           | Partial Self Driving.
        
         | innocentoldguy wrote:
         | It's called "Full Self-Driving (Supervised) Beta" and you agree
         | that you understand that you have to pay attention and are
         | responsible for the safety of the car before you turn it on.
        
           | kelnos wrote:
           | So the name of it is a contradiction, and the fine print
           | contradicts the name. "Full self driving" (the concept, not
           | the Tesla product) does not need to be supervised.
        
           | rty32 wrote:
            | Come on, you know it's an oxymoron. "Full" and "supervised"
            | don't belong in the same sentence. Any 10-year-old, or a
            | non-native English speaker who has only learned the
            | language from textbooks for 5 years, can tell you that.
            | Just... stop defending Tesla.
        
       | Aeolun wrote:
       | I love how the image in the article has a caption that says it
       | tells you to pay attention to the road, but I had to zoom in all
       | the way to figure out where that message actually was.
       | 
       | I'd expect something big and red with a warning triangle or
       | something, but it's a tiny white message in the center of the
       | screen.
        
         | valine wrote:
         | It gets progressively bigger and louder the longer you ignore
         | it. After 30ish seconds it sounds an alarm and kicks you out.
        
           | FireBeyond wrote:
           | > After 30ish seconds it sounds an alarm and kicks you out.
           | 
           | That's much better. When AP functionality was introduced, the
           | alarm was _fifteen MINUTES_.
        
         | taspeotis wrote:
         | Ah yes, red with a warning like "WARNING: ERROR: THE SITUATION
         | IS NORMAL!"
         | 
          | Some cars that have cruise control but an analog gauge cluster
          | that can't display WARNING ERRORs even hide stuff like "you
          | still have to drive the car" in a manual you have to read, yet
          | nobody cares about that.
         | 
         | Honestly driving a car should require some sort of license for
         | a bare minimum of competence.
        
       | 23B1 wrote:
       | "Move fast and kill people"
       | 
       | Look, I don't know who needs to hear this, but just stop
       | supporting this asshole's companies. You don't need internet when
       | you're camping, you don't need a robot to do your laundry, you
       | don't need twitter, you can find more profitable and reliable
       | places to invest.
        
         | hnburnsy wrote:
         | Move slow and kill people...
         | 
         | GM ignition switch deaths 124
         | 
         | https://money.cnn.com/2015/12/10/news/companies/gm-recall-ig...
        
           | 23B1 wrote:
           | And?
        
         | CrimsonRain wrote:
         | Nobody needs to hear your nonsense rants. A 50k model 3 makes
         | almost all offerings up to 80k (including electrics) from
         | legacy automakers look like garbage.
        
           | leoh wrote:
           | I read their "nonsense rant" and I appreciated it.
           | 
           | >A 50k model 3 makes almost all offerings up to 80k
           | (including electrics) from legacy automakers look like
           | garbage.
           | 
           | This is a nonsense rant in my opinion.
        
           | 23B1 wrote:
           | Found the guy who bought $TSLA at its ATH
        
       | wg0 wrote:
        | Amid all the AI hype, if you think about it, the foundational
        | problem is that even computer vision is not solved at a human
        | level of accuracy, and that's at the heart of both the Tesla
        | issue and the Amazon checkout one.
        | 
        | Otherwise, as a thought experiment, imagine a tiny 1-inch-tall
        | person glued to the grocery trolley and another sitting on each
        | shelf - those two alone are all you need for "automated
        | checkout".
        
         | vineyardmike wrote:
          | > Otherwise, as a thought experiment, imagine a tiny
          | 1-inch-tall person glued to the grocery trolley and another
          | sitting on each shelf - those two alone are all you need for
          | "automated checkout".
         | 
         | I don't think this would actually work, as silly a thought
         | experiment as it is.
         | 
          | The problem isn't the vision, it's state management and cost.
          | It was very easy (but expensive) to see and classify via CV
          | whether a person picked something up; it just required
          | hundreds of concurrent high-resolution streams and a way to
          | stitch together the global state from all the videos.
          | 
          | A little 1-inch person on each shelf needs a good way to
          | communicate to every other tiny person what they see, and
          | come to consensus. If 5 people/cameras detect person A
          | picking something up, you need to differentiate between 5
          | discrete actions and 1 action seen 5 times.
         | 
         | In case you didn't know, Amazon actually hired hundreds of
         | people in India to review the footage and correct mistakes (for
         | training the models). They literally had a human on each shelf.
         | And they still had issues with the state management. With
         | people.
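          | 
          | To make the consensus problem concrete, here's a minimal
          | Python sketch (names and thresholds invented, not Amazon's
          | actual pipeline) that collapses per-camera detections of the
          | same pick-up into one event. It assumes cross-camera person
          | re-identification is already solved, which is itself the
          | hard part:
          | 
          |     from dataclasses import dataclass
          | 
          |     @dataclass
          |     class Detection:
          |         camera_id: str
          |         person_id: str    # assumes cross-camera re-ID works
          |         item_id: str
          |         timestamp: float  # seconds
          | 
          |     def merge_detections(detections, window=2.0):
          |         """Collapse detections of the same (person, item)
          |         seen within `window` seconds into one event."""
          |         events = []
          |         for d in sorted(detections, key=lambda d: d.timestamp):
          |             for e in events:
          |                 if (e["person_id"] == d.person_id
          |                         and e["item_id"] == d.item_id
          |                         and d.timestamp - e["last_seen"] <= window):
          |                     e["cameras"].add(d.camera_id)
          |                     e["last_seen"] = d.timestamp
          |                     break
          |             else:
          |                 events.append({"person_id": d.person_id,
          |                                "item_id": d.item_id,
          |                                "cameras": {d.camera_id},
          |                                "last_seen": d.timestamp})
          |         return events
          | 
          |     # Five cameras seeing one pick-up yield one event, not five:
          |     dets = [Detection(f"cam{i}", "A", "cereal", 10.0 + 0.1 * i)
          |             for i in range(5)]
          |     assert len(merge_detections(dets)) == 1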
        
           | wg0 wrote:
            | Yeah - that's exactly my point: humans were required to do
            | the recognition, and computer vision is NOT a solved
            | problem, regardless of tech bros' misleading
            | techno-optimism.
            | 
            | Distributed communication and state management, on the
            | other hand, are already mostly solved problems with known
            | parameters. How else do you think thousands and thousands
            | of Kubernetes clusters work in the wild?
        
             | Schiendelman wrote:
             | I think you're missing the point GP made: humans couldn't
             | do it. They tried to get humans to do it, and humans had an
             | unacceptable error rate.
             | 
             | This is important. The autonomous driving problem and the
             | grocery store problem are both about trade-offs, one isn't
             | clearly better than the other.
        
       | gnuser wrote:
        | I worked at an 18-wheeler automation unicorn.
        | 
        | Never once rode in one, for a reason.
        
         | akira2501 wrote:
         | Automate the transfer yards, shipping docks, and trucking
         | terminals. Make movement of cargo across these limited use
         | areas entirely automated and as smooth as butter. Queue drivers
         | up and have their loads automatically placed up front so they
         | can drop and hook in a few minutes and get back on the road.
         | 
         | I honestly think that's the _easier_ problem to solve by at
         | least two orders of magnitude.
        
           | dylan604 wrote:
           | Did you miss the news about the recent strike by the very
           | people you are suggesting to eliminate? This automation was
           | one of the points of contention.
           | 
            | Solving the problem might not be as easy as you suggest as
            | long as there are powerful unions involved.
        
             | akira2501 wrote:
             | This automation is inevitable. The ports are a choke point
             | created by unnatural monopoly and a labor union is the
             | incorrect solution. Particularly because their labor
             | actions have massive collateral damage to other labor
             | interests.
             | 
              | I believe that if trucking were properly unionized, the
              | port unions would be crushed. They're not that powerful;
              | they've just outlived this particular modernization the
              | longest out of their former contemporaries.
        
               | dylan604 wrote:
               | So a union is okay for the trucking industry, but not for
               | the dock workers?
               | 
               | And what exactly will the truckers be trucking if the
               | ports are crushed?
        
           | porphyra wrote:
           | There are a bunch of companies working on that. So far off
           | the top of my head I know of:
           | 
           | * Outrider: https://www.outrider.ai/
           | 
           | * Cyngn: https://www.cyngn.com/
           | 
           | * Fernride: https://www.fernride.com/
           | 
           | Any ideas what other ones are out there?
        
             | akira2501 wrote:
              | Promising. I'm actually more familiar with the
              | transportation and logistics side of the operation, and
              | strictly within the USA. I haven't seen anything new put
             | into serious operation out here yet but I'll definitely be
             | watching for them.
        
       | whiplash451 wrote:
       | Asking genuinely: is FSD enabled/accessible in EU?
        
         | AlotOfReading wrote:
         | FSD is currently neither legal nor enabled in the EU. That may
         | change in the future.
        
       | UltraSane wrote:
       | I'm astonished at how long Musk has been able to keep his
       | autonomous driving con going. He has been lying about it to
       | inflate Tesla shares for 10 years now.
        
         | ryandrake wrote:
         | Without consequences, there is no reason to stop.
        
           | UltraSane wrote:
           | When is the market going to realize Tesla is NEVER going to
           | have real level 4 autonomy where Tesla takes legal liability
           | for crashes the way Waymo has?
        
             | tstrimple wrote:
             | Market cares far more about money than lives. Until the
             | lives lost cost more than their profit, they give less than
             | zero fucks. Capitalism. Yay!
        
         | porphyra wrote:
         | Just because it has taken 10 years longer than promised doesn't
         | mean that it will never happen. FSD has made huge improvements
         | this year and is on track to keep up the current pace so it
         | actually does seem closer than ever.
        
           | UltraSane wrote:
           | The current vision-only system is a clear technological dead-
           | end that can't go much more than 10 miles between
           | "disengagements". To be clear, "disengagements" would be
           | crashes if a human wasn't ready to take over. And not needing
           | a human driver is THE ENTIRE POINT! I will admit Musk isn't a
           | liar when Tesla has FSD at least as good as Waymo's system
           | and Tesla accepts legal liability for any crashes.
        
             | valval wrote:
             | You're wrong. Nothing about this is clear, and you'd be
             | silly to claim otherwise.
             | 
             | You should explore your bias and where it's coming from.
        
               | UltraSane wrote:
               | No Tesla vehicle has legally driven even a single mile
               | with no driver in the driver's seat. They aren't even
               | trying to play Waymo's game. The latest FSD software's
               | failure rate is at least 100 times higher than it needs
               | to be.
        
               | fallingknife wrote:
               | That's a stupid point. I've been in a Tesla that's driven
               | a mile by itself. It makes no difference if a person is
               | in the seat.
        
               | UltraSane wrote:
               | "It makes no difference if a person is in the seat." It
               | does when Musk is claiming that Tesla is going to sell a
               | car with no steering wheel!
               | 
               | The current Tesla FSD fails so often that a human HAS to
               | be in the driver seat ready to take over at any moment.
               | 
               | You really don't understand the enormous difference
               | between the current crappy level 2 Tesla FSD and Waymo's
               | level 4 system?
        
               | valval wrote:
               | The difference is that Tesla has a general algorithm,
               | while Waymo is hard coding scenarios.
               | 
               | I never really got why people bring Waymo up every time
               | Tesla's FSD is mentioned. Waymo isn't competing with
               | Tesla's vision.
        
               | porphyra wrote:
               | Waymo uses a learned planner and is far from "hardcoded".
               | In any case, imo both of these can be true:
               | 
               | * Tesla FSD works surprisingly well and improving
               | capabilities to hands free actual autonomy isn't as far
               | fetched as one might think.
               | 
               | * Waymo beat them to robotaxi deployment and scaling up
               | to multiple cities may not be as hard as people say.
               | 
               | It seems that self driving car fans are way too tribal
               | and seem to be convinced that the "other side" sucks and
               | is guaranteed to fail. In reality, it is very unclear as
               | both strategies have their merits and only time will tell
               | in the long run.
        
               | UltraSane wrote:
               | " Tesla FSD works surprisingly well and improving
               | capabilities to hands free actual autonomy isn't as far
               | fetched as one might think"
               | 
               | Except FSD doesn't work surprisingly well and there is no
               | way it will get as good as Waymo using vision-only.
               | 
               | "It seems that self driving car fans are way too tribal
               | and seem to be convinced that the "other side" sucks and
               | is guaranteed to fail."
               | 
               | I'm not being tribal, I'm being realistic based on the
               | very public performance of both systems.
               | 
                | If Musk were serious about his Robotaxi claims, then
                | Tesla would be operating very differently. Instead it
                | is pretty obvious it's all a con to inflate Tesla
                | shares beyond all reason.
        
               | UltraSane wrote:
               | The difference is that Waymo has a very well engineered
               | system using vision, LIDAR, and millimeter wave RADAR
               | that works well enough in limited areas to provide tens
               | of thousands of actual driver-less rides. Tesla has a
               | vision only system that sucks so bad a human has to be
               | ready to take over for it at any time like a parent
               | monitoring a toddler near stairs.
        
               | fallingknife wrote:
               | Who said anything about Waymo? Waymo is building a very
               | high cost commercial grade system intended for use on
               | revenue generating vehicles. Tesla is building a low cost
               | system intended for personal vehicles where Waymo's
               | system would be cost prohibitive. Obviously Waymo's
               | system is massively more capable. But that is about as
               | surprising as the fact that a Ferrari is faster than a
               | Ford Ranger.
               | 
               | But this is all irrelevant to my point. You said a Tesla
               | is not capable of driving itself for a mile. I have
               | personally seen one do it. Whether a person is sitting in
               | the driver's seat, or the regulators will allow it, has
               | nothing to do with the fact that the vehicle does, in
               | fact, have that capability.
        
           | gitaarik wrote:
           | Just like AGI and the year of the Linux desktop ;P
        
             | porphyra wrote:
             | Honestly LLMs were a big step towards AGI, and gaming on
             | Linux is practically flawless now. Just played through
             | Black Myth Wukong with no issues out of the box.
        
               | UltraSane wrote:
               | LLMs are to AGI
               | 
               | as
               | 
               | A ladder is to getting to orbit.
               | 
                | I can see LLMs serving as a kind of memory for an AGI,
                | but something fundamentally different will be needed
                | for true reasoning and continuous self-improvement.
        
         | heisenbit wrote:
         | "I'm a technologist, I know a lot about computers," Musk told
         | the crowd during the event. "And I'm like, the last thing I
         | would do is trust a computer program, because it's just too
         | easy to hack."
        
       | DoesntMatter22 wrote:
       | Each version has improved. FSD is realistically the hardest thing
        | humanity has ever tried to do. It involves an enormous amount of
       | manpower, compute power and human discoveries, and has to work
       | right in billions of scenarios.
       | 
       | Building a self flying plane is comically easy by comparison.
       | Building Starship is easier by comparison.
        
         | gitaarik wrote:
          | Ah ok, first it was possible within 2 years, and now it is
          | humanity's hardest problem? If it's really that hard, I think
          | we'd better put our resources into something more useful, like
          | new energy solutions; it seems we have an energy crisis.
        
           | DoesntMatter22 wrote:
           | It's the hardest thing humans have ever tried to do yes. It
           | took less time to go to the moon.
           | 
           | There are tons of companies and governments working on energy
           | solutions, there is ample time for Tesla to work on self
           | driving.
           | 
           | Also, do we really have an energy crisis? Are you
           | experiencing rolling blackouts?
        
       | Animats wrote:
       | If Trump is elected, this probe will be stopped.
        
         | leoh wrote:
         | Almost certainly would happen and very depressing.
        
       | gitaarik wrote:
        | It concerns me that these Teslas can suddenly start acting
        | differently after a software update. Seems like a great target
        | for a cyberattack. Or just a failure from the company: a little
        | bug accidentally spread to millions of cars all over the
        | world.
       | 
       | And how is this regulated? Say the software gets to a point that
       | we deem it safe for full self driving, then it gets approved on
       | the road, and then Tesla adds a new fancy feature to their
       | software and rolls out an update. How are we to be confident that
       | it's safe?
        
         | rightbyte wrote:
          | Imagine all Teslas doing a full left right now. And a full
          | right in countries that drive on the left.
          | 
          | OTA updates, and auto-updates in general, are just a thing
          | that should not be in vehicles. The ECUs should have to be
          | air-gapped from the internet to be considered roadworthy.
        
         | boshalfoshal wrote:
         | > how are we to be confident that its safe?
         | 
          | I hope you realize that these companies don't just push
          | updates to your car like VS Code does.
          | 
          | Every change has to be unit tested, integration tested, tested
          | in simulation, and driven on multiple cars in an internal
          | fleet (in multiple countries) for multiple days/weeks; then it
          | is sent out in waves, and finally, once a bunch of
          | metrics/feedback comes back, they start sending it out wider.
         | 
          | Admittedly you pretty much have to just trust that the above
          | catches the most egregious issues, but there will always be
          | unknown unknowns that will be hard to account for, even with
          | all that.
         | Either that or legitimately willful negligence, in which case,
         | yes they should be held accountable.
         | 
         | These aren't scrappy startups pushing fast and breaking things,
         | there is an actual process to this.
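          | 
          | As a rough sketch of what "sent out in waves" with a metrics
          | gate could look like (wave sizes and the quality bar are
          | invented for illustration, not any company's real process):
          | 
          |     # Staged rollout: each wave must clear a metrics gate
          |     # before the next, larger wave receives the update.
          |     WAVES = [100, 1_000, 50_000, 1_000_000]  # vehicles/wave
          |     MAX_EVENTS_PER_1K_MILES = 0.5            # invented bar
          | 
          |     def metrics_ok(m):
          |         rate = 1000 * m["safety_events"] / m["miles"]
          |         return rate <= MAX_EVENTS_PER_1K_MILES
          | 
          |     def rollout(build, deploy, collect_metrics):
          |         for size in WAVES:
          |             deploy(build, size)
          |             if not metrics_ok(collect_metrics(size)):
          |                 return False  # halt rollout and investigate
          |         return True           # fleet-wide release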
        
           | madeforhnyo wrote:
           | https://news.ycombinator.com/item?id=17835760
        
       | soerxpso wrote:
       | For whatever it's worth, Teslas with Autopilot enabled crash
       | about once every 4.5M miles driven, whereas the overall rate in
       | the US is roughly one crash every 70K miles driven. Of course,
       | the selection effects around that stat can be debated (people
       | probably enable autopilot in situations that are safer than
       | average, the average tesla owner might be driving more carefully
       | or in safer areas than the average driver, etc), but it is a
       | pretty significant difference. (Those numbers are what I could
       | find at a glance; DYOR if you'd like more rigor).
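        | Taking both numbers at face value, that's roughly a 64x gap:
        | 4,500,000 / 70,000 ≈ 64.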
       | 
       | We have a lot of traffic fatalities in the US (in some states, an
       | entire order of magnitude worse than in some EU countries), but
       | it's generally not considered an issue. Nobody asks, "These
       | agents are crashing a lot; are they really competent to drive?"
       | when the agent is human, but when the agent is digital it becomes
       | a popular question even with a much lower crash rate.
        
         | deely3 wrote:
         | > Gaps in Tesla's telematic data create uncertainty regarding
         | the actual rate at which vehicles operating with Autopilot
         | engaged are involved in crashes. Tesla is not aware of every
         | crash involving Autopilot even for severe crashes because of
         | gaps in telematic reporting. Tesla receives telematic data from
         | its vehicles, when appropriate cellular connectivity exists and
         | the antenna is not damaged during a crash, that support both
         | crash notification and aggregation of fleet vehicle mileage.
         | Tesla largely receives data for crashes only with pyrotechnic
          | deployment, which are a minority of police reported crashes. A
         | review of NHTSA's 2021 FARS and Crash Report Sampling System
         | (CRSS) finds that only 18 percent of police-reported crashes
         | include airbag deployments.
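          | 
          | In other words, if telematics mostly capture the ~18 percent
          | of crashes that involve an airbag deployment, a crash rate
          | computed from that data could understate the true rate by up
          | to a factor of 1 / 0.18 ≈ 5.6.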
        
       | alexjplant wrote:
       | > The collision happened because the sun was in the Tesla
       | driver's eyes, so the Tesla driver was not charged, said Raul
       | Garcia, public information officer for the department.
       | 
       | Am I missing something or is this the gross miscarriage of
       | justice that it sounds like? The driver could afford a $40k
       | vehicle but not $20 polarized shades from Amazon? Negligence is
       | negligence.
        
         | theossuary wrote:
         | You know what they say, if you want to kill someone in the US,
         | do it in a car.
        
           | littlestymaar wrote:
            |  _Crash Course: If You Want to Get Away With Murder Buy a
            | Car_ by Woodrow Phoenix
        
           | immibis wrote:
           | In the US it seems you'd do it with a gun, but in Germany
           | it's cars.
           | 
           | There was this elderly driver who mowed down a family in a
           | bike lane waiting to cross the road in Berlin, driving over
           | the barriers between the bike lane and the car lane because
           | the cars in the car lane were too slow. Released without
           | conviction - it was an unforeseeable accident.
        
         | macintux wrote:
         | I have no idea what the conditions were like for this incident,
         | but I've blown through a 4-way stop sign when the sun was
         | setting. There's only so much sunglasses can do.
        
           | eptcyka wrote:
           | If environmental factors incapacitate you, should you not
           | slow down or stop?
        
           | vortegne wrote:
           | You shouldn't be on the road then? If you can't see, you
           | should slow down. If you can't handle driving in given
           | conditions safely for everyone involved, you should slow down
           | or stop. If everybody would drive like you, there'd be a
           | whole lot more death on the roads.
        
           | alexjplant wrote:
            | ¯\_(ツ)_/¯ If I can't see because of rain, hail, intense
           | sun reflections, frost re-forming on my windshield, etc. then
           | I pull over and put my flashers on until the problem
           | subsides. Should I have kept the 4700 lb vehicle in fifth
           | gear at 55 mph without the ability to see in front of me in
           | each of these instances? I submit that I should not have and
           | that I did the right thing.
        
           | ablation wrote:
           | Yet so much more YOU could have done, don't you think?
        
           | Doctor_Fegg wrote:
           | Yes, officer, this one right here.
        
           | singleshot_ wrote:
           | > There's only so much sunglasses can do.
           | 
           | For everything else, you have brakes.
        
           | IshKebab wrote:
           | I know right? Once I got something in my eye so I couldn't
           | see at all, but I decided that since I couldn't do anything
           | about it the best thing was to keep driving. I killed a few
           | pedestrians but... eh, what was I going to do?
        
           | kelnos wrote:
           | Your license should be suspended. If conditions don't allow
           | you to see things like that, you slow down until you can. If
           | you still can't, then you need to pull over and wait until
           | conditions make it safe to drive again.
           | 
           | Gross.
        
         | smdyc1 wrote:
         | Not to mention that when you can't see, you slow down? Does the
         | self-driving system do that sufficiently in low visibility?
         | Clearly not if it hit a pedestrian with enough force to kill
         | them.
         | 
          | The article mentions that Teslas only use cameras in their
          | system and Musk believes they are enough, because humans only
          | use their eyes. Well firstly, don't you want self-driving
          | systems to be _better_ than humans? Secondly, humans don't
          | just respond to visual cues as a computer would. We also hear
          | and respond to feelings, like the sudden surge of anxiety or
          | fear as our visibility is suddenly reduced at high speed.
        
           | pmorici wrote:
            | The Tesla knows when its cameras are blinded by the sun and
            | acts accordingly or tells the human to take over.
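            | 
            | A toy illustration of one way such a glare check could
            | work (thresholds invented; real systems are far more
            | involved):
            | 
            |     import numpy as np
            | 
            |     def looks_blinded(frame_gray, sat_level=250,
            |                       max_frac=0.25):
            |         """Flag a frame as glare-blinded when too many
            |         pixels are saturated, e.g. pointed into a low
            |         sun. Expects a grayscale uint8 array."""
            |         return (frame_gray >= sat_level).mean() > max_frac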
        
             | kelnos wrote:
              | Except when it doesn't actually do that, I guess? Like when
             | this pedestrian was killed?
        
             | eptcyka wrote:
             | If we were able to know when a neural net is failing to
             | categorize something, wouldn't we get AGI for free?
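              | 
              | The usual proxy is flagging low-confidence predictions,
              | but that is exactly what fails out of distribution: a
              | miscalibrated model can be confidently wrong. A minimal
              | sketch of the proxy (threshold invented):
              | 
              |     def too_uncertain(probs, min_conf=0.8):
              |         """Flag a prediction whose top softmax
              |         probability is below a threshold."""
              |         return max(probs) < min_conf
              | 
              |     print(too_uncertain([0.55, 0.25, 0.20]))  # True
              |     # False, though the model may still be wrong:
              |     print(too_uncertain([0.97, 0.02, 0.01]))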
        
           | plorg wrote:
            | I would think one relevant factor is that human vision is
            | different from, and in some ways significantly better
            | than, cameras.
        
           | hshshshshsh wrote:
            | I think one of the reasons they focus only on vision is
            | that basically the entire transportation infra is designed
            | with human eyes as the primary channel for information.
            | 
            | Useful information for driving is communicated through
            | images in the form of road signs, traffic signals, etc.
        
             | nkrisc wrote:
             | I dunno, knowing the exact relative velocity of the car in
             | front of you seems like it could be useful and is something
             | humans can't do very well.
             | 
             | I've always wanted a car that shows my speed and the
             | relative speed (+/-) of the car in front of me. My car's
             | cruise control can maintain a set distance so obviously
             | it's capable of it but it doesn't show it.
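              | 
              | The math is trivial for a car that already has a range
              | sensor; a toy sketch (names and rates invented):
              | 
              |     def relative_speed(range_now_m, range_prev_m, dt_s):
              |         """Positive = lead car pulling away,
              |         negative = closing in."""
              |         return (range_now_m - range_prev_m) / dt_s
              | 
              |     # 20 Hz radar: gap shrank from 50.0 m to 49.8 m
              |     print(relative_speed(49.8, 50.0, 0.05))  # -4.0 m/s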
        
               | dham wrote:
                | If your car is maintaining the speed of the car in
                | front, then the car in front is going the speed that is
                | showing on your speedometer.
        
             | SahAssar wrote:
             | We are "designed" (via evolution) to perceive and
             | understand the environment around us. The signage is
             | designed to be easily readable for us.
             | 
             | The models that drive these cars clearly either have some
             | more evolution to do or for us to design the world more to
             | their liking.
        
               | hshshshshsh wrote:
                | Yes. I was talking about why Tesla chose to use vision,
                | since they can't design the transport infra to their
                | liking, at least for now.
        
           | jsight wrote:
           | Unfortunately there is also an AI training problem embedded
           | in this. As Mobileye says, there are a lot of driver
           | decisions that are common, but wrong. The famous example is
           | rolling stops, but also failing to slow down for conditions
           | is really common.
           | 
           | It wouldn't shock me if they don't have nearly enough
           | training samples of people slowing appropriately for
           | visibility with eyes, much less slowing for the somewhat
           | different limitations of cameras.
        
         | jabroni_salad wrote:
         | Negligence is negligence but people tend to view vehicle
         | collisions as "accidents", as in random occurrences dealt by
         | the hand of fate completely outside of anyone's control. As
         | such, there is a chronic failure to charge motorists with
         | negligence, even when they have killed someone.
         | 
         | If you end up in court, just ask for a jury and you'll be okay.
          | I'm pretty sure this guy didn't even go to court; sounds like
          | it got prosecutor's discretion.
        
           | sokoloff wrote:
           | Negligence is the failure to act with the level of care that
           | a reasonable person would exercise in a similar situation; if
           | a reasonable person likely would have done the things that
           | led to that person's death, they're not guilty of negligence.
        
           | tsimionescu wrote:
           | That sounds like the justice system living up to its ideals.
           | If the 12 jurors know they would have done the same in your
           | situation, as would their family and friends, then they can't
           | in good conscience convict you for negligence.
        
             | pessimizer wrote:
             | It sounds like the kind of narcissism that perverts
             | justice. People understand things they could see themselves
             | doing, don't understand things that they can't see
             | themselves doing, and disregard the law entirely. It makes
             | non-doctors and non-engineers incapable of judging doctors
             | and engineers, rich people incapable of judging poor
             | people, and poor people incapable of judging rich people.
             | 
             | It's just a variation of letting off the defendant that
             | looks like your kid, or brutalizing someone whose victim
             | looks like your kid, it's no ideal of justice.
        
         | renewiltord wrote:
         | Yeah, I have a couple of mirrors placed around my car that
         | reflect light into my face so that I can get out of running
         | into someone. Tbh I understand why they do this. Someone on HN
         | explained it to me: Yield to gross tonnage. So I just drive
         | where I want. If other people die, that's on them: the
         | graveyards are full of people with the right of way, as people
         | say.
        
         | crazygringo wrote:
         | I'm genuinely not sure what the answer is.
         | 
         | When you're driving directly in the direction of a setting sun,
         | polarized sunglasses won't help you at all. That's what sun
         | visors are for, but they won't always work if you're short, and
         | can block too much of the environment if you're too tall.
         | 
         | The only truly safe answer is really to pull to the side of the
         | road and wait for the sun to set. But in my life I've never
         | seen anybody do that ever, and it would absolutely wreck
         | traffic with little jams all over the city that would cascade.
        
           | kjkjadksj wrote:
            | No, polarized sunglasses work fine. I drive into a setting
            | sun probably once a week without incident.
        
             | crazygringo wrote:
             | That doesn't make any sense to me.
             | 
             | First of all, polarization is irrelevant when looking at
             | the sun. It only affects light that is _reflected_ off
             | things like other cars ' windows, or water on the street.
             | In fact, it's often recommended _not_ to use polarized
             | sunglasses while driving because you can miss wet or icy
             | patches on the road.
             | 
             | Secondly, standard sunglasses don't let you look directly
             | at the sun, even a setting one. The sun is still
             | dangerously bright.
        
               | kjkjadksj wrote:
                | I'm not looking directly at the sun, I am looking at
                | the road. Either way it makes a big difference, and you
                | don't get much black ice here in sunny Southern
                | California.
        
               | crazygringo wrote:
               | But the scenario we're talking about is when the sun is
               | just a few degrees away from the road. It's still
               | entering your eyeball directly. It's still literally
               | blinding, so I just... don't understand how you can do
               | that? Like, I certainly can't. Sunglasses -- polarized or
               | otherwise -- don't make the slightest difference. It's
               | why sun visors exist.
               | 
               | Also, I'm assuming you get rain in SoCal at least
               | sometimes, that then mostly dries up but not completely?
               | Or leaking fire hydrants and so forth? It's the
               | unexpected wet patches.
        
       | rKarpinski wrote:
       | 'Pedestrian' in this context seems pretty misleading
       | 
       | "Two vehicles collided on the freeway, blocking the left lane. A
       | Toyota 4Runner stopped, and two people got out to help with
       | traffic control. A red Tesla Model Y then hit the 4Runner and one
       | of the people who exited from it. "
       | 
       | edit: Parent article was changed... I was referring to the title
       | of the NPR article.
        
         | Retric wrote:
         | More clarity may change people's opinion of the accident, but
         | IMO pedestrian meaningfully represents someone who is limited
         | to human locomotion and lacks any sort of protection in a
         | collision.
         | 
         | Which seems like a reasonable description of the type of
         | failure involved in the final few seconds before impact.
        
           | rKarpinski wrote:
           | Omitting that the pedestrian was on a freeway meaningfully
           | mis-represents the situation.
        
             | Retric wrote:
             | People walking on freeways may be rare from the perspective
             | of an individual driver but not a self driving system
             | operating on millions of vehicles.
        
               | rKarpinski wrote:
               | What does that have to do with the original article's
               | misleading title?
        
               | Retric wrote:
                | I don't think it's misleading. It's a title, not some
                | hundred-word description of what exactly happened.
                | 
                | Calling them motorists would definitely be misleading by
                | comparison. Using the simple "fatal crash" of the linked
                | title implies the other people might be responsible,
                | which is misleading.
               | 
               | Using accident but saying Tesla was at fault could open
               | them up to liability and therefore isn't an option.
        
               | rKarpinski wrote:
                | > I don't think it's misleading. It's a title, not some
                | hundred-word description of what exactly happened.
               | 
               | "Pedestrian killed on freeway" instead of "pedestrian
               | killed" doesn't take 100 words and doesn't give the
                | impression Teslas are mowing people down on crosswalks
               | (although that's a feature to get clicks, not a bug).
        
               | Retric wrote:
               | Without context that implies the pedestrians shouldn't
               | have been on the freeway.
               | 
               | It's not an issue for Tesla, but it does imply bad things
               | about the victims.
        
               | rKarpinski wrote:
               | A title of "U.S. to probe Tesla's 'Full Self-Driving'
               | system after pedestrian killed on freeway" would in no
               | way imply bad things about the pedestrian who was killed.
        
               | Retric wrote:
                | It was my first assumption when I read "pedestrian on
                | freeway" in someone's comment without context. Possibly
                | due to the Uber self-driving fatality.
               | 
               | Stranded motorists who exit their vehicle, construction
               | workers, first responders, tow truck drivers, etc are the
               | most common victims but that's not the association I had.
        
             | Arn_Thor wrote:
             | Why? I would hope we all expect pedestrian detection (and
             | object detection in general) to be just as good on a
             | freeway as on a city street? It seems the Tesla barreled
             | full-speed into an accident ahead of it. I would call it
             | insane but that would be anthropomorphizing it.
        
             | nkrisc wrote:
             | No, you're not allowed to hit pedestrians on the freeway
             | either.
             | 
             | There are many reasons why a pedestrian might be on the
             | freeway. It's not common but I see it at least once a month
             | and I drive extra carefully when I do, moving over if I can
             | and slowing down.
        
           | potato3732842 wrote:
           | This sort of framing you're engaging in is exactly what the
           | person you're replying to is complaining about.
           | 
            | Yeah, the person who got hit was technically a pedestrian,
            | but just using that word with no other context doesn't
            | convey that it was a pedestrian on a limited-access highway
            | vs somewhere pedestrians are allowed and expected. Without
            | additional explanation, people assume normalcy and think
            | that the pedestrian was crossing a city street or doing
            | something pedestrians do all the time and are expected to
            | do, when that is very much not what happened here.
        
             | Retric wrote:
             | Dealing with people on freeways is the kind of edge case
             | humans aren't good at but self driving cars have zero
              | excuses. It's common enough that someone will exit a
              | vehicle after a collision, making it a very predictable
              | edge case.
             | 
             | Remember all of the bad press Uber got when a pedestrian
             | was struck and killed walking their bike across the middle
             | of a street at night? People are going to be on limited
             | access freeways and these systems need to be able to deal
             | with it. https://www.bbc.com/news/technology-54175359
        
               | potato3732842 wrote:
               | I'd make the argument that people are very good at
               | dealing with random things that shouldn't be on freeways
               | as long as they don't coincide with blinding sun or other
               | visual impairment.
               | 
               | Tesla had a long standing issue detecting partial lane
               | obstructions. I wonder if the logic around that has
               | anything to do with this.
        
               | Retric wrote:
               | 17 percent of pedestrian fatalities occur on freeways.
               | Considering how rarely pedestrians are on freeways that
               | suggests to me people aren't very good at noticing them
               | in time to stop / avoid them.
               | 
               | https://usa.streetsblog.org/2022/06/09/why-20-of-
               | pedestrians...
        
               | Arn_Thor wrote:
               | That, and/or freeway speeds make the situation inherently
                | more dangerous. When traffic flows, freeway speeds are
                | fine, but if a freeway-speed car has to handle a
                | stationary object... problem.
        
         | neom wrote:
         | That is the correct use of pedestrian as a noun.
        
           | echoangle wrote:
           | Sometimes using a word correctly is still confusing because
           | it's used in a different context 90% of the time.
        
           | szundi wrote:
           | I think parent commenter emphasized the context.
           | 
            | Leaving out context that would otherwise change the
            | interpretation of most (or targeted) people is the main way
            | to mislead those people without technically lying.
        
             | neom wrote:
              | I mean, it's the literal language they use in the
              | report[1]. Personally, I would much prefer a publication
              | to be technically correct; a person on foot on a motorway
              | is referred to as a pedestrian, that is the name for
              | that.
             | 
             | [1]https://static.nhtsa.gov/odi/inv/2024/INOA-
             | PE24031-23232.pdf
        
           | varenc wrote:
           | By a stricter definition, a pedestrian is one who _travels_
           | by foot. Of course, they are walking, but they're traveling
           | via their car, so by some interpretations you wouldn't call
           | them a pedestrian. You could call them a "motorist" or a
           | "stranded vehicle occupant".
           | 
           | For understanding the accident it does seem meaningful that
           | they were motorists that got out of their car on a highway
           | and not pedestrians at a street crossing. (Still inexcusable
           | of course, but changes the context)
        
             | bastawhiz wrote:
             | Cars and drivers ideally shouldn't hit people who exited
             | their vehicles after an accident on a highway. Identifying
             | and avoiding hazards is part of driving.
        
             | neom wrote:
             | As far as I am aware, pes doesn't carry an inherent meaning
              | of travel. Pedestrian just means on foot; they don't need
              | to be moving, they're just not in a carriage. As an aside,
             | distinguishing a person's mode of presence is precisely
             | what reports aim to capture.
             | 
             | (I also do tend to avoid this level of pedantry, the points
             | here are all well taken to be clear. I do think the
             | original poster was fine in their comment, I was just
             | sayin' - but this isn't a cross I would die on :))
        
           | sebzim4500 wrote:
           | That's why he said misleading rather than an outright lie. He
            | is not disputing that it is technically correct to refer to
           | the deceased as a pedestrian, but this scenario (someone out
           | of their car on a freeway) is not what is going to spring to
           | the mind of someone just reading the headline.
        
         | danans wrote:
         | > Pedestrian' in this context seems pretty misleading
         | 
         | What's misleading? The full quote:
         | 
         | "A red Tesla Model Y then _hit the 4Runner and one of the
         | people who exited from it_. A 71-year-old woman from Mesa,
         | Arizona, was pronounced dead at the scene. "
         | 
         | If you exit a vehicle, and are on foot, you are a pedestrian.
         | 
         | I wouldn't expect FSD's object recognition system to treat a
         | human who has just exited a car differently than a human
         | walking across a crosswalk. A human on foot is a human on foot.
         | 
         | However, from the sound of it, the object recognition system
         | didn't even see the 4Runner, much less a person, so perhaps
         | there's a more fundamental problem with it?
         | 
         | Perhaps this is something that lidar or radar, if the car had
         | them, would have helped the OR system to see.
        
           | jfoster wrote:
           | The description has me wondering if this was definitely a
           | case where FSD was being used. There have been other cases in
           | the past where drivers had an accident and claimed they were
           | using autopilot when they actually were not.
           | 
           | I don't know for sure, but I would think that the car could
           | detect a collision. I also don't know for sure, but I would
           | think that FSD would stop once a collision has been detected.
        
             | bastawhiz wrote:
             | Did the article say the Tesla didn't stop after the
             | collision?
        
               | jfoster wrote:
               | If it hit the vehicle and then hit one of the people who
               | had exited the vehicle with enough force for it to result
               | in a fatality, it sounds like it might not have applied
               | any braking.
               | 
               | Of course, that depends on the speed it was traveling at
               | to begin with.
        
             | FireBeyond wrote:
             | > FSD would stop once a collision has been detected.
             | 
             | Fun fact, at least until very recently, if not even to this
             | moment, AEB (emergency braking) is not a part of FSD.
        
               | modeless wrote:
               | I believe AEB can trigger even while FSD is active.
               | Certainly I have seen the forward collision warning
               | trigger during FSD.
        
             | pell wrote:
             | > There have been other cases in the past where drivers had
             | an accident and claimed they were using autopilot when they
             | actually were not.
             | 
              | Wouldn't this be logged by the event data recorder?
        
             | danans wrote:
             | > There have been other cases in the past where drivers had
             | an accident and claimed they were using autopilot when they
             | actually were not.
             | 
             | If that were the case here, there wouldn't be a government
             | probe, right? It would be a normal "multi car pileup with a
             | fatality" and added to statistics.
             | 
             | With the strong incentive on the part of both the driver
              | and Tesla to lie about this, there should be strong
             | regulations around event data recorders [1] for self
             | driving systems, and huge penalties for violating those. A
             | search across that site doesn't return a hit for the word
             | "retention" but it's gotta be expressed in some way there.
             | 
             | 1. https://www.ecfr.gov/current/title-49/subtitle-B/chapter
             | -V/p...
        
           | potato3732842 wrote:
            | Teslas were famously poor at detecting partial lane
           | obstructions for a long time. I wonder if that's what
           | happened here.
        
       | AlchemistCamp wrote:
       | The interesting question is how good self-driving has to be
       | before people tolerate it.
       | 
       | It's clear that having half the casualty rate per distance
       | traveled of the median human driver isn't acceptable. How about a
       | quarter? Or a tenth? Accidents caused by human drivers are one of
       | the largest causes of injury and death, but they're not
       | newsworthy the way an accident involving automated driving is.
       | It's all too easy to see a potential future where many people die
       | needlessly because technology that could save lives is regulated
       | into a greatly reduced role.
        
         | iovrthoughtthis wrote:
         | at least 10x better than a human
        
           | becquerel wrote:
           | I believe Waymo has already beaten this metric.
        
             | szundi wrote:
              | Waymo is limited to cities that their engineers have to
              | map, and that map has to be maintained.
              | 
              | You cannot put a Waymo in a new city before that. With
              | Tesla, what you get is universal.
        
               | RivieraKid wrote:
               | Waymo is robust to removing the map / lidars / radars /
               | cameras or adding inaccuracies to any of these 4 inputs.
               | 
               | (Not sure if this is true for the production system or
               | the one they're still working on.)
        
               | dageshi wrote:
               | I think the Waymo approach is the one that will actually
               | deliver some measure of self driving cars that people
               | will be comfortable to use.
               | 
               | It won't operate everywhere, but it will gradually expand
               | to cover large areas and it will keep expanding till it's
               | near ubiquitous.
               | 
               | I'm dubious that the Tesla approach will actually ever
               | work.
        
               | kelnos wrote:
               | Waymo is safe where they've mapped and trained and
               | tested, because they track when their test drivers have
               | to take control.
               | 
               | Tesla FSD is just everywhere, without any accountability
               | or trained testing on all the roads people use them on.
               | We have no idea how often Tesla FSD users have to take
               | control from FSD due to a safety issue.
               | 
               | Waymo is objectively safer, and their entire approach is
               | objectively safer, and is actually measurable, whereas
               | Tesla FSD's safety cannot actually be accurately
               | measured.
        
         | triyambakam wrote:
         | Hesitation around self-driving technology is not just about the
         | raw accident rate, but the nature of the accidents. Self-
         | driving failures often involve highly visible, preventable
         | mistakes that seem avoidable by a human (e.g., failing to stop
         | for an obvious obstacle). Humans find such incidents harder to
         | tolerate because they can seem fundamentally different from
         | human error.
        
           | crazygringo wrote:
           | Exactly -- it's not just the overall accident rate, but the
           | rate _per accident type_.
           | 
           | Imagine if self-driving is 10x safer on freeways, but on the
           | other hand is 3x more likely to run over your dog in the
           | driveway.
           | 
           | Or it's 5x safer on city streets overall, but actually 2x
           | _worse_ in rain and ice.
           | 
           | We're fundamentally wired for loss aversion. So I'd say it's
           | less about what the total improvement rate is, and more about
           | whether it has categorizable scenarios where it's still worse
           | than a human.
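            | 
            | With made-up numbers, the aggregation effect is easy to
            | see: an overall win can coexist with a category
            | regression.
            | 
            |     # Crashes per million miles by scenario (invented).
            |     human = {"freeway": 2.0, "city": 5.0, "rain_ice": 8.0}
            |     sdc   = {"freeway": 0.2, "city": 1.0, "rain_ice": 16.0}
            |     miles = {"freeway": 0.5, "city": 0.4, "rain_ice": 0.1}
            | 
            |     def overall(rates):
            |         return sum(rates[k] * miles[k] for k in rates)
            | 
            |     print(overall(human))  # 3.8
            |     print(overall(sdc))    # 2.1: better overall, yet 2x
            |                            # worse in rain/ice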
        
         | becquerel wrote:
         | My dream is of a future where humans are banned from driving
         | without special licenses.
        
           | gambiting wrote:
           | So.........like right now you mean? You need a special
           | licence to drive on a public road right now.
        
             | seizethecheese wrote:
             | Geez, clearly they mean like a CDL
        
             | nkrisc wrote:
             | The problem is it's obviously too easy to get one and keep
             | one, based on some of the drivers I see on the road.
        
               | gambiting wrote:
               | That sounds like a legislative problem where you live,
               | sure it can be fixed by overbearing technology but we
               | already have all the tools we need to fix it, we are just
               | choosing not to for some reason.
        
             | kelnos wrote:
            | No, you need an entirely common, unspecial license to drive
            | on a public road right now.
        
           | FireBeyond wrote:
           | And yet Tesla's FSD never passed a driving test.
        
             | grecy wrote:
             | And it can't legally drive a vehicle
        
         | Arainach wrote:
         | This is about lying to the public and stoking false
         | expectations for years.
         | 
         | If it's "fully self driving" Tesla should be liable for when
         | its vehicles kill people. If it's not fully self driving and
         | Tesla keeps using that name in all its marketing, regardless of
         | any fine print, then Tesla should be liable for people acting
         | as though their cars could FULLY self drive and be sued
         | accordingly.
         | 
         | You don't get to lie just because you're allegedly safer than a
         | human.
        
           | jeremyjh wrote:
           | I think this is the answer: the company takes on full
           | liability. If a Tesla is Fully Self Driving then Tesla is
           | driving it. The insurance market will ensure that dodgy
           | software/hardware developers exit the industry.
        
             | blagie wrote:
             | This is very much what I would like to see.
             | 
             | The price of insurance is baked into the price of a car. If
             | the car is as safe as I am, I pay the same price in the
             | end. If it's safer, I pay less.
             | 
             | From my perspective:
             | 
             | 1) I would *much* rather have Honda kill someone than
             | myself. If I killed someone, the psychological impact on
             | myself would be horrible. In the city I live in, I dread
             | ageing; as my reflexes get slower, I'm more and more likely
             | to kill someone.
             | 
             | 2) As a pedestrian, most of the risk seems to come from
             | outliers -- people who drive hyper-aggressively. Replacing
             | all cars with a median driver would make me much safer (and
             | traffic, much more predictable).
             | 
             | If we want safer cars, we can simply raise insurance
             | payouts, and vice-versa. The market works everything else
             | out.
             | 
             | And either way, my stress levels go way down, whether in
             | a car, on a bike, or on foot.
        
               | gambiting wrote:
               | >> I would _much_ rather have Honda kill someone than
               | myself. If I killed someone, the psychological impact on
               | myself would be horrible.
               | 
               | Except that we know that it doesn't work like that. Train
               | drivers are ridden with extreme guilt every time "their"
               | train runs over someone, even though they know that
               | logically there was absolutely nothing they could have
               | done to prevent it. Don't see why it would be any
               | different here.
               | 
               | >>If we want safer cars, we can simply raise insurance
               | payouts, and vice-versa
               | 
               | In what way? In the EU the minimum covered amount for
               | any car insurance is 5 million euro, and it has had no
               | impact on the safety of cars. And of course the recent
               | increase in payouts (due to the general increase in
               | labour and parts costs) has led to a dramatic increase
               | in insurance premiums, which in turn has led to a
               | drastic increase in the number of people driving without
               | insurance. So now that needs increased policing and
               | enforcement, which we pay for through taxes. So no,
               | market doesn't "work everything out".
        
               | blagie wrote:
               | > Except that we know that it doesn't work like that.
               | Train drivers are ridden with extreme guilt every time
               | "their" train runs over someone, even though they know
               | that logically there was absolutely nothing they could
               | have done to prevent it. Don't see why it would be any
               | different here.
               | 
               | It's not binary. Someone dying -- even with no
               | involvement -- can be traumatic. I've been in a position
               | where I could have taken actions to prevent someone from
               | being harmed. Rationally not my fault, but in retrospect,
               | I can describe the exact set of steps needed to prevent
               | it. I feel guilty about it, even though I know rationally
               | it's not my fault (there's no way I could have known
               | ahead of time).
               | 
               | However, it's a manageable guilt. I don't think it would
               | be if I knew rationally that it was my fault.
               | 
               | > So no, market doesn't "work everything out".
               | 
               | Whether or not a market works things out depends on
               | issues like transparency and information. Parties will
               | offload costs wherever possible. In the model you gave,
               | there is no direct cost to a car maker making less safe
               | cars or vice-versa. It assumes the car buyer will even
               | look at insurance premiums, and a whole chain of events
               | beyond that.
               | 
               | That's different if it's the same party making cars,
               | paying money, and doing so at scale.
               | 
               | If Tesla pays for everyone harmed in any accident
               | involving a Tesla, then Tesla has a very, very strong
               | incentive to make cars as safe as whatever optimum is
               | set by the damages. The scale is big enough -- millions
               | of cars and billions of dollars -- that Tesla can
               | afford to hire actuaries and a team of analysts to make
               | sure they're at the optimum.
               | 
               | As an individual car buyer, I have no chance of doing
               | that.
               | 
               | Ergo, in one case, the market will work it out. In the
               | other, it won't.
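               | 
               | To make that concrete, here's a toy expected-liability
               | calculation (all numbers invented) showing the incentive
               | a manufacturer faces once it pays the damages itself:
               | 
               |   # hypothetical fleet-scale liability math
               |   fleet_miles = 1e10        # miles driven per year
               |   crash_rate = 3e-7         # crashes per mile
               |   avg_payout = 250_000      # average damages per crash
               |   
               |   liability = fleet_miles * crash_rate * avg_payout
               |   print(liability)          # $750M/year; cutting the
               |                             # rate 10% is worth $75M/year
               | 
               | At that scale, safety engineering becomes a line item
               | the actuaries can price directly.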
        
               | kelnos wrote:
               | Being in a vehicle that collides with someone and kills
               | them is going to be traumatic regardless of whether or
               | not you're driving.
               | 
               | But it's almost certainly going to be more traumatic and
               | more guilt-inducing if you _are_ driving.
               | 
               | If I only had two choices, I would much rather my car
               | kill someone than I kill someone with my car. I'm gonna
               | feel bad about it either way, but one is much worse than
               | the other.
        
             | tensor wrote:
             | I'm for this as long as the company also takes on liability
             | for human errors they could prevent. I'd want to see cars
             | enforcing speed limits and similar things. Humans are too
             | dangerous to drive.
        
             | stormfather wrote:
             | That would be good because it would incentivize all FSD
             | cars to communicate with each other. Imagine how safe
             | driving would be if they were all broadcasting their
             | speed and position to each other. And each vehicle
             | sending/receiving would get cheaper insurance.
        
               | Terr_ wrote:
               | It goes kinda dystopic if access to the network becomes
               | a monopolistic barrier.
        
               | tmtvl wrote:
               | Not to mention the possibility of requiring pedestrians
               | and cyclists to also be connected to the same network.
               | Anyone with access to the automotive network could track
               | any pedestrian who passes by the vicinity of a road.
        
               | Terr_ wrote:
               | It's hard to think of a good blend of traffic safety,
               | privacy guarantees, and resistance to bad actors.
               | Having/avoiding persistent identification is certainly a
               | factor.
               | 
               | Perhaps one approach would be to declare that automated
               | systems are responsible for determining the
               | position/speed of everything around them using regular
               | sensors, but may elect to take hints from anonymous
               | "notice me" marks or beacons.
        
             | KoolKat23 wrote:
             | That's just reducing the value of a life to a number. It
             | can be gamed into a situation where it's simply more
             | profitable to mow people down.
             | 
             | Setting an acceptable number/financial cost is also just
             | an indirect, approximated way of implementing a more
             | direct/scientific regulation. Not everything needs to be
             | reduced to money.
        
               | jeremyjh wrote:
               | There is no way to game it successfully; if your
               | insurance costs are much higher than your competitors'
               | you will lose in the long run. That doesn't mean there
               | can't be other penalties when there is gross
               | negligence.
        
               | KoolKat23 wrote:
               | Who said management and shareholders are in it for the
               | long run? There are plenty of examples of businesses
               | run purely for the short term: bonuses and stock pumps.
        
           | SoftTalker wrote:
           | It's your car, so ultimately the liability is yours. That's
           | why you have insurance. If Tesla retains ownership, and just
           | lets you drive it, then they have (more) liability.
        
             | kelnos wrote:
             | > _It's your car, so ultimately the liability is yours_
             | 
             | No, that's not how it works. The driver and the driver's
             | insurer are on the hook when something bad happens. The
             | owner is not, except when the owner is also the one
             | driving, or if the owner has been negligent with
             | maintenance, and the crash was caused by mechanical failure
             | related to that negligence.
             | 
             | If someone else is driving my car and I'm a passenger, and
             | they hurt someone with it, the driver is liable, not me. If
             | that "someone else" is a piece of software, and that piece
             | of software has been licensed/certified/whatever to drive a
             | car, why should I be liable for its failures? That piece of
             | software needs to be insured, certainly. It doesn't matter
             | if I'm required to insure it, or if the manufacturer is
             | required to insure it.
             | 
             | Tesla FSD doesn't fit into this scenario because it's not
             | the driver. You are still the driver when you engage FSD,
             | because despite its name, FSD is not capable of filling
             | that role.
        
           | mrpippy wrote:
           | Tesla officially renamed it to "Full Self Driving
           | (supervised)" a few months ago; previously it was "Full
           | Self Driving (beta)".
           | 
           | Both names are ridiculous, for different reasons. Nothing
           | called a "beta" should be tested on public roads without a
           | trained employee supervising it (i.e. being paid to pay
           | attention). And of course it was not "full", it always
           | required supervision.
           | 
           | And "Full Self Driving (supervised)" is an absurd oxymoron.
           | Given the deaths and crashes that we've already seen, I'm
           | skeptical of the entire concept of a system that works 98% of
           | the time, but also needs to be closely supervised for the 2%
           | of the time when it tries to kill you or others (with no
           | alerts).
           | 
           | It's an abdication of duty that NHTSA has let this continue
           | for so long. They've picked up the pace recently, and I
           | wouldn't be surprised if they come down hard on Tesla
           | (unless Trump wins, in which case Elon will be put in
           | charge of NHTSA, the SEC, and the FAA).
        
             | ilyagr wrote:
             | I hope they soon rename it to "Fully Supervised Driving".
        
           | awongh wrote:
           | Also force other auto makers to be liable when their over-
           | tall SUVs cause more deaths than sedan-type cars.
        
         | gambiting wrote:
         | >> How about a quarter? Or a tenth?
         | 
         | The answer is zero. An airplane autopilot has increased the
         | overall safety of airplanes by several orders of magnitude
         | compared to human pilots, but literally no errors in its
         | operation are tolerated, whether they are deadly or not. The
         | exact same standard has to apply to cars or any automated
         | machine for that matter. If there is any issue discovered in
         | any car with this tech then it should be disabled worldwide
         | until the root cause is found and eliminated.
         | 
         | >> It's all too easy to see a potential future where many
         | people die needlessly because technology that could save lives
         | is regulated into a greatly reduced role.
         | 
         | I really don't like this argument, because we could already
         | prevent literally all automotive deaths tomorrow through
         | existing technology and legislation and yet we are choosing not
         | to do this for economic and social reasons.
        
           | esaym wrote:
           | You can't equate airplane safety with automotive safety. I
           | worked at an aircraft repair facility doing government
           | contracts for a number of years. In one instance, somebody
           | lost the toilet paper holder for one of the aircraft. This
           | holder was simply a piece of 10 gauge wire that was bent in a
           | way to hold it and supported by wire clamps screwed to the
           | wall. Making a new one was easy but since it was a new part
           | going on the aircraft we had to send it to a lab to be
           | certified to hold a roll of toilet paper to 9 g's. In case
           | the airplane crashed you wouldn't want a roll of toilet paper
           | flying around I guess. And that cost $1,200.
        
             | gambiting wrote:
             | No, I'm pretty sure I can in this regard - any automotive
             | "autopilot" has to be held to the same standard. It's
             | either zero accidents or nothing.
        
               | murderfs wrote:
               | This only works for aerospace because everything and
               | everyone is held to that standard. It's stupid to hold
               | automotive autopilots to the same standard as a plane's
               | autopilot when a third of fatalities in cars are caused
               | by the pilots being drunk.
        
               | kelnos wrote:
               | I don't think that's a useful argument.
               | 
               | I think we should start allowing autonomous driving when
               | the "driver" is at least as safe as the median driver
               | when the software is unsupervised. (Teslas may or may not
               | be that safe when supervised, but they absolutely are not
               | when unsupervised.)
               | 
               | But once we get to that point, we should absolutely
               | ratchet up those standards so that over time driving
               | becomes just as safe as flying. Safer, if possible.
               | 
               | > _It 's stupid to hold automotive autopilots to the same
               | standard as a plane's autopilot when a third of
               | fatalities in cars are caused by the pilots being drunk._
               | 
               | That's a weird argument, because both pilots and drivers
               | get thrown in jail if they fly/drive drunk. The standard
               | is the same.
        
           | travem wrote:
           | > The answer is zero
           | 
           | If autopilot is 10x safer then preventing its use would lead
           | to more preventable deaths and injuries than allowing it.
           | 
           | I agree that it should be regulated and incidents
           | thoroughly investigated; however, letting perfect be the
           | enemy of good leads to stagnation, a lack of practical
           | improvement, and greater injury to the population as a
           | whole.
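           | 
           | A rough back-of-envelope sketch of the stakes, using round
           | US-scale figures (illustrative only):
           | 
           |   human_rate = 1.3       # fatalities per 100M vehicle-miles
           |   av_rate = 0.13         # the hypothetical 10x-safer system
           |   annual_miles = 3.2e12  # roughly US vehicle-miles per year
           |   
           |   saved = (human_rate - av_rate) * annual_miles / 1e8
           |   print(saved)           # ~37,000 lives/year if every mile
           |                          # switched over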
        
             | gambiting wrote:
             | >>If autopilot is 10x safer then preventing its use would
             | lead to more preventable deaths and injuries than allowing
             | it.
             | 
             | And yet whenever there is a problem with any plane
             | autopilot it's preemptively disabled fleet wide and pilots
             | have to fly manually even though we absolutely beyond a
             | shadow of a doubt know that it's less safe.
             | 
             | If an automated system makes a wrong decision and it
             | contributes to harm/death then it cannot be allowed on
             | public roads full stop, no matter how many lives it saves
             | otherwise.
        
               | exe34 wrote:
               | > And yet whenever there is a problem with any plane
               | autopilot it's preemptively disabled fleet wide and
               | pilots have to fly manually even though we absolutely
               | beyond a shadow of a doubt know that it's less safe.
               | 
               | just because we do something dumb in one scenario isn't a
               | very persuasive reason to do the same in another.
               | 
               | > then it cannot be allowed on public roads full stop, no
               | matter how many lives it saves otherwise.
               | 
               | ambulances sometimes get into accidents - we should ban
               | all ambulances, no matter how many lives they save
               | otherwise.
        
               | CrimsonRain wrote:
               | So your only concern is that when something goes wrong,
               | you need someone to blame. Who cares about lives saved?
               | Vaccines can cause adverse effects. Let's ban all of
               | them.
               | 
               | If people like you were in charge of anything, we'd still
               | be hitting rocks for fire in caves.
        
               | Aloisius wrote:
               | Depends on what one considers a "problem." As long as
               | the autopilot's failure conditions and mitigation
               | procedures are documented, the burden is largely
               | shifted to the operator.
               | 
               | Autopilot didn't prevent slamming into a mountain? Not a
               | problem as long as it wasn't designed to.
               | 
               | Crashed on landing? No problem, the manual says not to
               | operate it below 500 feet.
               | 
               | Runaway pitch trim? The manual says you must constantly
               | be monitoring the autopilot and disengage it when it's
               | not operating as expected and to pull the autopilot and
               | pitch trim circuit breakers. Clearly insufficient
               | operator training is to blame.
        
             | penjelly wrote:
             | I'd challenge the legitimacy of the claim that it's 10x
             | safer, or even safer at all. The safety data provided
             | isn't compelling to me; it can be gamed or misrepresented
             | in various ways, as pointed out by others.
        
               | yCombLinks wrote:
               | That claim wasn't made. It was a hypothetical: what if
               | it were 10x safer? Would people tolerate it then?
        
               | penjelly wrote:
               | Yes, people would, if we had a reliable metric for the
               | safety of these systems besides engaged/disengaged. We
               | don't, and 10x safer by the current metrics is not
               | satisfactory.
        
           | V99 wrote:
           | Airplane autopilots follow a lateral & sometimes vertical
           | path through the sky prescribed by the pilot(s). They are
           | good at doing that. This does increase safety, because it
           | frees up the pilot(s) from having to carefully maintain a
           | straight 3d line through the sky for hours at a time.
           | 
           | But they do not listen to ATC. They do not know where other
           | planes are. They do not keep themselves away from other
           | planes. Or the ground. Or a flock of birds. They do not
           | handle emergencies. They make only the most basic control-
           | loop decisions about the control surface and power (if even
           | autothrottle equipped, otherwise that's still the meatbag's
           | job) changes needed to follow the magenta line drawn by the
           | pilot given a very small set of input data (position,
           | airspeed, current control positions, etc).
           | 
           | The next nearest airplane is typically at least 3 miles
           | laterally and/or 500' vertically away, because the errors
           | allowed with all these components are measured in hundreds of
           | feet.
           | 
           | None of this is even remotely comparable to a car using a
           | dozen cameras (or lidar) to make real-time decisions to drive
           | itself around imperfect public streets full of erratic
           | drivers and other pedestrians a few feet away.
           | 
           | What it is a lot like is what Tesla actually sells (despite
           | the marketing name). Yes, it's "flying" the plane, but
           | you're still responsible for making sure it's doing the
           | right thing, the right way, and not going to hit anything
           | or kill anybody.
        
             | kelnos wrote:
             | Thank you for this. The number of people conflating Tesla's
             | Autopilot with an airliner's autopilot, and expecting that
             | use and policies and situations surrounding the two should
             | be directly comparable, is staggering. You'd think people
             | would be better at critical thinking with this, but... here
             | we are.
        
               | Animats wrote:
               | Ah. Few people realize how dumb aircraft autopilots
               | really are. Even the fanciest ones just follow a series
               | of waypoints.
               | 
               | There is one exception - Garmin Safe Return. That's
               | strictly an emergency system. If it activates, the plane
                | is squawking emergency to ATC and demanding that
               | airspace and a runway be cleared for it.[1] This has been
               | available since 2019 and does not seem to have yet been
               | activated in an emergency.
               | 
               | [1] https://youtu.be/PiGkzgfR_c0?t=87
        
               | V99 wrote:
               | It does do that and it's pretty neat, if you have one of
               | the very few modern turboprops or small jets that have
               | G3000s & auto throttle to support it.
               | 
               | Airliners don't have this, but they have a 2nd pilot. A
               | real-world activation needs a single-pilot operation
               | where they're incapacitated, in one of the maybe few
               | hundred nice-but-not-too-nice private planes it's
               | equipped in, and a passenger is there to push it.
               | 
               | But this is all still largely using the current magenta
               | line AP system, and that's how it's verifiable and
               | certifiable. There are still no cameras or vision or AI
               | deciding things; a few new bits of relatively simple
               | standalone steps are combined to get a good result.
               | 
               | - Pick a new magenta line to an airport (like pressing
               | NRST Enter Enter if you have filtering set to only
               | suitable fields)
               | 
               | - Pick a vertical path that intersects with the runway
               | (Load a straight-in visual approach from the database)
               | 
               | - Ensure that line doesn't hit anything in the
               | terrain/obstacle database. (Terrain warning system has
               | all this info, not sure how it changes the plan if there
               | is a conflict. This is probably the hardest part, with an
               | actual decision to make).
               | 
               | - Look up the tower frequency in DB and broadcast
               | messages. As you said it's telling and not
               | asking/listening.
               | 
               | - Other humans know to get out of the way because this IS
               | what's going to happen. This is normal, an emergency
               | aircraft gets whatever it wants.
               | 
               | - Standard AP and autothrottle flies the newly prescribed
               | path.
               | 
               | - The radio altimeter lets it know when to flare.
               | 
               | - Wheel weight sensors let it know to apply the brakes.
               | 
               | - The airport helps people out and tows the plane away,
               | because it doesn't know how to taxi.
               | 
               | There's also "auto glide" on the more accessible G3x
               | suite for planes that aren't necessarily $3m+. That will
               | do most of the same stuff and get you almost, but not all
               | the way, to the ground in front of a runway
               | automatically.
        
               | Animats wrote:
               | > and a passenger is there to push it.
               | 
               | I think it will also activate if the pilot is
               | unconscious, for solo flights. It has something like a
               | driver alertness detection system that will alarm if the
               | pilot does nothing for too long. The pilot can reset the
               | alarm, but if they do nothing, the auto return system
               | takes over and lands the plane someplace.
        
             | josephcsible wrote:
             | > They do not know where other planes are.
             | 
             | Yes they do. It's called TCAS.
             | 
             | > Or the ground.
             | 
             | Yes they do. It's called Auto-GCAS.
        
               | V99 wrote:
               | Yes those are optional systems that exist, but they are
               | unrelated to the autopilot (in at least the vast majority
               | of avionics).
               | 
               | They are warning systems that humans respond to. For a
               | TCAS RA the first thing you're doing is disengaging the
               | autopilot.
               | 
               | If you tell the autopilot to fly straight into the path
               | of a mountain, it will happily comply and kill you while
               | the ground proximity warnings blare.
               | 
               | Humans make the decisions in planes. Autopilots are a
               | useful but very basic tool, much more akin to cruise
               | control in a 1998 Civic than a self-driving
               | Tesla/Waymo/etc.
        
           | Aloisius wrote:
           | Autopilots aren't held to a zero error standard let alone a
           | zero accident standard.
        
           | peterdsharpe wrote:
           | > literally no errors in its operation are tolerated
           | 
           | Aircraft designer here, this is not true. We typically
           | certify to <1 catastrophic failure per 1e9 flight hours. Not
           | zero.
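           | 
           | For a sense of what that budget means at fleet scale
           | (rough, illustrative numbers):
           | 
           |   rate = 1e-9         # catastrophic failures per flight hour
           |   fleet_hours = 1e8   # order of magnitude for a large
           |                       # fleet's annual flight hours
           |   
           |   print(rate * fleet_hours)  # 0.1 expected events per year,
           |                              # i.e. roughly one per decade
           | 
           | The tolerated rate is tiny, but it is not zero.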
        
           | AlchemistCamp wrote:
           | > _"The answer is zero..."_
           | 
           | > _"If there is any issue discovered in any car with this
           | tech then it should be disabled worldwide until the root
           | cause is found and eliminated."_
           | 
           | This would literally cost millions of needless deaths in a
           | situation where AI drivers had 1/10th the accident injury
           | rate of human drivers.
        
         | croes wrote:
         | > It's clear that having half the casualty rate per distance
         | traveled of the median human driver isn't acceptable.
         | 
         | Were the Teslas driving in all weather conditions and
         | locations like humans do, or are the numbers cherry-picked
         | from easy travelling conditions?
        
         | jakelazaroff wrote:
         | I think we should not be satisfied with merely "better than a
         | human". Flying is so safe precisely because we treat any
         | casualty as unacceptable. We should aspire to make automobiles
         | _at least_ that safe.
        
           | aantix wrote:
           | Before FSD is allowed on public roads?
           | 
           | It's a net positive, saving lives right now.
        
           | cubefox wrote:
           | > I think we should not be satisfied with merely "better than
           | a human".
           | 
           | The question is whether you want to outlaw automatic driving
           | just because the system is, say, "only" 50% safer than us.
        
           | kelnos wrote:
           | I don't think the question was what we should be satisfied
           | with or what we should aspire to. I absolutely agree with you
           | that we should strive to make autonomous driving as safe as
           | airline travel.
           | 
           | But the question was when should we allow autonomous driving
           | on our public roads. And I think "when it's at least as safe
           | as the median human driver" is a reasonable threshold.
           | 
           | (The thing about Tesla FSD is that it -- unsupervised --
           | would probably fall super short of that metric. FSD needs to
           | be supervised to be safer than the median human driver,
            | assuming that's even currently the case, and not every driver
           | is going to be equally good at supervising it.)
        
           | josephcsible wrote:
           | Aspire to, yes. But if we say "we're going to ban FSD until
           | it's perfect, even though it already saves lives relative to
           | the average human driver", you're making automobiles _less_
           | safe.
        
         | akira2501 wrote:
         | > traveled of the median human driver isn't acceptable.
         | 
         | It's completely acceptable. In fact the numbers are lower
         | than they have ever been since we started driving.
         | 
         | > Accidents caused by human drivers
         | 
         | Are there any other types of drivers?
         | 
         | > are one of the largest causes of injury and death
         | 
         | More than half the fatalities on the road are actually caused
         | by the use of drugs and alcohol. The statistics are very clear
         | on this. Impaired people cannot drive well. Non-impaired
         | people drive orders of magnitude better.
         | 
         | > technology that could save lives
         | 
         | There is absolutely zero evidence this is true. Everyone is
         | basing this off of a total misunderstanding of the source of
         | fatalities and a willful misapprehension of the technology.
        
           | blargey wrote:
           | > Non impaired people drive orders of magnitude better.
           | 
           | That raises the question - how many _impaired_ driver-miles
           | are being baked into the collision statistics for  "median
           | human" driver-miles? Shouldn't we demand non-impaired driving
           | as the standard for automation, rather than "averaged with
           | drunk / phone-fiddling /senile" driving? We don't give people
           | N-mile allowances for drunk driving based on the size of the
           | drunk driver population, after all.
        
             | akira2501 wrote:
             | Motorcycles account for a further 15% of all fatalities in
             | a typical year. Weather is often a factor. Road design is
             | sometimes a factor; I remember several rollover crashes
             | that ended in a body of water with no one in the vehicle
             | surviving. Likewise, ejections due to lack of seatbelt
             | use are a noticeable factor in fatalities.
             | 
             | Once you dig into the data you see that almost every crash,
             | at this point in history, is really a mini-story detailing
             | the confluence of several factors that turned a basic
             | accident into something fatal.
             | 
             | Also, and I only saw this once, but if you literally have a
             | heart attack behind the wheel, you are technically a
             | roadway fatality. The driver was 99. He just died while
             | sitting in slow moving traffic.
             | 
             | Which brings me to my final point which is the rear seats
             | in automobiles are less safe than the front seats. This is
             | true for almost every vehicle on the road. You see _a lot_
             | of accidents where two 40 to 50 year old passengers are up
             | front and two 70 to 80 year old passengers are in back. The
             | ones up front survive. One or both passengers in the back
             | typically die.
        
             | kelnos wrote:
             | No, that makes no sense, because we can't ensure that human
             | drivers aren't impaired. We test and compare against the
             | reality, not the ideal we'd prefer.
        
               | akira2501 wrote:
               | We can sample rate of impairment. We do this quite often
               | actually. It turns out the rate depends on the time of
               | day.
        
           | kelnos wrote:
           | > _Are there any other types of drivers [than human
           | drivers]?_
           | 
           | Waymo says yes, there are.
        
         | aithrowawaycomm wrote:
         | Many people don't (and shouldn't) take the "half the casualty
         | rate" at face value. My biggest concern is that Waymo and Tesla
         | are juking the stats to make self-driving cars seem safer than
         | they really are. I believe this is largely an unintentional
          | consequence of bad actuarial science coming from bad qualitative
         | statistics; the worst kind of lying with numbers is lying to
         | yourself.
         | 
         | The biggest gap in these studies: I have yet to see a
         | comparison with human drivers that filters out DUIs, reckless
         | speeding, or mechanical failures. Without doing this it is
         | simply not a fair comparison, because:
         | 
         | 1) Self-driving cars won't end drunk driving unless it's made
         | mandatory by outlawing manual driving or ignition is tied to a
         | breathalyzer. Many people will continue to make the dumb
         | decision to drive themselves home because they are drunk and
         | driving is fun. This needs regulation, not technology. And DUIs
         | need to be filtered from the crash statistics when comparing
         | with Waymo.
         | 
         | 2) A self-driving car which speeds and runs red lights might
         | well be more dangerous than a similar human, but the data says
         | nothing about this since Waymo is currently on their best
         | behavior. Yet Tesla's own behavior and customers prove that
         | there is demand for reckless self-driving cars, and
         | manufacturers will meet the demand unless the law steps in.
         | Imagine a Waymo competitor that promises Uber-level ETAs for
         | people in a hurry. Technology could in theory solve this but in
         | practice the market could make things worse for several decades
         | until the next research breakthrough. Human accidents coming
         | from distraction are a fair comparison to Waymo, but speeding
         | or aggressiveness should be filtered out. The difficulty of
         | doing so is one of the many reasons I am so skeptical of these
         | stats.
         | 
         | 3) Mechanical failures are a hornets' nest of ML edge cases
         | that might work in the lab but fail miserably on the road.
         | Currently it's not a big deal because the cars are shiny and
         | new. Eventually we'll have self-driving clunkers owned by
         | drivers who don't want to pay for the maintenance.
         | 
         | And that's not even mentioning that Waymos are not self-
         | driving: they rely on close remote oversight to guide AI
         | through the many billions of common-sense problems that
         | computers will not be able to solve for at least the next
         | decade, probably much longer. True self-driving cars will
         | continue to make inexplicably stupid decisions: these
         | machines are still
         | much dumber than lizards. Stories like "the Tesla slammed into
         | an overturned tractor trailer because the AI wasn't trained on
         | overturned trucks" are a huge problem and society will not let
         | Tesla try to launder it away with statistics.
         | 
         | Self-driving cars might end up saving lives. But would they
         | save more lives than adding mandatory breathalyzers and GPS-
         | based speed limits? And if market competition overtakes
         | business ethics, would they cost more lives than they save? The
         | stats say very little about this.
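         | 
         | As a sketch of the filtering I mean (all numbers invented),
         | the blended human baseline can look far worse than the sober,
         | attentive driver an AV actually replaces:
         | 
         |   miles   = {"impaired": 1.0e9, "attentive": 9.0e9}
         |   crashes = {"impaired": 60_000, "attentive": 40_000}
         |   
         |   blended  = sum(crashes.values()) / sum(miles.values())
         |   filtered = crashes["attentive"] / miles["attentive"]
         |   
         |   print(blended * 1e8)   # 1000 crashes per 100M miles
         |   print(filtered * 1e8)  # ~444: the fairer bar to clear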
        
           | kelnos wrote:
           | > _My biggest concern is that Waymo and Tesla are juking the
           | stats to make self-driving cars seem safer than they really
           | are_
           | 
           | Even intentional juking aside, you can't really compare the
           | two.
           | 
           | Waymo cars drive completely autonomously, without a
           | supervising driver in the car. If it does something unsafe,
           | there's no one there to correct it, and it may get into a
           | crash, in the same way a human driver doing that same unsafe
           | thing might.
           | 
           | With Tesla FSD, we have no idea how good it really is. We
           | know that a human is supervising it, and despite all the
           | reports we see of people doing super irresponsible things
           | while "driving" a Tesla (like taking a nap), I imagine most
           | Tesla FSD users are actually attentively supervising for the
           | most part. If all FSD users stopped supervising and started
           | taking naps, I suspect the crash rate and fatality rate would
           | start looking like the rate for the worst drivers on the
           | road... or even worse than that.
           | 
           | So it's not that they're juking their stats (although they
           | may be), it's that they don't actually have all the stats
           | that matter. Waymo has those stats, because their
           | trained human test drivers were reporting when the car did
           | something unsafe and they had to take over. Tesla FSD users
           | don't report when they have to do that. The data is just not
           | there.
        
         | smitty1110 wrote:
         | There are two things going on here with the average person
         | that you need to overcome: that when Tesla dodges
         | responsibility all anyone sees is a liar, and that people
         | amalgamate all the FSD crashes and treat the system like a
         | dangerous local driver that nobody can get off the road.
         | 
         | Tesla markets FSD like it's a silver bullet, and the name is
         | truly misleading. The fine print says you need to pay
         | attention and all that. But again, people read "Full Self
         | Driving" and all
         | the marketing copy and think the system is assuming
         | responsibility for the outcomes. Then a crash happens, Tesla
         | throws the driver under the bus, and everyone gets a bit more
         | skeptical of the system. Plus, doing that to a person rubs
         | people the wrong way, and is in some respects a barrier to
         | sales.
         | 
         | Which leads to the other point: People are tallying up all the
         | accidents and treating the system like a person, and wondering
         | why this dangerous driver is still on the road. Most accidents
         | with a dead pedestrian start with someone doing something
         | stupid, at which point they assume all responsibility,
         | legally speaking.
         | Drunk, speeding, etc. Normal drivers in poor conditions slow
         | down and drive carefully. People see this accident, and treat
         | FSD like a serial drunk driver. It's to the point that I know
         | people that openly say they treat Teslas on roads like they're
         | erratic drivers just for existing.
         | 
         | Until Elon figures out how to fix his perception problem, the
         | calls for investigations and to keep his robotaxis off the
         | road will only grow.
        
         | danans wrote:
         | > The interesting question is how good self-driving has to be
         | before people tolerate it.
         | 
         | It's pretty simple: as good as it can be given available
         | technologies and techniques, without sacrificing safety for
         | cost or style.
         | 
         | With AVs, function and safety should override concerns of
         | style, cost, and marketing. If that doesn't work with your
         | business model, well, tough luck.
         | 
         | Airplanes are far safer than cars yet we subject their
         | manufacturers to rigorous standards, or seemingly did until
         | recently, as the 737 MAX saga has revealed. Even so, the
         | rigor is very high compared to road vehicles.
         | 
         | And AVs do have to be way better than people at driving because
         | they are machines that have no sense of human judgement, though
         | they operate in a human physical context.
         | 
         | Machines run by corporations are less accountable than human
         | drivers, not least because of the wealth and legal
         | armies of those corporations who may have interests other than
         | making the safest possible AV.
        
           | mavhc wrote:
           | Surely the number of cars that can do it, and the price,
           | also matter, unless you're going to ban private cars.
        
             | danans wrote:
             | > Surely the number of cars that can do it, and the
             | price, also matter, unless you're going to ban private
             | cars
             | 
             | Indeed, like this: the more cars sold that claim fully
             | autonomous capability, and the more affordable they get,
             | the higher the standards should be compared to their _AV_
             | predecessors, even if they have long eclipsed human
             | drivers' safety record.
             | 
             | If this is unpalatable, then let's assign 100% liability
             | with steep monetary penalties to the AV manufacturer for
             | any crash that happens under autonomous driving mode.
        
         | Terr_ wrote:
         | > It's clear that having half the casualty rate per distance
         | traveled of the median human driver isn't acceptable.
         | 
         | Even if we optimistically assume no "gotchas" in the
         | statistics [0], distilling performance down to a
         | casualty/injury/accident rate can still be dangerously
         | reductive when the systems have a different _distribution_
         | of failure modes which do/don't mesh with our other systems
         | and defenses.
         | 
         | A quick thought experiment to prove the point: Imagine a system
         | which compared to human drivers had only half the rate of
         | accidents... But many of those are because it unpredictably
         | decides to jump the sidewalk curb and kill a targeted
         | pedestrian.
         | 
         | The raw numbers are encouraging, but they represent a risk
         | profile that clashes horribly with our other systems of road
         | design, car design, and what incidents humans are expecting
         | and capable of preventing or recovering from.
         | 
         | [0] Ex: Automation is only being used on certain _subsets_ of
         | all travel which are the  "easier" miles or circumstances than
         | the whole gamut a human would handle.
        
           | kelnos wrote:
           | Re: gotchas: an even easier one is that the Tesla FSD
           | statistics don't include when the car does something unsafe
           | and the driver intervenes and takes control, averting a
           | crash.
           | 
           | How often does that happen? We have no idea. Tesla can
           | certainly tell when a driver intervenes, but they can't count
           | every occurrence as safety-related, because a driver might
           | take control for all sorts of reasons.
           | 
           | This is why we can make stronger statements about the safety
           | of Waymo. Their software was only tested by people trained
           | and paid to test it, who were also recording every time they
           | had to intervene because of safety, even if there was no
           | crash. That's a metric they could track and improve.
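           | 
           | A minimal sketch of that metric (field names and numbers
           | are hypothetical):
           | 
           |   def miles_per_safety_takeover(miles, takeovers):
           |       # count only takeovers flagged as safety-related
           |       events = sum(1 for t in takeovers
           |                    if t["safety_related"])
           |       return miles / max(events, 1)
           |   
           |   log = [{"reason": "unprotected left",
           |           "safety_related": True},
           |          {"reason": "shift change",
           |           "safety_related": False}]
           |   print(miles_per_safety_takeover(5_000, log))  # 5000.0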
        
         | __loam wrote:
         | The problem is that Tesla is way behind the industry standards
         | here and it's misrepresenting how good their tech is.
        
         | alkonaut wrote:
         | > How about a quarter? Or a tenth?
         | 
         | Probably closer to the latter. The "skin in the game"
         | (physically) argument makes me more willing to accept drunk
         | drivers than greedy manufacturers when it comes to making
         | mistakes or being negligent.
        
         | sebzim4500 wrote:
         | >It's clear that having half the casualty rate per distance
         | traveled of the median human driver isn't acceptable.
         | 
         | Are you sure? Right now FSD is active with no one actually
         | knowing its casualty rate, and for the most part the only
         | people upset about it are terminally online people on Twitter
         | or luddites on HN.
        
         | moogly wrote:
         | > Accidents caused by human drivers are one of the largest
         | causes of injury and death
         | 
         | In some parts of the world. Perhaps some countries should
         | look deeper into why that is, and why self-driving cars
         | might not be the No. 1 answer for reducing traffic
         | accidents.
        
         | kelnos wrote:
         | If Tesla's FSD was actually self-driving, maybe half the
         | casualty rate of the median human driver would be fine.
         | 
         | But it's not. It requires constant supervision, and drivers
         | sometimes have to take control (without the system disengaging
         | on its own) in order to stop it from doing something unsafe.
         | 
         | If we had stats for what the casualty rate would be if every
         | driver using it never took control back unless the car signaled
         | it was going to disengage, I suspect that casualty rate would
         | be much worse than the median human driver. But we don't have
         | those stats, so we shouldn't trust it until we do.
         | 
         | This is why Waymo is safe and tolerated and Tesla FSD is not.
         | Waymo test drivers record every time they have to take over
         | control of the car for safety reasons. That was a metric they
         | had to track and improve, or it would have been impossible to
         | offer people rides without someone in the driver's seat.
        
         | fma wrote:
         | Flying is safer than driving but Boeing isn't getting a free
         | pass on quality issues. Why would Tesla?
        
         | jillesvangurp wrote:
         | The key here is insurers. Because they pick up the bill when
         | things go wrong. As soon as self driving becomes clearly better
         | than humans, they'll be insisting we stop risking their money
         | by driving ourselves whenever that is feasible. And they'll do
         | that with price incentives. They'll happily insure you if you
         | want to drive yourself. But you'll pay a premium. And a
         | discount if you are happy to let the car do the driving.
         | 
         | Eventually, manual driving should come with a lot more
         | scrutiny. Because once it becomes a choice rather than an
         | economic necessity, other people on the road will want to be
         | sure that you are not needlessly endangering them. So, stricter
         | requirements for getting a drivers license with more training
         | and fitness/health requirements. This too will be driven by
         | insurers. They'll want to make sure you are fit to drive.
         | 
         | And of course when manual driving people get into trouble,
         | taking away their driving license is always a possibility. The
         | main argument against doing that right now is that a lot of
         | people depend economically on being able to drive. But if that
         | argument goes away, there's no reason to not be a lot stricter
         | for e.g. driving under influence, or routinely breaking laws
         | for speeding and other traffic violations. Think higher fines
         | and driving license suspensions.
        
       | frabjoused wrote:
       | I don't understand why this debate/probing is not just data
       | driven. Driving is all big data.
       | 
       | https://www.tesla.com/VehicleSafetyReport
       | 
       | This report does not include fatalities, which seems to be the
       | key point in question. Unless the above report has some bias or
       | is false, Teslas in autopilot appear 10 times safer than the US
       | average.
       | 
       | Is there public data on deaths reported by Tesla?
       | 
       | And otherwise, if the stats say it is safer, why is there any
       | debate at all?
        
         | bastawhiz wrote:
         | Autopilot is not FSD.
        
           | frabjoused wrote:
           | That's a good point. Are there no published numbers on FSD?
        
         | JTatters wrote:
         | Those statistics are incredibly misleading.
         | 
         | - It is safe to assume that the vast majority of autopilot
         | miles are on highways (although Tesla don't release this
         | information).
         | 
         | - By far the safest roads per mile driven are highways.
         | 
         | - Autopilot will engage least during the most dangerous
         | conditions (heavy rain, snow, fog, nighttime).
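         | 
         | A toy Simpson's-paradox example (numbers invented) of how a
         | mostly-highway system can post a better blended rate while
         | being worse on every road type:
         | 
         |   #         (miles, crashes per mile)
         |   ap  = {"highway": (9e6, 2e-7), "city": (1e6, 10e-7)}
         |   hum = {"highway": (1e6, 1e-7), "city": (9e6, 8e-7)}
         |   
         |   def blended(d):
         |       crashes = sum(m * r for m, r in d.values())
         |       return crashes / sum(m for m, _ in d.values())
         |   
         |   print(blended(ap))   # 2.8e-07, "safer" overall...
         |   print(blended(hum))  # 7.3e-07, ...despite losing in
         |                        # every category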
        
         | notshift wrote:
         | Without opening the link, the problem with every piece of data
         | I've seen from Tesla is they're comparing apples to oranges.
         | FSD won't activate in adverse driving conditions, aka when
         | accidents are much more likely to occur. And/or drivers are
         | choosing not to use it in those conditions.
        
         | FireBeyond wrote:
         | > Unless the above report has some bias or is false
         | 
         | Welcome to Tesla.
         | 
         | The report measures accidents in FSD mode. Qualifiers to FSD
         | mode: the conditions, weather, road, location, traffic all have
         | to meet a certain quality threshold before the system will be
         | enabled (or not disable itself). Compare Sunnyvale on a clear
         | spring day to Pittsburgh December nights.
         | 
         | There's no qualifier to the "comparison": all drivers, all
         | conditions, all weather, all roads, all location, all traffic.
         | 
         | It's not remotely comparable, and Tesla's data people are not
         | that stupid, so it's willfully misleading.
         | 
         | > This report does not include fatalities
         | 
         | It also doesn't consider any incident where there was not
         | airbag deployment to be an accident. Sounds potentially
         | reasonable until you consider:
         | 
         | - first gen airbag systems were primitive: collision exceeds
         | threshold, deploy. Currently, vehicle safety systems consider
         | duration of impact, speeds, G-forces, amount of intrusion,
         | angle of collision, and a multitude of other factors before
         | deciding what, if any, systems to fire (seatbelt tensioners,
         | airbags, etc.) So hit something at 30mph with the right
         | variables? Tesla: "this is not an accident".
         | 
         | - Tesla also does not consider "the incident was so
         | catastrophic that airbags COULD NOT deploy" to be an
         | accident, because "airbags didn't deploy". This umbrella
         | could also include the egregious "systems failed to deploy
         | for any reason, up to and including poor assembly line
         | quality control", treating that too as not an accident and
         | "not counted".
         | 
         | > Is there public data on deaths reported by Tesla?
         | 
         | No, they do not.
         | 
         | They also refuse to give the public much of any data beyond
         | these carefully curated numbers. Hell, NHTSA/NTSB also mostly
         | have to drag heavily redacted data kicking and screaming out of
         | Tesla's hands.
        
         | jsight wrote:
         | The report from Tesla is very biased. It doesn't normalize for
         | the difficulty of the conditions involved, and is basically for
         | marketing purposes.
         | 
         | IMO, the challenge for NHTSA is that they can get tremendous
         | detail from Tesla but not from other makes. This will make it
         | very difficult for them to get a solid baseline for collisions
         | due to glare in non-FSD equipped vehicles.
        
       | testfrequency wrote:
       | I was in a Model 3 Uber yesterday and my driver had to swerve
       | onto and up a curb to avoid an idiot who was trying to turn
       | into traffic going in the other direction.
       | 
       | The Model 3 had every opportunity in the world to brake and it
       | didn't, we were probably only going 25mph. I know this is about
       | FSD here, but that moment 100% made me realize Tesla has awful
       | obstacle avoidance.
       | 
       | I just happened to be looking forward, and it was a very plain
       | and clear T-bone avoidance, yet at no point did the car react
       | or trigger anything.
       | 
       | Thankfully everyone was ok, but the front lip got pretty beat up
       | from driving up the curb. Of course the driver at fault that
       | caused the whole incident drove off.
        
         | averageRoyalty wrote:
         | Was the Uber driver using FSD or autopilot?
         | 
         | Obstacle avoidance and automatic braking can easily be switched
         | on or off by the driver.
        
       | bastawhiz wrote:
       | Lots of people are asking how good the self driving has to be
       | before we tolerate it. I got a one month free trial of FSD and
       | turned it off after two weeks. Quite simply: it's dangerous.
       | 
       | - It failed with a cryptic system error while driving
       | 
       | - It started making a left turn far too early that would have
       | scraped the left side of the car on a sign. I had to manually
       | intervene.
       | 
       | - In my opinion, the default setting accelerates way too
       | aggressively. I'd call myself a fairly aggressive driver and it
       | is too aggressive for my taste.
       | 
       | - It tried to make way too many right turns on red when it wasn't
       | safe to. It would creep into the road, almost into the path of
       | oncoming vehicles.
       | 
       | - It didn't merge left to make room for vehicles merging onto the
       | highway. The vehicles then tried to cut in. The system should
       | have avoided an unsafe situation like this in the first place.
       | 
       | - It would switch lanes to go faster on the highway, but then
       | missed an exit on at least one occasion because it couldn't make
       | it back into the right lane in time. Stupid.
       | 
       | After the system error, I lost all trust in FSD from Tesla. Until
       | I ride in one and _feel_ safe, I can't have any faith that this
       | is a reasonable system. Hell, even autopilot does dumb shit on a
       | regular basis. I'm grateful to be getting a car from another
       | manufacturer this year.
        
         | frabjoused wrote:
         | The thing that doesn't make sense is the numbers. If it is
         | dangerous in your anecdotes, why don't the reported numbers
         | show more accidents when FSD is on?
         | 
         | When I did the trial on my Tesla, I also noted these kinds of
         | things and felt like I had to take control.
         | 
         | But at the end of the day, only the numbers matter.
        
           | akira2501 wrote:
           | You can measure risks without having to witness disaster.
        
           | ForHackernews wrote:
           | Maybe other human drivers are reacting quickly and avoiding
           | potential accidents from dangerous computer driving? That
           | would be ironic, but I'm sure it's possible in some
           | situations.
        
           | jsight wrote:
           | Because it is bad enough that people really do supervise it.
           | I see people who say that supervision wouldn't happen because
           | drivers become complacent.
           | 
           | Maybe that could be a problem with future versions, but I
           | don't see it happening with 12.3.x. I've also heard that
           | driver attention monitoring is pretty good in the later
           | versions, but I have no first hand experience yet.
        
             | valval wrote:
             | Very good point. The product that requires supervision and
             | tells the user to keep their hands on the wheel every 10
             | seconds is not good enough to be used unsupervised.
             | 
             | I wonder how things are inside your head. Are you ignorant
             | or affected by some strong bias?
        
               | jsight wrote:
               | Yeah, it definitely isn't good enough to be used
               | unsupervised. TBH, they've switched to eye and head
               | tracking as the primary mechanism of attention monitoring
               | now. It seems to work pretty well, now that I've had a
               | chance to try it.
               | 
               | I'm not quite sure what you meant by your second
               | paragraph, but I'm sure I have my blind spots and biases.
               | I do have direct experience with various versions of 12.x
               | though (12.3 and now 12.5).
        
           | bastawhiz wrote:
           | Is Tesla required to report system failures or the vehicle
           | damaging itself? How do we know they're not optimizing for
           | the benchmark (what they're legally required to report)?
        
             | rvnx wrote:
             | If the question is: "was FSD activated at the time of the
             | accident: yes/no", they can legally claim no, for example
             | if the FSD happens to disconnect half a second before a
             | dangerous situation (e.g. glare obstructing the cameras),
             | which may coincide exactly with the times of some accidents.
        
               | diebeforei485 wrote:
               | > To ensure our statistics are conservative, we count any
               | crash in which Autopilot was deactivated within 5 seconds
               | before impact, and we count all crashes in which the
               | incident alert indicated an airbag or other active
               | restraint deployed.
               | 
               | Scroll down to Methodology at
               | https://www.tesla.com/VehicleSafetyReport
        
               | rvnx wrote:
               | This is for Autopilot, which is the car-following system
               | on highways. If you are in cruise control and staying in
               | your lane, not much is supposed to happen.
               | 
               | The FSD numbers are much more hidden.
               | 
               | The general accident rate is 1 per 400'000 miles driven.
               | 
               | FSD has one "critical disengagement" (i.e. one that would
               | precede an accident if the human or safety braking didn't
               | intervene) every 33 miles driven.
               | 
               | It means that to reach unsupervised operation at human
               | quality they would need to improve it 10'000 times over
               | in a few months. Not saying it is impossible, just highly
               | optimistic. In 10 years we will be there, but in 2
               | months, it sounds a bit overpromising.
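               | 
               | As a rough back-of-the-envelope check of that gap, using
               | the figures quoted above (not official numbers):
               | 
               |     # miles per incident, as quoted in this thread
               |     human_miles_per_accident = 400_000
               |     fsd_miles_per_critical_disengagement = 33
               | 
               |     # improvement factor needed to match human quality
               |     print(human_miles_per_accident
               |           / fsd_miles_per_critical_disengagement)
               |     # => ~12'000x, i.e. roughly four orders of magnitude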
        
             | Uzza wrote:
             | All manufacturers have for some time been required by
             | regulators to report any accident where an autonomous or
             | partially autonomous system was active within 30 seconds of
             | an accident.
        
               | bastawhiz wrote:
               | My question is better rephrased as "what is legally
               | considered an accident that needs to be reported?" If the
               | car scrapes a barricade or curbs it hard but the airbags
               | don't deploy and the car doesn't sense the damage,
               | clearly they don't. There's a wide spectrum of issues up
               | to the point where someone is injured or another car is
               | damaged.
        
               | kelnos wrote:
               | And not to move the goalposts, but I think we should also
               | be tracking any time the human driver feels they need to
               | take control because the autonomous system did something
               | they didn't believe was safe.
               | 
               | That's not a crash (fortunately!), but it _is_ a failure
               | of the autonomous system.
               | 
               | This is hard to track, though, of course: people might
               | take over control for reasons unrelated to safety, or
               | people may misinterpret something that's safe as unsafe.
               | So you can't just track this from a simple "human driver
               | took control".
        
           | timabdulla wrote:
           | > If it is dangerous in your anecdotes, why don't the
           | reported numbers show more accidents when FSD is on?
           | 
           | Even if it is true that the data show that with FSD (not
           | Autopilot) enabled, drivers are in fewer crashes, I would be
           | worried about other confounding factors.
           | 
           | For instance, I would assume that drivers are more likely to
           | engage FSD in situations of lower complexity (less traffic,
           | little construction or other impediments, overall lesser
           | traffic flow control complexity, etc.) I also believe that at
           | least initially, Tesla only released FSD to drivers with high
           | safety scores relative to their total driver base, another
           | obvious confounding factor.
           | 
           | Happy to be proven wrong though if you have a link to a
           | recent study that goes through all of this.
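           | 
           | To make the confounding concern concrete, here is a toy
           | calculation (all rates invented for illustration) showing
           | how a system that is worse in _every_ condition can still
           | post a better aggregate crash rate if it is mostly engaged
           | in easy conditions:
           | 
           |     # hypothetical per-mile crash rates (made-up numbers)
           |     rates = {
           |         "human": {"easy": 1 / 1_000_000, "hard": 1 / 100_000},
           |         "fsd":   {"easy": 1 / 500_000,   "hard": 1 / 50_000},
           |     }  # note: fsd is 2x worse than human in BOTH conditions
           | 
           |     # exposure mix: fsd mostly engaged in easy conditions
           |     mix = {
           |         "human": {"easy": 0.50, "hard": 0.50},
           |         "fsd":   {"easy": 0.95, "hard": 0.05},
           |     }
           | 
           |     for who in ("human", "fsd"):
           |         rate = sum(mix[who][c] * rates[who][c]
           |                    for c in ("easy", "hard"))
           |         print(who, "1 crash per", round(1 / rate), "miles")
           | 
           |     # human: 1 crash per ~182'000 miles
           |     # fsd:   1 crash per ~345'000 miles
           | 
           | The aggregate makes the worse system look nearly twice as
           | safe (Simpson's paradox), which is why the raw numbers alone
           | can't settle the question.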
        
             | valval wrote:
             | Either the system causes less loss of life than a human
             | driver or it doesn't. The confounding factors don't matter,
             | as Tesla hasn't presented a study on the subject. That's in
             | the future, and all stats that are being gathered right now
             | are just that.
        
               | unbrice wrote:
               | > Either the system causes less loss of life than a human
               | driver or it doesn't. The confounding factors don't
               | matter.
               | 
               | Confounding factors are what allow one to distinguish
               | "the system causes less loss of life" from "the system
               | causes more loss of life yet it is only enabled in
               | situations where fewer lives are lost".
        
               | kelnos wrote:
               | No, that's absolutely not how this works. Confounding
               | factors are things that make your data not tell you what
               | you are actually trying to understand. You can't just
               | hand-wave that away, sorry.
               | 
               | Consider: what I expect is _actually_ true based on the
               | data is that Tesla FSD is as safe or safer than the
               | average human driver, but only if the driver is paying
               | attention and is ready to take over in case FSD does
               | something unsafe, even if FSD doesn't warn the driver it
               | needs to disengage.
               | 
               | That's not an autonomous driving system. Which is
               | potentially fine, but the value prop of that system is
               | low to me: I have to pay just as much attention as if I
               | were driving manually, with the added problem that my
               | attention is going to start to wander because the car is
               | doing most of the work, and the longer the car
               | successfully does most of the work, the more I'm going to
               | unconsciously believe I can allow my attention to slip.
               | 
               | I do like current common ADAS features because they hit a
               | good sweet spot: I still need to actively hold onto the
               | wheel and handle initiating lane changes, turns, stopping
               | and starting at traffic lights and stop signs, etc. I
               | look at the ADAS as a sort of "backup" to my own driving,
               | and not as what's primarily in control of the car. In
               | contrast, Tesla FSD wants to be primarily in control of
               | the car, but it's not trustworthy enough to do that
               | without constant supervision.
        
           | nkrisc wrote:
           | What numbers? Who's measuring? What are they measuring?
        
           | rvnx wrote:
           | There is an easy way to know what is really behind the
           | numbers: look at who pays in case of an accident.
           | 
           | You have a Mercedes, Mercedes takes responsibility.
           | 
           | You have a Tesla, you take the responsibility.
           | 
           | Says a lot.
        
             | tensor wrote:
             | You have a Mercedes, and you have a system that works
             | virtually nowhere.
        
               | therouwboat wrote:
               | Better that way than "Oh it tried to run red light, but
               | otherwise it's great."
        
               | tensor wrote:
               | "Oh we tried to build it but no one bought it! So we gave
               | up." - Mercedes before Tesla.
               | 
               | Perhaps FSD isn't ready for city streets yet, but it's
               | great on the highways and I'd 1000x prefer we make
               | progress rather than settle for the status quo garbage
               | that the legacy makers put out. Also, human drivers are
               | the most dangerous by far; we need to make progress to
               | eventually phase them out.
        
               | meibo wrote:
               | 2-ton blocks of metal that go 80mph next to me on the
               | highway is not the place I would want people to go "fuck
               | it let's just do it" with their new tech. Human drivers
               | might be dangerous but adding more danger and
               | unpredictability on top just because we can skip a few
               | steps in the engineering process is crazy.
               | 
               | Maybe you have a deathwish, but I definitely don't. Your
               | choices affect other humans in traffic.
        
               | tensor wrote:
               | It sounds like you are the one with a deathwish, because
               | objectively by the numbers Autopilot on the highway has
               | greatly reduced death. So you are literally advocating
               | for more death.
               | 
               | You have two imperfect systems for highway driving:
               | Autopilot with human oversight, and humans. The first has
               | far far less death. Yet you are choosing the second.
        
             | sebzim4500 wrote:
             | Mercedes had the insight that if no one is able to actually
             | use the system then it can't cause any crashes.
             | 
             | Technically, that is the easiest way to get a perfect
             | safety record and journalists will seemingly just go along
             | with the charade.
        
             | diebeforei485 wrote:
             | While I don't disagree with your point in general, it
             | should be noted that there is more to taking responsibility
             | than just paying. Even if Mercedes Drive Pilot was enabled,
             | anything that involves court appearances and criminal
             | liability is still your problem if you're in the driver's
             | seat.
        
           | lawn wrote:
           | > The thing that doesn't make sense is the numbers.
           | 
           | Oh? Who is presenting the numbers?
           | 
           | Is a crash that fails to trigger the airbags still not
           | counted as a crash?
           | 
           | What about the car turning off FSD right before a crash?
           | 
           | How about adjusting for factors such as age of driver and the
           | type of miles driven?
           | 
           | The numbers don't make sense because they're not good
           | comparisons and are made to make Tesla look good.
        
           | gamblor956 wrote:
           | The numbers collected by the NHTSA and insurance companies do
           | show that FSD is dangerous...that's why the NHTSA started
           | investigating and its why most insurance companies won't
           | insure Tesla vehicles or charge significantly higher rates.
           | 
           | Also, Tesla is known to disable self-driving features right
           | before collisions to give the appearance of driver fault.
           | 
           | And the coup de grace: if Tesla's own data showed that FSD
           | was actually safer, they'd be shouting it from the moon,
           | using that data to get self-driving permits in CA, and
           | offering to assume liability if FSD actually caused an
           | accident (like Mercedes does with its self driving system).
        
           | throwaway562if1 wrote:
           | AIUI the numbers are for accidents where FSD is in control.
           | Which means if it does a turn into oncoming traffic and the
           | driver yanks the wheel or slams the brakes 500ms before
           | collision, it's not considered a crash during FSD.
        
             | Uzza wrote:
             | That is not correct. Tesla counts any accident within 5
             | seconds of Autopilot/FSD turning off as the system being
             | involved. Regulators extend that period to 30 seconds, and
             | Tesla must comply with that when reporting to them.
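             | 
             | As a sketch, the attribution rule being described is
             | roughly the following (this is just the counting rule as
             | stated in this thread and on the Methodology page quoted
             | above, not anyone's actual code; only the window length
             | differs between Tesla's published methodology and the
             | regulator's):
             | 
             |     def counted_as_system_crash(seconds_before_impact,
             |                                 window_seconds=5):
             |         # window_seconds=5 per Tesla's methodology page;
             |         # regulators use a 30-second window
             |         return seconds_before_impact <= window_seconds
             | 
             |     counted_as_system_crash(0.5)                    # True
             |     counted_as_system_crash(12)                     # False
             |     counted_as_system_crash(12, window_seconds=30)  # True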
        
               | kelnos wrote:
               | How about when it turns into oncoming traffic, the driver
               | yanks the wheel, manages to get back on track, and avoids
               | a crash? Do we know how often things like that happen?
               | Because that's also a failure of the system, and that
               | should affect how reliable and safe we rate these things.
               | I expect we don't have data on that.
               | 
               | Also how about: it turns into oncoming traffic, but there
               | isn't much oncoming traffic, and that traffic swerves to
               | get out of the way, before FSD realizes what it's done
               | and pulls back into the correct lane. We _certainly_
               | don't have data on that.
        
             | concordDance wrote:
             | Several people in this thread have been saying this or
             | similar. It's incorrect, from Tesla:
             | 
             | "To ensure our statistics are conservative, we count any
             | crash in which Autopilot was deactivated within 5 seconds
             | before impact"
             | 
             | https://www.tesla.com/en_gb/VehicleSafetyReport
             | 
             | Situations which inevitably cause a crash more than 5
             | seconds later seem like they would be extremely rare.
        
               | rvnx wrote:
               | This is Autopilot, not FSD, which is an entirely
               | different product.
        
           | johnneville wrote:
           | are there even transparent reported numbers available ?
           | 
           | for whatever does exist, it is also easy to imagine how it
           | could be misleading. for instance i've disengaged FSD when i
           | noticed i was about to be in an accident. if i couldn't
           | recover in time, the accident would not have occurred while
           | FSD was on and, depending on the metric, would not be
           | reported as an FSD-induced accident.
        
           | kelnos wrote:
           | Agree that only the numbers matter, but only if the numbers
           | are comprehensive and useful.
           | 
           | How often does an autonomous driving system get the driver
           | into a dicey situation, but the driver notices the bad
           | behavior, takes control, and avoids a crash? I don't think we
           | have publicly-available data on that at all.
           | 
           | You admit that you ran into some of these sorts of situations
           | during your trial. Those situations are unacceptable. An
           | autonomous driving system should be safer than a human
           | driver, and should not make mistakes that a human driver
           | would not make.
           | 
           | Despite all the YouTube videos out there of people doing
           | unsafe things with Tesla FSD, I expect that most people that
           | use it are pretty responsible, are paying attention, and are
           | ready to take over if they notice FSD doing something wrong.
           | But if people need to do that, it's not a safe, successful
           | autonomous driving system. Safety means everyone can watch
           | TV, mess around on their phone, or even take a nap, and we
           | _still_ end up with a lower crash rate than with human
           | drivers.
           | 
           | The numbers that are available can't tell us if that would be
           | the case. My belief is that we're absolutely not there.
        
           | kybernetikos wrote:
           | > But at the end of the day, only the numbers matter.
           | 
           | Are these the numbers reported by tesla, or by some third
           | party?
        
         | thomastjeffery wrote:
         | It's not just about relative safety compared to all human
         | driving.
         | 
         | We all know that some humans are sometimes terrible drivers!
         | 
         | We also know what that looks like: Driving too fast or slow
         | relative to surroundings. Quickly turning every once in a while
         | to stay in their lane. Aggressively weaving through traffic.
         | Going through an intersection without spending the time to
         | actually look for pedestrians. The list goes on..
         | 
         | Bad human driving can be seen. Bad automated driving _is
         | invisible_. Do you think the people who were about to be hit by
         | a Tesla even realized that was the case? I sincerely doubt it.
        
           | bastawhiz wrote:
           | > Bad automated driving is invisible.
           | 
           | I'm literally saying that it is visible, to me, the
           | passenger. And for reasons that aren't just bad vibes. If I'm
           | in an Uber and I feel unsafe, I'll report the driver. Why
           | would I pay for my car to do that to me?
        
             | wizzwizz4 wrote:
             | GP means that the signs aren't obvious to other drivers. We
             | generally underestimate how important psychological
             | modelling is for communication, because it's transparent to
             | most of us under most circumstances, but AI systems have
             | _very_ different psychology to humans. It is easier to
             | interpret the body language of a fox than a self-driving
             | car.
        
             | thomastjeffery wrote:
             | We are talking about the same thing: unpredictability. If
             | you and everyone else _can't predict_ what your car will
             | do, then that seems objectively unsafe to me. It also
             | sounds like we agree with each other.
        
         | dekhn wrote:
         | I don't think you're supposed to merge left when people are
         | merging on the highway into your lane; you have the right of
         | way. I find that even with the right of way many people merging
         | aren't paying attention, but I deal with that by slightly
         | speeding up (so they can see me in front of them).
        
           | sangnoir wrote:
           | You don't have a right of way over a slow moving vehicle that
           | merged _ahead_ of you. Most ramps are not long enough to
           | allow merging traffic to accelerate to highway speeds before
           | merging, so many drivers free up the right-most lane for this
           | purpose (by merging left)
        
             | potato3732842 wrote:
             | Most ramps are more than long enough to accelerate close
             | enough to traffic speed if one wants to, especially in most
             | modern vehicles.
        
               | wizzwizz4 wrote:
               | Unless the driver in front of you didn't.
        
             | SoftTalker wrote:
             | If you can safely move left to make room for merging
             | traffic, you should. It's considerate and reduces the
             | chances of an accident.
        
             | dekhn wrote:
             | Since a number of people are giving pushback, can you point
             | to any (California-oriented) driving instructions
             | consistent with this? I'm not seeing any. I see people
             | saying "it's curteous", but when I'm driving I'm managing
             | hundreds of variables and changing lanes is often risky,
             | given motorcycles lanesplitting at high speed (quite
             | common).
        
               | davidcalloway wrote:
               | Definitely not California but literally the first part of
               | traffic law in Germany says that caution and
               | consideration are required from all partaking in traffic.
               | 
               | Germans are not known for poor driving.
        
               | dekhn wrote:
               | Right- but the "consideration" here is the person merging
               | onto the highway actually paying attention and adjusting,
               | rather than pointedly not even looking (this is a very
               | common merging behavior where I live). Changing lanes
               | isn't without risk even on a clear day with good
               | visibility. Seems like my suggestion of slowing down or
               | speeding up makes perfect sense because it's less risky
               | overall, and is still being considerate.
               | 
               | Note that I personally do change lanes at times when it's
               | safe, convenient, I am experienced with the intersection,
               | and the merging driver is being especially unaware.
        
               | watwut wrote:
               | Consideration is also making space for slower car wanting
               | to merge and Germans do it.
        
               | sangnoir wrote:
               | It's not just courteous, it's self-serving; AFAIK it's a
               | self-emergent phenomenon. If you're driving at 65 mph and
               | anticipate a slowdown in your lane due to merging
               | traffic, do you stay in your lane and slow down to 40
               | mph, or do you change lanes (if it's safe to do so) and
               | maintain your speed?
               | 
               | Texas highways allow for much higher merging speeds at
               | the cost of far larger (in land area) 5-level
               | interchanges, rather than the 35 mph offramps and onramps
               | common in California.
               | 
               | Any defensive driving course (which falls under
               | instruction IMO) states that you don't always _have_ to
               | exercise your right of way, and indeed it may be unsafe
               | to do so in some circumstances. Anticipating the actions
               | of other drivers around you and avoiding potentially
               | dangerous situations are the other aspects of being a
               | defensive driver, and those concepts are consistent with
               | freeing up the lane slower-moving vehicles are merging
               | onto when it's safe to do so.
        
           | bastawhiz wrote:
           | Just because you have the right of way doesn't mean the
           | correct thing to do is to remain in the lane. If remaining in
           | your lane is likely to make _someone else_ do something
           | reckless, you should have been proactive. Not legally, for
           | the sake of being a good driver.
        
             | dekhn wrote:
             | Can you point to some online documentation that recommends
             | changing lanes in preference to speeding up when a person
             | is merging at too slow a speed? What I'm doing is following
             | CHP guidance in this post:
             | https://www.facebook.com/chpmarin/posts/lets-talk-about-
             | merg... """Finally, if you are the vehicle already
             | traveling in the slow lane, show some common courtesy and
             | do what you can to create a space for the person by slowing
             | down a bit or speeding up if it is safer. """
             | 
             | (you probably misinterpreted what I said. I do sometimes
             | change lanes, even well in advance of a merge I know is
             | prone to problems, if that's the safest and most
             | convenient. What I am saying is the guidance I have read
             | indicates that staying in the same lane is generally safer
             | than changing lanes, and speeding up into an empty space is
             | better for everybody than slowing down, especially because
             | many people who are merging will keep slowing down more and
             | more when the highway driver slows for them)
        
               | bastawhiz wrote:
               | > recommends changing lanes in preference to speeding up
               | when a person is merging at too slow a speed
               | 
               | It doesn't matter, Tesla does neither. It always does the
               | worst possible non-malicious behavior.
        
               | jazzyjackson wrote:
               | I read all this thread and all I can say is not
               | everything in the world is written down somewhere
        
         | modeless wrote:
         | Tesla jumped the gun on the FSD free trial earlier this year.
         | It was nowhere near good enough at the time. Most people who
         | tried it for the first time probably share your opinion.
         | 
         | That said, there is a night and day difference between FSD 12.3
         | that you experienced earlier this year and the latest version
         | 12.6. It will still make mistakes from time to time but the
         | improvement is massive and obvious. More importantly, the rate
         | of improvement in the past two months has been much faster than
         | before.
         | 
         | Yesterday I spent an hour in the car over three drives and did
         | not have to turn the steering wheel at all except for parking.
         | That _never_ happened on 12.3. And I don't even have 12.6 yet,
         | this is still 12.5; others report that 12.6 is a noticeable
         | improvement over 12.5. And version 13 is scheduled for release
         | in the next two weeks, and the FSD team has actually hit their
         | last few release milestones.
         | 
         | People are right that it is still not ready yet, but if they
         | think it will stay that way forever they are about to be very
         | surprised. At the current rate of improvement it will be quite
         | good within a year and in two or three I could see it actually
         | reaching the point where it could operate unsupervised.
        
           | seizethecheese wrote:
           | _If_ this is the case, the calls for heavy regulation in this
           | thread will lead to many more deaths than otherwise.
        
           | jvanderbot wrote:
           | I have yet to see a difference. I let it highway drive for an
           | hour and it cut off a semi, coming within 9 to 12 inches of
           | the bumper for no reason. I heard about that one believe me.
           | 
           | It got stuck in a side street trying to get to a target
           | parking lot, shaking the wheel back and forth.
           | 
           | It's no better so far and this is the first day.
        
             | modeless wrote:
             | You have 12.6?
             | 
             | As I said, it still makes mistakes and it is not ready yet.
             | But 12.3 was much worse. It's the rate of improvement I am
             | impressed with.
             | 
             | I will also note that the predicted epidemic of crashes
             | from people abusing FSD never happened. It's been on the
             | road for a long time now. The idea that it is
             | "irresponsible" to deploy it in its current state seems
             | conclusively disproven. You can argue about exactly what
             | the rate of crashes is but it seems clear that it has been
             | at the very least no worse than normal driving.
        
               | jvanderbot wrote:
               | Hm. I thought that was the latest release, but it looks
               | like it's not. There seems to be no improvement since the
               | last trial, though, so maybe 12.6 is magically better.
        
               | modeless wrote:
               | A lot of people have been getting the free trial with
               | 12.3 still on their cars today. Tesla has really screwed
               | up on the free trial for sure. Nobody should be getting
               | it unless they have 12.6 at least.
        
               | jvanderbot wrote:
               | I have 12.5. maybe 12.6 is better but I've heard that
               | before.
               | 
               | Don't get me wrong without a concerted data team building
               | maps a priori, this is pretty incredible. But from a pure
               | performance standpoint it's a shaky product.
        
               | KaoruAoiShiho wrote:
               | The latest version is 12.5.6, I think he got confused by
               | the .6 at the end. If you think that's bad then there
               | isn't a better version available. However it is a
               | dramatic improvement over 12.3, don't know how much you
               | tested on it.
        
               | modeless wrote:
               | You're right, thanks. One of the biggest updates in
               | 12.5.6 is transitioning the highway Autopilot to FSD. If
               | he has 12.5.4 then it may still be using the old non-FSD
               | Autopilot on highways which would explain why he hasn't
               | noticed improvement there; there hasn't been any until
               | 12.5.6.
        
             | hilux wrote:
             | > ... coming within 9 to 12 inches of the bumper for no
             | reason. I heard about that one believe me.
             | 
             | Oh dear.
             | 
             | Glad you're okay!
        
             | eric_cc wrote:
             | Is it possible you have a lemon? Genuine question. I've had
             | nothing but positive experiences with FSD for the last
             | several months and many thousands of miles.
        
               | ben_w wrote:
               | I've had nothing but positive experiences with
               | ChatGPT-4o; that doesn't make people wrong to criticise
               | either system for modelling its training data too much
               | and generalising too little when they need to use it for
               | something where the inference domain is too far outside
               | the training domain.
        
               | kelnos wrote:
               | If the incidence of problems is some relatively small
               | number, like 5% or 10%, it's very easily possible that
               | you've never personally seen a problem, but overall we'd
               | still consider that the total incidence of problems is
               | unacceptable.
               | 
               | Please stop presenting arguments of the form "I haven't
               | seen problems so people who have problems must be extreme
               | outliers". At best it's ignorant, at worst it's actively
               | in bad faith.
        
               | londons_explore wrote:
               | I suspect the performance might vary widely depending on
               | whether you're on a road in California they have a lot of
               | data on, or a road FSD has rarely seen before.
        
               | dham wrote:
               | A lot of haters confuse safety-critical disengagements
               | with "oh, the car is doing something I don't like or
               | wouldn't do".
               | 
               | If you treat the car like it's a student driver or
               | someone else driving, disengagements will go down. If you
               | treat it like you're the one driving, there's always
               | something to complain about.
        
           | snypher wrote:
           | So just a few more years of death and injury until they reach
           | a finished product?
        
             | quailfarmer wrote:
             | If the answer was yes, presumably there's a tradeoff where
             | that deal would be reasonable.
        
             | londons_explore wrote:
             | So far, the data points to it having far fewer crashes
             | than a human alone. Tesla's data shows that, but 3rd party
             | data seems to imply the same.
        
               | rvnx wrote:
               | It disconnects in case of dangerous situations, so every
               | 33 miles to 77 miles driven (depending on the version),
               | versus 400'000 miles for a human
        
               | llamaimperative wrote:
               | Tesla does not release the data required to substantiate
               | such a claim. It simply doesn't and you're either lying
               | or being lied to.
        
               | londons_explore wrote:
               | tesla releases this data:
               | https://www.tesla.com/VehicleSafetyReport
        
               | rainsford wrote:
               | That data is not an apples to apples comparison unless
               | autopilot is used in exactly the same mix of conditions
               | as human driving. Tesla doesn't share that in the report,
               | but I'd bet it's not equivalent. I personally tend to
               | turn on driving automation features (in my non-Tesla car)
               | in easier conditions and drive myself when anything
               | unusual or complicated is going on, and I'd bet most
               | drivers of Teslas and otherwise do the same.
               | 
               | This is important because I'd bet similar data on the use
               | of standard, non-adaptive cruise control would similarly
               | show it's much safer than human drivers. But of course
               | that would be because people use cruise control most in
               | long-distance highway driving outside of congested areas,
               | where you're least likely to have an accident.
        
               | llamaimperative wrote:
               | Per the other comment: no, they don't. This data is not
               | enough to evaluate its safety. This is enough data to
               | mislead people who spend <30 seconds thinking about the
               | question though, so I guess that's something (something
               | == misdirection and dishonesty).
               | 
               | You've been lied to.
        
               | FireBeyond wrote:
               | No, it releases enough data to actively mislead you
               | (because there is no way Tesla's data people are unaware
               | of these factors):
               | 
               | The report measures accidents in FSD mode. Qualifiers to
               | FSD mode: the conditions, weather, road, location,
               | traffic all have to meet a certain quality threshold
               | before the system will be enabled (or not disable
               | itself). Compare Sunnyvale on a clear spring day to
               | Pittsburgh December nights.
               | 
               | There's no qualifier to the "comparison": all drivers,
               | all conditions, all weather, all roads, all location, all
               | traffic.
               | 
               | It's not remotely comparable, and Tesla's data people are
               | not that stupid, so it's willfully misleading.
               | 
               | This report does not include fatalities. It also doesn't
               | consider any incident where there was not airbag
               | deployment to be an accident. Sounds potentially
               | reasonable until you consider:
               | 
               | - first gen airbag systems were primitive: collision
               | exceeds threshold, deploy. Currently, vehicle safety
               | systems consider duration of impact, speeds, G-forces,
               | amount of intrusion, angle of collision, and a multitude
               | of other factors before deciding what, if any, systems to
               | fire (seatbelt tensioners, airbags, etc.) So hit
               | something at 30mph with the right variables? Tesla: "this
               | is not an accident".
               | 
               | - Tesla also does not consider "incident was so
               | catastrophic that airbags COULD NOT deploy" to be an
               | accident, because "airbags didn't deploy". This umbrella
               | could also cover the egregious "systems failed to deploy
               | for any reason, up to and including poor assembly line
               | quality control", which is likewise not an accident and
               | likewise "not counted".
        
             | Peanuts99 wrote:
             | If this is what society has to pay to improve Tesla's
             | product, then perhaps they should have to share the
             | software with other car manufacturers too.
             | 
             | Otherwise every car brand will have to kill a whole heap
             | of people too until they manage to make an FSD system.
        
               | modeless wrote:
               | Elon has said many times that they are willing to license
               | FSD but nobody else has been interested so far. Clearly
               | that will change if they reach their goals.
               | 
               | Also, "years of death and injury" is a bald-faced lie.
               | NHTSA would have shut down FSD a long time ago if it were
               | happening. The statistics Tesla has released to the
               | public are lacking, it's true, but they cannot hide
               | things from the NHTSA. FSD has been on the road for years
               | and a billion miles and if it was overall significantly
               | worse than normal driving (when supervised, of course)
               | the NHTSA would know by now.
               | 
               | The current investigation is about performance under
               | specific conditions, and it's possible that improvement
               | is possible and necessary. But overall crash rates have
               | not reflected any significant extra danger by public use
               | of FSD even in its primitive and flawed form of earlier
               | this year and before.
        
             | the8472 wrote:
             | We also pay this price with every new human driver we
             | train, again and again.
        
               | dham wrote:
               | You won't be able to bring logic to people with Elon
               | derangement syndrome.
        
           | misiti3780 wrote:
           | i have the same experience, 12.5 is insanely good. HN is
           | full of people that don't want self driving to succeed for
           | some reason. fortunately, it's clear as day to some of us
           | that the tesla approach will work
        
             | ethbr1 wrote:
             | Curiosity about why they're against it, and articulating
             | why you think it will work, would be more helpful.
        
               | misiti3780 wrote:
               | It's evident to Tesla drivers using Full Self-Driving
               | (FSD) that the technology is rapidly improving and will
               | likely succeed. The key reason for this anticipated
               | success is data: any reasonably intelligent observer
               | recognizes that training exceptional deep neural networks
               | requires vast amounts of data, and Tesla has accumulated
               | more relevant data than any of its competitors. Tesla
               | recently held a robotaxi event, explicitly informing
               | investors of their plans to launch an autonomous
               | competitor to Uber. While Elon Musk's timeline
               | predictions and politics may be controversial, his
               | ability to achieve results and attract top engineering
               | and management talent is undeniable.
        
               | ryandrake wrote:
               | Then why have we been just a year or two away from actual
               | working self-driving, for the last 10 years? If I told my
               | boss that my project would be done in a year, and then
               | the following year said the same thing, and continued
               | that for years, that's not what "achieving results"
               | means.
        
               | kelnos wrote:
               | > _It's evident to Tesla drivers using Full Self-Driving
               | (FSD) that the technology is rapidly improving and will
               | likely succeed_
               | 
               | Sounds like Tesla drivers have been at the Kool-Aid then.
               | 
               | But to be a bit more serious, the problem isn't
               | necessarily that people don't think it's improving (I do
               | believe it is) or that they will likely succeed (I'm not
               | sure where I stand on this). The problem is that every
               | year Musk says the next year will be the Year of FSD. And
               | every next year, it doesn't materialize. This is like the
               | Boy Who Cried Wolf; Musk has zero credibility with me
               | when it comes to predictions. And that loss of
               | credibility affects my feeling as to whether he'll be
               | successful at all.
               | 
               | On top of that, I'm not convinced that autonomous driving
               | that only makes use of cameras will ever be reliably
               | safer than human drivers.
        
               | modeless wrote:
               | I have consistently been critical of Musk for this over
               | the many years it's been happening. Even right now, I
               | don't believe FSD will be unsupervised next year like he
               | just claimed. And yet, I can see the real progress and I
               | am convinced that while it won't be next year, it could
               | absolutely happen within two or three years.
               | 
               | One of these years, he is going to be right. And at that
               | point, the fact that he was wrong for a long time won't
               | diminish their achievement. As he likes to say, he
               | specializes in transforming technology from "impossible"
               | to "late".
               | 
               | > I'm not convinced that autonomous driving that only
               | makes use of cameras will ever be reliably safer than
               | human drivers.
               | 
               | Believing this means that you believe AIs will never
               | match or surpass the human brain. Which I think is a much
               | less common view today than it was a few years ago.
               | Personally I think it is obviously wrong. And also I
               | don't believe surpassing the human brain in every respect
               | will be necessary to beat humans in driving safety.
               | Unsupervised FSD will come before AGI.
        
               | Animats wrote:
               | > and Tesla has accumulated more relevant data than any
               | of its competitors.
               | 
               | Has it really? How much data is each car sending to Tesla
               | HQ? Anybody actually know? That's a lot of cell phone
               | bandwidth to pay for, and a lot of data to digest.
               | 
               | Vast amounts of data about routine driving is not all
               | that useful, anyway. A "highlights reel" of interesting
               | situations is probably more valuable for training. Waymo
               | has shown some highlights reels like that, such as the
               | one where someone in a powered wheelchair is chasing a
               | duck in the middle of a residential street.
        
               | jeffbee wrote:
               | Anyone who believes Tesla beats Google because they are
               | better at collecting and handling data can be safely
               | ignored.
        
               | llamaimperative wrote:
               | The crux of the issue is that _your interpretation of
               | performance cannot be trusted_. It is absolutely
               | irrelevant.
               | 
               | Even a system that is 99% reliable will _honestly feel_
               | very, very good to an individual operator, but would
               | result in _huge_ loss of life when scaled up.
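               | 
               | A quick illustration with invented numbers: say "99%
               | reliable" means one dangerous mistake per 100 drives,
               | and attentive drivers catch 99% of those:
               | 
               |     drives_per_day = 1_000_000       # invented fleet size
               |     mistakes_per_drive = 1 / 100     # "99% reliable"
               |     rescue_rate = 0.99               # drivers catch most
               | 
               |     crashes_per_day = (drives_per_day * mistakes_per_drive
               |                        * (1 - rescue_rate))
               |     print(crashes_per_day)           # => 100.0 per day
               | 
               | A system that any individual operator experiences as
               | near-flawless can still produce a steady stream of
               | crashes at fleet scale.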
               | 
               | Tesla can earn more trust by releasing the data necessary
               | to evaluate the system's performance. The fact that they
               | do not is _far_ more informative than a bunch of
               | commentators saying "hey it's better than it was last
               | month!" for the last several years -- even if it is true
               | that it's getting better and even if it's true it's
               | hypothetically possible to get to the finish line.
        
               | KaiserPro wrote:
               | Tesla's sensor suite does not support safe FSD.
               | 
               | It relies on inferred depth from a single point of view.
               | This means that the depth/positioning info for the entire
               | world is noisy.
               | 
               | From a safety-critical point of view it's also bollocks,
               | because a single birdshit/smear/raindrop/oil spot can
               | render the entire system inoperable. Does it degrade
               | safely? Does it fuck.
               | 
               | > recognizes that training exceptional deep neural
               | networks requires vast amounts of data,
               | 
               | You missed _good_ data. Recording generic drivers'
               | journeys isn't going to yield good data, especially if
               | the people who are driving aren't very good. You need to
               | have a bunch of decent drivers doing specific scenarios.
               | 
               | Moreover that data isn't easily generalisable to other
               | sensor suites. Add another camera? yeahna, new model.
               | 
               | > Tesla recently held a robotaxi event, explicitly
               | informing investors of their plans
               | 
               | When has Musk ever delivered on time?
               | 
               | > his ability to achieve results
               | 
               | most of those results aren't that great. Tesla isn't
               | growing anymore; it's reliant on state subsidies to be
               | profitable. They still only ship 400k units a quarter,
               | which is tiny compared to VW's 2.2 million.
               | 
               | > attract top engineering and management talent is
               | undeniable
               | 
               | Most of the decent computer vision people are not in
               | tesla. Hardware wise, their factories aren't fun places
               | to be. He's a dick to work for, capricious and
               | vindictive.
        
             | eric_cc wrote:
             | Completely agree. It's very strange. But honestly it's
             | their loss. FSD is fantastic.
        
               | llamaimperative wrote:
               | Very strange not wanting poorly controlled 4,000lb steel
               | cages driving around at 70mph, stewarded by people
               | hailing "only had to stop it from killing me 4 times
               | today!" as a great success.
        
             | kelnos wrote:
             | > _HN is full of people that dont want self driving to
             | succeed for some reason._
             | 
             | I would love for self-driving to succeed. I do long-ish car
             | trips several times a year, and it would be wonderful if
             | instead of driving, I could be watching a movie or working
             | on something on my laptop.
             | 
             | I've tried Waymo a few times, and it feels like magic, and
             | feels safe. Their record backs up that feeling. After
             | everything I've seen and read and heard about Tesla, if I
             | got into a Tesla with someone who uses FSD, I'd ask them to
             | drive manually, and probably decline the ride entirely if
             | they wouldn't honor my request.
             | 
             | > _fortunately, it's clear as day to some of us that the
             | tesla approach will work_
             | 
             | And based on my experience with Tesla FSD boosters, I
             | expect you're basing that on feelings, not on any empirical
             | evidence or actual understanding of the hardware or
             | software.
        
             | FireBeyond wrote:
             | I would love self-driving to succeed. I should be a Tesla
             | fan, because I'm very much a fan of geekery and tech
             | anywhere and everywhere.
             | 
             | But no. I want self-driving to succeed, and when it does
             | (which I don't think is that soon, because the last 10%
             | takes 90% of the time), I don't think Tesla or their
             | approach will be the "winner".
        
           | bastawhiz wrote:
           | > At the current rate of improvement it will be quite good
           | within a year
           | 
           | I'll believe it when I see it. I'm not sure "quite good" is
           | the next step after "feels dangerous".
        
             | rvnx wrote:
             | "Just round the corner" (2016)
        
               | FireBeyond wrote:
               | Musk in 2016 (these are quotes, not paraphrases): "Self
               | driving is a solved problem. We are just tuning the
               | details."
               | 
               | Musk in 2021: "Right now our highest priority is working
               | on solving the problem."
        
           | delusional wrote:
           | > That said, there is a night and day difference between FSD
           | 12.3 that you experienced earlier this year and the latest
           | version 12.6
           | 
           | >And I don't even have 12.6 yet, this is still 12.5;
           | 
           | How am i supposed to take anything you say seriously when
           | your only claim is a personal anecdote that doesn't even
           | apply to your own argument. Please, think about what you're
           | writing, and please stop repeating information you heard on
           | youtube as if it's fact.
           | 
           | This is one of the reasons (among many) that I can't take
           | Tesla boosters seriously. I have absolutely zero faith in
           | your anecdote that you didn't touch the steering wheel. I
           | bet it's a lie.
        
             | modeless wrote:
             | The version I have is already a night and day difference
             | from 12.3 and the current version is better still. Nothing
             | I said is contradictory in the slightest. Apply some basic
             | reasoning, please.
             | 
             | I didn't say I didn't touch the steering wheel. I had my
             | hands lightly touching it most of the time, as one should
             | for safety. I occasionally used the controls on the wheel
             | as well as the accelerator pedal to adjust the set speed,
             | and I used the turn signal to suggest lane changes from
             | time to time, though most lane choices were made
             | automatically. But I did not _turn_ the wheel. All turning
             | was performed by the system. (If you turn the wheel
             | manually the system disengages). Other than parking, as I
             | mentioned, though FSD did handle some navigation into and
             | inside parking lots.
        
             | eric_cc wrote:
             | I can second this experience. I rarely touch the wheel
             | anymore. I'd say I'm 98% FSD. I take over in school zones,
             | parking lots, and complex construction.
        
             | jsjohnst wrote:
             | > I have absolutely zero faith in your anecdote that you
             | didn't touch the steering wheel. I bet it's a lie.
             | 
             | I'm not GP, but I can share video showing it driving across
             | residential, city, highway, and even gravel roads all in a
             | single trip without touching the steering wheel a single
             | time over a 90min trip (using 12.5.4.1).
        
               | jsjohnst wrote:
                | And if someone wants to claim I'm cherry-picking the
                | video, I'm happy to shoot a new one with this post
                | visible on an iPad in the seat next to me. Is it
                | autonomous? Hell no. Can it drive in Manhattan? Nope.
                | But can it do >80% of my regular city (suburb outside
                | NYC) and highway driving? Yep.
        
           | wstrange wrote:
           | I have a 2024 Model 3, and it's a great car. That being
           | said, I'm under no illusion that the car will _ever_ be
           | self-driving (unsupervised).
           | 
           | 12.5.6 still fails to read very obvious signs for 30 km/h
           | playground zones.
           | 
           | The current vehicles lack sufficient sensors, and likely do
           | not have enough compute power and memory to cover all edge
           | cases.
           | 
           | I think it's a matter of time before Tesla faces a lawsuit
           | over its continual FSD claims.
           | 
           | My hope is that the board will grow a spine and bring in a
           | more focused CEO.
           | 
           | Hats off to Elon for getting Tesla to this point, but right
           | now they need a mature (and boring) CEO.
        
             | pelorat wrote:
             | The board is family and friends, so them ousting him will
             | never happen.
        
               | dboreham wrote:
               | At some point the risk of going to prison overtakes
               | family loyalty.
        
               | dlisboa wrote:
                | There is no risk of going to prison. It just doesn't
                | happen; it never has and never will, no matter how
                | unfair that is. Board members and CEOs are not held
                | accountable, ever.
        
               | rvnx wrote:
               | https://fortune.com/2023/01/24/google-meta-spotify-
               | layoffs-c...
               | 
               | As they say, they take "full responsibility"
        
               | llamaimperative wrote:
               | https://www.justice.gov/opa/pr/former-enron-ceo-jeffrey-
               | skil...
        
           | jeffbee wrote:
           | If I had a dime for every hackernews who commented that FSD
           | version X was like a revelation compared to FSD version X-e
           | I'd have like thirty bucks. I will grant you that every
           | release has surprisingly different behaviors.
           | 
           | Here's an unintentionally hilarious meta-post on the subject
           | https://news.ycombinator.com/item?id=29531915
        
             | modeless wrote:
             | Sure, plenty of people have been saying it's great for a
             | long time, when it clearly was not (looking at you, Whole
             | Mars Catalog). _I_ was not saying it was super great back
             | then. I have consistently been critical of Elon for
             | promising human level self driving  "next year" for like 10
             | years in a row and being wrong every time. He said it this
             | year again and I still think he's wrong.
             | 
             | But the rate of progress I see right now has me thinking
             | that it may not be more than two or three years before that
             | threshold is finally reached.
        
               | ben_w wrote:
                | The most important lesson from my incorrect 2009
                | prediction that we'd have cars without steering wheels
                | by 2018, and from thinking that the progress I saw each
                | year up to then was consistent with that prediction, is
                | that it's really hard to guess how long it takes to
                | walk the fractal path that is software R&D.
               | 
               | How far are we now, 6 years later than I expected?
               | 
               | Dunno.
               | 
                | I suspect it's gonna need an invention on the same
                | level as Diffusion or Transformer models to handle all
                | the edge cases, and that might mean we only get it with
                | human-level AGI.
               | 
               | But I don't know that, it might be we've already got all
               | we need architecture-wise and it's just a matter of
               | scale.
               | 
                | The only thing I can be really sure of is we're making
                | progress "quite fast" in a non-objective sense of the
                | words -- it's not going to need a re-run of 6 million
                | years of mammalian evolution or anything like that, but
                | even 20 years of wall-clock time would be a
                | disappointment.
        
               | modeless wrote:
               | Waymo went driverless in 2020, maybe you weren't that far
               | off. Predicting that in 2009 would have been pretty good.
               | They could and should have had vehicles without steering
               | wheels anytime since then, it's just a matter of hardware
               | development. Their steering wheel free car program was
               | derailed when they hired traditional car company
               | executives.
        
               | ben_w wrote:
               | Waymo for sure, but I meant also without any geolock
               | etc., so I can't claim credit for my prediction.
               | 
                | They may well beat Tesla to this, though.
        
               | IX-103 wrote:
                | Waymo is using full lidar and other sensors, whereas
                | Tesla is relying on pure vision systems (to the point
                | of removing radar on newer models). So Tesla is solving
                | a much harder problem.
               | 
               | As for whether it's worthwhile to solve that problem when
               | having more sensors will always be safer, that's another
               | issue...
        
               | ben_w wrote:
               | Indeed.
               | 
               | While it ought to be possible to solve for just RGB...
               | making it needlessly hard for yourself is a fun hack-day
               | side project, not a valuable business solution.
        
             | Laaas wrote:
             | Doesn't this just mean it's improving rapidly which is a
             | good thing?
        
               | jeffbee wrote:
               | No, the fact that people say FSD is on the verge of
               | readiness constantly for a decade means there is no
               | widely shared benchmark.
        
             | kylecordes wrote:
             | On one hand, it really has gotten much better over time.
             | It's quite impressive.
             | 
             | On the other hand, I fear/suspect it is asymptotically,
             | rather than linearly, approaching good enough to be
             | unsupervised. It might get halfway there, each year,
             | forever.
        
           | m463 wrote:
           | > the rate of improvement in the past two months has been
           | much faster than before.
           | 
           | I suspect the free trials let Tesla collect orders of
           | magnitude more data on events requiring human intervention.
           | If each one is a learning event, it could exponentially
           | improve things.
           | 
           | I tried it on a loaner car and thought it was pretty good.
           | 
           | One bit of feedback I would give Tesla: when you get some
           | sort of FSD message on the center screen, make the text BIG
           | and either make it linger longer or let you recall it.
           | 
           | For example, it took me a couple tries to read the message
           | that gave instructions on how to give Tesla feedback on why
           | you intervened.
           | 
           | EDIT: look at this graph
           | 
           | https://electrek.co/wp-
           | content/uploads/sites/3/2024/10/Scree...
        
           | latexr wrote:
           | > At the current rate of improvement it will be quite good
           | within a year and in two or three I could see it actually
           | reaching the point where it could operate unsupervised.
           | 
           | That's not a reasonable assumption. You can't just
           | extrapolate a "software rate of improvement"; that's not how
           | it works.
        
             | modeless wrote:
              | The increase in the rate of improvement coincides with
              | their finishing the switch to end-to-end machine
              | learning. ML does have scaling laws, actually.
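              | 
              | (For reference, the empirical scaling laws people cite
              | have a simple power-law shape. A minimal sketch in
              | Python, with purely illustrative constants, nothing
              | Tesla-specific:)
              | 
              |   # Generic empirical scaling law: error falls as a
              |   # power of training scale (data/compute), toward an
              |   # irreducible floor.
              |   def scaled_error(scale, floor=0.01, c0=1.0,
              |                    alpha=0.3):
              |       # floor, c0, alpha are fitted constants
              |       return floor + (c0 / scale) ** alpha
              | 
              |   for s in (1, 10, 100, 1000):
              |       print(s, round(scaled_error(s), 4))
              |   # Error shrinks predictably with scale but flattens
              |   # toward `floor`; scale alone never reaches zero.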
             | 
             | Tesla collects their own data, builds their own training
             | clusters with both Nvidia hardware and their own custom
             | hardware, and deploys their own custom inference hardware
             | in the cars. There is no obstacle to them scaling up
             | massively in all dimensions, which basically guarantees
             | significant progress. Obviously you can disagree about
             | whether that progress will be enough, but based on the
             | evidence I see from using it, I think it will be.
        
           | josefx wrote:
           | > it will be quite good within a year
           | 
           | The regressions are getting worse. At the first release
           | announcement it was only hitting regulatory hurdles, and now
           | the entire software stack is broken? They should fire
           | whoever is in charge and restore the state Elon tried to
           | release a decade ago.
        
         | potato3732842 wrote:
          | If you were a poorer driver who did these things, you
          | wouldn't find these faults so damning, because it'd only be,
          | say, 10% dumber than you rather than 40% or whatever (just
          | making up those numbers).
        
           | bastawhiz wrote:
           | That just implies FSD is as good as a bad driver, which isn't
           | really an endorsement.
        
             | potato3732842 wrote:
             | I agree it's not an endorsement but we allow chronically
             | bad drivers on the road as long as they're legally bad and
             | not illegally bad.
        
               | kelnos wrote:
               | We do that for reasons of practicality: the US is built
               | around cars. If we were to revoke the licenses of the 20%
               | worst drivers, most of those people would be unable to
               | get to work and end up homeless.
               | 
               | So we accept that there are some bad drivers on the road
               | because the alternative would be cruel.
               | 
               | But we don't have to accept bad software drivers.
        
               | potato3732842 wrote:
               | Oh, I'm well aware how things work.
               | 
                | But we should look down on them and speak poorly of
                | them the same as we look down on and speak poorly of
                | everyone else who's discourteous in public spaces.
        
         | dreamcompiler wrote:
         | > It didn't merge left to make room for vehicles merging onto
         | the highway. The vehicles then tried to cut in. The system
         | should have avoided an unsafe situation like this in the first
         | place.
         | 
         | This is what bugs me about ordinary autopilot. Autopilot
         | doesn't switch lanes, but I like to slow down or speed up as
         | needed to allow merging cars to enter my lane. Autopilot never
         | does that, and I've had some close calls with irate mergers who
         | expected me to work with them. And I don't think they're wrong.
         | 
         | Just means that when I'm cruising in the right lane with
         | autopilot I have to take over if a car tries to merge.
        
           | bastawhiz wrote:
           | Agreed. Automatic lane changes are the only feature of
           | enhanced autopilot that I think I'd be interested in, solely
           | for this reason.
        
           | kelnos wrote:
           | While I certainly wouldn't object to how you handle merging
           | cars (it's a nice, helpful thing to do!), I was always taught
           | that if you want to merge into a lane, you are the sole
           | person responsible for making that possible and making that
           | safe. You need to get your speed and position right, and if
           | you can't do that, you don't merge.
           | 
           | (That's for merging onto a highway from an entrance ramp, at
           | least. If you're talking about a zipper merge due to a lane
           | ending or a lane closure, sure, cooperation with other
           | drivers is always the right thing to do.)
        
             | lotsofpulp wrote:
             | >cooperation with other drivers is always the right thing
             | to do
             | 
             | Correct, including when the other driver may not have the
             | strictly interpreted legal right of way. You don't know if
             | their vehicle is malfunctioning, or if the driver is
             | malfunctioning, or if they are being overly aggressive or
             | distracted on their phone.
             | 
             | But most of the time, on an onramp to a highway, people on
             | the highway in the lane that is being merged into need to
             | be taking into account the potential conflicts due to
             | people merging in from the acceleration lane. Acceleration
             | lanes can be too short, other cars may not have the
             | capability to accelerate quickly, other drivers may not be
             | as confident, etc.
             | 
             | So while technically, the onus is on people merging in, a
             | more realistic rule is to take turns whenever congestion
             | appears, even if you have right of way.
        
             | lolinder wrote:
             | I was taught that in _every_ situation you should act as
             | though you are the sole person responsible for making the
             | interaction safe.
             | 
             | If you're the one merging? It's on you. If you're the one
             | being merged into? Also you.
             | 
             | If you assume that every other driver has a malfunctioning
             | vehicle or is driving irresponsibly then your odds of a
              | crash go way down because you _assume_ that they're
              | going to try to merge incorrectly.
        
             | llamaimperative wrote:
             | More Americans should go drive on the Autobahn. Everyone
             | thinks the magic is "omg no speed limits!" which is neat
             | but the _really_ amazing thing is that NO ONE sits in the
             | left hand lane and EVERYONE will let you merge
             | _immediately_ upon signaling.
             | 
             | It's like a children's book explanation of the nice things
             | you can have (no speed limits) if everyone could just stop
             | being such obscenely selfish people (like sitting in the
             | left lane or preventing merges because of some weird "I
             | need my car to be in front of their car" fixation).
        
               | rvnx wrote:
                | Tesla FSD on the German Autobahn = most dangerous thing
                | ever. The car has never seen this rule, and it's not
                | ready for a 300 km/h car behind you.
        
             | macNchz wrote:
             | At least in the northeast/east coast US there are still
             | lots of old parkways without modern onramps, where moving
             | over to let people merge is super helpful. Frequently these
             | have bad visibility and limited room to accelerate if any
             | at all, so doing it your way is not really possible.
             | 
             | For example:
             | 
             | I use this onramp fairly frequently. It's rural and rarely
             | has much traffic, but when there is you can get stuck for a
             | while trying to get on because it's hard to see the coming
             | cars, and there's not much room to accelerate (unless
             | people move over, which they often do).
             | https://maps.app.goo.gl/ALt8UmJDzvn89uvM7?g_st=ic
             | 
             | Preemptively getting in the left lane before going under
             | this bridge is a defensive safety maneuver I always make--
             | being in the right lane nearly guarantees some amount of
             | conflict with merging traffic.
             | 
             | https://maps.app.goo.gl/PumaSM9Bx8iyaH9n6?g_st=ic
        
             | rainsford wrote:
             | > You need to get your speed and position right, and if you
             | can't do that, you don't merge.
             | 
             | I agree, but my observation has been that the majority of
             | drivers are absolutely trash at doing that and I'd rather
             | they not crash into me, even if would be their fault.
             | 
             | Honestly I think Tesla's self-driving technology is long on
             | marketing and short on performance, but it really helps
             | their case that a lot of the competition is human drivers
             | who are completely terrible at the job.
        
           | japhyr wrote:
           | > Just means that when I'm cruising in the right lane with
           | autopilot I have to take over if a car tries to merge.
           | 
           | Which brings it right back to the original criticism of
           | Tesla's "self driving" program. What you're describing is
           | assisted driving, not anything close to "full self driving".
        
           | dham wrote:
           | Autopilot is just adaptive cruise control with lane keep.
           | Literally every car has this now. I don't see people on
           | Toyota, Honda, or Ford forums complaining that a table-
           | stakes feature of a car doesn't adjust speed or change
           | lanes as a car is merging in. Do you know how insane that
           | sounds? I'm assuming you're in software since you're on
           | Hacker News.
        
         | paulcole wrote:
         | > Until I ride in one and feel safe, I can't have any faith
         | that this is a reasonable system
         | 
         | This is probably the worst way to evaluate self-driving for
         | society though, right?
        
           | bastawhiz wrote:
           | Why would I be supportive of a system that has actively
           | scared me for objectively scary reasons? Even if it's the
           | worst reason, it's not a bad reason.
        
             | paulcole wrote:
             | How you feel while riding isn't an objective thing. It's
             | entirely subjective. You and I can sit side by side and
             | feel differently about the same experience.
             | 
             | I don't see how this is in any way objective besides the
             | fact that you want it to be objective.
             | 
             | You can support things for society that scare you and feel
             | unsafe because you can admit your feelings are subjective
             | and the thing is actually safer than it feels to you
             | personally.
        
               | bastawhiz wrote:
               | I also did write about times when the car would have
               | damaged itself or likely caused an accident, and those
               | are indeed objective problems.
        
               | paulcole wrote:
               | > It failed with a cryptic system error while driving
               | 
               | I'll give you this one.
               | 
               | > In my opinion, the default setting accelerates way too
               | aggressively. I'd call myself a fairly aggressive driver
               | and it is too aggressive for my taste
               | 
               | Subjective.
               | 
               | > It started making a left turn far too early that would
               | have scraped the left side of the car on a sign. I had to
               | manually intervene.
               | 
               | Since you intervened and don't know what would've
               | happened, subjective.
               | 
               | > It tried to make way too many right turns on red when
               | it wasn't safe to. It would creep into the road, almost
               | into the path of oncoming vehicles
               | 
               | Subjective.
               | 
               | > It would switch lanes to go faster on the highway, but
               | then missed an exit on at least one occasion because it
               | couldn't make it back into the right lane in time.
               | Stupid.
               | 
               | Objective.
               | 
               | You've got some fair complaints but the idea that
               | _feeling_ safe is what's needed remains subjective.
        
         | TheCleric wrote:
         | > Lots of people are asking how good the self driving has to be
         | before we tolerate it.
         | 
         | There's a simple answer to this. As soon as it's good enough
         | for Tesla to accept liability for accidents. Until then if
         | Tesla doesn't trust it, why should I?
        
           | genocidicbunny wrote:
           | I think this is probably both the most concise and most
           | reasonable take. It doesn't require anyone to define some
           | level of autonomy or argue about specific edge cases of how
           | the self driving system behaves. And it's easy to apply this
           | principle to not only Tesla, but to all companies making self
           | driving cars and similar features.
        
           | concordDance wrote:
           | What's the current total liability cost for all Tesla
           | drivers?
           | 
           | The average for all USA cars seems to be around $2000/year,
           | so even if FSD were half as dangerous, Tesla would still be
           | paying the equivalent of $1000/year per car (not sure how
           | big insurance margins are, assuming nominal).
           | 
           | Now, if legally the driver could avoid paying insurance for
           | the few times they want/need to drive themselves (e.g. snow?
           | Dunno what FSD supports atm) then it might make sense
           | economically, but otherwise I don't think it would work out.
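           | 
           | Back-of-envelope, using the assumed figures above (not
           | real Tesla data):
           | 
           |   # Rough annual liability cost if Tesla carried the risk
           |   avg_premium = 2000   # USD/year, assumed US average
           |   fsd_risk = 0.5       # assume FSD is half as dangerous
           |   per_car = avg_premium * fsd_risk
           |   fleet = 2_000_000    # hypothetical count of FSD cars
           |   print(per_car, per_car * fleet)
           |   # -> 1000.0 per car, ~$2B/year across that fleet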
        
             | Retric wrote:
             | Liability alone isn't nearly that high.
             | 
              | Car insurance payments include people stealing your car,
              | uninsured motorists, rental cars, and other issues that
              | aren't the driver's fault. Further, insurance payments
              | also include profits for the insurance company,
              | advertising, billing, and other overhead from running a
              | business.
             | 
              | Also, if Tesla were taking on these risks, you'd expect
              | your insurance costs to drop.
        
               | TheCleric wrote:
               | Yeah any automaker doing this would just negotiate a flat
               | rate per car in the US and the insurer would average the
               | danger to make a rate. This would be much cheaper than
               | the average individual's cost for liability on their
               | insurance.
        
               | ryandrake wrote:
               | Somehow I doubt those savings would be passed along to
               | the individual car buyer. Surely buying a car insured by
               | the manufacturer would be much more expensive than buying
               | the car plus your own individual insurance, because the
               | car company would want to profit from both.
        
               | thedougd wrote:
               | And it would be supplementary to the driver's insurance,
               | only covering incidents that happen while FSD is engaged.
                | Arguably Tesla would self-insure and only purchase
                | insurance as a backstop to their liability, maybe
                | through a reinsurance market.
        
               | ywvcbk wrote:
                | What if someone gets killed because of some clear
                | bug/error and the jury decides to award 100s of
                | millions just for that single case? I'm not sure it's
                | trivial for insurance companies to account for that
                | sort of risk.
        
               | ndsipa_pomu wrote:
               | Not trivial, but that is exactly the kind of thing that
               | successful insurance companies factor into their
               | premiums, or specifically exclude those scenarios (e.g.
               | not covering war zones for house insurance).
        
               | kalenx wrote:
               | It is trivial and they've done it for ages. It's called
               | reinsurance.
               | 
               | Basically (_very_ basically, there's more to it) the
               | insurance company insures itself against large claims.
        
               | ywvcbk wrote:
                | I'm not sure Boeing etc. could have insured against
                | liability risk resulting from engineering/design flaws
                | in their aircraft.
        
               | concordDance wrote:
               | Good points, thanks.
        
               | ywvcbk wrote:
                | How much would every death or severe injury caused by
                | FSD cost Tesla? We probably won't know anytime soon,
                | but unlike almost anyone else they can afford to pay
                | out virtually unlimited amounts, and courts will
                | presumably take that into account.
        
             | ywvcbk wrote:
              | Also, I wouldn't be surprised if wrongful death lawsuits
              | could cost Tesla several orders of magnitude more than
              | the current average.
        
           | bdcravens wrote:
           | The liability for killing someone can include prison time.
        
             | TheCleric wrote:
             | Good. If you write software that people rely on with their
             | lives, and it fails, you should be held liable for that
             | criminally.
        
               | beej71 wrote:
               | And such coders should carry malpractice insurance.
        
               | dmix wrote:
               | Drug companies and the FDA (circa 1906) play a very
               | dangerous and delicate dance all the time releasing new
               | drugs to the public. But for over a century now we've
               | managed to figure it out without holding pharma companies
               | criminally liable for every death.
               | 
               | > If you write software that people rely on with their
               | lives, and it fails, you should be held liable for that
               | criminally.
               | 
                | It's easier to type those words on the internet than to
                | make them policy IRL. That sort of policy would likely
                | result in a) killing off all commercial efforts to
                | solve traffic deaths via technology, plus vast amounts
                | of other semi-autonomous technology like farm
                | equipment, or b)
               | government/car companies mandating filming the driver
               | every time they turn it on, because it's technically
               | supposed to be human assisted autopilot in these testing
               | stages (outside restricted pilot programs like Waymo
                | taxis). Those distinctions would matter in a criminal
                | courtroom, even if humans can't always be relied upon
                | to follow the instructions on the bottle's label.
        
               | ryandrake wrote:
               | Your take is understandable and not surprising on a site
               | full of software developers. Somehow, the general
               | software industry has ingrained this pessimistic and
               | fatalistic dogma that says bugs are inevitable and
               | there's nothing you can do to prevent them. Since
               | everyone believes it, it is a self-fulfilling prophecy
               | and we just accept it as some kind of law of nature.
               | 
               | Holding software developers (or their companies) liable
               | for defects would definitely kill off a part of the
               | industry: the very large part that YOLOs code into
               | production and races to get features released without
               | rigorous and exhaustive testing. And why don't they spend
               | 90% of their time testing and verifying and proving their
               | software has no defects? Because defects are inevitable
               | and they're not held accountable for them!
        
               | viraptor wrote:
               | > that says bugs are inevitable and there's nothing you
               | can do to prevent them
               | 
               | I don't think people believe this as such. It may be the
               | short way to write it, but actually what devs mean is
               | "bugs are inevitable at the funding/time available". I
                | often say "bugs are inevitable" when in practice it
                | means "you're not going to pay a team for formal
                | specification, validated implementation and enough
                | reliable hardware".
               | 
               | Which business will agree to making the process 5x longer
               | and require extra people? Especially if they're not
               | forced there by regulation or potential liability?
        
               | everforward wrote:
               | It is true of every field I can think of. Food gets
               | salmonella and what not frequently. Surgeons forget
               | sponges inside of people (and worse). Truckers run over
               | cars. Manufacturers miss some failures in QA.
               | 
               | Literally everywhere else, we accept that the costs of
               | 100% safety are just unreasonably high. People would
               | rather have a mostly safe device for $1 than a definitely
               | safe one for $5. No one wants to pay to have every head
                | of lettuce tested for E. coli, or truckers to drive at
                | 10 mph so they can't kill anyone.
               | 
               | Software isn't different. For the vast majority of
               | applications where the costs of failure are low to none,
               | people want it to be free and rapidly iterated on even if
               | it fails. No one wants to pay for a formally verified
               | Facebook or DoorDash.
        
               | kergonath wrote:
               | > Literally everywhere else, we accept that the costs of
               | 100% safety are just unreasonably high.
               | 
               | Yes, but also in none of these situations would the
               | consumer/customer/patient be held responsible. I don't
               | expect a system to be perfect, but I won't accept any
               | liability if it malfunctions as I use it the way it is
               | intended. And even worse, I would not accept that the
               | designers evade their responsibilities if it kills
               | someone I know.
               | 
               | As the other poster said, I am happy to consider it safe
               | enough the day the company accepts to own its issues and
               | the associated responsibility.
               | 
               | > No one wants to pay for a formally verified Facebook or
               | DoorDash.
               | 
               | This is untenable. Does nobody want a formally verified
               | avionics system in their airliner, either?
        
               | everforward wrote:
               | You could be held liable if it impacts someone else. A
               | restaurant serving improperly cooked chicken that gives
                | people E. coli is liable. Private citizens may not have
               | that duty, I'm not sure.
               | 
               | You would likely also be liable if you overloaded an
               | electrical cable, causing a fire that killed someone.
               | 
               | "Using it in the way it was intended" is largely circular
               | reasoning; of course it wasn't intended to hurt anyone,
               | so any usage that does hurt someone was clearly
               | unintended. People frequently harm each other by misusing
               | items in ways they didn't realize were misuses.
               | 
               | > This is untenable. Does nobody want a formally verified
               | avionics system in their airliner, either?
               | 
               | Not for the price it would cost. Airbus is the pioneer
               | here, and even they apply formal verification sparingly.
               | Here's a paper from a few years ago about it, and how
               | it's untenable to formally verify the whole thing:
               | https://www.di.ens.fr/~delmas/papers/fm09.pdf
               | 
               | Software development effort generally tends to scale
               | superlinearly with complexity. I am not an expert, but
               | the impression I get is that formal verification grows
               | exponentially with complexity to the point that it is
               | untenable for most things beyond research and fairly
               | simple problems. It is a huge pain in the ass to do
               | something like putting time bounds around reading a
               | config file.
               | 
               | IO also sucks in formal verification from what I hear,
               | and that's like 80% of what a plane does. Read these 300
               | signals, do some standard math, output new signals to
               | controls.
               | 
               | These things are much easier to do with tests, but tests
               | only check for scenarios you've thought of already
        
               | kergonath wrote:
                | > You could be held liable if it impacts someone else.
                | A restaurant serving improperly cooked chicken that
                | gives people E. coli is liable. Private citizens may
                | not have that duty, I'm not sure.
                | 
                | > You would likely also be liable if you overloaded an
                | electrical cable, causing a fire that killed someone.
               | 
               | Right. But neither of these examples are following
               | guidelines or proper use. If I turn the car into people
               | on the pavement, I am responsible. If the steering wheel
               | breaks and the car does it, then the manufacturer is
               | responsible (or the mechanic, if the steering wheel was
               | changed). The question at hand is whose responsibility it
               | is if the car's software does it.
               | 
               | > "Using it in the way it was intended" is largely
               | circular reasoning; of course it wasn't intended to hurt
               | anyone, so any usage that does hurt someone was clearly
               | unintended.
               | 
               | This is puzzling. You seem to be conflating use and
               | consequences and I am not quite sure how you read that in
               | what I wrote. Using a device normally should not make it
               | kill people, I guess at least we can agree on that.
               | Therefore, if a device kills people, then it is either
               | improper use (and the fault of the user), or a defective
               | device, at which point it is the fault of the designer or
               | manufacturer (or whoever did the maintenance, as the case
               | might be, but that's irrelevant in this case).
               | 
               | Each device has a manual and a bunch of regulations about
               | its expected behaviour and standard operating procedures.
               | There is nothing circular about it.
               | 
               | > Not for the price it would cost.
               | 
               | Ok, if you want to go full pedantic, note that I wrote
               | "want", not "expect".
        
               | ywvcbk wrote:
                | Punishing individual developers is of course absurd
                | (unless intent can be proven); the company itself and
                | the upper management, on the other hand? That would
                | make perfect sense.
        
               | chgs wrote:
               | You have one person in that RACI accountable box. That's
               | the engineer signing it off as fit. They are held
               | accountable, including with jail if required.
        
               | tsimionescu wrote:
               | > And why don't they spend 90% of their time testing and
               | verifying and proving their software has no defects?
               | Because defects are inevitable and they're not held
               | accountable for them!
               | 
               | For a huge part of the industry, the reason is entirely
               | different. It is because software that mostly works today
               | but has defects is _much_ more valuable than software
               | that always works and has no defects 10 years from now.
               | Extremely well informed business customers will pay for
               | delivering a buggy feature today rather than wait two
               | more months for a comprehensively tested feature. This is
               | the reality of the majority of the industry: consumers
               | care little about bugs (below some defect rate) and care
               | far more about timeliness.
               | 
               | This of course doesn't apply to critical systems like
               | automatic drivers or medical devices. But the vast
               | majority of the industry is not building these types of
               | systems.
        
               | hilsdev wrote:
               | We should hold Pharma companies liable for every death.
               | They make money off the success cases. Not doing so is
               | another example of privatized profits and socialized
                | risks/costs. Something like a program with reduced
                | costs for those willing to sign away liability could
                | help balance social good against risk.
        
               | ywvcbk wrote:
               | > criminally liable for every death.
               | 
               | The fact that people generally consume drugs voluntarily
               | and make that decision after being informed about most of
               | the known risks probably mitigates that to some extent.
               | Being killed by someone else's FSD car seems to be very
               | different
        
               | sokoloff wrote:
               | Imagine that in 2031, FSD cars could exactly halve all
               | aspects of auto crashes (minor, major, single car, multi
               | car, vs pedestrian, fatal/non, etc.)
               | 
               | Would you want FSD software to be developed or not? If
               | you do, do you think holding devs or companies criminally
               | liable for half of all crashes is the best way to ensure
               | that progress happens?
        
               | blackoil wrote:
                | Say cars have near-zero casualties in the northern
                | hemisphere but occasionally fail for cars driving
                | topsy-turvy in the south. If the company knew about it
                | and chose to ignore it because of profits, then yes,
                | they should be charged criminally.
        
               | ywvcbk wrote:
               | From a utilitarian perspective sure, you might be right
               | but how do you exempt those companies from civil
               | liability and make it impossible for victims/their
               | families to sue the manufacturer? Might be legally tricky
               | (driver/owner can explicitly/implicitly agree with the
               | EULA or other agreements, imposing that on third parties
               | wouldn't be right).
        
               | Majromax wrote:
               | > how do you exempt those companies from civil liability
               | and make it impossible for victims/their families to sue
               | the manufacturer?
               | 
               | I don't think anyone in this thread has talked about an
               | exemption from _civil_ liability (sue for money), just
               | criminal liability (go to jail).
               | 
               | Civil liability is the far less controversial issue
               | because it's transferred all the time: governments even
               | mandate that drivers carry insurance for this purpose.
               | 
               | With civil liability transfer, imperfect FSD can still
               | make economic sense. Just as an insurance company needs
               | to collect enough premium to pay claims, the FSD
               | manufacturer would need to reserve enough revenue to pay
               | its expected claims. In this case, FSD doesn't even need
               | to be better than humans to make economic sense, in the
               | same way that bad drivers can still buy (expensive)
               | insurance.
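                | 
                | Concretely (every number below is invented purely for
                | illustration):
                | 
                |   # Insurance-style expected-claims reserve per car
                |   miles_per_year = 12_000
                |   at_fault_claims_per_mile = 1 / 2_000_000
                |   avg_claim = 20_000   # USD per claim
                |   reserve = (miles_per_year
                |              * at_fault_claims_per_mile * avg_claim)
                |   print(reserve)  # -> 120.0 USD/car/year; a cost the
                |                   # maker can fold into the FSD price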
        
               | ywvcbk wrote:
               | > just criminal liability (go to jail).
               | 
                | That just seems like a theoretical possibility (if even
                | that). I don't see how any engineer or even someone in
               | management could go to jail unless intent or gross
               | negligence can be proven.
               | 
               | > drivers carry insurance for this purpose.
               | 
               | The mandatory limit is extremely low in many US states.
               | 
               | > expected claims
               | 
               | That seems like the problem. It might take a while until
               | we reach an equilibrium of some sort.
               | 
               | > that bad drivers can still buy
               | 
               | That's still capped by the amount of coverage + total
                | assets held by that bad driver. In Tesla's case there is
               | no real limit (without legislation/established
               | precedent). Juries/courts would likely be influenced by
               | that fact as well.
        
               | DennisP wrote:
               | In fact, if you buy your insurance from Tesla, you
               | effectively do put civil responsibility for FSD back in
               | their hands.
        
               | ekianjo wrote:
               | > make that decision after being informed about most of
               | the known risks
               | 
               | Like for the COVID-19 vaccines? Experimental yet given to
               | billions without ever showing them a consent form.
        
               | ywvcbk wrote:
               | Yes, but worse. Nobody physically forced anyone to get
               | vaccinated so you still had some choice. Of course
               | legally banning individuals from using public roads or
               | sidewalks unless they give up their right to sue
               | Tesla/etc. might be an option.
        
               | dansiemens wrote:
               | Are you suggesting that individuals should carry that
               | liability?
        
               | izacus wrote:
               | The ones that are identified as making decisions leading
               | to death, yes.
               | 
               | It's completely normal in other fields where engineers
               | build systems that can kill.
        
               | A4ET8a8uTh0 wrote:
                | Pretty much. Fuck. I just watched higher-ups sign off
                | on a project I know for a fact has defects all over the
                | place going into production, despite our very explicit
                | "don't do it" (not quite Tesla-level consequences, but
                | still resulting in real issues for real people). The
                | sooner we start putting people in jail for knowingly
                | approving half-baked software, the sooner it will
                | improve.
        
               | IX-103 wrote:
               | Should we require Professional Engineers to sign off on
               | such projects the same way they are required to for other
               | safety critical infrastructure (like bridges and dams)?
               | The Professional Engineer that signed off is liable for
               | defects in the design. (Though, of course, if the design
               | is not followed then liability can shift back to the
               | company that built it)
        
               | A4ET8a8uTh0 wrote:
                | I hesitate, because I shudder at government deciding
                | which algorithm is best for a given scenario (because
                | that is effectively where it would go). Maybe the
                | distinction is the moment money changes hands based on
                | the product?
               | 
                | I am not an engineer, but I have watched clearly bad
                | decisions, from a technical perspective, made by a
                | person with a title that went to their head and a bonus
                | not aligned with the right incentives, mess things up
                | for us. Maybe some professionalization of software
                | engineering is in order.
        
               | _rm wrote:
               | What a laugh, would you take that deal?
               | 
               | Upside: you get paid a 200k salary, if all your code
               | works perfectly. Downside: if it doesn't, you go to
               | prison.
               | 
               | The users aren't compelled to use it. They can choose not
               | to. They get to choose their own risks.
               | 
               | The internet is a gold mine of creatively moronic
               | opinions.
        
               | moralestapia wrote:
               | Read the site rules.
               | 
               | And also, of course some people would take that deal, and
               | of course some others wouldn't. Your argument is moot.
        
               | thunky wrote:
               | You can go to prison or die for being a bad driver, yet
               | people choose to drive.
        
               | ukuina wrote:
                | Systems evolve to handle such liability: drivers pass
                | theory and practical tests to get licensed to drive
                | (and periodically thereafter), and an insurance
                | framework gauges your risk level and charges you
                | accordingly.
        
               | kergonath wrote:
               | Requiring formal licensing and possibly insurance for
               | developers working on life-critical systems is not that
               | outlandish. On the contrary, that is already the case in
               | serious engineering fields.
        
               | ekianjo wrote:
               | And yet tens of thousands of people die on the roads
               | right now every year. Working well?
        
               | _rm wrote:
                | Arguing for the sake of it; you wouldn't take that
                | risk/reward tradeoff.
               | 
               | Most code has bugs from time to time even when highly
               | skilled developers are being careful. None of them would
               | drive if the fault rate was similar and the outcome was
               | death.
        
               | notahacker wrote:
                | Or to put it even more straightforwardly: people who
                | choose to drive rarely expect to drive more than a few
                | tens of thousands of miles per year. People who choose
                | to write an autonomous system's lines of code
                | potentially drive a billion miles per year,
                | experiencing far more edge cases they are expected to
                | handle in a non-dangerous manner, and they have to
                | handle them via advance planning and interactions with
                | a lot of other people's code.
               | 
               | The only practical way around this which permits
               | autonomous vehicles (which are apparently dependent on
               | much more complex and intractable codebases than, say,
               | avionics) is a much higher threshold of criminal
               | responsibility than the "the serious consequences
               | resulted from the one-off execution of an dangerous
               | manoeuvre which couldn't be justified in context" which
               | sends human drivers to jail. And of course that double
               | standard will be problematic if "willingness to accept
               | liability" is the only safety threshold.
        
               | 7sidedmarble wrote:
               | I don't think anyone's seriously suggesting people be
               | held accountable for bugs which are ultimately accidents.
                | But if you knowingly sign off on, oversee, or are
               | otherwise directly responsible for the construction of
               | software that you know has a good chance of killing
               | people, then yes, there should be consequences for that.
        
               | chgs wrote:
                | We need far more regulation of the software industry;
                | far too many people working in it fail to understand
                | the scope of what they do.
               | 
               | Civil engineer kills someone with a bad building, jail.
               | Surgeon removes the wrong lung, jail. Computer programmer
               | kills someone, "oh well it's your own fault".
        
               | caddemon wrote:
               | I've never heard of a surgeon going to jail over a
               | genuine mistake even if it did kill someone. I'm also not
               | sure what that would accomplish - take away their license
               | to practice medicine sure, but they're not a threat to
               | society more broadly.
        
               | bdcravens wrote:
                | Assuming there are the kinds of guard rails found in
                | other industries where this is true, absolutely. (In
                | other words, proper licensing and credentialing, and
                | the legal ability to prevent a deployment.)
               | 
               | I would also say that if something gets signed off on by
               | management, that carries an implicit transfer of
               | accountability up the chain from the individual
               | contributor to whoever signed off.
        
               | viraptor wrote:
               | That's a dangerous line and I don't think it's correct.
               | Software I write shouldn't be relied on in critical
               | situations. If someone makes that decision then it's on
               | them not on me.
               | 
               | The line should be where a person tells others that they
               | can rely on the software with their lives - as in the
               | integrator for the end product. Even if I was working on
               | the software for self driving, the same thing would apply
               | - if I wrote some alpha level stuff for the internal
               | demonstration and some manager decided "good enough, ship
               | it", they should be liable for that decision. (Because I
               | wouldn't be able to stop them / may have already left by
               | then)
        
               | presentation wrote:
                | To be fair, maybe the software you write shouldn't be
                | relied on in critical situations, but in this case the
                | only places this software could be used are critical
                | situations.
        
               | viraptor wrote:
               | Ultimately - yes. But as I mentioned, the fact it's sold
               | as ready for critical situations doesn't mean the
               | developers thought/said it's ready.
        
               | elric wrote:
               | I think it should be fairly obvious that it's not the
               | individual developers who are responsible/liable. In
               | critical systems there is a whole chain of liability.
               | That one guy in Nebraska who thanklessly maintains some
               | open source lib that BigCorp is using in their car should
               | obviously not be liable.
        
               | f1shy wrote:
                | It depends. If you write bad software and skip reviews
                | and processes, you may be liable. Even if you are told
                | to do something, if you know it is wrong, you should
                | say so. Right now I'm in the middle of s*t because I
                | spoke up.
        
               | Filligree wrote:
                | > Right now I'm in the middle of s*t because I spoke
                | up.
               | 
               | And you believe that, despite experiencing what happens
               | if you speak up?
               | 
               | We shouldn't simultaneously require people to take heroic
               | responsibility, while also leaving them high and dry if
               | they do.
        
               | f1shy wrote:
                | I do believe I am responsible. I recognize I'm now in a
                | position where I can speak without fear. If I got fired
                | I would throw a party, tbh.
        
               | gmueckl wrote:
               | But someone slapped that label on it and made a pinky
               | promise that it's true. That person needs to accept
               | liability if things go wrong. If person A is loud and
               | clear that something isn't ready, but person B tells the
               | customer otherwise, B is at fault.
               | 
               | Look, there are well established procedures in a lot of
               | industries where products are relied on to keep people
               | safe. They all require quite rigorous development and
               | certification processes and sneaking untested alpha
               | quality software through such a process would be actively
               | malicious and quite possibly criminal in and of itself,
               | at least in some industries.
        
               | viraptor wrote:
               | This is the beginning of the thread
               | https://news.ycombinator.com/item?id=41891164
               | 
               | You're in violent agreement with me ;)
        
               | latexr wrote:
               | No, the beginning of the thread is earlier. And with that
               | context it seems clear to me that the "you" in the post
               | you linked means "the company", not "the individual
               | software developer". No one else in your replies seems
               | confused by that, we all understand self-driving software
               | wasn't written by a single person that has ultimate
               | decision power within a company.
        
               | viraptor wrote:
               | If the message said "you release software", or "approve"
               | or "produce", or something like that, sure. But it said
               | "you write software" - and I don't think that can apply
               | to a company, because writing is what individuals do. But
               | yeah, maybe that's not what the author meant.
        
               | latexr wrote:
               | > and I don't think that can apply to a company, because
               | writing is what individuals do.
               | 
               | By that token, no action could ever apply to a company--
               | including approving, producing, or releasing--since it is
               | a legal entity, a concept, not a physical thing. For all
               | those actions there was a person actually doing it in the
               | name of the company.
               | 
               | It's perfectly normal to say, for example, "GenericCorp
               | wrote a press-release about their new product".
        
               | kergonath wrote:
               | It's not that complicated or outlandish. That's how most
               | engineering fields work. If a building collapses because
               | of design flaws, then the builders and architects can be
               | held responsible. Hell, if a car crashes because of a
               | design or assembly flaw, the manufacturer is held
               | responsible. Why should self-driving software be any
               | different?
               | 
               | If the software is not reliable enough, then don't use it
               | in a context where it could kill people.
        
               | krisoft wrote:
               | I think the example here is that the designer draws a
               | bridge for a railway model, and someone decides to use
               | the same design and sends real locomotives across it. Is
               | the original designer (who neither intended nor could
               | have foreseen this) liable in your understanding?
        
               | ndsipa_pomu wrote:
               | That's a ridiculous argument.
               | 
               | If a construction firm takes an arbitrary design and then
               | tries to build it in a totally different environment and
               | for a different purpose, then the construction firm is
               | liable, not the original designer. It'd be like Boeing
               | taking a child's paper aeroplane design and making a
               | passenger jet out of it and then blaming the child when
               | it inevitably fails.
        
               | wongarsu wrote:
               | Or alternatively, if Boeing uses wood screws to attach an
               | airplane door and the screw fails that's on Boeing, not
               | the airline, pilot or screw manufacturer. But if it's
               | sold as aerospace-grade attachment bolt with attachments
               | for safety wire and a spec sheet that suggests the
               | required loads are within design parameters then it's the
                | bolt manufacturer's fault when it fails, and they might
               | have to answer for any deaths resulting from that. Unless
               | Boeing knew or should have known that the bolts weren't
               | actually as good as claimed, then the buck passes back to
               | them
               | 
               | Of course that's wildly oversimplifying and multiple
               | entities can be at fault at once. My point is that these
               | are normal things considered in regular engineering and
                | manufacturing.
        
               | krisoft wrote:
               | > That's a ridiculous argument.
               | 
               | Not making an argument. Asking a clarifying question
               | about someone else's.
               | 
               | > It'd be like Boeing taking a child's paper aeroplane
               | design and making a passenger jet out of it and then
               | blaming the child when it inevitably fails.
               | 
               | Yes exactly. You are using the same example I used to say
               | the same thing. So which part of my message was
               | ridiculous?
        
               | ndsipa_pomu wrote:
               | If it's not an argument, then you're just misrepresenting
               | your parent poster's comment by introducing a scenario
               | that never happens.
               | 
               | If you didn't intend your comment as a criticism, then
               | you phrased it poorly. Do you actually believe that your
               | scenario happens in reality?
        
               | lcnPylGDnU4H9OF wrote:
               | It was not a misrepresentation of anything. They were
               | just restating the worry that was stated in the GP
               | comment. https://news.ycombinator.com/item?id=41892572
               | 
               | And the only reason the commenter I linked to had that
               | response is because its parent comment was slightly
               | careless in its phrasing. Probably just change "write" to
               | "deploy" to capture the intended meaning.
        
               | krisoft wrote:
               | > you're just misrepresenting your parent poster's
               | comment
               | 
               | I did not represent or misrepresent anything. I have
               | asked a question to better understand their thinking.
               | 
               | > If you didn't intend your comment as a criticism, then
               | you phrased it poorly.
               | 
               | Quite probably. I will have to meditate on it.
               | 
               | > Do you actually believe that your scenario happens in
               | reality?
               | 
               | With railway bridges? Never. It would ring alarm bells
               | for everyone from the fabricators to the locomotive
               | engineer.
               | 
                | With software? All the time. Someone publishes some open
                | source code, someone else at a corporation bolts that
                | code into some application, and now the former "toy
                | train bridge" is a load-bearing key component of
                | something the original developer could never have
                | imagined or planned for.
               | 
               | This is not theoretical. Very often I'm the one doing the
               | bolting.
               | 
               | And to be clear: my opinion is that the liability should
               | fall with whoever integrated the code and certified it to
               | be fit for some safety critical purpose. As an example if
               | you publish leftpad and i put it into a train brake
               | controller it is my job to make sure it is doing the
               | right thing. If the train crashes you as the author of
               | leftpad bear no responsibility but me as the manufacturer
               | of discount train brakes do.
        
               | kergonath wrote:
                | Someone, at some point, signed off on this being
                | released. Not thinking things through seriously is not
                | an excuse for selling defective cars.
        
               | f1shy wrote:
               | Are you serious?! You must be trolling!
        
               | krisoft wrote:
               | I assure you I am not trolling. You appear to have
               | misread my message.
               | 
               | Take a deep breath. Read my message one more time
               | carefully. Notice the question mark at the end of the
               | last sentence. Think about it. If after that you still
               | think I'm trolling you or anyone else I will be here and
               | happy to respond to your further questions.
        
               | sigh_again wrote:
               | >Software I write shouldn't be relied on in critical
               | situations.
               | 
               | Then don't write software to be used in things that are
               | literally always critical situations, like cars.
        
               | mensetmanusman wrote:
                | Software runs on hardware that can suffer bit flips
                | from gamma rays.
        
               | aaronmdjones wrote:
               | Which is why hardware used to run safety-critical
               | software is made redundant.
               | 
               | Take the Boeing 777 Primary Flight Computer for example.
               | This is a fully digital fly-by-wire aircraft. There are 3
               | separate racks of equipment housing identical flight
               | computers; 2 in the avionics bay underneath the flight
               | deck, 1 in the aft cargo section. Each flight computer
                | has 3 separate processors, supporting 3 dissimilar
               | instruction set architectures, running the same software
               | built by 3 separate compilers. Each flight computer
               | captures instances of the software not agreeing about an
               | action to be undertaken and wins by majority vote. The
               | processor that makes these decisions is different in each
               | flight computer.
               | 
               | The power systems that provide each flight computer are
               | also fully redundant; each computer gets power from a
               | power supply assembly, which receives 2 power feeds from
               | 3 separate power supplies; no 2 power supply assemblies
               | share the same 2 sources of power. 2 of the 3 power
               | systems (L engine generator, R engine generator, and the
               | hot battery bus) would have to fail and the APU would
               | have to be unavailable in order to knock out 1 of the 3
               | computers.
               | 
               | This system has never failed in 30 years of service.
                | There's still a primary flight computer disconnect
                | switch on the overhead panel in the cockpit, which takes
                | the software out of the loop and logically connects all
                | of your control inputs to the flight surface actuators.
                | I'm not aware of it ever being used (edit: in a
                | commercial flight).
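                | 
                | A minimal sketch of that voting idea in Python (the
                | names and values are hypothetical, not the actual 777
                | implementation): each lane computes the same command
                | independently, and a command is only acted on when a
                | strict majority of lanes agrees.
                | 
                |     def majority_vote(lane_outputs):
                |         # Return the command a strict majority of the
                |         # redundant lanes agrees on, else None.
                |         for candidate in lane_outputs:
                |             votes = sum(1 for out in lane_outputs
                |                         if out == candidate)
                |             if votes * 2 > len(lane_outputs):
                |                 return candidate
                |         return None  # no majority: flag frame as faulty
                | 
                |     # One lane suffers a bit flip and is simply outvoted:
                |     print(majority_vote([12.5, 13.0, 12.5]))  # -> 12.5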
        
               | mensetmanusman wrote:
               | You can't guarantee the hardware was properly built.
        
               | aaronmdjones wrote:
               | Unless Intel, Motorola, and AMD all conspire to give you
               | a faulty processor, you will get a working primary flight
               | computer.
               | 
               | Besides, this is what flight testing is for. Aviation
               | certification authorities don't let an aircraft serve
               | passengers unless you can demonstrate that all of its
               | safety-critical systems work properly and that it
               | performs as described.
               | 
               | I find it hard to believe that automotive works much
               | differently in this regard, which is what things like
               | crumple zone crash tests are for.
        
               | chgs wrote:
                | You can control for that. Multiple machines doing rival
                | calculations, for example.
        
               | ekianjo wrote:
               | How is that working with Boeing?
        
               | mlinhares wrote:
                | People often forget corporations don't go to jail.
                | Murder, when you're not a person, ends up with a slap on
                | the wrist.
        
               | bossyTeacher wrote:
                | Doesn't seem to happen in the medical and airplane
                | industries; otherwise, Boeing would most likely not
                | exist as a company anymore.
        
               | jsvlrtmred wrote:
                | Perhaps one can debate whether it happens often enough
                | or severely enough, but it certainly _happens_. For
                | example - and only the first one to come to mind - the
                | president of PIP went to jail.
        
               | hibikir wrote:
                | Remember that this is neural networks doing the driving,
                | more than old expert systems: what makes a crash happen
                | is a network that fails to read an image correctly, or a
                | network that fails to capture what is going on when
                | melding input from different sensors.
                | 
                | So the blame won't be on a guy who got an if statement
                | backwards, but on someone signing off on stopping
                | training, failing to include certain kinds of images in
                | the training set, or some other similar, higher-order
                | problem. Blame will be incredibly nebulous.
        
               | sashank_1509 wrote:
               | Do we send Boeing engineers to jail when their plane
               | crashes?
               | 
                | Intent matters when passing judgement on a crime. If a
                | mother causes the death of her baby through some poor
                | decision (say, feeding her something contaminated), no
                | one proposes or tries to jail the mother, because they
                | know her intent was the opposite.
        
             | renegade-otter wrote:
             | In the United States? Come on. Boeing executives are not in
             | jail - they are getting bonuses.
        
               | f1shy wrote:
                | But some little guy down the line will pay for it. Look
                | up the Eschede ICE accident.
        
               | renegade-otter wrote:
               | There are many examples.
               | 
               | The Koch brothers, famous "anti-regulatory state"
               | warriors, have fought oversight so hard that their gas
               | pipelines were allowed to be barely intact.
               | 
               | Two teens get into a truck, turn the ignition key - and
               | the air explodes:
               | 
               | https://www.southcoasttoday.com/story/news/nation-
               | world/1996...
               | 
               | Does anyone go to jail? F*K NO.
        
               | IX-103 wrote:
               | To be fair, the teens knew about the gas leak and started
               | the truck in an attempt to get away. Gas leaks like that
               | shouldn't happen easily, but people near pipelines like
               | that should also be made aware of the risks of gas leaks,
               | as some leaks are inevitable.
        
               | 8note wrote:
                | As an alternative though, the company also failed at
                | handling the gas leak once it started. They could have
                | had people all over the place guiding people out and
                | away from the leak safely, and keeping the public away
                | while the leak was fixed.
                | 
                | Or, they could buy sufficient buffer land around the
                | pipeline such that a gas leak would be found and stopped
                | before it could explode down the road.
        
             | lowbloodsugar wrote:
             | And corporations are people now, so Tesla can go to jail.
        
           | renewiltord wrote:
            | This is how I feel about nuclear energy. Every single plant
            | should have to form a full insurance fund dedicated to
            | paying out if there's trouble. And the plant should have
            | strict liability: anything that happens from materials it
            | releases is its responsibility.
           | 
           | But people get upset about this. We need corporations to take
           | responsibility.
        
             | idiotsecant wrote:
              | While we're at it, how about we apply the same standard
              | to coal and natural gas plants? For some reason, when we
              | start talking about nuclear plants we all of a sudden
              | become averse to the idea of unfunded externalities, but
              | when we're talking about 'old' tech that has been steadily
              | irradiating your community and changing the gas
              | composition _of the entire planet_ it becomes less
              | concerning.
        
               | moooo99 wrote:
               | I think it is a matter of perceived risk.
               | 
               | Realistically speaking, nuclear power is pretty safe. In
               | the history of nuclear power, there were two major
               | incidents. Considering the number of nuclear power plants
               | around the planet, that is pretty good. However, as those
               | two accidents demonstrated, the potential fallout of
               | those incidents is pretty severe and widespread. I think
               | this massively contributes to the perceived risks. The
               | warnings towards the public were pretty clear. I remember
               | my mom telling stories from the time the Chernobyl
               | incident became known to the public and people became
               | worried about the produce they usually had from their
               | gardens. Meanwhile, everything that has been done to
               | address the hazards of fossil based power generation is
               | pretty much happening behind the scenes.
               | 
                | With coal and natural gas, it seems like people perceive
                | the risks as more abstract. The radioactive emissions of
                | coal power plants have been known for a while, and the
                | (potential) dangers of fine particulate matter resulting
                | from combustion are somewhat well known nowadays as
                | well. However, the effects of those dangers seem much
                | more abstract and delayed, leading people to not be as
                | worried about them. It also shows on a smaller, more
                | individual scale: people still buy ICE cars in large
                | numbers and install gas stoves in their houses despite
                | induction being readily available and at times even
                | cheaper.
        
               | pyrale wrote:
                | > However, the effects of those dangers seem much more
                | > abstract and delayed, leading people to not be as
                | > worried about them.
               | 
               | Climate change is very visible in the present day to me.
               | People are protesting about it frequently enough that
               | it's hard to claim they are not worried.
        
               | moooo99 wrote:
                | Climate change is certainly visible, although the extent
                | to which areas are affected varies wildly. However,
                | there are still shockingly many people who have a hard
                | time attributing ever-increasing natural disasters and
                | more extreme weather patterns to climate change.
        
               | brightball wrote:
               | During power outages, having natural gas in your home is
               | a huge benefit. Many in my area just experienced it with
               | Helene.
               | 
               | You can still cook. You can still get hot water. If you
               | have gas logs you still have a heat source in the winter
               | too.
               | 
                | These trade-offs are far more important to a lot of
                | people.
        
               | moooo99 wrote:
               | Granted, that is a valid concern if power outages are
               | more frequent in your area. I have never experienced a
               | power outage personally, so that is nothing I ever
               | thought of. However, I feel like with solar power and
               | battery storage systems becoming increasingly widespread,
               | this won't be a major concern for much longer
        
               | brightball wrote:
               | They aren't frequent but in the last 15-16 years there
               | have been 2 outages that lasted almost 2 weeks in some
               | areas around here. The first one was in the winter and
               | the only gas appliance I had was a set of gas logs in the
               | den.
               | 
               | It heated my whole house and we used a pan to cook over
               | it. When we moved the first thing I did was install gas
               | logs, gas stove and a gas water heater.
               | 
               | It's nice to have options and backup plans. That's one of
               | the reasons I was a huge fan of the Chevy Volt when it
               | first came out. I could easily take it on a long trip but
               | still averaged 130mpg over 3 years (twice). Now I've got
               | a Tesla and when there are fuel shortages it's also
               | really nice.
               | 
                | A friend of ours owns a Cybertruck and was without power
                | for 9 days, but just powered the whole house with the
                | Cybertruck. Every couple of days he'd drive to a
                | Supercharger station to recharge.
        
               | renewiltord wrote:
               | Sure, we can have a carbon tax on everything. That's
               | fine. And then the nuclear plant has to pay for a
               | Pripyat-sized exclusion zone around it. Just like the guy
               | said about Tesla. All fair.
        
             | ndsipa_pomu wrote:
             | That's not a workable idea as it'd just encourage
             | corporations to obfuscate the ownership of the plant (e.g.
             | shell companies) and drastically underestimate the actual
             | risks of catastrophes. Ultimately, the government will be
             | left holding the bill for nuclear catastrophes, so it's
             | better to just recognise that and get the government to
             | regulate the energy companies.
        
               | f1shy wrote:
                | The problem I see there is that if "corporations are
                | responsible" then no one is. That is, no real person
                | bears the responsibility, and everyone acts accordingly.
        
           | tiahura wrote:
           | I think that's implicit in the promise of the upcoming-any-
           | year-now unattended full self driving.
        
           | mrjin wrote:
           | Even if it does, can it resurrect the deceased?
        
             | LadyCailin wrote:
              | But people driving manually kill people all the time too.
              | The bar for self-driving isn't "does it never kill
              | anyone", it's "does it kill fewer people than manual
              | driving". We're not there yet, and Tesla's "FSD" is
              | marketing bullshit, but we certainly will be there one
              | day, and at that point we need to understand what we as a
              | society will do when a self-driving car kills someone.
              | It's not obvious what the best solution is there, and we
              | need to continue to have societal discussions to hash
              | that out, but the correct solution definitely isn't
              | "don't use self-driving".
        
               | amelius wrote:
               | No, because every driver thinks they are better than
               | average.
               | 
               | So nobody will accept it.
        
               | the8472 wrote:
               | I expect insurance to figure out the relative risks and
               | put a price sticker on that decision.
        
               | A4ET8a8uTh0 wrote:
                | Assuming I understand the argument flow correctly, I
                | think I disagree. If there is one thing that the past
                | few decades have confirmed quite conclusively, it is
                | that people will trade away a lot of control and sense
                | in the name of convenience. The moment FSD reaches that
                | "take me home -- I am too drunk to drive" sweet spot of
                | reliability, I think it will be accepted; maybe even
                | required by law. It does not seem to be there yet.
        
               | Majromax wrote:
                | > The bar for self-driving isn't "does it never kill
                | > anyone", it's "does it kill fewer people than manual
                | > driving".
               | 
               | Socially, that's not quite the standard. As a society,
               | we're at ease with auto fatalities because there's often
               | Someone To Blame. "Alcohol was involved in the incident,"
               | a report might say, and we're more comfortable even
               | though nobody's been brought back to life. Alternatively,
               | "he was asking for it, walking at night in dark clothing,
               | nobody could have seen him."
               | 
               | This is an emotional standard that speaks to us as human,
               | story-telling creatures that look for order in the
               | universe, but this is not a proper actuarial standard. We
               | might need FSD to be manifestly safer than even the best
               | human drivers before we're comfortable with its universal
               | use.
        
           | ndsipa_pomu wrote:
           | > As soon as it's good enough for Tesla to accept liability
           | for accidents.
           | 
           | That makes a lot of sense and not just from a selfish point
           | of view. When a person drives a vehicle, then the person is
           | held responsible for how the vehicle behaves on the roads, so
           | it's logical that when a machine drives a vehicle that the
           | machine's manufacturer/designer is held responsible.
           | 
            | It's a complete con that Tesla promotes their autonomous
            | driving but also has their vehicles suddenly switch to
            | non-autonomous driving, which they claim moves the
            | responsibility to the human in the driver's seat.
            | Presumably, the idea is that the human should have been
            | watching and approving everything that the vehicle has done
            | up to that point.
        
             | andrewaylett wrote:
              | The responsibility doesn't shift, it _always_ lies with
              | the human. One problem is that humans are notoriously
              | poor at maintaining attention when supervising
              | automation.
              | 
              | Until the car is ready to take over as the _legal_
              | driver, it's foolish to set the human driver up for
              | failure in the way that Tesla (and the humans driving
              | Tesla cars) do.
        
               | f1shy wrote:
                | What?! So if there is a failure and the car goes full
                | throttle (not an autonomous car), it is my
                | responsibility?! You are pretty wrong!!!
        
               | kgermino wrote:
                | You are responsible (legally, contractually, morally)
                | for supervising FSD today. If the car decided to stomp
                | on the throttle, you would be expected to be ready to
                | hit the brakes.
                | 
                | The whole point is that this is somewhat of an
                | unreasonable expectation, but it's what Tesla expects
                | you to do today.
        
               | f1shy wrote:
                | My example was explicitly NOT about autonomous driving,
                | because the previous comment seems to imply that you
                | are responsible for everything.
        
               | FireBeyond wrote:
               | > If the car decided to stomp on the throttle you are
               | expected to be ready to hit the brakes.
               | 
               | Didn't Tesla have an issue a couple of years ago where
               | pressing the brake did _not_ disengage any throttle? i.e.
               | if the car has a bug and puts throttle to 100% and you
               | stand on the brake, the car should say  "cut throttle to
               | 0", but instead, you just had 100% throttle, 100% brake?
        
               | blackeyeblitzar wrote:
               | If it did, it wouldn't matter. Brakes are required to be
               | stronger than engines.
        
               | FireBeyond wrote:
                | That makes no sense. Yes, they are. But brakes are
                | going to be more responsive and effective with the
                | throttle at 0 than at 100.
                | 
                | You can't imagine that the stopping distances will be
                | the same.
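                | 
                | A rough back-of-the-envelope sketch of that point in
                | Python (all numbers are assumptions, not measurements):
                | strong brakes out-pull a typical engine, so the car
                | still stops at full throttle, but the stopping distance
                | grows substantially.
                | 
                |     G = 9.81                # m/s^2
                |     BRAKE_DECEL = 0.9 * G   # assumed hard braking
                |     ENGINE_ACCEL = 0.3 * G  # assumed full-throttle push
                |     v = 27.0                # ~60 mph, in m/s
                | 
                |     d_normal = v**2 / (2 * BRAKE_DECEL)
                |     d_fight = v**2 / (2 * (BRAKE_DECEL - ENGINE_ACCEL))
                |     print(round(d_normal), round(d_fight))  # 41 m vs 62 m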
        
               | xondono wrote:
                | Autopilot, FSD, etc. are all legally classified as ADAS,
               | so it's different from e.g. your car not responding to
               | controls.
               | 
               | The liability lies with the driver, and all Tesla needs
               | to prove is that input from the driver will override any
               | decision made by the ADAS.
        
               | mannykannot wrote:
               | > The responsibility doesn't shift, it always lies with
               | the human.
               | 
               | Indeed, and that goes for the person or persons who say
               | that the products they sell are safe when used in a
               | certain way.
        
             | f1shy wrote:
             | >> When a person drives a vehicle, then the person is held
             | responsible for how the vehicle behaves on the roads, so
             | it's logical that when a machine drives a vehicle that the
             | machine's manufacturer/designer is held responsible.
             | 
             | Never really understood the supposed dilemma. What happens
             | when the brakes fail because of bad quality?
        
               | arzig wrote:
               | Then this would be manufacturing liability because they
               | are not fit for purpose.
        
               | ndsipa_pomu wrote:
               | > What happens when the brakes fail because of bad
               | quality?
               | 
               | Depends on the root cause of the failure. Manufacturing
               | faults would put the liability on the manufacturer;
               | installation mistakes would put the liability on the
               | mechanic; using them past their useful life would put the
               | liability on the owner for not maintaining them in
               | working order.
        
           | jefftk wrote:
            | Note that Mercedes does take liability for accidents with
            | their (very limited) Level 3 system:
           | https://www.theverge.com/2023/9/27/23892154/mercedes-benz-
           | dr...
        
             | f1shy wrote:
              | Yes. That is the only way. That being said, I want to see
              | the first incidents and how they are resolved.
        
           | theptip wrote:
           | Presumably that is exactly when their taxi service rolls out?
           | 
            | While this has a dramatic rhetorical flourish, I don't
            | think it's a good proxy. Even if it were safer, it would be
            | an unnecessarily high burden to clear. You'd effectively be
            | writing a free insurance policy, which is obviously not
            | free.
            | 
            | Just look at total accidents/deaths per mile driven; it's
            | the obvious and standard metric for measuring car safety.
            | (You need to be careful not to stop the clock as soon as
            | the system disengages, of course.)
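            | 
            | A toy illustration of the "don't stop the clock" caveat in
            | Python (the grace window and all numbers are hypothetical,
            | not from any real dataset): crashes within a few seconds
            | of the system disengaging still count against the system.
            | 
            |     GRACE_S = 5.0  # assumed attribution window, seconds
            | 
            |     def fatal_per_100m_miles(events, system_miles):
            |         # events: (seconds_since_disengage, fatal) pairs;
            |         # None means the system was still engaged.
            |         n = sum(1 for t, fatal in events
            |                 if fatal and (t is None or t <= GRACE_S))
            |         return n / system_miles * 100e6
            | 
            |     # A crash while engaged and one 2s after handback both
            |     # count; a crash a minute after handback does not.
            |     events = [(None, True), (2.0, True), (60.0, True)]
            |     print(fatal_per_100m_miles(events, 5e8))  # -> 0.4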
        
         | mike_d wrote:
         | > Lots of people are asking how good the self driving has to be
         | before we tolerate it.
         | 
         | When I feel as safe as I do sitting in the back of a Waymo.
        
         | eric_cc wrote:
         | That sucks that you had that negative experience. I've driven
         | thousands of miles in FSD and love it. Could not imagine going
         | back. I rarely need to intervene and when I do it's not because
         | the car did something dangerous. There are just times I'd
         | rather take over due to cyclists, road construction, etc.
        
           | windexh8er wrote:
            | I don't believe this at all. I don't own one, but I know
            | about a half dozen people who got suckered into paying for
            | FSD. None of them use it, and 3 of them have stated it's
            | put them in dangerous situations.
            | 
            | I've ridden in an X, S, and Y with it on. Talk about
            | vomit-inducing when letting it drive during "city" driving.
            | I don't doubt it's OK on highway driving, but Ford's
            | BlueCruise and GM's Super Cruise are better there.
        
             | eric_cc wrote:
             | You can believe what you want to believe. It works
             | fantastic for me whether you believe it or not.
             | 
             | I do wonder if people who have wildly different experiences
             | than I have are living in a part of the country that, for
             | one reason or another, Tesla FSD does not yet do as well
             | in.
        
               | kelnos wrote:
               | I think GP is going too far in calling you a liar, but I
               | think for the most part your FSD praise is just kinda...
               | unimportant and irrelevant. GP's aggressive attitude
               | notwithstanding, I think most reasonable people will
               | agree that FSD handles a lot of situations really well,
               | and believe that some people have travel routes where FSD
               | _always_ handles things well.
               | 
                | But ok, great, so what? If that _wasn't_ the case, FSD
               | would be an unmitigated disaster with a body count in the
               | tens of thousands. So in a comment thread about someone
               | talking about the problems and unsafe behavior they've
               | seen, a "well it works for me" reply is just annoying
               | noise, and doesn't really add anything to the discussion.
        
               | eric_cc wrote:
                | Open discussion and sharing different experiences with
                | technology is "annoying noise" to you, but not to me.
                | Should slamming technology that works great for others
                | receive no counterpoints and just become an echo
                | chamber?
        
           | itsoktocry wrote:
           | These "works for me!" comments are exhausting. Nobody
           | believes you "rarely intervene", otherwise Tesla themselves
           | would be promoting the heck out of the technology.
           | 
           | Bring on the videos of you in the passenger seat on FSD for
           | any amount of time.
        
             | eric_cc wrote:
             | It's the counter-point to the "it doesn't work for me"
             | posts. Are you okay with those ones?
        
               | kelnos wrote:
               | I think the problem with the "it works for me" type posts
               | is that most people reading them think the person writing
               | it is trying to refute what the person with the problem
               | is saying. As in, "it works for me, so the problem must
               | be with you, not the car".
               | 
               | I will refrain from commenting on whether or not that's a
               | fair assumption to make, but I think that's where the
               | frustration comes from.
               | 
               | I think when people make "WFM" posts, it would go a long
               | way to acknowledge that the person who had a problem
               | really did have a problem, even if implicitly.
               | 
               | "That's a bummer; I've driven thousands of miles using
               | FSD, and I've felt safe and have never had to intervene.
               | I wonder what's different about our travel that's given
               | us such different experiences."
               | 
               | That kind of thing would be a lot more palatable, I
               | think, even if you might think it's silly/tiring/whatever
               | to have to do that every time.
        
             | omgwtfbyobbq wrote:
             | I can see it. How FSD performs depends on the environment.
             | In some places it's great, in others I take over relatively
             | frequently, although it's usually because it's being
             | annoying, not because it poses any risk.
             | 
             | Being in the passenger seat is still off limits for obvious
             | reasons.
        
           | bastawhiz wrote:
           | I'm glad for you, I guess.
           | 
           | I'll say the autopark was kind of neat, but parking has never
           | been something I have struggled with.
        
           | phito wrote:
            | I hope I never get to share the road with you. Oh wait, I
            | won't; this craziness is illegal here.
        
         | concordDance wrote:
         | This would be more helpful with a date. Was this in 2020 or
         | 2024? I've been told FSD had a complete rearchitecting.
        
           | bastawhiz wrote:
           | It was a few months ago
        
         | dchichkov wrote:
         | > I'm grateful to be getting a car from another manufacturer
         | this year.
         | 
         | I'm curious, what is the alternative that you are considering?
         | I've been delaying an upgrade to electric for some time. And
         | now, a car manufacturer that is contributing to the making of
         | another Jan 6th, 2021 is not an option, in my opinion.
        
           | bastawhiz wrote:
           | I've got a deposit on the Dodge Charger Daytona EV
        
           | lotsofpulp wrote:
           | I also went into car shopping with that opinion, but the
           | options are bleak in terms of other carmakers' software. For
           | some reason, if you want basic software features of a Tesla,
           | the other carmakers want an extra $20k+ (and still don't have
           | some).
           | 
            | A big example: why do the other carmakers not yet offer
            | camera recording on their cars? They are all using cameras
            | all around, but only Tesla makes the footage available to
            | you in case you want it. Bizarre. And then they want to
            | charge you an extra $500+ for one dash cam on the
            | windshield.
           | 
           | I even had Carplay/Android Auto as a basic requirement, but I
           | was willing to forgo that after trying out the other brands.
           | And not having to spend hours at a dealership doing paperwork
           | was amazing. Literally bought the car on my phone and was out
           | the door within 15 minutes on the day of my appointment.
        
             | bink wrote:
             | Rivian also allows recording drives to an SSD. They also
             | just released a feature where you can view the cameras
             | while it's parked. I'm kinda surprised other manufacturers
             | aren't allowing that.
        
               | lotsofpulp wrote:
               | Rivians start at $30k more than Teslas, and while they
               | may be nice, they don't have the track record yet that
               | Tesla does, and there is a risk the company goes bust
               | since it is currently losing a lot of money.
        
         | pbasista wrote:
         | > I'm grateful to be getting a car from another manufacturer
         | this year.
         | 
         | I have no illusions about Tesla's ability to deliver an
         | unsupervised self-driving car any time soon. However, as far as
         | I understand, their autosteer system, in spite of all its
         | flaws, is still the best out there.
         | 
         | Do you have any reason to believe that there actually is
         | something better?
        
           | throwaway314155 wrote:
           | I believe they're fine with losing auto steering
           | capabilities, based on the tone of their comment.
        
           | bastawhiz wrote:
           | Autopilot has not been good. I have a cabin four hours from
           | my home and I've used autopilot for long stretches on the
           | highway. Some of the problems:
           | 
           | - Certain exits are not detected as such and the car
           | violently veers right before returning to the lane. I simply
           | can't believe they don't have telemetry to remedy this.
           | 
           | - Sometimes the GPS becomes miscalibrated. This makes the car
           | think I'm taking an exit when I'm not, causing the car to
           | abruptly reduce its speed to the speed of the ramp. It does
           | not readjust.
           | 
           | - It frequently slows for "emergency lights" that don't
           | exist.
           | 
           | - If traffic comes to a complete stop, the car accelerates
           | way too hard and brakes hard when the car in front moves any
           | substantial amount.
           | 
           | At this point, I'd rather have something less good than
           | something which is an active danger. For all intents and
           | purposes, my Tesla doesn't have reliable cruise control,
           | period.
           | 
           | Beyond that, though, I simply don't have trust in Tesla
           | software. I've encountered _so many_ problems at this point
            | that I can't possibly expect them to deliver a product that
           | works reliably at any point in the future. What reason do I
           | have to believe things will magically improve?
        
             | absoflutely wrote:
             | I'll add that it randomly brakes hard on the interstate
             | because it thinks the speed limit drops to 45. There aren't
             | speed limit signs anywhere nearby on different roads that
             | it could be mistakenly reading either.
        
               | bastawhiz wrote:
               | I noticed that this happens when the triangle on the map
               | is slightly offset from the road, which I've attributed
               | to miscalibrated GPS. It happens consistently when I'm in
               | the right lane and pass an exit when the triangle is ever
               | so slightly misaligned.
        
         | geoka9 wrote:
         | > It didn't merge left to make room for vehicles merging onto
         | the highway. The vehicles then tried to cut in. The system
         | should have avoided an unsafe situation like this in the first
         | place.
         | 
         | I've been on the receiving end of this with the offender being
         | a Tesla so many times that I figured it must be FSD.
        
           | bastawhiz wrote:
           | Probably autopilot, honestly.
        
         | browningstreet wrote:
         | Was this the last version, or the version released today?
         | 
         | I've been pretty skeptical of FSD and didn't use the last
         | version much. Today I used the latest test version, enabled
         | yesterday, and rode around SF, to and from GGP, and it did
         | really well.
         | 
          | Waymo well? Almost. But whereas I haven't ridden Waymo on the
          | highway yet, FSD got me from Hunters Point to the East Bay
          | with no disruptions.
          | 
          | The biggest improvement I noticed was its optimizations on
          | highway progress... it'll change lanes, nicely, when the lane
          | you're in is slower than the surrounding lanes. And when
          | you're in the fast/passing lane it'll return to the next
          | closest lane.
         | 
         | Definitely better than the last release.
        
           | bastawhiz wrote:
           | I'm clearly not using the FSD today because I refused to
           | complete my free trial of it a few months ago. The post of
           | mine that you're responding to doesn't mention my troubles
           | with Autopilot, which I highly doubt are addressed by today's
           | update (see my other comment for a list of problems). They
           | need to really, really prove to me that Autopilot is working
           | reliably before I'd even consider accepting another free
           | trial of FSD, which I doubt they'd do anyway.
        
         | mrjin wrote:
          | I would not even try. The reason is simple: there is
          | absolutely no capacity for understanding in any of the
          | current self-proclaimed autonomous driving approaches, no
          | matter how well they are marketed.
        
         | averageRoyalty wrote:
         | I'm not disagreeing with your experience. But if it's as bad as
         | you say, why aren't we seeing tens or hundreds of FSD
         | fatalities per day or at least per week? Even if only 1000
         | people globally have it on, these issues sound like we should
         | be seeing tens per week.
        
           | bastawhiz wrote:
           | Perhaps having more accidents doesn't mean more fatal
           | accidents.
        
         | heresie-dabord wrote:
         | > After the system error, I lost all trust in FSD from Tesla.
         | 
         | May I ask how this initial trust was established?
        
           | bastawhiz wrote:
           | The numbers that are reported aren't abysmal, and people have
           | anecdotally said good things. I was willing to give it a try
           | while being hyper vigilant.
        
         | kingkongjaffa wrote:
         | > right turns on red
         | 
          | This is an idiosyncrasy of the US (maybe other places too?)
          | and I wonder if it's easier to do self-driving at junctions
          | in countries without this rule.
        
           | dboreham wrote:
            | Only some states allow turns on red, and it's also often
            | overridden by a road sign that forbids them. But for me the
            | ultimate test of AGI is four-or-perhaps-three-or-perhaps-
            | two-way stop intersections. You have to know whether the
            | other drivers have a stop sign or not in order to
            | understand how to proceed, and you can't see that
            | information. As an immigrant to the US this baffles me, but
            | my US-native family members shrug like there's some
            | telepathic way to know. There's also a rule that you yield
            | to vehicles on your right at uncontrolled intersections (if
            | you can determine that it is uncontrolled...) that almost
            | no drivers here seem to have heard of. You have to eyeball
            | the other driver to determine whether or not they look like
            | they remember the road rules. Not sure how a Tesla will do
            | that.
        
             | bink wrote:
             | If it's all-way stop there will often be a small placard
             | below the stop sign. If there's no placard there then
             | (usually) cross traffic doesn't stop. Sometimes there's a
             | placard that says "two-way" stop or one that says "cross
             | traffic does not stop", but that's not as common in my
             | experience.
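              | 
              | A minimal sketch of that placard heuristic in Python (the
              | placard strings are hypothetical and real signage varies
              | by state; an illustration, not a complete rule set):
              | 
              |     def cross_traffic_stops(placard):
              |         # placard: text below the stop sign, or None.
              |         if placard == "ALL WAY":
              |             return True   # every approach must stop
              |         # "TWO-WAY" or "CROSS TRAFFIC DOES NOT STOP"
              |         # placards, like no placard at all, mean cross
              |         # traffic (usually) does not stop.
              |         return False
              | 
              |     print(cross_traffic_stops("ALL WAY"))  # -> True
              |     print(cross_traffic_stops(None))       # -> False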
        
         | herdcall wrote:
          | Same here, but I tried the new 12.5.4.1 yesterday and the
          | difference is night and day. It was near flawless except for
          | some unexplained slowdowns, and you don't even need to hold
          | the steering wheel anymore (it detects attention by watching
          | your face). They are clearly improving rapidly.
        
           | lolinder wrote:
           | How many miles have you driven since the update yesterday? OP
           | described a half dozen different failure modes in a variety
           | of situations that seem to indicate quite extensive testing
           | before they turned it off. How far did you drive the new
           | version and in what circumstances?
        
             | AndroidKitKat wrote:
             | I recently took a 3000 mile road trip on 12.5.4.1 on a mix
             | of interstate, country roads, and city streets and there
             | were only a small handful of instances where I felt like
             | FSD completely failed. It's certainly not perfect, but I
             | have never had the same failures that the original thread
             | poster had.
        
         | anonu wrote:
          | My experience has been directionally the same as yours, but
          | not of the same magnitude. There's a lot of room for
          | improvement, but it's still very good. I'm in a slightly
          | suburban setting... I suspect you're in a far denser location
          | than me, in which case your experience may be different.
        
           | amelius wrote:
           | Their irresponsible behavior says enough. Even if they fix
           | all their technical issues, they are not driven by a safety
           | culture.
           | 
           | The first question that comes to their minds is not "how can
           | we prevent this accident?" but it's "how can we further
           | inflate this bubble?"
        
         | rainsford wrote:
         | Arguably the problem with Tesla self-driving is that it's stuck
         | in an uncanny valley of performance where it's worse than
         | better performing systems but _also_ worse from a user
         | experience perspective than even less capable systems.
         | 
         | Less capable driver assistance type systems might help the
         | driver out (e.g. adaptive cruise control), but leave no doubt
          | that the human is still driving. Tesla though goes far enough
          | that it takes over driving from the human, but it isn't
          | reliable enough that the human can stop paying attention;
          | they must stay ready to take over at a moment's notice. This
          | seems like the worst of all possible worlds, since you are
          | disengaged from driving yet still forced to maintain
          | alertness.
         | 
         | Autopilots in airplanes are much the same way, pilots can't
         | just turn it on and take a nap. But the difference is that
         | nothing an autopilot is going to do will instantly crash the
         | plane, while Tesla screwing up will require split second
         | reactions from the driver to correct for.
         | 
         | I feel like the real answer to your question is that having
         | reasonable confidence in self-driving cars beyond "driver
         | assistance" type features will ultimately require a car that
         | will literally get from A to B reliably even if you're taking a
         | nap. Anything close to that but not quite there is in my mind
         | almost worse than something more basic.
        
       | yieldcrv wrote:
        | Come on US, regulate interstate commerce and tell them to
        | delete these cameras.
        | 
        | Lidar is goated, and if Tesla didn't want that they could
        | pursue a different perception solution, allowing for
        | innovation.
        | 
        | But just visual cameras aiming to replicate us? Ban that.
        
       | bastloing wrote:
       | It was way safer to ride a horse and buggy
        
       | jgalt212 wrote:
       | The SEC is clearly afraid of Musk. I wonder what the intimidation
       | factor is at NHTSA.
        
         | leoh wrote:
         | Not meaningful enough.
        
       | Rebuff5007 wrote:
        | Tesla testing and developing FSD with normal consumer drivers
        | frankly seems criminal. Test drivers for AV companies get
        | advanced driver training, need to file detailed reports about
        | the car's response to various driving scenarios, and generally
        | are paid to be as attentive as possible. The fact that any old
        | tech-bro or unassuming old lady can buy this thing and be on
        | their phone when the car could potentially turn into oncoming
        | traffic is mind-boggling.
        
         | buzzert wrote:
         | > can buy this thing and be on their phone when the car could
         | potentially turn into oncoming traffic is mind boggling
         | 
          | This is incorrect. Teslas have driver monitoring software,
          | and if the driver is detected using a phone while driving,
          | the car will almost immediately give a loud warning and
          | disable FSD.
        
         | trompetenaccoun wrote:
          | >Test drivers for AV companies get advanced driver training,
          | need to file detailed reports about the car's response to
          | various driving scenarios, and generally are paid to be as
          | attentive as possible.
         | 
         | Like this one, who ran over and killed a woman?
         | 
         | https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg
         | 
          | While not entirely the driver's fault, the investigation
          | found they had been watching TV on their smartphone at the
          | time of the accident, and the driver-facing camera clearly
          | showed them not even looking at the road! Such attentiveness.
        
       | quitit wrote:
       | "Full Self-Driving" but it's not "full" self-driving, as it
       | requires active supervision.
       | 
        | So it's marketed with a nod and a wink, as if the supervision
        | requirement were just a peel-away disclaimer to satisfy old,
        | stuffy laws that are out of step with the latest technology,
        | when in reality it really does need active supervision.
        | 
        | But by its nature this approach invites the driver to
        | distraction, because what's the use of "full self-driving" if
        | one needs to have their hands on the wheel and feet near the
        | pedals, ready to take control at a moment's notice? Worsening
        | this problem, Teslas have shown themselves to drive erratically
        | at unexpected times, with phantom braking or mistaking natural
        | phenomena for traffic lights.
       | 
       | One day people will look back on letting FSD exist in the market
       | and roll their eyes in disbelief of the recklessness.
        
       | JumpinJack_Cash wrote:
       | Unpopular take: Even with perfect FSD which is much better than
       | the average human driver (say having the robotic equivalent of a
       | Lewis Hamilton in every car) the productivity and health gains
       | won't be as great as people anticipate.
       | 
        | Sure, there would be far fewer traffic deaths, but the spike in
        | depression, especially among males, would be something very
        | big. Life events are largely outside of our control; having a
        | 5,000 lb thing that can get to 150 mph if needed and responds
        | exactly to the accelerator, brake, and steering wheel input...
        | well, that makes people feel in control and very powerful while
        | behind the aforementioned steering wheel.
        | 
        | Also, productivity... I don't know... people think a whole lot
        | and do a whole lot of self-reflection while they are driving,
        | and when they arrive at their destination they just implement
        | the thoughts they had while driving. The ability to talk on the
        | phone has been there for quite some time now too, so thinking
        | and communicating can already be done while driving. What would
        | FSD add?
        
         | HaZeust wrote:
          | As a sports car owner, I see where you're coming from -- but
          | MANY do not. We are the 10%; the other 90% see their vehicle
          | as an A-to-B tool, and you can clearly see that displayed in
          | the average, utilitarian car models that the vast majority of
          | the public buys. There will be no "spike" in depression;
          | simply put, there aren't enough people who care about their
          | car, how it gets from point A to point B, or what
          | contribution they make, if any, to that.
        
           | JumpinJack_Cash wrote:
            | Maybe they don't care about their car being a sports car,
            | but they surely get some pleasure out of the control of
            | being at the helm of something powerful like a car (even
            | though it's not a sports car).
            | 
            | Also, even people in small cars already think a lot while
            | driving, and they also communicate; how much more
            | productive could they be with FSD?
        
             | HaZeust wrote:
              | I really don't think you're right about the average person,
              | or even a notable share of people, believing in the idea of
              | their car being their "frontier of freedom", as was popular
              | in '70s-'80s media. I don't think that many people _care_
              | about driving nowadays.
        
       | drodio wrote:
       | I drive a 2024 Tesla Model Y and another person in my family
       | drives a 2021 Model Y. Both cars are substantially similar (the
       | 2021 actually has _more_ sensors than the 2024, which is strictly
       | cameras-only).
       | 
       | Both cars are running 12.5 -- and I agree that it's dramatically
       | improved over 12.3.
       | 
       | I really enjoy driving. I've got a #vanlife Sprinter that I'll do
       | 14 hour roadtrips in with my kids. For me, the Tesla's self-
       | driving capability is a "nice to have" -- it sometimes drives
       | like a 16 year old who just got their license (especially around
       | braking. Somehow it's really hard to nail the "soft brake at a
       | stop sign" which seems like it should be be easy. I find that
       | passengers in the car are most uncomfortable when the car brakes
       | like this -- and I'm the most embarrassed because they all look
       | at me like I completely forgot how to do a smooth stop at a stop
       | sign).
       | 
       | Other times, the Tesla's self-driving is magical and nearly
       | flawless -- especially on long highway road trips, like up to
       | Tahoe. Even someone like me who loves doing road trips really
       | appreciates the ability to relax and not have to be driving.
       | 
       | But here's one observation I've had that I don't see quite
       | sufficiently represented in the comments:
       | 
       | The other person in my family with the 2021 Model Y does not like
       | to drive like I do, and they really appreciate that the Tesla is
       | a better driver than they feel themselves to be. And as a
       | passenger in their car, I also really appreciate that when the
       | Tesla is driving, I generally feel much more comfortable in the
       | car. Not always, but often.
       | 
       | There's so much variance in us as humans around driving skills
       | and enjoyment. It's easy to lump us together and say "the car
       | isn't as good as the human." And I know there's conflicting data
       | from Tesla and NHTSA about whether in aggregate, Teslas are safer
       | than human drivers or not.
       | 
       | But what I definitely know from my experience is that the Tesla
       | is already a better driver than _many_ humans are -- especially
       | those that don 't enjoy driving. And as @modeless points out, the
       | rate of improvement is now vastly accelerating.
        
         | lowbloodsugar wrote:
         | You are a living example of survivorship bias. One day your car
         | will kill you or someone else, and then maybe you'll be able to
         | come back here and tell us how wrong you were. How, with your
         | new experience, you can see how the car only "seemed"
         | competent, how it was that very seeming competence that got
         | someone killed, because you trusted it.
        
       | fortran77 wrote:
       | I have FSD in my Plaid. I don't use it. Too scary.
        
       | lrvick wrote:
       | All these self driving car companies are competing to see whose
       | proprietary firmware and sensors kill the fewest people. This is
       | insane.
       | 
       | I will -never- own a self driving car unless the firmware is open
       | source, reproducible, remotely attestable, and built/audited by
       | several security research firms and any interested security
       | researchers from the public before all new updates ship.
       | 
        | It is the only way to keep greedy execs from cutting corners to
        | pad profit margins, like VW did by faking emissions tests.
        | 
        | Proprietary safety tech is evil, and must be made illegal.
        | Compete with nicer-looking, more comfortable cars with better
        | miles-per-charge, not people's lives.
        
         | boshalfoshal wrote:
          | You are conflating two separate problems (security vs
          | functionality).
         | 
         | "Firmware" can be open source and secure, but how does this
         | translate to driving performance at all? Why does it matter if
         | the firmware is validated by security researchers, who
         | presumably don't know anything about motion planning,
         | perception, etc? And this is even assuming that the code can be
          | reasonably verified statically. You probably need to run
          | that code on a car for millions of miles (maybe in simulation)
          | in an uncountable number of scenarios to run through every edge
         | case.
         | 
         | The other main problem with what you're asking is that most of
         | the "alpha" of these self driving companies is in proprietary
         | _models_, not software. No one is giving up their models. That
         | is a business edge.
         | 
         | As someone who has been at multiple AV companies, no one is
         | cutting corners on "firmware" or "sensors" (apart from making
         | it reasonably cost effective so normal people can buy their
         | cars). Its just that AV is a really really really difficult
         | problem with no closed form solution.
         | 
         | Your normal car has all the same pitfalls of "unverified
          | software running on a safety-critical system," except that it's
         | easier to verify that straightforward device firmware works vs
         | a very complex engine whose job is to ingest sensor data and
         | output a trajectory.
        
       | rootusrootus wrote:
       | I'm on my second free FSD trial, just started for me today. Gave
       | it another shot, and it seems largely similar to the last free
       | trial they gave. Fun party trick, surprisingly good, right up
        | until it's not. A hallmark of AI everywhere is how great it is,
        | and just how abruptly and catastrophically it occasionally fails.
       | 
       | Please, if you're going to try it, keep both hands on the wheel
       | and your foot ready for the brake. When it goes off the rails, it
       | usually does so in surprising ways with little warning and little
       | time to correct. And since it's so good much of the time, you can
       | get lulled into complacence.
       | 
       | I never really understand the comments from people who think it's
       | the greatest thing ever and makes their drive less stressful.
       | Does the opposite for me. Entertaining but exhausting to
       | supervise.
        
         | darknavi wrote:
         | You slowly build a relationship with it and understand where it
         | will fail.
         | 
         | I drive my 20-30 minute commutes largely with FSD, as well as
         | our 8-10 hour road trips. It works great, but 100% needs to be
         | supervised and is basically just nicer cruise control.
        
           | coffeefirst wrote:
           | This feels like the most dangerous possible combination (not
           | for you, just to have on the road in large numbers).
           | 
           | Good enough that the average user will stop paying attention,
           | but not actually good enough to be left alone.
           | 
           | And when the machine goes to do something lethally dumb, you
           | have 5 seconds to notice and intervene.
        
             | jvolkman wrote:
             | This is what Waymo realized a decade ago and what helped
             | define their rollout strategy:
             | https://youtu.be/tiwVMrTLUWg?t=247&si=Twi_fQJC7whg3Oey
        
               | nh2 wrote:
               | This video is great.
               | 
                | It looks like Waymo really understood the problem.
               | 
                | It explains concisely why it's a bad idea to roll out
                | incremental progress, how difficult the problem really
               | is, and why you should really throw all sensors you can
               | at it.
               | 
               | I also appreciate the "we don't know when it's going to
               | be ready" attitude. It shows they have a better
               | understanding of what their task actually is than anybody
               | who claims "next year" every year.
        
               | yborg wrote:
               | You don't get a $700B market cap by telling investors "We
               | don't know."
        
               | rvnx wrote:
               | Ironically, Robotaxis from Waymo are actually working
               | really well. It's a true unsupervised system, very safe,
               | used in production, where the manufacturer takes the full
               | responsibility.
               | 
               | So the gradual rollout strategy is actually great.
               | 
                | Tesla wants to do "all or nothing", and ends up with
                | nothing for now (for example in Europe, where FSD has
                | been sold since 2016 but remains "pending regulatory
                | approval", when actually the problem is that the tech
                | is not finished yet, sadly).
               | 
               | It's genuinely a difficult problem to solve, so it's
               | better to do it step-by-step than a "big-bang deploy".
        
               | mattgreenrocks wrote:
               | Does Tesla take full responsibility for FSD incidents?
               | 
               | It seemed like most players in tech a few years ago were
               | using legal shenanigans to dodge liability here, which,
               | to me, indicates a lack of seriousness toward the safety
               | implications.
        
               | nh2 wrote:
               | > So the gradual rollout strategy is actually great.
               | 
               | I think you misunderstood, or it's a terminology problem.
               | 
               | Waymo's point in the video is that in contrast to Tesla,
                | they are _not_ doing gradual rollout of seemingly-
                | working-but-still-often-catastrophically-failing tech.
               | 
               | See e.g. minute 5:33 -> 6:06. They are stating that they
               | are targeting directly the shown upper curve of safety,
               | and that they are not aiming for the "good enough that
               | the average user will stop paying attention, but not
               | actually good enough to be left alone".
        
               | zbentley wrote:
               | Not sure how tongue-in-cheek that was, but I think your
               | statement is the heart of the problem. Investment money
               | chases confidence and moonshots rather than backing
               | organizations that pitch a more pragmatic (read:
               | asterisks and unknowns) approach.
        
               | trompetenaccoun wrote:
                | All their sensors didn't prevent them from crashing into
                | a stationary object. You'd think that would be the
                | absolute easiest thing to avoid, especially with both
                | radar and lidar on board. Accidents like that show that
                | the training data and software will be much more
                | important than the number of sensors.
               | 
               | https://techcrunch.com/2024/06/12/waymo-second-robotaxi-
               | reca...
        
               | rvnx wrote:
                | The issue was fixed. They're now handling 100,000 trips
                | per week, and all seems to have gone well over the last
                | 4 months, which is about 1.5 million trips.
        
               | trompetenaccoun wrote:
               | So they had "better understanding" of the problem as the
               | other user put it, but their software was still flawed
               | and needed fixing. That's my point. This happened two
               | weeks ago btw: https://www.msn.com/en-
               | in/autos/news/waymo-self-driving-car-...
               | 
               | I don't mean Waymo is bad or unsafe, it's pretty cool. My
               | point is about true automation needing data and
               | intelligence. A lot more data than we currently have,
               | because the problem is in the "edge" cases, the kind of
               | situation the software has never encountered. Waymo is in
               | the lead for now but they have fewer cars on the road,
               | which means less data.
        
               | jraby3 wrote:
               | Any idea how many accidents and how many fatalities? And
               | how that compares to human drivers?
        
             | ricardobeat wrote:
              | Five seconds is a _long_ time in driving; usually you'll
              | need to react in under 2 seconds in situations where it
              | disengages, and those never happen while going straight.
        
               | theptip wrote:
               | Not if you are reading your emails...
        
           | lolinder wrote:
           | When an update comes out does that relationship get reset
           | (does it start failing on things that used to work), or has
           | it been a uniform upward march?
           | 
           | I'm thinking of how every SaaS product I ever have to use
           | regularly breaks my workflow to make 'improvements'.
        
             | xur17 wrote:
             | For me it does, but only somewhat. I'm much more cautious /
             | aware for the first few drives while I figure it out again.
             | 
             | I also feel like it takes a bit (5-10 minutes of driving)
             | for it to recalibrate after an update, and it's slightly
             | worse than usual at the very beginning. I know they have to
             | calibrate the cameras to the car, so it might be related to
              | that, or it could just be me getting used to its quirks.
        
             | bdndndndbve wrote:
              | I wouldn't take OP's word for it if they really believe
              | they know how it's going to react in every situation.
              | Studies have shown people grossly overestimate their own
              | ability to pay attention.
        
           | eschneider wrote:
           | "You slowly build a relationship with it and understand where
           | it will fail."
           | 
           | I spent over a decade working on production computer vision
           | products. You think you can do this, and for some percentage
           | of failures you can. The thing is, there will ALWAYS be some
           | percentage of failure cases where you really can't perceive
           | anything different from a success case.
           | 
           | If you want to trust your life to that, fine, but I certainly
           | wouldn't.
        
             | sandworm101 wrote:
             | Or until a software update quietly resets the relationship
             | and introduces novel failure modes. There is little more
             | dangerous on the road than false confidence.
        
             | peutetre wrote:
             | Elon Musk is a technologist. He knows a lot about
             | computers. The last thing Musk would do is trust a computer
             | program:
             | 
             | https://www.nbcnews.com/tech/tech-news/musk-pushes-
             | debunked-...
             | 
             | So I guess that's game over for full self-driving.
        
               | llamaimperative wrote:
               | Oooo maybe he'll get a similar treatment as Fox did
               | versus Dominion.
        
           | sumodm wrote:
            | Something along these lines is the real danger. People will
            | understand common failure modes and assume they have
            | understood its behavior for most scenarios. Unlike common
            | deterministic and even some probabilistic systems, where
            | behavior boundaries are well behaved, there could be
            | discontinuities in rarely-seen parts of the boundary. And
            | these 'rarer' parts need not be obvious to us humans, since
            | a few pixel changes might cause wrinkles.
           | 
           | *vocabulary use is for a broad stroke explanation.
        
         | tverbeure wrote:
         | I just gave it another try after my last failed attempt.
         | (https://tomverbeure.github.io/2024/05/20/Tesla-FSD-First-
         | and...)
         | 
         | I still find it shockingly bad, especially in the way it
         | reacts, or doesn't, to the way things change around the car
         | (think a car on the left in front of you who switches on
         | indicators to merge in front of you) or the way it makes the
          | most random lane-changing decisions and changes its mind in
         | the middle of that maneuver.
         | 
         | Those don't count as disengagements, but they're jarring and
         | drivers around you will rightfully question your behavior.
         | 
          | And that's all over just a few miles of driving in an easy
          | environment of interstate or highway.
         | 
         | I totally agree that it's an impressive party trick, but it has
         | no business being on the road.
         | 
         | My experience with Waymo in SF couldn't have been more
         | different.
        
           | sokoloff wrote:
           | > (think a car on the left in front of you who switches on
           | indicators to merge in front of you)
           | 
           | That car is signaling an intention to merge into your lane
           | _once it is safe for them to do so_. What does the Tesla do
           | (or not do) in this case that 's bad?
        
             | cma wrote:
              | Defensive driving is to assume they might not check their
              | blind spot, etc., and to generally ease off in this
              | situation if they would merge in tight were they to begin
              | merging now.
        
               | tverbeure wrote:
                | That's the issue: I would immediately slow a little bit
                | to let the other car merge. FSD seems to notice
                | something and eventually slows down, but the action is
                | too subtle (if it happens at all) to signal to the other
                | driver that you're letting them merge.
        
             | hotspot_one wrote:
             | > That car is signaling an intention to merge into your
             | lane once it is safe for them to do so.
             | 
             | Only under the assumption that the driver was trained in
             | the US, to follow US traffic law, and is following that
             | training.
             | 
             | For example, in the EU, you switch on the indicators when
             | you start the merge; the indicator shows that you ARE
             | moving.
        
               | sokoloff wrote:
               | That seems odd to the point of uselessness, and does not
               | match the required training I received in Germany from my
               | work colleagues at Daimler prior to being able to sign
               | out company cars.
               | 
               | https://www.gesetze-im-internet.de/stvo_2013/__9.html
               | seems to be the relevant law in Germany, which Google
               | translates to "(1) Anyone wishing to turn must announce
               | this clearly and in good time; direction indicators must
               | be used."
        
               | nielsole wrote:
                | Merging into the lane is probably better addressed by
                | § 7, with the same content:
                | https://dejure.org/gesetze/StVO/7.html
        
               | Zanfa wrote:
               | > For example, in the EU, you switch on the indicators
               | when you start the merge; the indicator shows that you
               | ARE moving.
               | 
               | In my EU country it's theoretically at least 3 seconds
               | _before_ initiating the move.
        
           | y-c-o-m-b wrote:
            | > it makes the most random lane-changing decisions and
            | changes its mind in the middle of that maneuver.
           | 
           | This happened to me during my first month of trialing FSD
           | last year and was a big contributing factor for me not
           | subscribing. I did NOT appreciate the mess the vehicle made
           | in this type of situation. If I saw another driver doing the
           | same, I'd seriously question if they were intoxicated.
        
         | 650REDHAIR wrote:
         | This was my experience as well. It tried to drive us (me, my
         | wife, and my FIL) into a tree on a gentle low speed uphill turn
         | and I'll never trust it again.
        
         | jerb wrote:
          | But it's clearly statistically much safer
          | (https://www.tesla.com/VehicleSafetyReport): 7 million miles
          | before an accident with FSD vs. 1 million when disengaged. I
          | agree I didn't like the feel of FSD either, but the numbers
          | speak for themselves.
        
           | bpfrh wrote:
            | Tesla's numbers have biases in them which paint a misleading
            | picture:
            | 
            | https://www.forbes.com/sites/bradtempleton/2023/04/26/tesla-.
            | ..
            | 
            | They compare incomparable data (city miles vs. highway
            | miles); Autopilot is also mostly used on highways, which is
            | not where most accidents happen.
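            | 
            | To make the mileage-mix bias concrete, here is a toy
            | calculation. Every number below is invented purely for
            | illustration; none are Tesla's or NHTSA's figures:
            | 
            |     # Toy illustration of mileage-mix bias (numbers made up).
            |     # Assume crashes are intrinsically rarer per mile on
            |     # highways, and FSD/Autopilot miles skew toward highways.
            |     CITY_RATE = 1 / 0.5e6  # assumed: 1 crash per 0.5M city miles
            |     HWY_RATE = 1 / 5e6     # assumed: 1 crash per 5M highway miles
            | 
            |     human = {"city": 50e6, "highway": 50e6}  # 50/50 mix
            |     fsd = {"city": 5e6, "highway": 95e6}     # mostly highway
            | 
            |     def crashes(m):
            |         return m["city"] * CITY_RATE + m["highway"] * HWY_RATE
            | 
            |     for name, m in [("human", human), ("fsd", fsd)]:
            |         rate = crashes(m) / (m["city"] + m["highway"])
            |         print(f"{name}: one crash per {1 / rate:,.0f} miles")
            | 
            |     # human: one crash per ~909,091 miles
            |     # fsd:   one crash per ~3,448,276 miles
            | 
            | Even with identical per-road-type crash rates, the FSD fleet
            | looks nearly four times "safer" in aggregate purely because
            | of where it drives.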
        
       | TheAlchemist wrote:
       | Tesla released a promotional video in 2016 saying that with FSD a
       | human driver is not necessary and that "The person in the
       | driver's seat is only there for legal reasons". The video was
       | staged as we've learned in 2022.
       | 
        | 2016, folks... Even with today's FSD, which is several orders of
        | magnitude better than the one in the video, you would still
        | probably have a serious accident within a week (and I'm being
        | generous here) if you didn't sit in the driver's seat.
       | 
       | How Trevor Milton got sentenced for fraud and the people
       | responsible for this were not is a mystery to me.
        
         | 1f60c wrote:
         | AFAIK the owner's manual says you have to keep your hands on
         | the wheel and be ready to take over at all times, but Elon Musk
         | and co. love to pretend otherwise.
        
           | Flameancer wrote:
            | This part doesn't seem to be common knowledge. I don't own a
            | Tesla but I have been in a few. From my understanding the
            | feature has always said it was in beta and that it still
            | required that you have your hands on the wheel.
            | 
            | I like the idea of FSD, but I think we should have a serious
            | talk about the safety implications of making this more
            | broadly available, and also about making a mesh network so
            | FSD vehicles can communicate. I'm not well versed in the
            | tech, but I feel like it would be safer to have more cars on
            | the road that can communicate and make decisions together
            | than separate cars existing in a vacuum, each having to make
            | decisions alone.
        
             | y-c-o-m-b wrote:
             | I've wondered about the networked vehicle communication for
             | a while. It doesn't even need to be FSD. I might be
             | slightly wrong on this, but I would guess most cars going
             | back at least a decade can have their software/firmware
             | modified to do this if the manufacturers so choose. I
             | imagine it would improve the reliability and reaction-times
             | of FSD considerably.
        
       | sanp wrote:
       | This will go away once Trump wins
        
         | greenie_beans wrote:
          | why as a consumer would you want that? sounds extremely against
          | your interest. i doubt you're a billionaire whom less
          | regulation would benefit
        
       | amelius wrote:
       | Add it to the list ...
       | 
       | https://en.wikipedia.org/wiki/List_of_Tesla_Autopilot_crashe...
        
       | InsomniacL wrote:
        | As I came over the top of a crest, there was suddenly a lot of
        | sun glare and my Model Y violently swerved to the left.
        | Fortunately I had just overtaken a car on a two-lane dual
        | carriageway and hadn't moved back to the left-hand lane yet.
        | 
        | The driver I had just overtaken, although he wasn't very close
        | anymore, slowed right down to get away from me, and I didn't
        | blame him.
       | 
       | That manoeuvre in another car likely would have put it on two
       | wheels.
       | 
       | They say FSD crashes less often than a human per mile driven, but
       | I can only use FSD on roads like motorways, so I don't think it's
       | a fair comparison.
       | 
       | I don't trust FSD, I still use it occasionally but never in less
       | than ideal conditions. Typically when doing something like
       | changing the music on a motorway.
       | 
       | It probably is safer than just me driving alone, when it's in
       | good conditions on a straight road with light traffic with an
       | alert driver.
        
         | mglz wrote:
         | The underlying problem is that the current FSD architecture
         | doesn't seem to have good guard rails for these outlier
         | situations you describe (sunlight blinding the camera from just
         | the right angle probably?) and it is probably not possible to
         | add such rules without limiting the system enormously.
         | 
         | Fundamentally driving consists of a set of fairly clear cut
         | rules with a ridiculous amount of "it depends" cases.
        
       | nemo44x wrote:
        | I'm a Tesla fan, but I have to say that, anecdotally, Teslas
        | seem to account for an outsized share of the bad drivers I
        | observe. Is it the FSD being a bit too aggressive and erratic?
        | Lots of lane changing, etc.?
       | 
       | They're up there with Dodge Ram drivers.
        
       | Fomite wrote:
       | "Driver is mostly disengaged, but then must intervene in a sudden
       | fail state" is also one of the most dangerous types of automation
       | due to how long it takes the driver to reach full control as
       | well.
        
         | drowsspa wrote:
         | Yeah, I don't drive but I would think it would be worse than
         | actually paying attention all the time
        
           | pessimizer wrote:
           | It's also a problem that gets worse as the software gets
           | better. Having to intervene once every 5 minutes is a lot
           | easier than having to intervene once every 5 weeks. If lack
           | of intervention causes an accident, I'd bet on the 5 minute
           | car avoiding an accident longer than the 5 week car for any
           | span of time longer than 10 weeks.
        
             | jakub_g wrote:
              | I feel like full self-driving cars should have a
              | "budget". Every time you drive, say, 1000 km in FSD, you
              | then need to drive 100 km in "normal" mode to keep sharp.
              | Or whatever; the ratio / exact numbers are TBD. You could
              | also pay down the counter upfront by driving smaller
              | mileages more regularly.
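              | 
              | A minimal sketch of what that counter might look like.
              | The ratio, names, and lockout rule are invented here just
              | to make the proposal concrete:
              | 
              |     # Hypothetical "manual-driving budget" tracker.
              |     RATIO = 10          # 1000 km FSD -> 100 km manual owed
              |     ALLOWANCE_KM = 100  # max manual-driving debt allowed
              | 
              |     class SkillBudget:
              |         def __init__(self):
              |             self.debt_km = 0.0  # manual km currently owed
              | 
              |         def log_fsd_km(self, km):
              |             self.debt_km += km / RATIO
              | 
              |         def log_manual_km(self, km):
              |             # Pays down the debt; may go negative, i.e.
              |             # banked upfront by driving manually first.
              |             self.debt_km -= km
              | 
              |         @property
              |         def fsd_allowed(self):
              |             return self.debt_km <= ALLOWANCE_KM
              | 
              |     b = SkillBudget()
              |     b.log_fsd_km(1000)    # now owes 100 km manual
              |     print(b.fsd_allowed)  # True (exactly at the limit)
              |     b.log_fsd_km(50)      # now owes 105 km
              |     print(b.fsd_allowed)  # False until driven down manually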
        
           | lopkeny12ko wrote:
           | You are required to pay attention all the time. That's what
           | the "supervised" in "FSD (supervised)" means.
        
             | freejazz wrote:
             | FSD stands for Fully Supervised Driving, right?
        
               | dhdaadhd wrote:
               | yeah, that sounds like Elon's marketing to me.
        
       | siliconc0w wrote:
       | Traffic jams and long monotonous roads are really where these
       | features, getting to level 3 on those should be the focus over
       | trying to maintain a fiction of level 5 everywhere. (And like
       | other comments, >2 should automatically mean liability)
        
       | Yeul wrote:
        | Now we know why Musk wants Trump to win. To completely subjugate
        | the state to the whims of its billionaire class. Going back to
        | the 19th century's Gilded Age.
       | 
       | https://www.bbc.com/news/articles/cg78ljxn8g7o
        
       | FergusArgyll wrote:
        | Are insurance prices different if you own a Tesla with FSD? If
        | not, why not?
        
       | JTbane wrote:
       | How can you possibly have a reliable self-driving car without
       | LIDAR?
        
         | dham wrote:
         | Returning to this post in 5 years when FSD has been solved with
         | just vision.
        
       | masto wrote:
       | I have such a love-hate relationship with this thing. I don't
       | think Tesla's approach will ever be truly autonomous, and they do
       | a lot of things to push it into unsafe territory (thanks to you
       | know who at the helm). I am a tech enthusiast and part of the
       | reason I bought this car (before you know who revealed himself to
       | be you know what) is that they were the furthest ahead and I
       | wanted to experience it. If they had continued on the path I'd
       | hoped, they'd have put in _more_ sensors, not taken them out for
       | cost-cutting and then tried to gaslight people about it. And all
       | this hype about turning your car into a robotaxi while you 're
       | not using it is just stupid.
       | 
       | On the other hand, I'd hate for the result of all this to be to
       | throw the ADAS out with the bathwater. The first thing I noticed
       | even with the early "autopilot" is that it made long road trips
       | much more bearable. I would arrive at my destination without
       | feeling exhausted, and I attribute a lot of that to not having to
       | spend hours actively making micro adjustments to speed and
       | steering. I know everyone thinks they're a better driver than
       | they are, and it's those other people who can't be trusted, but I
       | do feel that when I have autopilot/FSD engaged, I am paying
       | attention, less fatigued, and actually have more cognitive
       | capacity freed up to watch for dangerous situations.
       | 
       | I had to pick someone up at LaGuardia Airport yesterday, a long
       | annoying drive in heavy NYC-area traffic. I engaged autosteer for
       | most of the trip both ways (and disengaged it when I didn't feel
       | it was appropriate), and it made it much more bearable.
       | 
       | I'm neither fanboying nor apologizing for Tesla's despicable
       | behavior. But I would be sad if, in the process of regulating
       | this tech, it got pushed back too far.
        
       | TeslaCoils wrote:
        | Works most of the time, fails at the worst time. Supervision
        | absolutely necessary...
        
       | deergomoo wrote:
       | This is an opinion almost certainly based more in emotion than
       | logic, but I don't think I could trust any sort of fully
       | autonomous driving system that didn't involve communication with
       | transmitters along the road itself (like a glideslope and
       | localiser for aircraft approaches) and with other cars on the
       | road.
       | 
       | Motorway driving sure, there it's closer to fancy cruise control.
       | But around town, no thank you. I regularly drive through some
       | really crappily designed bits of road, like unlabelled approaches
       | to multi-lane roundabouts where the lane you need to be in for a
       | particular exit sorta just depends on what the people in front
       | and to the side of you happen to have chosen. If it's difficult
       | as a human to work out what the intent is, I don't trust a
       | largely computer vision-based system to work it out.
       | 
        | The roads here are also in a terrible state, and the lines on
        | them even more so. There's one particular patch of road where the
       | lane keep assist in my car regularly tries to steer me into the
       | central reservation, because repair work has left what looks a
       | bit like lane markings diagonally across the lane.
        
         | michaelt wrote:
         | _> If it 's difficult as a human to work out what the intent
         | is, I don't trust a largely computer vision-based system to
         | work it out._
         | 
         | Most likely, every self-driving car company will send drivers
         | down every road in the country, recording everything they see.
         | Then they'll have human labellers figure out any junctions
         | where the road markings are ambiguous.
         | 
         | They've had sat nav maps covering every road for decades, and
         | the likes of Google Street View, so to have a detailed map of
         | every junction is totally possible.
        
           | deergomoo wrote:
           | In that case I hope they're prepared to work with local
           | authorities to immediately update the map every time road
           | layouts change, temporarily or permanently. Google Maps gets
           | lane guidance wrong very often in my experience, so that
           | doesn't exactly fill me with confidence.
        
             | tjpnz wrote:
             | And the contractors employed by the local authorities to do
             | roadworks big and small.
        
             | crazygringo wrote:
             | I kind of assumed that already happened. Does it not? Is
             | anyone pushing for it?
             | 
             | Honestly it seems like it ought to be federal law by now
             | that municipalities need to notify a designated centralized
             | service of all road/lane/sign/etc. changes in a
             | standardized format, that all digital mapping providers can
             | ingest from.
             | 
             | Is this not a thing? If not, is anyone lobbying for it? Is
             | there opposition?
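              | 
              | Nothing like that exists today, as the replies note. If
              | it did, the notice format could be quite simple. A purely
              | hypothetical sketch; every field name here is invented:
              | 
              |     # Hypothetical standardized road-change notice that a
              |     # municipality might publish for mapping providers.
              |     from dataclasses import dataclass, field
              |     from datetime import datetime
              |     from typing import Optional
              | 
              |     @dataclass
              |     class RoadChangeNotice:
              |         authority_id: str  # reporting body, e.g. a city DOT
              |         road_ref: str      # stable road identifier
              |         change_type: str   # "lane_closed", "sign_changed", ...
              |         effective_from: datetime
              |         effective_until: Optional[datetime]  # None = permanent
              |         geometry_wkt: str  # affected stretch as WKT
              |         details: dict = field(default_factory=dict)
              | 
              |     notice = RoadChangeNotice(
              |         authority_id="us-ny-nyc-dot",
              |         road_ref="I-278:mm12.4-12.9",
              |         change_type="lane_closed",
              |         effective_from=datetime(2024, 10, 21, 6, 0),
              |         effective_until=datetime(2024, 10, 21, 18, 0),
              |         geometry_wkt="LINESTRING(-73.95 40.70, -73.94 40.71)",
              |         details={"lanes": [2], "reason": "roadworks"},
              |     )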
        
               | jjav wrote:
               | > I kind of assumed that already happened.
               | 
               | Road layout can change daily, sometimes multiple times
               | per day. Sometimes in a second, like when a tree falls on
               | a lane and now you have to reroute on the oncoming lane
               | for some distance, etc.
        
               | lukan wrote:
               | "Honestly it seems like it ought to be federal law by now
               | that municipalities need to notify a designated
               | centralized service of all road/lane/sign/etc. changes in
               | a standardized format, that all digital mapping providers
               | can ingest from"
               | 
                | Why not let just anyone ingest it, and make that data
                | openly available?
        
               | fweimer wrote:
               | Coordinating roadwork is challenging in most places, I
               | think. Over here, it's apparently cheaper to open up a
               | road multiple times in a year, rather than coordinating
               | all the different parties that need underground access in
               | the foreseeable future.
        
         | sokoloff wrote:
         | > didn't involve communication with transmitters along the road
         | itself (like a glideslope and localiser for aircraft
         | approaches) and with other cars on the road
         | 
         | There will be a large number of non-participating vehicles on
         | the road for at least another 50 years. (The _average_ age of a
         | car in the US is a little over 12 years and rising. I doubt we
         | 'll see a comms-based standard emerge and be required equipment
         | on new cars for at least another 20 years.)
        
           | lukan wrote:
           | "There will be a large number of non-participating vehicles
           | on the road for at least another 50 years."
           | 
           | I think so too, but I also think, if we would really want to,
           | all it would take is a GPS device with internet connection,
           | like a smart phone, to make a normal car into a realtime
           | connected one.
           | 
           | But I also think we need to work out some social and
           | institutional issues first.
           | 
            | Currently I would not like my position to be available in
            | real time to some obscure agency.
        
           | stouset wrote:
           | Hell, ignore vehicles. What about pedestrians, cyclists,
           | animals, construction equipment, potholes, etc?
        
         | emmelaich wrote:
          | A potential problem with transmitters is that they could be
          | faked.
         | 
         | You could certainly never rely on them alone.
        
           | wtallis wrote:
           | There are lots of other areas where intentionally violating
           | FCC regulations to transmit harmful signals is already
           | technologically feasible and cheap, but hasn't become a
           | widespread problem in practice. Why would it be any worse for
           | cars communicating with each other? If anything, having lots
           | of cars on the road logging what they receive from other cars
           | (spoofed or otherwise) would make it too easy to identify
           | which signals are fake, thwarting potential use cases like
           | insurance fraud (since it's safe to assume the car
           | broadcasting fake data is at fault in any collision).
        
             | johnisgood wrote:
             | I agree, the problem has been solved.
             | 
             | If a consensus mechanism similar to those used in
             | blockchain were implemented, vehicles could cross-reference
             | the data they receive with data from multiple other
             | vehicles. If inconsistencies are detected (for example, a
             | car reporting a different speed than what others are
             | observing), that data could be flagged as potentially
             | fraudulent.
             | 
             | Just as blockchain technologies can provide a means of
             | verifying the authenticity of transactions, a network of
             | cars could establish a decentralized validation process for
             | the data they exchange. If one car broadcasts false data,
             | the consensus mechanism among the surrounding vehicles
             | would allow for the identification of this "anomaly",
             | similar to how fraudulent transactions can be identified
             | and rejected in a blockchain system.
             | 
              | What you mentioned with regard to insurance could be
              | used as a deterrent, too, along with laws making it
              | illegal to spoof relevant data.
             | 
             | In any case, privacy is going to take a toll here, I
             | believe.
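              | 
              | A toy sketch of that cross-referencing idea. The median
              | rule, thresholds, and names are invented; a real V2X
              | consensus scheme would be far more involved:
              | 
              |     # Cross-check one car's self-reported speed against
              |     # what nearby cars observed; flag big disagreements.
              |     from statistics import median
              | 
              |     def inconsistent(self_reported, peer_obs,
              |                      tolerance=10.0, min_peers=3):
              |         if len(peer_obs) < min_peers:
              |             return False  # too few witnesses for consensus
              |         consensus = median(peer_obs)  # robust to a few liars
              |         return abs(self_reported - consensus) > tolerance
              | 
              |     # Car claims 50 km/h; peers all measured ~90 km/h.
              |     print(inconsistent(50.0, [88.0, 92.0, 91.0, 87.0]))  # True
              |     print(inconsistent(90.0, [88.0, 92.0, 91.0, 87.0]))  # False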
        
               | 15155 wrote:
               | This is a complicated, technical solution looking for a
               | problem.
               | 
               | Simple, asymmetrically-authenticated signals and felonies
               | for the edge cases solve this problem without any
               | futuristic computer wizardry.
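                | 
                | For reference, "asymmetrically-authenticated signals"
                | just means ordinary digital signatures. A minimal sketch
                | using the Python cryptography package; key distribution,
                | revocation, and replay protection, the genuinely hard
                | parts, are omitted:
                | 
                |     # Each vehicle signs its broadcasts; receivers verify
                |     # against a public key registered with some trusted
                |     # authority (a hypothetical PKI).
                |     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
                |         Ed25519PrivateKey,
                |     )
                |     from cryptography.exceptions import InvalidSignature
                | 
                |     vehicle_key = Ed25519PrivateKey.generate()
                |     public_key = vehicle_key.public_key()
                | 
                |     message = b"vehicle=ABC123;speed_kmh=91;ts=1729500000"
                |     signature = vehicle_key.sign(message)
                | 
                |     try:
                |         public_key.verify(signature, message)
                |         print("valid: accept broadcast")
                |     except InvalidSignature:
                |         print("spoofed or corrupted: discard")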
        
               | johnisgood wrote:
               | I did not intend to state that we ought to use the
               | blockchain, at all, for what it is worth. Vehicles should
               | cross-reference the data they receive with data from
               | multiple other vehicles and detect inconsistencies, any
               | consensus mechanism could work, if we could call it that.
        
         | sva_ wrote:
         | I agree with you about the trust issues and feel similarly, but
         | also feel like the younger generations who grow up with these
         | technologies might be less skeptical about adopting them.
         | 
         | I've been kind of amazed how much younger people take some
         | newer technologies for granted, the ability of humans to adapt
         | to changes is marvelous.
        
         | vmladenov wrote:
         | Once insurance requires it or makes you pay triple to drive
         | manually, that will likely be the tipping point for many
         | people.
        
       | Teknomancer wrote:
       | This is just an opinion. The only way forward with automated and
       | autonomous vehicles is through industry cooperation and
       | standardization. The Tesla approach to the problem is inadequate,
       | lacking means for interoperability, and relying on inferior
       | detection mechanisms. Somebody who solves these problems and does
       | it by offering interoperability and standards applied to all
       | automakers wins.
       | 
        | Sold Tesla investments. The company is on an unprofitable
        | downward spiral. The CEO is a total clown. Reinvested, on
        | advice, in Daimler, after Mercedes-Benz and Daimler Trucks
        | North America demonstrated their research and work toward true
        | autonomous technology and safe global industry standardization.
        
       | lopkeny12ko wrote:
       | > NHTSA said it was opening the inquiry after four reports of
       | crashes where FSD was engaged during reduced roadway visibility
       | like sun glare, fog, or airborne dust. A pedestrian was killed in
       | Rimrock, Arizona, in November 2023 after being struck by a 2021
       | Tesla Model Y, NHTSA said.
       | 
       | This is going to be another extremely biased investigation.
       | 
       | 1. A 2021 Model Y is not on HW4.
       | 
       | 2. FSD in November 2023 is not FSD 12.5, the current version. Any
       | assessment of FSD on such outdated software is not going to be
       | representative of the current experience.
        
         | sashank_1509 wrote:
          | HW4 is a ridiculous requirement; it only shipped post-2023,
          | and even then there's no HW4 outside the Model Y.
          | 
          | FSD in Nov 2023 is not the latest, but it's not that old. I
          | guess it's not in the 12 series, which is much better, but
          | that's no reason not to investigate this.
        
           | lopkeny12ko wrote:
           | That is literally the entire point. The whole investigation
           | is moot because both the hardware _and_ software are out of
           | date, and no longer used for any current Model Ys off the
           | production line.
        
         | ra7 wrote:
          | The perfect setup. By the time an incident in the "current"
          | software is investigated, it will be outdated. All Tesla has
          | to do is rev a software version and ignore all incidents that
          | occurred prior to it.
        
           | lopkeny12ko wrote:
           | You are welcome to conjure whatever conspiracy theories you
           | like but the reality is FSD 12.5 is exponentially better than
           | previous versions. Don't just take it from me, this is what
           | all Tesla owners are saying too.
        
       | kjkjadksj wrote:
        | One thing that's a little weird with the constant Tesla framing
        | of FSD being better than the average driver is the assumption
        | that a Tesla owner might be an average driver. The "average"
        | driver includes people who total their cars, who kill
        | pedestrians, who drive drunk, who go 40 over. Meanwhile I've
        | never been in an accident. For me, and probably for many other
        | drivers, individual average performance is much better than the
        | average of all drivers. Given that, it's possible that relying
        | on FSD is much worse for you than not, in terms of risk rate.
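        | 
        | To put numbers on that skew (all invented, purely
        | illustrative): a small high-risk minority drags the fleet
        | average far above the typical driver's risk, so beating the
        | "average" can still mean doubling a careful driver's risk:
        | 
        |     # Crashes per million miles for assumed driver groups.
        |     groups = [
        |         ("careful majority", 0.70, 1.0),   # 70% of drivers
        |         ("typical",          0.25, 3.0),   # 25%
        |         ("drunk/reckless",   0.05, 40.0),  # 5%, very high risk
        |     ]
        | 
        |     fleet_avg = sum(share * rate for _, share, rate in groups)
        |     print(fleet_avg)  # 3.45 crashes per million miles
        | 
        |     fsd_rate = 2.0    # hypothetical FSD crash rate
        |     print(fsd_rate < fleet_avg)  # True: "better than average"
        |     print(fsd_rate > 1.0)        # True: worse than the careful 70%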
        
       | metabagel wrote:
        | Cruise control with automatic following distance and lane-keeping
        | are such game changers that autonomous driving isn't necessary
        | for able drivers.
       | 
       | OK, the lane-keeping isn't quite there, but I feel like that's
       | solvable.
        
       | lowbloodsugar wrote:
       | I'm not turning FSD on until it is a genuine autonomous vehicle
       | that requires no input from me and never disengages. Until Tesla
       | is, under the law, the legal driver of the vehicle, and suffers
       | all the legal impact, you'd have to be mental to let it drive for
       | you. It's like asking, "Hey, here's a chauffeur who has killed
       | several people so far, all over the world. You want him to
       | drive?" Or "Hey, here's a chauffeur. You're fine, you can read a
       | book. But at some point, right when something super dangerous is
       | about to happen, he's going to just panic and stop driving, and
       | then you have to stop whatever you're doing and take over."
       | That's fucking mental.
        
       ___________________________________________________________________
       (page generated 2024-10-20 23:01 UTC)