[HN Gopher] Scenarios in which Tesla FSD Beta 9.0 fails
       ___________________________________________________________________
        
       Scenarios in which Tesla FSD Beta 9.0 fails
        
       Author : giacaglia
       Score  : 202 points
       Date   : 2021-07-12 16:19 UTC (6 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | wedn3sday wrote:
       | I watched a human driver in a BMW weave at 90 mph through traffic
       | yesterday, swerving back and forth between lanes inches from
        | other people's cars. If Tesla can make a car that's even 1% less
       | dangerous than human drivers, that could be thousands of deaths
       | avoided every year. Humans should not be allowed to drive cars on
       | public streets.
        
         | carlmr wrote:
          | But that surely describes the worst 1% of drivers. The bar
          | shouldn't be beating the worst 1%; it should be demonstrably
          | safer than humans on average, by a good margin.
        
           | minhazm wrote:
            | Even if they accomplish that, people will move the goal
            | posts. People are illogical: they'll fixate on some edge
            | case where self-driving performs worse, even if on
            | aggregate it's 10x safer than human drivers. People will
            | see that one edge case and say that self-driving cars are
            | worse than humans.
        
             | mikestew wrote:
             | That's an awful lot of presumption of "people"'s behavior
             | toward a system that has failed nearly every promise made.
              | There's no need to move the goal posts when the player
              | isn't anywhere near them. I mean, if we're going to talk
             | about someone moving goal posts around, how about the one
             | claiming "full self-driving" for a system that isn't
             | anything of the sort?
        
         | Syonyk wrote:
         | > _Humans should not be allowed to drive cars on public
         | streets._
         | 
         | Because the alternative right now is... this abomination that
         | doesn't even grasp what a concrete pillar is?
        
           | jacquesm wrote:
           | Maybe it does and it is trying to put itself out of its
           | misery?
        
         | throwaway0a5e wrote:
         | The fact that a presumably average or not too far from average
         | human can <clutches pearls> "weave at 90 mph through traffic
         | yesterday, swerving back and forth between lanes inches from
          | other people's cars" should tell you how wide a gulf there is
         | between human operators and current FSD tech, which will dodge
         | road debris only if it looks like a car, person, cyclist or
         | traffic cone.
        
           | wedn3sday wrote:
            | Yes, I'm "pearl clutching" because some ahole decided to
            | put hundreds of people's lives at risk so he (and it's
            | always a he) could have a little thrill. There is a wide
            | gulf now between human skill and robo-driver skill, just as
            | 50 years ago it was much faster to call someone than to
            | send them a message in the mail. Now we have email, and in
            | the future we will have (actual, real) full self driving.
        
             | throwaway0a5e wrote:
              | >I'm "pearl clutching" because some ahole decided to put
              | hundreds of people's lives at risk
              | 
              | The fact that you're saying he put hundreds of lives at
              | risk, when a hundred is a score that a terrorist with a
              | semi truck would be proud of and is for all practical
              | purposes unattainable with a light vehicle, does seem to
              | point in that direction.
              | 
              | I'm not condoning his behavior, but 1) people are jerks,
              | what do you expect, and 2) you are getting way too bent
              | out of shape over it.
             | 
             | > Now we have email, and in the future we will have
             | (actual, real) full self driving.
             | 
             | And yet people pick up the phone for the urgent stuff and
             | send snail mail for the super important stuff (though the
             | latter is changing with online forms moving into more and
             | more areas). That says something about the true nature of
              | technological progress. We come up with new, better
              | stuff, but the old stuff often keeps some niches for
              | itself.
        
             | nradov wrote:
             | Is that the same future where we'll have fusion power and
             | colonies on Mars?
        
         | darknavi wrote:
          | It will be interesting to see where we draw the line on
          | "good enough".
         | 
         | I think people will naturally be much more critical of a car
         | than human drivers even if "overall" they are statistically
         | safer.
        
         | mikestew wrote:
         | If Tesla can build a system that can do what that BMW driver
         | did, I'll buy one tomorrow. As it is, it looks like the things
         | have trouble just staying in their lane.
        
         | H8crilA wrote:
          | You know, there's a very old and working solution to this
          | specific problem: calling the police and reporting the
          | license plate number so that the driver is punished for
          | reckless behavior. Technology has made it much easier via
          | dashcams.
        
           | darknavi wrote:
            | I've admittedly only tried doing this once, but it got
            | nowhere. The police said they wouldn't/couldn't do anything
           | unless they were on-site and could ID the driver.
        
             | mikestew wrote:
             | Same experience, which causes me to question the utility of
             | signs along (as one example in the Seattle area) East Lake
             | Sammamish Parkway stating that I should report aggressive
             | driving. Well, not if you're just going to blow me off.
        
       | gamblor956 wrote:
       | The 4th scenario (going the wrong way down a one-way road because
       | FSD misinterpreted the sign) is especially scary, as that has the
       | highest likelihood of causing a fatality.
        
       | selimnairb wrote:
       | "Full Self-Driving Beta" should be an oxymoron. Why is this being
       | allowed?
        
       | stsmwg wrote:
       | One of the questions I've repeatedly had regarding FSD (and
       | Tesla's approach in particular) is the notion of memory. While a
       | lot of these scenarios are disturbing, I've seen people wavering
       | on lanes, exits and attempting to turn the wrong way onto one-way
       | streets. People have memory, however. If we go through the same
       | confusing intersection a few times, we'll learn how to deal with
        | that specific intersection. It seems like a connected fleet of
        | FSD cars could perform that learning even faster, since any
        | car could report the interaction, rather than learning driver-
        | by-driver. Are any of the FSD implementations taking this into
        | account?
        
         | specialist wrote:
         | My thoughts exactly. I've made those mistakes myself, many
         | times.
         | 
         | I guess I sort of assumed that Tesla would do three things:
         | 
         | - Record the IRL decisions of 100k drivers.
         | 
         | - Running FSD in the background, compare FSD decisions with
         | those IRL decisions. Forward all deltas to the mothership for
         | further analysis.
         | 
          | - Some kind of boid, herd behavior. If all the other cars
          | drive around the monorail column, or go one direction on a
          | one-way roadway, follow suit.
         | 
         | To your point, there should probably also be some sort of
          | geolocated decision memory, e.g. when at this intersection,
          | remember that X times we ultimately took this action.
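          | 
          | A minimal sketch of what such a memory might look like
          | (purely illustrative; the keys, threshold, and names are my
          | own assumptions, not anything Tesla has described):
          | 
          |     from collections import Counter, defaultdict
          | 
          |     # Coarse location key -> tally of maneuvers drivers
          |     # ultimately took there.
          |     decision_memory = defaultdict(Counter)
          | 
          |     def record(geohash, maneuver):
          |         decision_memory[geohash][maneuver] += 1
          | 
          |     def prior(geohash, min_obs=25):
          |         # Dominant human maneuver at this location, but only
          |         # once enough observations have accumulated.
          |         tally = decision_memory[geohash]
          |         total = sum(tally.values())
          |         if total < min_obs:
          |             return None  # no strong prior; defer to planner
          |         maneuver, count = tally.most_common(1)[0]
          |         return maneuver if count / total > 0.9 else None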
        
           | notahacker wrote:
           | I can see _big_ issues in biasing a decision making algorithm
           | too much towards average driver behaviour under past road
           | conditions though, particularly if a lot of its existing
           | issues are not handling novelty at all well...
        
           | CyanLite2 wrote:
            | It would be pretty simple for an official Tesla employee
            | to confirm that at this location, there's a giant concrete
            | pillar. Or, worst case, deactivate FSD at this location and
            | require human control until the car is outside the
            | geofenced area. They could do that with a simple OTA
            | update. GM/Ford have taken this approach.
        
         | H8crilA wrote:
          | That's kind of the "selling point" of running this experiment
          | on a non-consenting public: that it will learn over time and
          | something working will come out of it in the end.
        
         | Syonyk wrote:
         | This has been a common assertion about Tesla's "leadership" in
         | the field - that they can learn from all the cars, push
         | updates, and obviously not have to experience the same issue
         | repeatedly.
         | 
         | It's far from clear, in practice, if they're actually doing
         | this. If they have, it would have to be fairly recent, because
          | the list of "Oh, yeah, Autopilot _always_ screws up at this
          | highway split..." is more or less endless.
         | 
         | GM's Supercruise relies on fairly solid maps of the areas of
         | operation (mostly limited access highways), so it has an
         | understanding of "what should be there" it can work off and it
         | seems to handle the mapped areas competently.
         | 
         | But the problem here is that the learning requires humans
         | taking over, and telling the automation, "No, you're wrong."
         | And then being able to distill that into something useful for
         | other cars - because the human who took over may not have
         | really done the correct thing, just the "Oh FFS, this car is
         | being stupid, no, THAT lane!" thing.
         | 
         | And FSD doesn't get that kind of feedback anyway. It's only
         | with a human in the loop that you can learn from how humans
         | handle stuff.
        
           | AtlasBarfed wrote:
            | RE: determining whether the human's takeover was right
           | 
           | It's a QA department. If there is a failure hot spot, then
           | take a bunch of known "good" QA drivers through that area.
           | Assign strong weight to their performance/route/etc.
           | 
           | It's interesting reading through all this, I can see a review
           | procedure checklist:
           | 
           | - show me how you take hotspot information into account
           | 
           | - show me how your QA department helps direct the software
           | 
           | - show me how your software handles the following known
            | scenarios (kids, deer, trains, bad weather)
           | 
           | - show me how you communicate uncertainty and requests for
           | help from the driver
           | 
            | - show me whether there are plans for a central
            | monitoring/manual takeover service
           | 
           | - show me how it handles construction
           | 
           | Also, construction absolutely needs to evolve convergently
           | with self driving. Cones are... ok, but some of those people
           | leaning on shovels need to update systems with information on
           | what is being worked on and what is cordoned off.
        
             | Syonyk wrote:
             | > _Also, construction absolutely needs to evolve
             | convergently with self driving. Cones are... ok, but some
             | of those people leaning on shovels need to update systems
             | with information on what is being worked on and what is
             | cordoned off._
             | 
             | No. If the car cannot handle random obstructions and
             | diversions without external data, it cannot be allowed on
             | the road.
             | 
             | Construction is often enough planned ahead of time, but
             | crashes happen, will continue to happen, and if a SDC can't
             | handle being routed around a crash scene without someone
             | having updated some cloud somewhere, it shouldn't be
             | allowed to drive.
             | 
             | First responders need to deal with the accident, not be
             | focused on uploading details of the routing around the
             | crash before they can trust other cars to not blindly drive
             | into the crash scene because it was stationary and not on a
             | map.
             | 
             | And if you can handle that on-car, which I consider a hard
             | requirement, then why not simply use that logic for all the
             | cases involving detours and lane closures?
        
           | stsmwg wrote:
           | Great, thanks for that info. I'm remembering the fatal crash
           | of a Tesla on 101 where the family said the guy driving had
           | complained about the site of the accident before. It's
           | interesting to know that there's at least a mental list of
           | places like this even now. Disengagements should at least
           | prompt a review of that interaction to try and understand why
           | the human didn't like the driving. Though at Tesla's scale
           | that has already become something that has to be automated
           | itself.
        
         | AtlasBarfed wrote:
          | Humans have an... ok... driving algorithm for unfamiliar
          | roads. It's improved a lot with maps/directions software, but
          | it still sucks, especially the denser the environment gets.
         | 
         | Routes people drive frequently are much more optimized:
         | knowledge of specific road conditions like potholes,
         | undulations, sight lines, etc.
         | 
          | I would like to have centrally curated AI programs for routes
          | rather than a solve-everything ad hoc program like Tesla is
          | doing.
          | 
          | However, the ad hoc/memoryless model will still work ok on
          | highway miles, I would guess.
         | 
          | What I really want is extremely safe highway driving more
          | than an automated trip to Taco Bell.
         | 
          | I personally think Tesla is doing... ok. Beta 9 is
          | marginally better than beta 8, from the YouTube videos I've
          | seen. Neither is ready for primetime, but both are
          | impressive technical demonstrations.
          | 
          | If they did a full from-scratch rewrite about three or four
          | years ago, then this is frankly pretty amazing.
         | 
         | Of course with Tesla you have the fanboys (he is the technogod
         | of the future!) and the rabid haters (someone equated him with
         | Donald Trump, please).
         | 
          | A basic uncertainty lookup map would probably be a good
          | thing. How many Tesla drivers took control in this
          | area/section? What certainties does the software report for
          | this area/section?
         | 
          | It's all a black box: Google's geofencing, Tesla,
          | once-upon-a-time Uber, GM Supercruise, etc.
          | 
          | A Twitter account listing failures is meaningless without
          | the grand scheme of statistics and success rates. A Twitter
          | account of human failures would be even scarier.
        
       | cromwellian wrote:
        | All of the Tesla fanboys saying you don't need LIDAR, RADAR,
        | or maps as a kind of "ground truth", and that you can get by
        | on video only, should take this as a huge repudiation of
        | Musk's bragging.
        | 
        | Look, radar, lidar, and maps aren't perfect. A map can be
        | wrong, radar can have false echoes, lidar can have issues in
        | bad weather, etc. But if you're depending on visual
        | recognition of one-way signs, having a potentially out-of-date
        | backup map to check your prediction against, and erring on the
        | side of caution, is better than having no map at all.
       | 
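        | As a toy illustration of that kind of cross-check (my own
        | sketch; the names, structure, and logic here are assumptions,
        | not any shipping system's code):
        | 
        |     def may_enter(street_id, heading, sign_against_us, map_db):
        |         # sign_against_us: vision says a one-way sign opposes
        |         # our direction of travel.
        |         if sign_against_us:
        |             return False  # trust the conservative signal
        |         allowed = map_db.get(street_id)  # may be out of date
        |         if allowed is not None and heading not in allowed:
        |             # Vision saw no sign but the map disagrees: err on
        |             # the side of caution and don't enter.
        |             return False
        |         return True
        | 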
        | Ditto for radar and lidar. Radar-enabled cars have collided
        | with semis, remember, because the radar is aimed low
        | (underneath the semi). But in the case of these planters, or
        | road cones, or monorail poles, the radar is going to give the
        | rest of the system a strong signal that something is there.
       | 
        | There's no way FSD should be allowed on public roads; it's not
        | even CLOSE to ready. The other car companies are behaving far
        | more responsibly: full sensor suite, multi-sensor fusion,
        | comparison with detailed road maps, limiting operation to safe
        | areas, and expanding as safety is proven.
        | 
        | Who the hell cares if it's not full self-driving, if it's so
        | dangerous? I'll take 75% FSD that only operates on highways
        | and long commutes or road trips over 100% FSD that has so many
        | disengagements and "whoa" moments as to be unusable.
        
         | TheParkShark wrote:
         | I see this argument a lot. This is a small beta test and I have
         | yet to see anything better on a larger scale. Most drivers
         | shouldn't be allowed on public roads yet here we are. Why are
         | you so full of fear and anger over this? Isn't this a place for
         | open discussion?
        
           | nullc wrote:
           | > I have yet to see anything better on a larger scale
           | 
            | Waymo's self-driving operates at a relatively large scale.
            | It also has blunders, but they appear to be far less
            | concerning than Tesla's.
        
             | TheParkShark wrote:
             | Geo-fenced isn't large scale. There's no clear route to
             | allow them to expand that tech.
        
           | cromwellian wrote:
           | Because every year, Tesla keeps saying it's going to ship
           | soon. It's clear this isn't even close to ready, I'd say it's
           | 10 years from shipping, if not back to the drawing board.
           | Tesla is having people pay for a self driving package as an
           | option, and years later, it still isn't available. Shouldn't
           | they refund their owners?
           | 
            | I mean, just look at the visual representation of what FSD
            | is showing. It has no temporal stability. Solid objects are
            | constantly appearing and disappearing. The trajectory line
            | periodically switches from a sane path to some radically
            | wrong one.
            | 
            | The car then often makes a HARD, accelerated left or right
            | turn, as if it is dodging something that often isn't
            | there, only to threaten another collision.
           | 
           | It's like a Trolley Problem where one of the tracks is empty,
           | and you divert the trolley into a group of kids.
           | 
           | The "aggressive" nature of its fails is super concerning. My
           | son is 16 and learning to drive right now. When he makes a
           | mistake, he doesn't aggressively accelerate and make wild
           | turns, he hits the brakes. In most cases, in residential
           | traffic (25-30mph), someone braking will annoy other drivers,
           | but they will be able to stop and avoid collision. At many
            | city driving speeds, you should not have to make sudden,
            | super aggressive turns into other lanes. Just safely come
            | to a stop, even if you annoy people.
           | 
           | I see Musk's attitude here as endangering the public. Cars
           | aren't rockets launched at far away landing strips by expert
           | flight controllers. Even these beta testers can fuck up, as
            | numerous people have already died using FSD, because some
            | dipshits like to watch DVDs or sit in the back seat to show
            | off to friends on TikTok, endangering not just themselves,
            | but the public.
           | 
            | IMHO the beta should be closed to all but Tesla employees
            | and professionally hired drivers. I mean, why can't Tesla
            | just do what other companies do and hire like 1,000 drivers
            | to drive FSD around 100 American cities for months at a
            | time, collecting data?
        
       | nikhizzle wrote:
        | Does anyone have any insight as to why regulators allow a beta
        | like this on the road with drivers who are not trained to
        | specifically respond to its mistakes?
        | 
        | IMHO this puts everybody and their children at risk so Tesla
        | can beta test earlier, but I would love to be corrected.
        
         | croes wrote:
         | They fear Elon's Twitter wrath
        
         | darknavi wrote:
          | > drivers who are not trained to specifically respond to its
          | mistakes
          | 
          | What would this entail? Perhaps some sort of "license" which
          | allows the user to operate a motor vehicle?
        
           | visarga wrote:
           | Perhaps operating a motor vehicle is different from
           | supervising an AI doing that same task.
        
           | bigtex wrote:
            | Giant obnoxious flashing lights and blinking signs stating
            | "Student Self Driving tech on board"
        
           | ethbr0 wrote:
           | Safely operating a motor vehicle via a steering wheel,
           | accelerator, and brake (and sometimes clutch and gearshift)
           | is a completely different skillset than monitoring an
           | automated system in realtime.
           | 
           | Novel skill requirements include: interpreting FSD UI,
           | anticipating common errors, recognizing intent behind FSD
           | actions, & remaining vigilant even during periods of
           | autonomous success.
        
             | tshaddox wrote:
             | No, it's just unsafe (and presumably illegal in most
             | places) to operate a motor vehicle without maintaining
             | constant awareness of the road and control of the vehicle.
             | There's no _additional_ training or license that permits
             | you to stop maintaining that awareness and control, just
             | like there's no additional training that permits you to
             | drive under the influence of alcohol.
        
               | ectopod wrote:
               | I agree, but if you have to maintain constant control of
               | the vehicle then it isn't self-driving.
        
               | throwaway0a5e wrote:
               | >There's no additional training or license that permits
               | you to stop maintaining that awareness and control
               | 
               | Being a cop on duty often exempts you from distracted
               | driving laws.
               | 
               | I'm not sure how they can do stuff on a laptop while
               | driving safely but I assume the state knows best. /s
        
               | kemitche wrote:
               | The theoretical "additional license" would be in addition
               | to maintaining awareness, not a substitution. So all the
               | normal road awareness, plus being informed of likely FSD
               | failure conditions, and anticipating them / being ready
               | and capable to intercede.
        
               | tshaddox wrote:
               | If you maintain control at all times, you shouldn't need
               | additional training for any of the car's features. We
               | don't require additional training for cruise control,
               | lane-keep assistance, anti-lock brakes, etc.
        
             | Syonyk wrote:
             | The history of human machine interactions makes it clear
             | that if you have the human as the primary control (in the
             | main decision loop, operating things) and the automation
             | watching over their shoulders, it works pretty well.
             | 
             | The opposite - where the automation handles the common
              | case, with the human watching, _simply does not work._
              | It's not worked in aviation, it's not worked in
              | industrial systems, it's not worked in self-driving cars.
              | This isn't a
             | novel bit of research, this is well understood.
             | 
             | "I'll do 99.90% of it correctly, you watch and catch the
             | 0.1% when I screw up!" style systems that rely on the human
             | to catch the mistakes shouldn't be permitted to operate in
             | any sort of safety critical environment. It simply doesn't
             | work.
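              | 
              | To make that concrete with made-up numbers (my own
              | illustration, not measured data): suppose the automation
              | errs once per 1,000 miles and a complacent supervisor
              | catches only half of those errors.
              | 
              |     auto_errors = 1 / 1_000   # errors per mile
              |     catch_rate = 0.5           # vigilance after weeks
              |                                # of boring success
              |     human_alone = 1 / 10_000   # assumed human-only
              |                                # error rate per mile
              | 
              |     residual = auto_errors * (1 - catch_rate)
              |     print(residual)            # 0.0005 per mile
              |     print(residual > human_alone)  # True: worse than
              |                                    # the human alone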
        
               | ethbr0 wrote:
                | That's a great way to put it! Consider the kind and
                | volume of licensing and training we require of
                | commercial pilots just to use a more limited and
                | situationally simpler autopilot.
               | 
                | And we _still_ get Boeing-style @&$#-ups at the UX
                | interface.
        
               | Syonyk wrote:
               | The use of the term "autopilot" is particularly cute
               | here, because any pilot who's ever used one knows that
               | autopilots are dumb as rocks, can't really be trusted,
               | and don't do nearly as much as non-pilots tend to assume
               | they do.
               | 
               | "You go that way, at this altitude, until I tell you to
               | do something different" is what most of them manage, and
               | they do it well enough. Some of the high end ones have
               | navigational features you can use ("Follow this GPS route
               | or VOR radial"), and in airliners, they can do quite a
               | bit, but you still are very closely monitoring during
               | anything important.
               | 
               | "In the middle of the sky," yeah, you've got some time to
               | figure things out if George goes wonky. During a Cat III
               | ILS approach down to minimums, you have two trained
               | pilots, paying full attention to just what this bit of
               | computer is doing, both prepared to abort the approach
               | and head back into the middle of the sky with an awful
               | lot of thrust behind them if anything goes wrong. But in
               | cruise, there's just not too much that requires an
               | instant response.
        
               | Theodores wrote:
                | A century ago, horses had Full Self Driving, with the
                | human having to be vigilant and in charge.
                | 
                | The horse could go crazy in scary situations,
                | endangering others, but you could train that out and
                | make a good war horse.
                | 
                | Basic stuff like following the road around a bend was
                | something the horse could do, but you could not tell
                | it to autonomously go across town to pick someone up.
                | 
                | People were okay with horses and knew what to expect.
                | Tesla needs to rebrand Autopilot to 'horse mode' and
                | reset people's expectations of FSD.
        
               | jazzyjackson wrote:
               | Horses could at least make their own way back home, and
               | they were self-fueling to boot!
        
               | nkingsy wrote:
               | The watching is the sticking point. It seems this could
               | be ameliorated if the automation could be 100% trusted to
               | know when it needs an intervention.
               | 
               | Solving for that, even a system that was extremely
               | conservative would be useful, as you are completely freed
               | up when it is operating, even if that is only 20% of the
               | time.
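                | 
                | In other words, a confidence gate. A minimal sketch of
                | the idea (the threshold and names are my own
                | assumptions; the hard part is making the reported
                | confidence trustworthy, which is exactly the open
                | problem):
                | 
                |     def control_mode(confidence, threshold=0.999):
                |         # Engage autonomy only when the system itself
                |         # reports near-certainty; otherwise hand off
                |         # early, before intervention becomes urgent.
                |         if confidence >= threshold:
                |             return "auto"
                |         return "human"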
        
               | bumby wrote:
               | > _" I'll do 99.90% of it correctly, you watch and catch
               | the 0.1% when I screw up!" _
               | 
               | Do you have thoughts on why this is? Is it because,
               | almost by definition, the remaining 0.1% are the toughest
               | problems so the hardest to catch/solve? Or is it a human-
               | factors issue where people get complacent when they know
               | they can count on the machine 99.9% of the time and they
               | lose the discipline to be vigilant enough to catch the
               | remaining problems?
        
               | jazzyjackson wrote:
               | You might be interested in a couple of episodes of 99%
               | invisible, called "Children of the Magenta" about pilots
               | becoming complacent because the machine usually takes
               | care of problems for them.
               | 
                | Basically, if you grow up with a self-driving car and
                | never learn how to handle small fuckups, how can you
                | ever be expected to recover from a big fuckup? (Bad
                | sensor reading, bad weather, shredded tire, etc.)
        
               | antisthenes wrote:
               | > "I'll do 99.90% of it correctly, you watch and catch
               | the 0.1% when I screw up!" style systems that rely on the
               | human to catch the mistakes shouldn't be permitted to
               | operate in any sort of safety critical environment. It
               | simply doesn't work.
               | 
               | Right, because it actually relies on the human to catch
               | 100% of mistakes _anyway_. The human can already do the
               | 99.90% of it correctly with a reasonable degree of
               | safety.
               | 
                | I wouldn't be surprised if the increased cognitive load
                | on the human in such a system causes an overall
                | decrease in safety, making it strictly worse than a
                | human operator alone.
                | 
                | At least with an unreliable system like the one Tesla
                | demonstrated here.
        
             | mbreese wrote:
             | I'm going to posit that none of those novel skill
             | requirements are actually requirements. You don't need to
             | interpret a UI, or recognize intent to know that the car
             | shouldn't turn the wrong way on a one-way street or hit a
             | planter.
             | 
                | Anticipating common errors and remaining vigilant while
                | driving is a requirement for all driving, not just when
                | dealing with FSD.
             | 
                | If you are capable of driving a car safely to begin
                | with, then you're capable of recognizing when the car
                | is going to do something wrong and turning the wheel
                | and/or hitting the brakes.
                | 
                | I wouldn't personally trust an FSD beta, but if it is
                | properly supervised, I think it could be tested
                | safely. But there's the problem -- how can you really
                | properly supervise this? How many drivers are going to
                | try this and just let the car drive w/o paying
                | attention? Or use it under controlled conditions, but
                | then give it too much trust in more adverse
                | conditions?
        
               | ethbr0 wrote:
                | What makes them required is the _speed_ at which a
                | reaction is expected, in both the quickness and
                | velocity senses of the word.
               | 
                | Unless the user is able to understand and _anticipate_
                | common reactions, the entire system is unsafe at >25
                | mph or so.
               | 
               | Or, to put it another way, I'd love to see the statistics
               | on how many drivers can react quickly enough to avert an
               | unexpected turn into an exit divider at 60 mph, after a
               | number of days of interruption free FSD.
        
               | mbreese wrote:
               | We're largely talking about an update where the primary
               | benefit is low-speed automation -- driving in areas where
               | the speed limits are low because the obstacles are many.
        
           | simion314 wrote:
            | Maybe people should be trained with failure videos like
            | these, which show what can go wrong, and NOT with
            | propaganda videos that only show the good parts and cause
            | the driver to stop paying attention.
        
           | vkou wrote:
           | A license that allows you to operate a motor vehicle with a
           | beta self-driving feature. It's very similar to a regular
           | motor vehicle, but has different failure modes.
           | 
           | A motorcycle is similar to an automobile, but has different
           | failure modes, and needs a special license. A >5 tonne truck
           | is very similar to an automobile, but has different failure
           | modes, and needs a special license. An automobile that
           | usually drives itself, but sometimes tries to crash itself
           | has different failure modes from an automobile that does not
           | try to drive itself.
        
           | paxys wrote:
           | What part of the driving test covers taking over control from
           | an automated vehicle with a split second notice?
        
           | AlotOfReading wrote:
           | California at least has a specific license for this called
           | the Autonomous Vehicle Operator license. It enforces some
           | minimal amount of training beyond simply having a regular
           | driver's license.
        
           | bobsomers wrote:
           | Safety drivers at proper AV companies usually go through
           | several weeks of rigorous classroom instruction and test
           | track driving with a professional driving instructor,
           | including lots of practice with the real AVs on closed
           | courses with simulated system faults, takeover situations,
            | etc. Anecdotally I've seen trained safety drivers take over
            | from unexpected system failures at latencies near the floor
            | of human reaction time. They are some of the best and most
           | attentive drivers on the road.
           | 
           | Average folks beta testing Tesla's software for them are
           | woefully under-prepared to safely mitigate the risks these
            | vehicles pose. In several cases they've died meaninglessly
            | so that Tesla could learn a lesson everyone else saw coming
            | months in advance.
        
           | bestouff wrote:
            | I guess any driving instructor worth their salt would have
            | the required skills to correct the vehicle if it attempts
            | to do something weird. After all, FSD is a still-in-
            | training (maybe for an unusually long time) driver.
        
             | Syonyk wrote:
             | Probably not.
             | 
             | It fails in ways that are totally different from how humans
             | typically fail at driving.
             | 
             | Student drivers have the ability to comprehend and
             | understand the 3D world around them - it's just a matter of
             | learning the proper control manipulations to make the car
             | do what you want, and then how to interact with other
             | traffic in safe and expected ways.
             | 
              | Tesla's FSD system, often enough, seems to fail at basic
              | comprehension and reading of the 3D world.
        
         | TheParkShark wrote:
         | Everyone and their children seems a bit hyperbolic. There
         | hasn't been an accident involving FSD beta 9, but I'm sure
          | someone has been killed at the hands of a drunk driver today.
          | I am failing to find any comments from you arguing for
          | alcohol to be removed from shelves across the country. Why
          | aren't you pushing your regulators for that?
        
         | avalys wrote:
         | Because America does not yet require that its citizens ask
         | mommy and daddy for permission for every thing they do.
        
           | Barrin92 wrote:
           | this isn't going to the ice cream store, this is driving two
           | tons of steel with experimental software that glitches
           | constantly through inner city traffic. If you look at the
           | footage, at times these cars behave as erratically as a drunk
            | driver, and they're controlled by what appear to be random
            | 'beta testers' with no qualifications to operate a car
            | under those conditions.
        
           | ASalazarMX wrote:
           | Such crude trolling. Build a new house without a permit and
           | see how that freedom feels.
        
           | mrRandomGuy wrote:
           | It would be a glorious honor to have my 'FSD' Tesla drive
           | into a giant concrete pillar, causing my car to explode and
           | erupt into flames.
        
         | nrjames wrote:
         | I used to work with the USG. There was "Beta" software
         | absolutely everywhere, including on some very sensitive
         | systems, because it was not required to go through security
         | approval until it was out of Beta. In some instances, these
         | applications had been in place for > 10 years. That was a
         | number of years ago, so I hope the situation has changed. In
         | general, the USG doesn't have sophisticated infrastructure and
         | policy to deal with software that is in development. With
          | Tesla, my guess is that regulators are not so much allowing
          | it to happen as lacking the regulatory infrastructure to
          | prevent it from happening.
        
           | verelo wrote:
            | USG? I'm not sure what that means. I did a Google search
            | and I assume it's not "United States Gypsum" or the
            | University System of Georgia...?
        
             | darknavi wrote:
             | US government I assume.
        
               | verelo wrote:
               | Thanks, that makes sense.
        
         | whoknowswhat11 wrote:
          | It's a "think of the children" on HN :)
        
         | SloopJon wrote:
         | The Twitter thread is lacking in attributions, but I saw most
         | of these some months back after a post about scary FSD
         | behavior. I watched with morbid curiosity, and a rising level
         | of anger.
         | 
         | The guy with the white ball cap repeatedly attempted a left
         | turn across two lanes of fast-moving traffic, with a drone
         | providing overhead views. He seemed smart, aware, and even a
          | bit conservative in intervening. Nonetheless, I couldn't help
          | but think that none of the oncoming vehicles consented to
          | the experiments racking up YouTube views. If he doesn't jump
          | on
         | the brakes at the right time, he potentially causes a head-on
         | collision with a good chance of fatalities.
         | 
         | Yes, I do agree that beta drivers should get extra training.
         | However, I'm not sure I agree with the premise of beta testing
         | an aluminum rocket on wheels on public roads in the first
         | place.
        
           | misiti3780 wrote:
           | How do you know the beta drivers did not get training?
        
             | SloopJon wrote:
             | I said that they should. I don't know whether they do.
             | 
              | A March 2021 tweet indicated that beta access would be as
              | simple as pressing a "Download Beta" button in the car. A
             | subsequent tweet said that the program had 2,000 owners,
             | some of whom have had their access revoked for not paying
             | enough attention to the road.
        
         | optimiz3 wrote:
         | > puts everybody and their children at risk
         | 
         | Oh won't somebody think of the children?
         | 
         | https://en.m.wikipedia.org/wiki/Think_of_the_children
        
           | mrRandomGuy wrote:
           | Out of all the times to mock someone saying that, I don't
           | think "self-driving-but-not-really-car nearly crashes into
           | things and puts the occupants and bystanders at risk for
           | serious bodily harm as evident by the videos posted above" is
           | one of those times.
        
             | ASalazarMX wrote:
             | It's still appropriate. The issue is using children to
             | appeal to the emotions of parents, as if children were VIP
             | humans. Yes, they are very important people, but mostly to
             | their parents and close relatives.
        
             | optimiz3 wrote:
             | Wait till you read the stats and see videos of what humans
             | driving cars regularly do!
        
               | ASalazarMX wrote:
               | Still, an autopilot should aim to be a peer with the best
               | drivers, not with mediocre or bad drivers.
        
               | colinmhayes wrote:
                | Well, if you think of this closed beta as a way for
                | Tesla to collect data about its program's
                | shortcomings, it's easy enough to see how releasing it
                | could save lives in the long run even if it takes some
                | in the short run. Every day sooner that a 10x better
                | self-driving car arrives is hundreds of lives saved.
        
               | camjohnson26 wrote:
               | And what if it gets stuck in a local maximum and never
               | improves? Then you lost those extra lives for no reason,
               | which consequently is what I believe has happened. At
               | least 4 people are dead in preventable autopilot crashes,
               | and the real number is probably over 10. For the number
               | of Tesla's on the road that number is way too high.
               | 
               | https://tesladeaths.com
        
               | colinmhayes wrote:
               | 4 deaths is worth a 1% increased chance of achieving self
               | driving a month earlier. Way more than 400 people a month
               | die in car accidents.
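                | 
                | Spelling out that expected-value arithmetic with the
                | figures given above (the 1% is this argument's own
                | assumption, not an established number):
                | 
                |     p_earlier = 0.01        # assumed chance the beta
                |                             # ships FSD a month earlier
                |     deaths_per_month = 400  # figure cited above
                |     expected_saved = p_earlier * deaths_per_month
                |     print(expected_saved)   # 4.0 -- the break-even
                |                             # point against 4 deaths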
        
               | foepys wrote:
               | I somehow get the feeling that you don't actually want to
               | save lives but rather want to experience the future(tm),
               | whatever it costs.
        
               | jazzyjackson wrote:
               | I just don't understand why people think a robot can
               | improve on this.
               | 
               | If you want to save lives, lower speed limits, build out
               | public transit, and don't hand out drivers licenses like
               | candy.
        
               | colinmhayes wrote:
               | I don't only want to save lives, I want to maximize
               | utility. I think driving less would raise utility, but
               | society disagrees, so this is the next best option.
        
               | jazzyjackson wrote:
               | Right, fair point, people drive 80mph because they have
               | someplace to be. I'm perennially bummed that America
               | can't figure out how to build high speed rail for less
               | than $100 Million per mile.
        
               | [deleted]
        
               | throwaway0a5e wrote:
               | A peer with average drivers would probably be fine.
               | 
                | A peer with average drivers that also does completely
                | nonsensical things every now and then is basically an
                | average driver on their cell phone, though, and the
                | verdict on that is that it's not fine.
        
               | optimiz3 wrote:
               | It's a limited beta with the goal of discovering error
               | cases.
        
               | barbazoo wrote:
                | In this case I'd prefer if "discovering error cases"
                | were done without risking actual lives. Injuring
                | someone would not be a valid way to discover an error
                | case.
        
               | junipertea wrote:
                | A beta limited to regular drivers who are not
                | specifically trained testers. Error cases can involve
                | injury or death.
        
               | sidibe wrote:
               | Humans do way, way, way better than what we've seen in
               | the FSD beta testers' videos (most of whom are huge Tesla
               | fans)
        
       | croes wrote:
        | I doubt that autonomous driving will work in the next couple of
        | years without infrastructure adjustments. We built roads for
        | cars to replace horses; we will need to build roads for
        | autonomous cars to replace human-driven cars.
        
         | sidewndr46 wrote:
         | If we're going to build what amounts to new infrastructure,
         | shouldn't we build transportation infrastructure that does away
         | with the need to own cars? At least for transport in urban and
         | suburban areas.
        
           | timroman wrote:
            | IMHO they're the same thing. Automate all cars to get
            | around the right-of-way issues and high construction costs
            | that limit rail development. I think the upgrades needed to
            | roads are minimal in comparison: maybe some sensors and
            | signals, but it may just be perfecting signage, lines, etc.
        
             | sidewndr46 wrote:
             | Upgrading or retrofitting existing infrastructure to
             | accommodate $60,000 luxury vehicles is not the same thing
             | as building infrastructure that removes the need for car
             | ownership.
        
             | jazzyjackson wrote:
             | Pavement holds up a lot better in rain and snow than
             | electronics, and still needs to be replaced every few
             | years. Agreed that it should be a bare minimum expectation
             | to have the roads painted clearly, but there are plenty of
              | roads in, e.g., Chicago where I don't think there is an
             | official number of lanes, it's just kind of a free for all.
             | Just going through the whole system and deciding where the
             | boundaries are is a gargantuan task. American railroads
             | haven't even managed to upgrade their signals for PTC
             | without failures left and right - if riding Amtrak or NYC
             | metro is any indication the cars will have to coast at 5mph
             | whenever there are signal problems.
        
               | sidewndr46 wrote:
               | Yeah, I was visiting Tucson recently and discovered they
               | have lots of roads with no lines at all. Surprisingly, it
               | may have actually made traffic substantially more relaxed
               | as I think everyone was paying more attention.
        
         | heavyset_go wrote:
          | Automated vehicles have existed in industry for years, and
          | this is how they operate: special environments and tracks are
          | set up for the vehicles, and human presence and interference
          | on the tracks are minimized in order to reduce the potential
          | for injury or death.
        
         | rytor718 wrote:
          | I've been saying this for over a decade: we're working on the
         | wrong problem right now with autonomous vehicles. Their chief
         | obstacle is operating on the roads _with humans_. They need
         | dedicated infrastructure where only autonomous vehicles
         | operate. Anything short of this will be implemented in very
         | restrictive, small, niche areas and never become the new way
         | most people move around. Tesla and Uber, et al have taken this
         | as far as it will go without infrastructure in my opinion.
         | 
         | I think we'll be on the right track when city planners reclaim
         | some of our current roads and make them available only to
         | autonomous vehicles. They'll need their own garages, highways
         | and streets to operate to fully realize it. For example, FSD
         | vehicles can be moved to dedicated streets similar to how
         | alleys work in many cities currently (where they're largely
         | routes for maintenance and utilities -- removing them from
         | normal commuter traffic).
        
           | heavyset_go wrote:
           | > _I think we 'll be on the right track when city planners
           | reclaim some of our current roads and make them available
           | only to autonomous vehicles. They'll need their own garages,
           | highways and streets to operate to fully realize it._
           | 
           | I'd rather subsidize automated electric trolley lines in
           | cities than subsidize exclusive roads for automated personal
           | vehicles.
        
           | nradov wrote:
            | I don't want my tax money wasted on that. In most urban
            | areas there's simply no space for separate autonomous-
            | vehicle roads. I would rather see that money spent on
            | maintaining existing roads and improving schools.
        
         | AtlasBarfed wrote:
          | The infrastructure convergence should concentrate on
          | highways, where the highest speeds and heaviest use are, but
          | conversely where the least complicated algorithms are needed.
        
         | dangoor wrote:
         | I live in a place with fairly snowy winters, which seems like a
         | huge problem for self-driving cars. I completely agree with
         | this thought that we should work on making our roads help the
         | self-driving cars. If we had started that effort several years
         | back, we would likely have a lot of road miles that could
         | safely handle self-driving vehicles. If we start now, in
         | several years we'll have a bunch of road miles ready for them.
         | 
         | Granted that the US has _a lot_ of road-miles, but doing this
         | work incrementally is the way to go. I have a feeling it will
         | be faster to outfit the roads than to build reliable self-
          | driving software. I will also grant that this is a "feeling"
          | and I'm sure someone has done actual research :)
         | 
         | If this capability could be used to help hasten the move to
         | electric vehicles, we could even get a climate change win in at
         | the same time.
        
           | jazzyjackson wrote:
           | I just don't see self driving cars taking off for this
           | reason: people want or need to drive even when it's dangerous
           | to do so.
           | 
           | Self driving cars might well refuse to move in bad conditions
           | (my Prius gets very upset when navigating my overgrown
           | driveway for instance, sometimes slamming on the auto-brake
           | to avoid the plant-pedestrians)
        
           | w0m wrote:
            | The Northeast in winter has ~0% chance of getting FSD-
            | capable in the next 20 years, I think. 80% of the
            | population of my hometown can't safely leave the house for
            | months at a time today, let alone have automation take
            | over for that.
        
             | jazzyjackson wrote:
             | Great argument for more rail. Yes there are still
             | conditions that freeze switches but that's when you run gas
             | lines under the tracks to prevent ice from forming:
             | 
             | https://wgntv.com/news/when-its-this-cold-chicago-sets-
             | its-t...
             | 
              | The weirdest thing that prevents trains from running,
              | though, is wet leaves on the tracks, of all things;
              | they're too slippery.
        
         | optimiz3 wrote:
          | > need to build roads for autonomous cars
          | 
          | Not any more friendly than they are to humans. AI driving
          | needs to handle the general case. Otherwise a road that has a
          | lapse in autonomous-friendliness could be catastrophic.
        
           | heavyset_go wrote:
           | We do a pretty good job of isolating certain freeways. There
           | are freeways where I almost certainly never have to worry
           | about a kid chasing a ball into the road and potentially
           | hitting them, because there are 15ft walls on either side of
           | the road, and it doesn't go through residential areas.
           | 
           | I could see some stretches of highways becoming automation
           | friendly, but it wouldn't be anywhere near even a tenth of
           | them.
        
       | Geee wrote:
       | It seems to estimate the surroundings pretty well in the new
       | version, but the path planner still has issues. I'm not sure if
       | they already do this, but I think Tesla should run FSD on all
       | cars in shadow mode, and then use the FSD vector data and driver
       | actions to train the path planner. Basically using FSD vector
       | data as input and driver actions as output of the neural net. If
       | the network can't reach confidence in certain situations, then
       | information is missing from the FSD vector data and more raw
       | video data is needed. It would be possible to measure the
       | difference between human drivers vs. FSD and automatically send
       | data from errors to Tesla for training.
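        | 
        | A rough sketch of that shadow-mode loop (my own illustration;
        | every name and threshold is an assumption, not Tesla's actual
        | pipeline):
        | 
        |     import numpy as np
        | 
        |     def shadow_step(planner, vector_scene, driver_action,
        |                     tolerance=0.5):
        |         # The planner proposes an action from the perceived
        |         # scene while the human actually drives.
        |         predicted = planner.predict(vector_scene)
        |         delta = np.linalg.norm(predicted - driver_action)
        |         if delta > tolerance:
        |             # Disagreement: keep (scene, human action) as a
        |             # training example; persistent large deltas may
        |             # also signal missing info in the vector data.
        |             return vector_scene, driver_action
        |         return None  # planner already matches the human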
        
         | verytrivial wrote:
          | It seems to be very blasé about sudden changes in its live
          | model of even very close objects. Imagine trying to drive
          | with giant blind spots appearing and disappearing in front
          | of your eyes. Could you drive? Maybe, at low speed, in some
          | conditions. Should you? No way. And the complete blindness
          | to the monorail pillars seems utterly fatal to the
          | radar-less concept. These things were CLEARLY there but had
          | unfamiliar visual geometry, yet it still chose to blunder
          | forward (and directly towards them, it seems). Wild.
        
         | Syonyk wrote:
          | I've been hearing for _years_ now that this is a Tesla
          | advantage: that they can run in shadow mode, learn from
          | humans, and big-data-learn all the quirks of the roads so
          | they don't have to learn them in the self-driving system.
         | 
          | If that's the case, the evidence here is certainly lacking.
          | Turning down a street one presumably hasn't seen a car drive
          | down in this direction? Failing basic stationary-obstacle
          | avoidance because the obstacles are in the center of the
          | road? Screwing up basic turn-lane behavior?
         | 
         | Either they aren't taking the "years of learning" data into
         | account, or, worse, they are - and even with all that
         | correction, it's _still_ this bad.
        
           | Geee wrote:
           | I think they've been focused on solving the "environment
           | mapping and object detection" problem, and if that isn't
           | solved then they can't have the proper inputs to solve the
           | "path planning / decision making" problem. It seems that
           | they're close to solving the first problem, and are ready to
           | move on to the next.
           | 
           | As far as I know they've been using the fleet to collect raw
           | camera data for their training set but I don't think they've
           | used the fleet to learn driving behaviour.
        
           | visarga wrote:
           | It's a cherry picked set of examples you're drawing
           | generalisations from. On any day I can see more reckless
           | driving from humans than I've ever seen online with Tesla.
        
             | alkonaut wrote:
             | I thought this was examples from months of driving and
             | thousands of hours, which would have been terrible. But
             | apparently it's just from a few days testing. That's
             | unthinkably bad.
        
         | wanderer2323 wrote:
         | > seems to estimate the surroundings well
         | 
         | > repeatedly tries to drive into the monorail pillars
         | 
         | Choose one and only one
        
       | killjoywashere wrote:
       | I work in ML (medical diagnostics) and own a 2020 Tesla w/ FSD
       | and do not trust it at all. I also happen to be a cyclist,
       | currently living in SV. My regular road ride takes me past Tesla
       | HQ. The more of these FSD fails I see, the more I wonder how many
       | Teslas passing me are running in FSD mode. Scary.
        
       | nullc wrote:
       | These aren't even the worst examples I've seen by far-- but I
       | suppose they're the worst people have posted with the latest
       | software.
       | 
       | As someone in another thread said:
       | 
       | Driving with Tesla FSD is like being a driving instructor for
       | a narcoleptic student driver who is currently tripping balls.
       | 
       | I don't want to share a road with these things. I try to stay as
       | far away from every tesla on the road as I can.
        
         | TheParkShark wrote:
          | Please share them if you have any. As someone in another
          | thread said: Tesla has the most advanced self-driving
          | software currently in development in the world. If there's
          | something better, feel free to share that along with your
          | other videos.
        
       | deregulateMed wrote:
       | Originally, when Tesla was on the verge of collapse, the self-
       | driving technology was high-risk, high-reward tech that would
       | kill someone, but the company was in the negative anyway. If
       | it killed enough people, they could declare bankruptcy and
       | investors would lose their money.
       | 
       | However, isn't Tesla profitable today? They've already created
       | a Veblen good; what benefit does Tesla get out of a high-risk
       | feature?
        
       | yawaworht1978 wrote:
       | Any human driver driving like that would have their driver's
       | licence revoked, might risk prison for endangering the public,
       | and could be subject to a psychological evaluation (not
       | joking, this stuff happens in Germany). Why are there no
       | consequences for either Tesla or the lawmakers who are looking
       | away?
       | 
       | Sure, test autonomous driving off the roads or in dedicated
       | test environments, but do not permit this on public roads. I
       | do not want to be part of any beta test.
        
       | cs702 wrote:
       | Many of the comments here seem a bit... unfair to me, considering
       | that these clips were handpicked.
       | 
       | I watched (and fast-forwarded) through a few of the original,
       | full-length videos from which these clips were taken. The full-
       | length videos show _long_ drives (in some cases, hours long)
       | almost entirely without human intervention, under conditions
       | explicitly meant to be difficult for self-driving vehicles.
       | 
       | One thing I really liked seeing in the full-length videos is that
       | FSD 9 is much better than previous efforts at requesting human
       | intervention in advance, with plenty of time to react, when the
       | software is confused by upcoming road situations. The handpicked
       | clips are exceptions.
       | 
       | For BETA software, FSD 9 is doing remarkably well, in my view. I
       | mean, it's clearly NOT yet ready for wide release, but it's much
       | closer than all previous versions of Tesla FSD I've seen before,
       | and acceptable for a closed-to-the-public Beta program.
        
         | rhinoceraptor wrote:
         | The fact that it 'only' requires human intervention so rarely
         | is still incredibly dangerous. You can't ask a human to have
         | complete focus for hours on end when they're not making any
         | inputs, and then require them to intervene at a moment's
         | notice. That's not how humans work.
         | 
         | Also, the fact that they're distributing safety critical
         | software to the public as a 'beta' is just insanity. How many
         | more people need to die as a result of Autopilot?
        
           | cs702 wrote:
           | _> You can 't ask a human to have complete focus for hours on
           | end when they're not making any inputs, and then require them
           | to intervene at a moment's notice. That's not how humans
           | work._
           | 
           | I agree. Everyone agrees. That's why FSD Beta 9 is closed to
           | the public. My understanding is that only a few thousand
           | approved drivers can use it.
           | 
           |  _> Also, the fact that they 're distributing safety critical
           | software to the public as a 'beta' is just insanity. How many
           | more people need to die as a result of Autopilot?_
           | 
           | FSD 9 isn't being "distributed to the public." It's a closed
           | Beta. Please don't attack a straw-man.
        
             | steelframe wrote:
             | What are the qualifications of the people selected to
             | participate in the Beta? Do they have any particular
             | experience or receive special training that would set them
             | apart from the general public?
        
             | rhinoceraptor wrote:
             | It's a public beta, as far as I can tell it's available to
             | anyone with a Tesla with the FSD package.
        
             | atoav wrote:
              | What do the people who share the roads with those
              | drivers sign? Or are they just collateral?
              | 
              | A closed beta needs to happen on private roads. If it
              | happens on public roads, it is not a closed beta.
        
               | cs702 wrote:
               | _> What do the people sign that share the roads with
               | those people?_
               | 
               | Some of the videos show drivers having to agree to take
               | full responsibility for all driving.
        
               | atoav wrote:
                | That didn't answer my question. A private company
                | should test their product on private roads.
        
               | endisneigh wrote:
                | I mean, even if that's true, so-called beta software
                | shouldn't be on public roads.
        
         | CJefferson wrote:
         | In my opinion, a self-driving car which drives me smoothly for
         | 6 hours then drives straight into a concrete pillar without
         | warning isn't doing "remarkably well". That should be enough to
         | get it pulled.
        
         | lp0_on_fire wrote:
         | People who drink and drive may very well be perfect sober
         | drivers 99.9% of the time but that doesn't excuse the .1% of
         | the time that they're running into things.
         | 
         | Also, this beta isn't "closed-to-the-public". The "public" is
         | an active and unwilling participant in it.
        
         | dillondoyle wrote:
         | At least for me it's because these highlighted errors are so
          | egregious and so obvious to humans. Don't swerve into the
          | giant concrete pillars.
         | 
         | The 99% of 'good' doesn't matter if you keep driving into giant
         | barriers.
        
         | alkonaut wrote:
          | If this happened all in one _month_ of constant driving,
          | I'd say it isn't fit even for limited closed testing in
          | public traffic. It should be back on the closed circuit
          | with inflatable cars. If it was cut down from just one or a
          | few days of driving, that's horrifying.
        
         | throwaway-8c93 wrote:
         | FSD occasionally requesting driver to take over in genuinely
         | difficult situations would be completely fine.
         | 
         | The videos in the Twitter feed are nothing like that. The car
         | makes potentially catastrophic blunders, like driving straight
         | into a concrete pylon, with 100% confidence.
        
           | dillondoyle wrote:
           | I'm with you on the second point strongly.
           | 
           | But IMHO it's not full self driving if it requests the driver
           | to take over even once.
           | 
           | If there's an insane storm or something then it's ok for FSD
           | to know it should disable and then you have to drive 100%
           | control. The middle ground is more like assisted driving
           | which doesn't seem safe according to most HN comments.
        
           | rcMgD2BwE72F wrote:
            | You know these are extracts from multiple hours of video,
            | and that it's a closed beta?
        
             | ahartmetz wrote:
             | "The airplane rarely explodes! We're down to one explosion
             | every 500 hours!"
        
             | heavyset_go wrote:
             | It doesn't matter how many hours of video there is, all it
             | takes is hitting one pole to dramatically impact your life
             | or the lives of others.
             | 
             | As a pedestrian and someone who shares the road with
             | drivers who use FSD, I don't get to opt out of this "closed
             | beta", and I certainly wouldn't care about how many hours
             | of quality content it has on YouTube if it caused a car to
             | hit me or drove me off the road.
        
             | FireBeyond wrote:
              | A closed beta of a system that is "orders of magnitude
              | better, completely reimagined"... videos like this
              | certainly don't show revolutionary growth in the
              | usability of FSD; indeed, it's "more of the same".
        
         | cameldrv wrote:
         | Supposedly there are a "few thousand" FSD beta testers, and
         | only a small fraction of them are videoing their drives and
         | uploading them to YouTube. Beta 9 has existed for 2 days. This
         | puts a pretty high lower bound on the serious error rate.
        
           | camjohnson26 wrote:
            | The consensus is that there are far fewer than a few
            | thousand
        
             | rcMgD2BwE72F wrote:
              | More than 2,000 Tesla employees and at least dozens of
              | non-employees (source: teslamotorsclub.com)
        
         | richwater wrote:
         | It literally doesn't matter how well it does in 90% of
         | situations when the other 10% can injure or kill people in
         | relatively basic scenarios like the Tweets presented. I mean
         | the car almost ran into a concrete pillar like it wasn't even
         | there.
         | 
         | > For BETA software, FSD 9 is doing remarkably well
         | 
         | If this was a React website, that'd be great. But it's a
         | production $40,000 multi ton automobile.
        
           | cs702 wrote:
           | > ...doesn't matter how well it does in 90% of situations...
           | 
           | Based on the full-length videos, I'd say it's more like 99.9%
           | or even 99.99% for FSD 9.
        
             | Veserv wrote:
             | The rate of driving fatalities is ~1 per 60 _million_ miles
             | driven. At an average speed of 60 mph that would be ~1 per
              | _million_ hours. Even if we assume at most one
              | fatality-inducing error per hour of driving, a 99.99%
              | per-hour success rate works out to one such error every
              | 10,000 hours, making FSD 9 _100x_ more dangerous than
              | the average driver. I do not remember the exact
              | numbers, but
             | that would make FSD something like 1000x more dangerous
             | than the classic example of a deathtrap, the Ford Pinto.
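              | 
              | Spelled out with the rough numbers above (just a sanity
              | check on the arithmetic):
              | 
              |   fatal_per_mile = 1 / 60e6   # ~1 per 60M miles
              |   mph = 60
              |   human_hrs = 1 / (fatal_per_mile * mph)  # ~1e6
              |   fsd_hrs = 1 / (1 - 0.9999)              # 1e4
              |   print(human_hrs / fsd_hrs)              # ~100x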
        
               | xeromal wrote:
                | That's your estimation, but we'd need to wait for
                | actual statistics.
        
               | zaroth wrote:
               | We have actual statistics on how well humans do with
               | Tesla ADAS and it's remarkably well. The accident rate
               | numbers are hard to compare apples-to-apples since miles
               | driven with all ADAS features active are different from
               | miles driven without the full ADAS active, but the
               | numbers show that the software is absolutely not "1000x
               | more dangerous".
               | 
               | > _In the 1st quarter [2021], we registered one accident
               | for every 4.19 million miles driven in which drivers had
               | Autopilot engaged. For those driving without Autopilot
               | but with our active safety features, we registered one
               | accident for every 2.05 million miles driven. For those
               | driving without Autopilot and without our active safety
               | features, we registered one accident for every 978
               | thousand miles driven. By comparison, NHTSA's most recent
               | data shows that in the United States there is an
               | automobile crash every 484,000 miles._
               | 
               | https://www.tesla.com/VehicleSafetyReport
        
               | Invictus0 wrote:
               | > one accident for every 4.19 million miles driven
               | 
               | Not really a great metric. We should be looking at
               | disengagements per MM.
        
               | marvin wrote:
               | If you're evaluating the system as a L5 autonomous
               | driving system. Not if you're evaluating it as a safety
               | assistance feature.
        
             | serverholic wrote:
              | That might be acceptable for minor failures, but not
              | for catastrophic ones.
        
             | tartrate wrote:
             | Not sure if 99.99% refers to distance or time, but either
             | way I'm not sure it's a good metric since it only takes a
             | second or two to kill someone.
        
           | bob33212 wrote:
           | People will die today because a driver was
            | drunk/distracted/suicidal/road-raged or had a medical
           | problem.
           | 
            | Are you OK with that, or do you think we should attempt
            | to fix it with software? If you do, do you understand
            | that software engineering is an iterative process? It
            | gets safer over time.
        
             | foepys wrote:
             | > People will die today because a driver was
              | drunk/distracted/suicidal/road-raged or had a medical
             | problem.
             | 
             | That's a straw man argument. Mandatory breathalyzer tests
             | before every engine start and yearly medical and
             | psychological check-ups would solve a lot of these problems
             | as well.
             | 
              | > It gets safer over time.
             | 
             | There is no evidence of this happening yet. It's still
             | trying to drive people into concrete pillars. Remember that
             | you only need to drive into a pillar once to die. Passing
             | it 10,000 times before doesn't help when an update suddenly
             | decides that the pillar is now a lane.
             | 
             | I'd be totally fine with Tesla testing this with trained
             | drivers in suitable areas but giving this into the hands of
             | amateurs is pure "move fast and break things", with "break"
             | meaning ending lives.
             | 
             | The worst thing is: all of these videos feature sunny
             | weather. How bad would it be in the rain? Ever tried to
              | take a picture in the dark with even light rain?
              | Cameras are very bad at this.
        
               | matz1 wrote:
               | >Mandatory breathalyzer tests before every engine start
               | 
               | We don't do that today and we are still allowing people
               | to drive on the road today.
        
               | foepys wrote:
               | Exactly. That's why Tesla's approach is bad. There is no
               | rush to let this technology out on the streets before
               | testing it rigorously for flaws. But Tesla promised FSD
               | for all current models with FSD hardware, so they have to
               | move or they will potentially be on the hook for fraud. I
               | call this irresponsible and dangerous.
        
               | matz1 wrote:
                | Disagree; Tesla's approach is the right one. You want
                | to test the technology in real situations as early
                | and as much as possible.
        
               | camjohnson26 wrote:
               | Local maximums exist. If ML models were guaranteed to
               | improve over time we would have no more problems to
               | solve.
        
               | bob33212 wrote:
               | What are you proposing? If someone is overweight with
               | high blood pressure and a higher risk for a heart attack
               | we should take away their license?
               | 
               | These drivers are vetted and their usage is monitored,
               | many are employees. In the long run automated driving
               | will save many more lives than it takes. Just like
               | automated elevators.
               | 
               | https://www.npr.org/2015/07/31/427990392/remembering-
               | when-dr....
        
               | BugsJustFindMe wrote:
               | > _What are you proposing? If someone is overweight with
               | high blood pressure and a higher risk for a heart attack
               | we should take away their license?_
               | 
               | This does not seem like an important nit to pick, so I'm
               | wondering why you're picking it. The number of people
               | killed or injured because a driver had a heart attack at
               | the wheel is vanishingly small compared to drunk or
               | distracted driving.
               | 
               | So let's reframe your question as you should have asked
               | it:
               | 
               | "If someone is driving drunk or distracted we should take
               | away their license?"
               | 
               | Yes. Absolutely.
        
               | fastball wrote:
               | Clearly it's not that simple, otherwise we'd have done it
               | already.
        
               | the8472 wrote:
               | > It's still trying to drive people into concrete
               | pillars.
               | 
               | That in itself doesn't constitute evidence that it isn't
               | getting safer. After all humans occasionally drive into
               | concrete pillars too.
               | 
               | What matters is that the distance traveled without
               | incident goes up in a super-linear fashion over time. It
               | would be nice if someone had hard numbers on that.
        
             | camjohnson26 wrote:
              | Eliminating drinking would get rid of a huge number of
              | automobile accidents. Sober humans are incredible
              | drivers, and we don't have self-driving cars that are
              | better than even poor human drivers, as long as those
              | humans aren't impaired.
        
       | deegles wrote:
       | All the other FSD car companies should sue to get these off the
       | roads since a big enough fuck-up by Tesla will set back the
       | industry for a decade.
        
       | Syonyk wrote:
       | I don't understand how something this broken is allowed to
       | operate on public roads.
       | 
       | If I drove like Tesla's FSD seems to based on the videos I've
       | seen, I'd be pulled out of my car and arrested on (well founded)
       | suspicions of "driving while hammered."
       | 
       | After a decade of work, it's not capable of doing much beyond
       | "blundering around a city mostly without hitting stuff, but pay
       | attention, because it'll try to hit the most nonsensical thing it
       | doesn't understand around the next corner." It drives in a
       | weirdly non-human way - I've seen videos of it failing to
       | navigate things I'm sure I could get my 6 year old to drive
       | through competently. Except, I don't actually let her drive a car
       | on the road.
       | 
       | I'm out in a rural area, and while "staying in the lane" is
       | perfectly functional (if there are lanes, which isn't at all the
       | case everywhere on public roads), there's a lot of stuff I do on
       | a regular basis that I've not seen any evidence of. If there's a
       | cyclist on a two-lane road, I typically move well over the
       | center line if there's no oncoming traffic, to make room. If
       | there is oncoming traffic, typically one person will slow down
       | to allow the other car to get over, or the lanes just
       | collectively "shift over" - the oncoming traffic edges toward
       | the side of the road so I can get myself on or slightly over
       | the centerline to leave room for
       | the bike. And that's without considering things like trailers
       | that are a foot or two into the road, passing a tractor with a
       | sprayer (they don't have turn signals, so be careful where you
       | try to pass), etc.
       | 
       | If they've got any of this functionality, I'd love to see it,
       | because I've not seen anything that shows it off.
       | 
       | At this point, I think we can reasonably say that it's easier to
       | land people on the moon than teach a car to drive.
        
         | serverholic wrote:
         | We are going to need a fundamental breakthrough in AI to
         | achieve the level of FSD that people expect. Like a
         | convolutional neural network level breakthrough.
        
           | belter wrote:
           | I suggest some simple Smoke Tests. I love that concept for
           | software testing.
           | 
           | It could be applied here to test if we are getting closer or
           | further from what humans can do.
           | 
           | Some examples:
           | 
           | Smoke Test 1:
           | 
            | Driving in snow
           | 
           | Smoke Test 2:
           | 
            | Driving in rain
           | 
           | Smoke Test 3:
           | 
            | You are driving toward a red light and notice, 50 meters
            | ahead, a pedestrian with headphones on. The pedestrian is
            | distracted, looking at traffic coming from the other way.
            | You can tell from the pedestrian's gait and demeanor that
            | they're probably going to cross anyway and haven't
            | noticed you. So you instinctively, slowly reduce your
            | speed and keep a sharp eye on their next action.
           | 
           | Smoke Test 4:
           | 
           | Keep eye contact with a Dutch cyclist as they look at you
            | across a roundabout. You realize they will cross in front
            | of your car; you've already inferred their intentions.
            | Today the
           | cyclist bad humor will not make them raise their hand or make
           | any other signs to you other than an angry face. You however
           | already know they will push forward...
           | 
           | Smoke Test 5:
           | 
            | A little soccer ball just ran across your field of
            | vision. You hit the brakes hard, as you instinctively
            | expect a child to show up any second, running after it.
           | 
            | Failing any of these scenarios would make you fail the
            | driving license exam, so I guess it's the minimum we
            | should aim for. Call me back when any AI is able to even
            | start tackling ANY of these, much less ALL of them. :-)
        
             | dragontamer wrote:
             | This "FSD" package is still at the "Don't hit this balloon
             | we dressed up to look like a human" stage.
        
             | jtvjan wrote:
             | >the cyclist bad humor
             | 
             | slecht humeur? That's "bad mood".
        
               | belter wrote:
               | You are right. Was on mobile. Cannot edit it
               | anymore...:-)
        
               | ethbr0 wrote:
               | As an English first speaker, I didn't even notice it when
               | reading. It's a somewhat odd phrasing (yet not
               | exceptionally so) but still makes sense.
               | 
               | See second definition: https://www.merriam-
               | webster.com/dictionary/humor
        
           | bob1029 wrote:
           | I think we should first start with a fundamental breakthrough
           | in our willingness as engineers to admit defeat in the face
           | of underestimated complexity.
           | 
           | Once we take proper inventory of where we are at, we may find
           | that we are still so far off the mark that we would be
           | inclined to throw away all of our current designs and start
           | over again from (new) first principles.
           | 
           | The notion that you can iteratively solve the full-self-
           | driving problem on the current generation of cars is
           | potentially one of the bigger scams in tech today (at least
           | as marketed). I think a lot of people are deluding themselves
            | about the nature of this local minimum. It is going to cost us
           | a lot of human capital over the long haul.
           | 
           | Being wrong and having to start over certainly sucks really
           | badly, but it is still better than the direction we are
           | currently headed in.
        
             | enahs-sf wrote:
             | I think we're beginning to see the true endgame to Tesla's
             | strategy. Elon initially said first you build the prototype
             | to raise the capital to finance the expensive car which you
             | build to fund the mid-priced car which you scale to build
             | the cheap car. In reality, we're still at step two which is
             | the expensive car and the R&D to build the real thing.
             | 
             | As a model Y owner, I'm quite skeptical about this
             | generation of vehicles being able to truly hit the mark on
             | FSD.
        
             | belter wrote:
             | In other words, and as somebody said:
             | 
             | "Just because we can put a man on the Moon does not mean we
             | can put a man on the Sun" :-)
        
           | threeseed wrote:
           | Cruise has videos showing them driving around for hours,
           | under challenging conditions with no issues:
           | 
           | https://www.youtube.com/channel/UCP1rvCYiruh4SDHyPqcxlJw
        
           | bobsomers wrote:
           | I don't think that's necessarily true. There are plenty of
           | people in the AV space that routinely drive significantly
           | better than this and are already doing completely driverless
           | service in some geofences.
           | 
           | The problem with Tesla's approach has always been that Elon
           | wanted to sell it before he actually knew anything about how
           | to solve it. It's lead to series of compounding errors in
           | Tesla's approach to AVs.
           | 
           | The vehicle's sensor suite is woefully inadequate and its
           | compute (yes, even with the fancy Tesla chip) is woefully
           | underpowered... all because Elon never really pushed to
           | figure out where the boundary of the problem was. They
           | started with lane-keeping on highways where everything is
           | easy and pre-sold a "full self-driving" software update with
           | absolutely no roadmap to get there.
           | 
           | To overcome the poor sensor suite and anemic compute, they've
           | boxed themselves into having to invent AGI to solve the
           | problem. Everyone else in the space has happily chosen to
           | spend more on sensors and compute to make the problem
           | computationally tractable, because those things will easily
           | get cheaper over time.
           | 
           | I'm fairly convinced that if Tesla wants to ship true FSD
           | within the next decade, the necessary sensor and compute
           | retrofit would nearly bankrupt the company. The only way out
           | is likely to slowly move the goal posts within the legalese
           | to make FSD just a fancy Level 2 driver assist and claim
           | that's always what it was meant to be.
        
             | tibbydudeza wrote:
              | No one knows how much compute and how many TFLOPS will
              | be needed before the software is complete, but it
              | wouldn't surprise me if they come up with Tesla Cloud
              | driving powered by Starlink, with a bunch of hamsters
              | in a warehouse somewhere actually driving the car.
              | 
              | I mean, they trained pigeons to guide bombs in WW2.
        
             | potatolicious wrote:
             | Yep, also over-promising that legacy cars with the "FSD-
             | readiness" upgrade would be guaranteed to support FSD was a
             | completely unforced error.
             | 
             | At this point it's pretty obvious _none_ of the original
             | "FSD ready" cars Tesla shipped will be able to handle L4/L5
             | autonomy hardware-wise. Arguably none of the cars currently
             | rolling off the line will be either - others have made the
             | point in the thread: while nobody has really cracked
             | autonomy fully, other players (see: Waymo, Cruise) are
             | clearly _much_ further along with a very different sensor
             | and compute suite than anything Tesla has shipped or
             | currently ships.
        
               | CyanLite2 wrote:
               | I think there's a clear space in North America for a
               | viable L3 system. That is, completely hands-free driving
               | at slow speeds (30-50mph) that allows me to eat a
               | sandwich, respond to a few emails, and text on my phone
               | without having to look at the road or grab the wheel
               | every 15 seconds. Sure, ping me if I'm coming up on a
                | construction zone or if it starts pouring rain. And don't
               | worry about making unprotected lefts and other tricky
               | situations, I can do that. But just keep the car on the
               | road while I'm making the same drive to the office every
               | morning and let me reclaim some of that commute time.
        
             | cogman10 wrote:
             | Definitely agree on the sensor suite.
             | 
             | Even wanting to do things vision only is ok, but the cars
             | simply don't have enough cameras for redundancy. You are a
             | little rain or mud away from having a system that no longer
             | operates.
        
           | rasz wrote:
            | Most problems in those clips didn't require general AI;
            | they were caused by shit vision algorithms. The car
            | didn't spot huge-ass monorail columns ...
        
         | ricardobeat wrote:
         | This is a collection of videos from thousands of hours of
         | driving. I'm sure you can do much worse from human drivers...
        
         | mortenjorck wrote:
         | The monorail video is jaw-dropping.
         | 
         | Nine versions in, I would expect ongoing challenges with things
         | like you mention. But continued failure to even _see_ large,
          | flat obstacles is no longer just a bug that needs to be fixed -
         | that it has persisted this long (even after killing someone as
         | in the case of T-boning a semi trailer at highway speeds) is an
         | indictment of the entire approach Tesla has been taking to FSD.
         | 
         | I used to think FSD was just a matter of getting enough
         | training for the models, but this has changed my mind. When you
         | still require a disengagement not to negotiate some kind of
         | nuanced edge case, but _to avoid driving straight into a
         | concrete pylon_ , it's time to throw it all out and start over.
        
           | [deleted]
        
           | flutas wrote:
           | I think a big issue with that instance (monorail) is probably
           | because they just threw out years of radar data without
            | having comparable reliability in place with vision
            | only.
           | 
           | Completely mental that they are allowed to run this on public
           | roadways.
        
             | TheParkShark wrote:
             | It's a stationary object. What source do you have to say
              | Tesla deleted the data? Seems kinda foolish for a company
             | like them to just throw data away. They even have data on
             | lumbar support usage.
        
             | Syonyk wrote:
             | I'm not sure radar would help - the pillar is a stationary
              | object, and car-based radar tends to drop all returns
              | closing at the same rate the car is moving forward
              | (i.e., stationary objects). Radar isn't
             | terribly precise, so you end up with tons of returns off
             | signs, bridge abutments, etc.
             | 
             | Remember, _with_ radar, Teslas will happily autopilot
             | themselves into a semi truck across the road, stopped fire
             | trucks, concrete road barriers, etc. I see no reason to
             | believe that a stationary concrete pillar in the middle of
             | the road would be avoided any better with radar than
             | without.
        
               | moralestapia wrote:
               | Here's what happened another time someone argued that
               | "radar can't detect stationary objects".
               | 
               | https://news.ycombinator.com/item?id=16722461
               | 
               | Radar does detect stationary objects, come on.
        
               | Syonyk wrote:
               | Yes, radar can easily detect stationary objects.
               | 
               | Tesla's implementation of their radar processing drops
               | all returns from anything stationary, given the forward
               | speed of the car. In the context of a Tesla thread, I
               | didn't think that needed to be specifically stated, but
               | will do so in the future.
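                | 
                | A toy version of that heuristic (invented names
                | and thresholds; obviously not Tesla's actual
                | pipeline):
                | 
                |   from dataclasses import dataclass
                | 
                |   @dataclass
                |   class Return:
                |       closing_mps: float  # + = approaching
                | 
                |   def keep(r, ego_mps, tol=1.0):
                |       # object's speed over the ground
                |       ground = ego_mps - r.closing_mps
                |       return abs(ground) > tol
                | 
                |   # a pillar ahead closes at exactly our own
                |   # speed, so it reads as stationary clutter:
                |   print(keep(Return(25.0), ego_mps=25.0))
                |   # -> False: dropped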
        
               | mcguire wrote:
               | Because, clearly, anything stationary is ground clutter
               | that can be ignored.
        
               | moralestapia wrote:
               | Particularly if you're traveling towards it at 60 mph.
        
           | Animats wrote:
           | _The monorail video is jaw-dropping._
           | 
            | Yes. Pause the video and look at the car's screen. _There's
           | no indication on screen of the columns._ A car on the other
           | side of the row of columns is recognized, but not the
           | columns. It's clear that Tesla has a special-purpose
           | recognizer for "car".
           | 
           | The columns are a solid obstacle almost impinging into the
           | road, one that doesn't look like a car. That's the standard
           | Tesla fail. Most high-speed Tesla autopilot collisions have
           | involved something that sticks out into a lane - fire truck,
           | street sweeper, construction barrier - but didn't look like
           | the rear end of a car.
           | 
           | As I've been saying for years now, the first job is to
           | determine that the road ahead is flat enough to drive on.
           | Then decide where you want to drive. I did the DARPA Grand
           | Challenge 16 years ago, which was off-road, so that was the
           | first problem to solve. Tesla has lane-following and smart
           | cruise control, like other automakers, to which they've added
           | some hacks to create the illusion of self-driving. But they
           | just don't have the "verify road ahead is flat" technology.
           | 
           | Waymo does, and gets into far less trouble.
           | 
           | There's no fundamental reason LIDAR units have to be
           | expensive. There are several approaches to flash/solid state
           | LIDAR which could become cheap. They require custom ICs with
           | unusual processes (InGaAs) that are very expensive when you
           | make 10 of them, and not so bad at quantity 10,000,000. The
           | mechanically scanned LIDAR units are now smaller and less
           | expensive.
        
             | CyanLite2 wrote:
             | LIDAR cost is coming down, but Tesla has almost no margins
             | on cars. So adding a $500 part per car for LIDAR is out of
             | the question. Even the cheap radar parts were removed from
             | production going forward.
             | 
             | However, it's good to see other automakers like Volvo going
             | the LIDAR route on their 2022 electric models and are
             | offering Level3 autonomous driving.
        
               | raisedbyninjas wrote:
               | They currently charge $10k for FSD. Even if they don't
               | drop any of their cameras and processing equipment, $500
               | LIDAR is a 5% increase. Is it just me or are we politely
               | ignoring Musk's reluctance to admit his bold
               | proclamations against LIDAR are the biggest factor in
                | their design decisions?
        
               | TheParkShark wrote:
               | Andrej Karpathy already explained they were getting too
               | much noise from Radar. Adding LIDAR would only
                | complicate it. Unless you have better knowledge than
               | Andrej Karpathy, I'll go with his word on this.
        
               | pjc50 wrote:
               | This seems backwards; it's a premium feature, the sort
               | everyone else is adding to increase margins.
        
               | Invictus0 wrote:
                | Tesla's Full Self Driving package was between $8,000
                | and $10,000 per vehicle. That should cover it.
        
               | CyanLite2 wrote:
               | The hardware suite is the same on every car nowadays.
               | They want FSD to be a downloadable app like a
               | subscription. Which means you would have to put LIDAR on
               | every car even if it didn't convert to FSD.
               | 
               | Also, Elon has gone on Twitter rants several times before
               | against LIDAR. That horse has already left the stable and
               | it ain't coming back.
        
               | nullc wrote:
               | Teslas' margins are several times larger than other
               | automakers ... but the company valuation isn't
               | particularly justified even at the current high margins.
        
               | Method-X wrote:
               | Tesla's margin on the Model 3 is 35%:
               | 
               | https://thenextavenue.com/2020/06/10/teslas-
               | model-3-profit-m...
        
               | CyanLite2 wrote:
               | It's not that high once you strip out the bitcoin gains
               | and regulatory credits. https://archive.is/WAvau
        
               | TheParkShark wrote:
               | Actually BTC gains don't count, only losses do. They will
               | take a minor hit on their earnings coming up but will
                | still make record revenue and profit, so like Method-X
               | claimed - Tesla has good margins.
        
             | BugsJustFindMe wrote:
             | > _As I 've been saying for years now, the first job is to
             | determine that the road ahead is flat enough to drive on.
             | Then decide where you want to drive. I did the DARPA Grand
             | Challenge 16 years ago, which was off-road, so that was the
             | first problem to solve. Tesla has lane-following and smart
             | cruise control, like other automakers, to which they've
             | added some hacks to create the illusion of self-driving.
             | But they just don't have the "verify road ahead is flat"
             | technology._
             | 
             | This matches my perception as well and it continues to blow
             | my mind. Like it just seems like it would require full-on
             | incompetence to use only specific pedestrian/car/sign/lane
             | classifiers rather than splitting the world first into
             | ground/obstacle by reconstructing geometry and then
             | assigning relative velocity, acceleration, and curvature to
             | coherent obstacle segments irrespective of their
             | identification. But the videos always make it look like
             | this is exactly what's happening. And worse it always
             | appears to be happening on an instantaneous frame-by-frame
             | basis with things flickering in and out of existence as if
             | "thing there" and "nothing there" are somehow equally safe
             | guesses.
        
               | fnoof wrote:
               | The flickering is concerning. Kind of suggests their NN
               | hasn't learned basic object persistence yet.
        
           | moralestapia wrote:
           | >The monorail video is jaw-dropping.
           | 
           | Who needs radar right? A few cameras are enough to discern
           | gray concrete structures in the night, oh wait ...
        
             | BugsJustFindMe wrote:
             | They probably would be if they stopped using such shitty
             | cameras. It always blows my mind that the cameras they use
             | appear to be quite bad for the job.
             | 
             | > _in the night_
             | 
             | Purpose-built cameras can see better in the dark than
             | humans can.
        
               | wiz21c wrote:
               | But we have a working brain to interpret those images.
        
               | BugsJustFindMe wrote:
               | Your brain helps you with all kinds of things, but
               | fundamental geometry is not magic. Reconstructing scene
               | geometry from high quality binocular video is an
               | extremely well developed field and has been for decades.
               | The problem seems to be that Tesla overrelies on ML-based
               | instantaneous pixel classification when you don't
               | actually need to know what a large thing moving toward
               | you is in order to not run into it.
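                | 
                | The textbook version is a few lines of OpenCV
                | (a sketch with random stand-in frames and
                | made-up calibration values):
                | 
                |   import cv2
                |   import numpy as np
                | 
                |   # rectified grayscale stereo frames
                |   left = np.random.randint(
                |       0, 255, (480, 640), np.uint8)
                |   right = np.random.randint(
                |       0, 255, (480, 640), np.uint8)
                | 
                |   sgbm = cv2.StereoSGBM_create(
                |       minDisparity=0, numDisparities=128,
                |       blockSize=5)
                |   disp = sgbm.compute(left, right)
                |   disp = disp.astype(np.float32) / 16
                | 
                |   # depth = focal_px * baseline / disparity
                |   f_px, base_m = 800.0, 0.3  # made up
                |   depth = f_px * base_m / np.clip(
                |       disp, 0.1, None)
                | 
                | Anything tall, close, and in the drive path is
                | an obstacle, whether or not a classifier knows
                | what it is.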
        
               | moralestapia wrote:
               | And this is supposed to imply what exactly?
        
               | mcguire wrote:
               | " _Cars driving at night are supposed to have headlights
               | active._ "
               | 
               | And yet...
        
           | AlexandrB wrote:
           | The monorail posts and planters would be trivially handled by
           | LIDAR. Tesla's aversion to time of flight sensors often
            | strikes me as premature given our level of planning/perception
           | technology.
        
             | CyanLite2 wrote:
             | Even simple highway-mapping that GM/Ford does would've
             | caught this. Every few months they send a LIDAR-equipped
             | van to pre-map highways and common roads and send out OTA
             | updates for their Level2 driving systems. Sounds low-tech
             | but GM/Ford cars aren't driving into concrete posts.
        
               | TheParkShark wrote:
                | Those obstacles wouldn't even be on the highway, so what's
               | your point?
        
             | BugsJustFindMe wrote:
             | They should be trivially handled by stereopsis and
             | structure from motion as well. Stereo+time photogrammetry
             | has been solved well enough to not steer directly towards
             | large obstacles for decades. Overreliance on machine
             | learning pixel models to classify everything in view is the
             | real problem.
        
               | marvin wrote:
               | If it's so easy, there must be a reason why Tesla's team
               | hasn't accomplished it. They're not idiots.
        
               | flowerlad wrote:
               | Why not have redundant sensors and crosscheck them?
               | 
               | Boeing 737 Max had only one AOA sensor [1], and that
               | wasn't a great idea.
               | 
               | https://www.cnn.com/2019/04/30/politics/boeing-
               | sensor-737-ma...
        
               | brianwawok wrote:
               | Right, so what happens when one says wall, and one says
               | clear air? You brake? That is what happens now with
               | phantom braking. Approaching a bridge? camera sees air,
               | radar sees wall, slam on brakes.
               | 
                | You want the best sensor type, rather than blending
                | a high-fidelity sensor with a lower-fidelity one.
                | The Tesla system has 8 cameras (3 forward), so they
                | definitely have overlap within what they consider
                | the better sensor.
               | 
               | Time will prove which approach wins.
        
               | bumby wrote:
               | > _That is what happens now with phantom braking._
               | 
               | Uber seemed to have programmed an artificial delay when
               | the system got confused. There's a good breakdown of the
               | timeline showing how the system kept misclassifying the
               | bicyclist who was killed, but I couldn't immediately find
               | it. That breakdown shows their strategy at implementing
               | delays in the decision process. According to the NTSB
               | report[1]:
               | 
               | > _" According to Uber, emergency braking maneuvers are
               | not enabled while the vehicle is under computer control,
               | to reduce the potential for erratic vehicle behavior"_
               | 
               | When I read that in the context of the programmed delays
               | it seems to indicate "we wanted to avoid nuisance braking
               | so we put in a delay when the system was confused." As
               | someone who used to work in safety-critical software, it
               | blows my mind that you would deliberately hobble one of
               | your main risk mitigations because your system gets
               | confused. While TSLA may be focusing on a platform that
               | gets better data with better sensors, they still need to
               | translate it to better decisions.
               | 
               | [1] https://www.ntsb.gov/investigations/AccidentReports/R
               | eports/...
        
               | baud147258 wrote:
               | > "we wanted to avoid nuisance braking so we put in a
               | delay when the system was confused." As someone who used
               | to work in safety-critical software, it blows my mind
               | that you would deliberately hobble one of your main risk
               | mitigations because your system gets confused
               | 
               | Maybe it was put in place to avoid erratic braking in
               | absence of obstacle, in order to avoid getting hit in the
               | rear by other vehicles (whose driver wouldn't see any
               | obstacle and be unprepared for the Tesla car braking).
        
               | bumby wrote:
               | That's a good point and might have been their rationale,
               | but I would argue it wasn't a very good risk mitigation
               | because while they reduced the risk in one area (being
               | rear ended) they increased their risk elsewhere. Worse
               | yet, it increased the risk in an area more prone to
               | higher severity incidents (e.g., hitting pedestrians - I
               | assume - carries a higher severity than being rear-ended)
        
               | ricardobeat wrote:
               | There's a big risk of spine injuries, and a rear end
               | collision might not activate the airbag. Not a simple
               | trade off.
        
               | flowerlad wrote:
               | If you have redundant AOA sensors on a plane and they
               | disagree what do you do? Alert the pilot. You have to do
               | the same on a self-driving car as well. You can't just
               | ignore a serious malfunction, or pretend to not see it
                | just because you don't know how to handle it!
               | 
               | To be truly redundant you have to use different
               | technologies, such as camera and lidar.
        
               | bumby wrote:
               | > _To be truly redundant you have to use different
               | technologies, such as camera and lidar._
               | 
               | This isn't necessarily true. From a reliability
               | engineering perspective it depends on the modes of
               | failure and the probability of each mode. If the
               | probability of an AOA failure is low enough, you can
               | reach your designed risk level by having two identical
               | and redundant components. It all comes down the level of
               | acceptable risk.
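                | 
                | With toy numbers (all invented):
                | 
                |   p_one = 1e-5   # per-hour failure prob
                |   p_two_indep = p_one ** 2        # 1e-10
                |   p_shared = 1e-7  # common-mode: icing,
                |                    # bad install, etc.
                |   p_two_real = p_one ** 2 + p_shared
                |   print(p_two_indep, p_two_real)
                | 
                | Two identical units only buy you the squared
                | term if their failures are independent; a
                | shared mode puts a floor under the combined
                | rate, which is the usual argument for
                | dissimilar sensors.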
        
               | brianwawok wrote:
               | > If you have redundant AOA sensors on a plane and they
               | disagree what do you do? Alert the pilot.
               | 
               | Right, which means it's not a solution for L4/L5
               | autonomy, only for L2. Tesla is trying to reach L4/L5, so
               | just alerting the pilot is not satisfying the design
               | goal.
               | 
               | > To be truly redundant you have to use different
               | technologies, such as camera and lidar.
               | 
               | I think that is an opinion and not a fact. Watch a video
               | such as
               | 
               | https://www.youtube.com/watch?v=eOL_rCK59ZI&t=28286s
               | 
               | from someone working on this problem
        
               | flowerlad wrote:
               | > _Tesla is trying to reach L4 /L5, so just alerting the
               | pilot is not satisfying the design goal._
               | 
               | Neither is ignoring it. If a product can't meet the
               | design goals under certain circumstances should it ignore
               | it, not even look for it, or alert the user that there is
               | a catastrophic failure?
               | 
               | > _I think that is an opinion and not a fact._
               | 
               | I think it is more common sense than anything else.
        
               | BugsJustFindMe wrote:
               | The post I was responding to said lidar, not radar. But
               | if you want to switch to radar, we can talk about that
               | too.
               | 
               | > _camera sees air, radar sees wall, slam on brakes_
               | 
               | Seeing bridges as walls is not a fundamental property of
               | radar. That's an implementation problem. If cars are
               | doing radar poorly, maybe the fix is to start doing it
               | less poorly instead of throwing it away entirely.
        
               | brianwawok wrote:
               | Well, that was specifically about blending two different
               | sensors with different characteristics. For you walking,
               | it would be like blending your eyes with your nose. If
               | your eyes tells you the floor is safe, and your nose
               | smells something bad, do you stop? Anytime you have two
               | different sensors with different characteristics, you
               | want "the best". Your body uses your eyes to see where to
               | walk, and your nose to test if pizza is rotten. Blending
               | multiple sensor types is tricky.
               | 
               | So back to LIDAR.. same difference. Camera and LIDAR have
               | different profiles. I think it's fine to use either, but
               | I think trying to blend the two is a sub-optimal solution
               | to the problem.
               | 
               | Again, this is my guess from what I know. I could be
               | wrong, and the winning technology could use 12 different
               | sensors (vision + radar + lidar + smell + microphones),
               | and blend them all to drive. Cool if someone pulls it
               | off! But if I had to do it myself or place a bet, I would
               | put it on a single sensor type.
        
               | threeseed wrote:
               | > but I think trying to blend the two is a sub-optimal
               | solution to the problem
               | 
               | Research over the last decade has shown that LiDAR/Vision
               | fusion outperforms Vision Only.
               | 
               | Can you explain the science behind your position ?
        
               | baud147258 wrote:
               | > If your eyes tells you the floor is safe, and your nose
               | smells something bad
               | 
               | well, if I'm smelling gas, I know the situation isn't
               | safe... (and thus my nose is giving me an information my
               | eyes might not have detected)
        
               | BugsJustFindMe wrote:
               | Unfortunately your example is inapt because the center of
               | your vision and your peripheral vision may as well be
               | entirely separate systems that don't overlap, and the way
               | brains apply focus doesn't translate to how cameras work.
               | Your scenario is closer to asking about the center of
               | your focus saying the path in front of you is clear and
               | your peripheral vision detecting inbound motion from the
               | side. Peripheral motion detection overrides the clear
               | forward view, but it's because they aren't trying to
               | record the same information.
               | 
               | Here's why:
               | 
               | > _If your eyes tells you the floor is safe, and your
               | nose smells something bad, do you stop?_
               | 
               | Absolutely, yes, if the bad smell smells like dog shit or
               | vomit or something else that I definitely don't want to
               | step in. If I'm walking, I'm very unlikely to be looking
               | directly at my feet and much more likely to be looking
               | ahead to do predictive path planning. I definitely do
               | stop at least transiently in your scenario and then apply
               | extra visual focus to the ground right in front of my
               | feet so that I don't step on dog shit. The center of my
               | vision is great at discerning details, but peripheral
               | vision is terrible for that.
               | 
               | Anyway, the obvious answer to your inquiry based on my
               | explanation here is to use confidence weighting and
               | adaptive focus. If I think something might be happening
               | somewhere, I focus my available resources directly at the
               | problem.
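                | 
                | For cars, a minimal sketch of that idea (assumed names
                | and numbers, nothing to do with Tesla's actual
                | pipeline): weight each sensor's distance estimate by
                | its reported confidence, and treat "no sensor is
                | confident" as its own signal to slow down and look
                | harder.
                | 
                |     # Hypothetical sketch: readings are
                |     # (distance_m, confidence 0..1) pairs,
                |     # one per sensor.
                |     def fuse(readings):
                |         total = sum(c for _, c in readings)
                |         if total == 0:
                |             # No sensor is confident: report
                |             # nothing; the planner should slow
                |             # down and apply "extra focus".
                |             return None
                |         return sum(d * c for d, c in readings) / total
                | 
                |     # Camera unsure (0.3), radar confident (0.9):
                |     # the fused estimate leans toward the radar.
                |     print(fuse([(50.0, 0.3), (12.0, 0.9)]))  # ~21.5 m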
        
               | viraptor wrote:
               | > Approaching a bridge? camera sees air, radar sees wall,
               | slam on brakes.
               | 
               | That's simplifying the situation a bit too much. The
               | camera can give more results than air/not-air.
               | Specifically in this case it could detect a bridge.
               | 
               | Same applies to the radar really - you'll get
               | measurements from multiple heights which would tell you
               | that it may be an inclined street, not a wall.
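                | 
                | As a toy illustration (a sketch with made-up numbers,
                | not a real radar stack): returns that rise gently in
                | height as range increases look like an inclined road,
                | while returns stacked at nearly the same range look
                | like a wall.
                | 
                |     # Classify a cluster of radar returns by the
                |     # slope of height vs. range.
                |     def classify(returns, wall_slope=2.0):
                |         # returns: list of (range_m, height_m)
                |         lo, hi = min(returns), max(returns)
                |         (r0, h0), (r1, h1) = lo, hi
                |         if r1 == r0:
                |             return "wall"  # hits at one range
                |         slope = (h1 - h0) / (r1 - r0)
                |         if abs(slope) > wall_slope:
                |             return "wall"
                |         return "incline"
                | 
                |     print(classify([(40, 0.5), (60, 2.0)]))    # incline
                |     print(classify([(50, 0.5), (50.2, 4.0)]))  # wall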
        
               | brianwawok wrote:
               | I think you are missing parts. Have you watched this
               | video from someone actually working in the field?
               | 
               | https://www.youtube.com/watch?v=eOL_rCK59ZI&t=28286s
        
               | CyanLite2 wrote:
               | This is where pre-mapped roadways help. Just an internal
               | GPS database sent down to your car that says "Hey, at
               | this GPS coordinate, there's a bridge here. Here's how to
               | navigate over it." Everywhere else the cars can use
               | radar+camera. GM (SuperCruise) and Ford (BlueCruise) do
               | this today.
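                | 
                | Conceptually (a hypothetical sketch; neither GM's nor
                | Ford's actual map format is public), it's just a prior
                | database keyed by a coarse GPS tile that the car
                | consults before trusting a raw radar return:
                | 
                |     # Map prior keyed by a ~1.1 km GPS tile
                |     # (degrees rounded to 2 decimal places).
                |     HD_MAP = {
                |         (3961, -10492): "overpass; radar wall OK",
                |     }
                | 
                |     def map_hint(lat, lon):
                |         key = (round(lat * 100),
                |                round(lon * 100))
                |         # None means "off the mapped network".
                |         return HD_MAP.get(key)
                | 
                |     print(map_hint(39.6101, -104.9202))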
        
               | bumby wrote:
               | > _Boeing 737 Max had only one AOA sensor_
               | 
                | Just a small nit-pick, but it makes the case against
                | Boeing worse. The airframe had multiple AOA sensors, but
               | the base software only used one sensor reading. Note the
               | image in [1] shows readings from both a "left" and
               | "right" AOA. From your link:
               | 
               | > _software design for relying on data from a single AOA
               | sensor_
               | 
               | Boeing sold a software upgrade to read both AOA devices.
               | (This still leaves the problem that if the two AOAs
               | disagree there might be cases where you don't know which
               | is bad). The fact that they listed MCAS as 'hazardous'
               | rather than 'catastrophic' means it was allowed to have a
               | single point of failure. It also means they may not have
               | fully understood their own design.[1]
               | 
               | [1] https://www.seattletimes.com/business/boeing-
               | aerospace/black...
        
               | BugsJustFindMe wrote:
               | No great reason not to _plan_ to use them. I mean, lidar
               | is still kinda not great right now, but I'm sure it will
               | be great at some point. But they could already be doing
               | better with just cameras than they're currently doing, so
               | why not fix that?
        
             | jazzyjackson wrote:
             | I feel like the Xbox Kinect from 2010 would be a better
             | vision solution than what they've got here, at least for
             | the 20 feet ahead of you.
        
               | schmorptron wrote:
               | Until you have multiple cars trying to project multiple
               | arrays of infrared dots onto objects 20 meters away in
               | bright sunlight and then getting an accurate reading on
               | which of the array kerfuffle is theirs.
        
               | paulryanrogers wrote:
               | Does radar have this crosstalk problem as well?
        
           | barbazoo wrote:
            | I agree. Not sure if I'd be able to trust it again after an
            | incident like this, at least in similar situations where
            | there are obstacles so close to the road.
        
         | toomuchtodo wrote:
         | > I don't understand how something this broken is allowed to
         | operate on public roads.
         | 
          | I'm also out in a rural area. Running out to pick up lunch a
          | few minutes ago, I saw a young man flip his old pickup truck
          | onto its side in an intersection, having hit the median for
          | some reason.
         | I too don't understand how humans are allowed to operate on
         | public roads. Most of them are terrible at it. About 35k people
         | a year die in motor vehicle incidents [1], and millions more
          | are injured [2]. Total deaths while Tesla Autopilot was
          | active: 7 [3].
         | 
         | I believe the argument is the software will improve to
         | eventually be as good or better than humans, and I have a hard
         | time not believing that, not because the software is good but
         | because we are very bad in aggregate.
         | 
         | [1] https://www.iihs.org/topics/fatality-
         | statistics/detail/state...
         | 
         | [2] https://www.cdc.gov/winnablebattles/report/motor.html
         | 
         | [3] https://www.tesladeaths.com/
        
           | jazzyjackson wrote:
           | Cars aren't safe and robots don't fix it.
        
             | [deleted]
        
           | alkonaut wrote:
            | We accept shit drivers. We don't accept companies selling
            | technology that calls itself "Full Self Driving" (with a
            | beta disclaimer or not) and hits concrete pillars. This
            | isn't hard. It's not a mathematical tradeoff of "but what
            | if it's shit, but on average it's better (causes fewer
            | accidents) than humans?". I don't care. I accept the current
            | level of human driving skill. People drive tired or poorly
            | _at their own risk_, and that's what makes ME accept
            | venturing into traffic with them. They have the same physical
            | skin in the game as I have.
        
           | emodendroket wrote:
           | > I believe the argument is the software will improve to
           | eventually be as good or better than humans, and I have a
           | hard time not believing that, not because the software is
           | good but because we are very bad in aggregate.
           | 
           | But logically this doesn't really follow, does it? That
           | because humans are not capable of doing something without
           | errors a machine is necessarily capable of doing it better?
           | Your argument would be more compelling if Tesla Autopilot
           | logged anything like the number of miles in the variety of
           | conditions that human drivers do. Since it doesn't, it seems
           | like saying that the climate of Antarctica is more hospitable
            | than that of British Columbia, because fewer people have died
           | this year of weather-related causes in the former than the
           | latter.
        
             | olyjohn wrote:
             | Yeah in the parent post, how many miles did that guy drive
             | before flipping his truck? In all of these videos, we are
             | seeing it disengage multiple times within a couple of
             | miles. Nobody is that bad of a driver that they crash every
             | time they go out and drive.
        
           | w0m wrote:
            | Ding ding ding.
            | 
            | Self-driving cars (Tesla, who is faaaar from it, among
            | others) will kill people. But people are shitty drivers on
            | their own; it has to start somewhere, and Tesla is the first
            | to get anything close to this level into the hands of the
            | general population (kind of; the beta program is still
            | limited in release).
        
             | [deleted]
        
             | matmatmatmat wrote:
             | Sure, maybe, but why should my or my family's life be put
             | at risk until they figure it out?
        
             | emodendroket wrote:
             | This runs into the same problem that led to the regulation
             | of medicine: people can put all kinds of supposed remedies
             | out there which may or may not do anything at all to solve
             | the problem and may have worse consequences than the thing
             | they're meant to cure.
        
           | Syonyk wrote:
           | Could have been equipment failure on the truck - a tie rod
           | end failing or such will create some interesting behaviors.
           | 
           | > _I believe the argument is the software will improve to
           | eventually be as good or better than humans, and I have a
           | hard time not believing that._
           | 
           | I find it easy to believe that software won't manage to deal
           | well with all the weird things that reality can throw at a
           | car, because we suck at software as humans in the general
           | case. It's just that in most environments, those failures
           | aren't a big deal, just retry the API call.
           | 
           | Humans _can_ write very good software. The Space Shuttle
            | engineering group wrote some damned fine software. Their code
            | looks literally nothing like the #YOLO coding that makes up
            | most of Silicon Valley, and deals with a far, far more
            | constrained environment than a typical public road.
           | 
           | Self driving cars are simply the most visible display of the
           | standard SV arrogance - that humans are nothing but a couple
           | crappy cameras and some neural network mush, and, besides, we
           | know _code_ - how hard can it be? That approach to solving
           | reality fails quite regularly.
        
             | toomuchtodo wrote:
             | Is flying a helicopter on Mars arrogance? Launching
             | reusable space launch vehicles that boost back to the
             | landing site arrogance? I don't believe so. These are
             | engineering challenges to be surmounted, just as safe
              | robotic vehicles are a challenge, and it's reasonable (I'd
             | argue) for us as a species to drive towards solutions (no
             | pun intended).
             | 
             | Silicon Valley isn't going to suddenly become less
             | arrogant, but that doesn't mean the problems they attempt
             | to solve don't need solving. Incentives are important to
             | coax innovation in a way that balances life safety with
             | progress, and I concede the incentives in place likely need
             | improvement.
        
               | Arch-TK wrote:
               | >Is flying a helicopter on Mars arrogance?
               | 
               | No, but I am not so sure it's anything more than a stunt
               | with a high chance of failure.
               | 
               | >Launching reusable space launch vehicles that boost back
               | to the landing site arrogance?
               | 
               | Maybe, but it's mostly a PR stunt from my point of view.
               | 
               | Just my 2 cents.
        
               | Syonyk wrote:
               | Aviation and space are a very, _very_ different problem
               | space from surface street navigation, because they rely,
               | almost entirely, on  "nothing else being there."
               | 
               | We've had good automation in aviation for decades. It
               | doesn't handle anything else in the way very well at all,
               | and while there's some traffic conflict avoidance stuff,
               | it's a violent "Oh crap!" style response, nothing you
               | actually want to get anywhere close to triggering.
               | Automated approaches down to minimums require good
               | equipment at the airport as well as on the airframe, and
               | if there's a truck on the runway, well. Shouldn't have
               | been there.
               | 
               | Same thing for landing boosters. There's nothing it
               | really has to look at and understand that it can't easily
               | get from some basic sensor data - speed, attitude,
               | location, etc. It's an interesting challenge, certainly,
               | but it's of a very different form from understanding all
               | the weird stuff that happens in a complex, messy,
               | uncontrolled 3D ground environment.
               | 
               | Self driving on a closed track is a perfectly well solved
               | problem. Self driving in an open world is clearly very
               | far from solved.
        
               | belter wrote:
               | Show me Self driving on a closed track with snow or rain
               | please.
        
               | toomuchtodo wrote:
               | Does open road snow count?
               | 
               | https://youtu.be/fKiWM1KjIm4
               | 
               | https://torc.ai/torc-self-driving-car-dashes-through-
               | snow/
        
               | belter wrote:
                | It's an interesting video, but it's an edited video.
                | 
                | The company website mentions the system was able to
                | negotiate the challenges of the driving conditions,
                | which sounds like a euphemism...
                | 
                | Very few details; no scientific publications I could find
                | on their Asimov system in a quick search. Not sure if
                | this is a technological breakthrough or a fine-tuning of
                | existing processes and methods.
                | 
                | Because if it's a fine-tuning of current algorithms, I'm
                | not sure how long they can push this. They are relying on
                | Lidar (and other sensors), but even as of last year, it
                | seems most teams had already realized Lidar would not be
                | the solution for snow, and were now pushing Ground-
                | Penetrating Radar (sounds expensive...)
               | 
               | "Autonomous Cars Struggle in Snow, but MIT Has a Solution
               | for That"
               | https://www.caranddriver.com/news/a31098296/autonomous-
               | cars-...
               | 
                | Even Tesla has already realized it's not just a sensor
                | problem; solving self-driving needs a higher-level algo
                | that can put the different sensors in the context of
                | situational awareness. Note that in no other way would I
                | consider Tesla an example to follow ;-) And I do not
                | think they are any closer to getting a working system.
                | Some of the statements show at least a second-level
                | understanding of what is required. Sensors are a means to
                | it. It's about situational awareness but also inference.
               | 
               | "LIDAR is a fool's errand... and anyone relying on LIDAR
               | is doomed. -- Elon Musk"
               | 
               | https://youtu.be/HM23sjhtk4Q
        
               | camjohnson26 wrote:
               | Worth remembering that Tesla posted this "self driving"
               | video in 2016. Editing can do amazing things.
               | https://www.tesla.com/autopilot
               | 
                | I'm actually shocked it's on the website today; the first
                | frame says the driver is only there for legal purposes.
        
               | toomuchtodo wrote:
               | I don't disagree with you. I believe we're arguing
               | between "can't be solved" versus "it's going to take a
               | long time to solve." I'm stating I fall in the latter
               | camp, and advocating for stronger regulation and
               | investment in the space.
        
               | visarga wrote:
               | > There's nothing it really has to look at and understand
               | that it can't easily get from some basic sensor data -
               | speed, attitude, location, etc.
               | 
               | Self landing rockets are simple, it's not rocket science,
               | duh! Everyone and my grandma has one.
        
               | throwaway-8c93 wrote:
               | Propulsive landing has been achieved routinely and with
               | perfect precision since the 60s - Apollo's lunar modules,
                | the Surveyor landers, Lunokhod rovers.
               | 
               | The question has never been about the feasibility of
               | landing the booster stages - what has been questioned is
               | whether it's worth doing. The fuel used up during landing
               | is fuel that cannot propel the payload. The landing might
               | fail. The effects of thermal and material fatigue are not
               | well understood. The transportation, refurbishing and QA
               | are unlikely to be cheap anyway.
        
               | Syonyk wrote:
               | In terms of "understanding the environment around you
               | such that you can land a booster stage," it's not a
               | particularly hard problem. The challenges are about
               | designing a booster stage that can handle the flipping
               | and reentry, then figuring out the details of how to
               | stick the landing with several times your empty weight as
               | your minimum thrust.
               | 
               | "Where am I, and what's between me and my destination?"
               | isn't the hard part, as it is with surface driving.
        
             | rcxdude wrote:
              | > Humans can write very good software. The Space Shuttle
              | engineering group wrote some damned fine software. Their
              | code looks literally nothing like the #YOLO coding that
              | makes up most of Silicon Valley, and deals with a far, far
              | more constrained environment than a typical public road.
             | 
              | Safety-critical software like that used in the space
              | shuttle is incredibly expensive for the level of complexity
              | involved (which is not very high, compared to other
              | projects). A self-driving car is probably one of the most
              | complex software projects ever attempted. If you were to
              | apply the same techniques as the shuttle to achieve self-
              | driving, you would literally never finish (not even the
              | tech giants have enough money to do this). So to achieve
              | this, you not only need to solve the very difficult initial
              | problems, you also need to come up with a way of getting
              | extreme reliability in a much more efficient way than
              | anyone has achieved before.
        
               | AshamedCaptain wrote:
                | And yet such a self-driving car is going to kill way more
                | people than the Space Shuttle ever did. Oh, the irony.
        
         | moralestapia wrote:
         | Because Elon doesn't operate under the same jurisdiction as us
         | common people.
         | 
          | Try calling a random diver a 'pedo', or performing the most
          | cynical kind of market manipulation (then laughing in the
          | SEC's face), and your outcome will be _very_ different.
         | 
         | It's Animal Farm all over.
        
           | heavyset_go wrote:
            | > _Try calling a random diver a 'pedo', or performing the
            | most cynical kind of market manipulation (then laughing in
            | the SEC's face), and your outcome will be very different._
           | 
            | Not just any random diver, but a diver who had just helped
            | rescue several children and adults from what was an
            | internationally known emergency.
           | 
           | If any of us had said what Musk said about the diver, we'd be
           | rightfully dragged through the mud.
        
           | dnautics wrote:
           | Actually most of us _can_ do any or all of the following
           | things that Elon did (write a tweet calling a diver a pedo,
           | write a tweet claiming that a stock will go to 420 huh huh,
           | tweet random nonsense about cryptos, criticize the SEC or
           | FAA, etc.) with very little consequence, if any.
        
             | moralestapia wrote:
             | Step 1: Become the CEO/major shareholder of a billion+
             | dollar company.
             | 
             | Step 2: Publicly lie about some purported acquisition with
             | the purpose of manipulating the stock of said company.
             | 
             | We can talk after you've done that, from jail probably.
        
               | dnautics wrote:
               | My point is, I can lie about purported acquisitions of
               | companies now.
        
               | DrBenCarson wrote:
               | As a publicly accepted absolute source on said company?
        
               | moralestapia wrote:
               | Come on, dnautics. This is not hard.
               | 
               | You're neither an insider of said companies, nor are you
               | a public figure with enough influence to _actually_
               | manipulate the market.
        
               | dnautics wrote:
               | I think the problem is that we expect the market to be
               | fair. Because a lot of social systems depend "on the
               | market" in really stupid ways. Maybe that's the real
                | problem. If CEOs wanna lie, then we shouldn't trust
                | them, but is it right for it to be illegal for them to do
                | so? I promise you every CEO has lied (uncharitably) or
                | misrepresented (charitably) at some point about their
                | company. So in the end, it can come down to a matter of
               | which CEO has the most political connections/political
               | favor so as not to get jacked by the state that can
               | arbitrarily choose to reinterpret a misrepresentation as
               | a lie. I'm a person that actively gets politically
               | disfavored (overrepresented minority, and all that) so
               | that sort of shit scares the fuck out of me, since I
               | would like to get a large amount of money to help change
               | society for the better.
        
               | moralestapia wrote:
               | Just so we can appreciate the whole spectrum.
               | 
               | A random dude made a post on reddit about how he planned
               | to invest all of his meager (in comparison) savings into
               | GameStop stock. The post caught on, we all know what
               | happened, and he ended up being called by the SEC,
               | accused of market manipulation, among other things.
               | 
               | https://freespeechproject.georgetown.edu/tracker-
               | entries/sec...
        
               | maverick-iceman wrote:
                | I think the SEC gives insiders and the C-suite some kind
                | of free pass as long as they don't time their 'overtly
                | optimistic' takes on their company with their stock
                | sales or vesting.
                | 
                | The SEC is giving Musk free rein because they know that
                | he can't leave the company, because the two entities are
                | so intertwined. Musk's wealth is effectively just paper
                | wealth.
                | 
                | The SEC doesn't care that investment bankers consider it
                | real enough to give Musk loans against those pledged
                | shares.
                | 
                | They also don't care that the cult of personality he
                | managed to create enables him to constantly not produce
                | results while investors still give him money.
                | 
                | In the end the SEC cares about real cash leaving the
                | company coffers and going into the owners' pockets, not
                | paper wealth swelling.
                | 
                | It's really a M.A.D. game between Musk and the SEC at
                | this point... the SEC is willing to bet that it's not
                | remotely possible that Musk lied through his teeth this
                | whole time just to get to #1 on the Forbes paper-wealth
                | list, only to implode while at the top.
                | 
                | Musk, on the other hand, doesn't strike me as a cold
                | calculator such as Gates or Brin or Zuck; he is much
                | more impulsive, and there are non-zero chances that he
                | did just that.
        
           | optimiz3 wrote:
           | > Elon doesn't operate under the same jurisdiction as us
           | common people.
           | 
           | Elon doesn't get any special treatment. You can do all these
           | things as well if you're willing to expend resources when
           | faced with repercussions. My suspicion though is society will
           | give you more slack if you dramatically increase access to
           | space for your nation state or make credible progress against
           | a global ecological problem humanity is facing.
           | 
           | > It's Animal Farm all over
           | 
            | I don't get the Animal Farm reference, as we're not talking
            | about some sort of proto-communist utopia. Everyone is
           | playing in the same sandbox.
        
             | moralestapia wrote:
             | Disclaimer: I'm all pro-capitalism and I love my money, so,
             | with that said.
             | 
             | >Everyone is playing in the same sandbox.
             | 
             | HAHAHAHAHAHAHAAHA!
        
               | optimiz3 wrote:
               | > HAHAHAHAHAHAHAAHA!
               | 
               | Defeatist thinking. As they say, pessimists get to be
               | right, but optimists get to be rich.
        
               | arrow7000 wrote:
               | How's that working out for you?
        
         | foobarbazetc wrote:
         | The thing is... everyone knows this.
         | 
         | The people writing the code, the people designing the cars, the
         | people allowing the testing, etc etc.
         | 
         | What we're seeing in these videos is basically unusable and
         | will never be cleared to drive by itself on real roads.
         | 
         | It's just the "the last 10% takes 90% of the time" adage but
         | applied to a situation where you can kill the occupant of the
         | car and/or the people outside it. And that last 10% will never
         | be good enough in a general way.
        
         | LeoPanthera wrote:
         | > I don't understand how something this broken is allowed to
         | operate on public roads.
         | 
         | It's important to point out that this software is currently
         | only offered as a private beta to deliberately selected
         | testers. Now, maybe they shouldn't be using it on public roads
         | either, but at least it's not available to the general public.
        
           | atoav wrote:
           | As long as all the people in traffic with this experiment
           | signed this agreement as well, all is good.
        
             | deegles wrote:
             | "By existing in the vicinity of this vehicle, you consent
             | to Tesla's FSD software holding your life in its hands."
        
           | alkonaut wrote:
            | It's being tested with the general public in the oncoming
            | lane, so it's effectively tested "on the public", even if at
            | a limited scale.
        
           | mosselman wrote:
           | Because the self driving stuff in the other Teslas works
           | well?
        
         | AndrewBissell wrote:
         | You would think that just the name "Tesla Full Self Driving
         | Beta 9.0" would be giving people some pause here.
        
           | barbazoo wrote:
           | I thought you were making a joke but it really seems to be
           | called "Full Self Driving Beta 9.0". It's hillarious, do they
           | think by adding the "Beta" it makes it ok to hit stuff unless
           | the driver intervenes instantaneously?. How are they even
           | allowed to call it "FSD" ( _Full_ self driving) if in fact it
           | doesn 't do that at all?
        
             | dyingkneepad wrote:
              | Well, everything is Beta these days! Is Gmail still beta? I
              | wouldn't be surprised to find out the PlayStation 5 is
              | still
             | marked as Beta.
        
       | mritun wrote:
        | Question: Would it be awesome if the FDA allowed pharma to
        | conduct drug tests like this? Put a "beta-2.9" on the vial and
        | let people try it out...
        | 
        | Some may die, be disabled, or linger in a vegetative state
        | lifelong, but it was their choice after all, and it can be
        | argued that medicinal side effects are a very small cause of
        | mortality, and hence long-term the "public beta tests" will
        | make drugs more effective and save more lives!
        
         | colinmhayes wrote:
         | I think the difference is releasing self driving provides the
         | data Tesla needs to improve it. Releasing a half baked drug
         | doesn't help the pharma company improve it.
         | 
          | If you're asking whether that would be awesome if it led to
          | pharmaceutical innovations, I think it would.
        
           | AlexandrB wrote:
           | > Releasing a half baked drug doesn't help the pharma company
           | improve it.
           | 
           | Sure it does. It identifies cases where the drug may have
           | unexpected side effects so either the chemistry, dosage, or
           | expected risk factors can be refined.
        
           | flutas wrote:
           | > I think the difference is releasing self driving provides
           | the data Tesla needs to improve it. Releasing a half baked
           | drug doesn't help the pharma company improve it.
           | 
           | The pharma company could see results (self-driving car data)
           | and figure out what caused the issues (details of the deaths
           | in pharma, accidents in Tesla) and use that to make the next
           | beta version.
           | 
           | I don't really see how that isn't an apt analogy.
           | 
            | It's called clinical trials for pharma, the key point being
            | it's opt-in and doesn't affect anyone around the subject,
            | unlike Tesla's Autopilot beta.
        
         | edude03 wrote:
         | I'm pretty sure this is a logical fallacy. In the case of
          | medications it's actually a fairly common situation where a
          | patient has a terminal illness that there is no treatment
          | available for, but you can get one on "the black market". In
          | that case it's either die for sure or maybe not die, and so
          | having a beta makes sense - it ensures you're at least
          | getting the thing you think you're getting.
         | 
         | To me autonomous vehicles are similar - the people who have
         | access to them know they're not perfect, but they're willing to
         | spend the money because they think it's better than the
         | alternative
        
         | CreepGin wrote:
         | Depends on how much the drug costs. If it costs 100k a pop,
         | then I don't see the general public being affected by it too
         | much. RIP those brave rich souls.
        
       | okareaman wrote:
        | I had a girlfriend whom I didn't trust to drive my car, especially
       | with me in it. That's how I feel about Elon Musk driving my car.
        
       | sharkmerry wrote:
        | Can someone explain what is happening in the first video? Are
        | the planters on the left after the turn, and was it trying too
        | tight of a turn?
        
       | foobarbazetc wrote:
       | This thing is like 10 years away from being actually usable, if
       | it ever gets there.
        
       | [deleted]
        
       | bsagdiyev wrote:
       | Why are there so many comments from seemingly different posters
       | all saying the same thing on this, "I don't understand why humans
       | are allowed to drive cars"? It feels kinda... culty? Or too
       | similar to be a coincidence. It honestly probably is but these
       | Tesla posts always bring out those types and it confuses me.
        | Humans kill humans; do we want machines to start doing it on the
       | road automatically now too?
        
         | 48snickers wrote:
          | Part of the disconnect here is that the oft-repeated claim of
         | how many miles have been safely driven by FSD versus humans is
         | a bullshit number. Nearly every mile driven by FSD was driven
         | by FSD _AND a human_ that had to take the wheel when FSD
         | failed.
        
         | mikestew wrote:
         | Witness some brands of motorcycle that are overpriced for such
         | dated technology, yet have a line of people telling you how
         | great they are. Hell, just read the replies in that Twitter
         | thread. Oh, you thought it would be all "how are these allowed
         | on the road?", did you? No, the narrative-supporting is strong
         | in this one. When one spends that kind of money, some have a
         | hard time admitting that their purchase wasn't all it was
         | advertised to be.
        
         | dekhn wrote:
         | I think the idea is, if humans were held to the standards
         | machines are, we wouldn't let them drive.
         | 
         | If you offered me a car that drove itself, and statistically,
         | it killed people at the same rate as humans, but let me not
          | have to drive, I'd take that. Not everybody agrees.
        
           | w0m wrote:
           | Self determination > logic.
           | 
            | Look at the American gun debate as an example: carrying for
            | self-defense makes you drastically less safe by most metrics.
            | But people prefer to have the modicum of self-determination
           | over more consistent statistical safety.
        
           | llbeansandrice wrote:
           | > if humans were held to the standards machines are, we
           | wouldn't let them drive.
           | 
           | That's not how it even works though. When people crash or
           | drive drunk there is a system to hand out consequences. What
           | do you do when a Tesla drives into a monorail pole and causes
           | millions in damage if the structural integrity is
           | compromised?
           | 
           | >statistically, it killed people at the same rate as humans
           | 
           | You can't tell me a car that's at best an average not-
           | murderer is a good sell.
           | 
           | There are also so many ways to self-determine your risk while
           | driving or traveling in general. A clear example is seat
           | belts. Less than 10% of people in the US don't wear seat
            | belts, but a full 47% of the people who died in car accidents
           | were not wearing one. [1]
           | 
           | 1 - https://www.nhtsa.gov/risky-driving/seat-belts
        
           | notahacker wrote:
           | > I think the idea is, if humans were held to the standards
           | machines are, we wouldn't let them drive.
           | 
           | But the reverse is true. Humans receive driving bans and even
           | criminal penalties for the sort of driving errors autonomous
           | systems make without penalty.
        
             | rcMgD2BwE72F wrote:
             | The reverse is not true either. Tesla drivers will be fined
              | and/or banned, even if FSD is in charge. When there's no
              | driver, then Tesla will be fined or banned. No
              | difference here, since the driver is in charge.
        
         | sidibe wrote:
         | These people seem to be under the illusion that FSD is getting
         | near to being only as flawed as humans who can drive hundreds
          | of thousands of miles without incident. By contrast, judging
          | from the couple dozen FSD Beta drivers who upload to YouTube,
          | FSD Beta has a near-miss every couple of minutes.
        
         | darknavi wrote:
         | Tesla fan here: The Tesla echo chamber is very real in online
         | communities (here, reddit, etc.).
         | 
         | I personally enjoy playing with the progress of autopilot over
         | the years and I'd be sad to see it more restricted.
         | 
          | I understand that it can be unsafe if left unsupervised, but
          | in reality I've never met someone who uses it like that.
        
       | zyang wrote:
       | It appears Tesla FSD just ignores things it doesn't understand,
       | which is really dangerous in a production vehicle.
        
         | hytdstd wrote:
         | Yes, and it's quite disturbing. I just went on a road trip with
         | a friend, and despite passing a few bicyclists, the car (with
         | radar sensors) did not detect any of them.
        
         | heavyset_go wrote:
         | This is what the automated Uber vehicle[1] that struck and
          | killed a pedestrian did, as well. Despite picking her up via
          | its sensors and ML model, it was programmed to ignore such
          | detections.
         | 
         | [1] https://www.nytimes.com/2018/03/19/technology/uber-
         | driverles...
        
           | jazzyjackson wrote:
           | Plus the Volvo's radar-brake was disabled so they could test
           | the vision system.
        
         | mrRandomGuy wrote:
            | Why are you getting down-voted? There are literal videos
         | depicting what you state. Is the Musk Fanboy Brigade behind
         | this?
        
           | darknavi wrote:
           | You guys can downvote?!
        
             | reallydontask wrote:
             | I think you need 500 karma before you can downvote
        
             | jacobkranz wrote:
             | You can after you get a certain amount of upvotes (I can't
             | remember the exact number though. 30? 50?)
        
         | LeoPanthera wrote:
          | The vehicle is production but this particular software is not;
         | it's a private beta and not available to the general public.
        
           | jazzyjackson wrote:
           | The general public does have the honor of being part of the
           | beta test, in that they play the role of obstacles.
        
           | barbazoo wrote:
            | I couldn't care less what version the software is once the
            | vehicle drives around in the real world. Imagine driving
           | with "FSD" on and hitting someone because of an issue in your
           | "Beta" software.
        
       | yssrn wrote:
       | Scary set of videos. Is Tesla using Marylanders to train FSD?
        
       | DrBenCarson wrote:
        | Their current sensor and camera lineup has made this impossible
        | on models already on the road. Good luck to their customers.
        
       | simion314 wrote:
        | Do we know what the process is and who decides that an update is
        | ready? This is a big decision, so I am wondering what process or
        | person is needed to decide "all is safe, let's update things."
        
       | j7ake wrote:
        | These spectacular fails weaken the rationalist argument that as
        | long as FSD achieves lower deaths per km driven (or any other
        | metric) than humans, FSD should be accepted in favor of
       | human driving.
       | 
       | Even if "on aggregate" FSD performs safer (by some metric) than
       | humans, as long as FSD continues to fail in a way that would have
       | been easily preventable had a human been at the wheel, FSD will
       | not be accepted into society.
        
         | manmal wrote:
         | I think this is already true though for highway driving.
         | Highways are long and tiring, and the surroundings model is
         | easy to get right, so computers have an advantage. Most
         | manufacturers offer a usable cruise control which is safe and
         | can probably be active 90% of the time spent on highways. I
         | often switch it on in my 3yo Hyundai as an extra safety measure
         | in case the car in front of me unexpectedly brakes while I'm
         | not looking there. Add to that a lane keeping assistant and
         | lane change assistant, and you don't need to do much.
         | 
         | Except for when the radar doesn't see an obstacle in front of
          | you, e.g. because the car in front of you just changed lanes.
         | That needs to be looked out for.
        
           | [deleted]
        
         | cptskippy wrote:
         | > spectacular
         | 
          | There was nothing spectacular about those failures, I would
          | say, because the driver was attentive and caught/corrected the
          | car.
         | That's not to say some of these fails could not have ended in
         | catastrophe, but to call them spectacular is quite the
         | exaggeration.
         | 
         | One of those "spectacular fails" was displaying two stop signs
         | in the UI on top of each other while properly treating it as
         | one stop.
         | 
         | Using hyperbole like this only makes people ignore or dismiss
         | your otherwise valid point.
        
         | cogman10 wrote:
         | Yeah, I've brought this point up in other locations.
         | 
         | It does not matter that any autonomous driving tech is safer
         | than human drivers. They MUST be perfect for the general public
         | to accept them. The only accidents they'd be allowed to get
         | into are ones that are beyond their control.
         | 
         | Algorithmic accidents, no matter how rare they are, won't be
         | tolerated by the general public. Nobody will accept a self
         | driving car running over a cat or rear ending a bus even if
         | regular humans do that all day long.
         | 
         | The expectation for self driving cars is a perfectly attentive
            | driver making correct decisions. Because that's what you
         | theoretically have. The computer's mind doesn't "wander" and it
         | can't be distracted. There's no excuse for it to drive worse
         | than the best human driver.
        
           | j7ake wrote:
           | Imagine if algorithmic accidents had biases. For example,
            | let's say a car tended to crash into children (maybe they are
            | harder to detect with cameras) more often than adults. This
           | type of algorithmic bias would be unacceptable no matter how
           | safe FSD were on aggregate.
           | 
           | So you're right, the only bar to reach is perfection (which
           | is impossible), because algorithmic errors have biases that
           | will likely deviate from human biases.
        
             | cogman10 wrote:
             | Call me an optimist, but I don't think it's impossible.
             | 
             | That said, there are going to be a lot of dead small
             | animals due to autonomous vehicles. I'd hope that whoever
             | develops the system has some good training data to stop it
             | from hitting children.
             | 
             | The issue will be that it's going to be real hard to make a
             | system that can tell the difference between a plastic bag
             | and a poodle.
        
             | truffdog wrote:
             | > let's say a car tended to crash into children (maybe they
             | are harder to detect with cameras), more often than adults
             | 
             | This is already true today of human drivers because of the
             | tall SUVs that are so popular. Do you think matching biases
             | will be acceptable?
        
               | cogman10 wrote:
               | Would you accept it? Would you be ok if a car without a
               | driver ran over your kid, even if they were playing in
               | the local street?
               | 
               | I'd say, absolutely not. The only way we'd accept that is
               | if the kid darted out before the vehicle could slow down,
                | and even then we'd expect superhuman braking to
               | (hopefully) avoid serious injury.
               | 
                | Also, self driving cars have an advantage in that they
                | can put cameras in places typical drivers' eyes aren't.
                | They
               | should be able to see a lot more than you can from the
               | driver's seat.
        
               | ggreer wrote:
               | That's not true.
               | 
               | First, the vast majority of pedestrian deaths are adults.
                | In 2018, a total of 206 people age 15 or younger were
                | killed by cars. Compare that to 5,965 killed who were age
                | 16 or older.[1] Both in absolute numbers and relative to
               | population, children are far less likely to be run over
               | and killed than adults.
               | 
               | Second, while light trucks (vans, SUVs, & pickups) are
               | 1.45x more deadly to pedestrians than cars, buses are far
               | more dangerous than either. Motorcycles (which have
               | excellent visibility) are particularly deadly to child
               | pedestrians. From _United States pedestrian fatality
               | rates by vehicle type_ [2]:
               | 
               | > Compared with cars, the RR of killing a pedestrian per
               | vehicle mile was 7.97 (95% CI 6.33 to 10.04) for buses;
               | 1.93 (95% CI 1.30 to 2.86) for motorcycles; 1.45 (95% CI
               | 1.37 to 1.55) for light trucks, and 0.96 (95% CI 0.79 to
               | 1.18) for heavy trucks. Compared with cars, buses were
               | 11.85 times (95% CI 6.07 to 23.12) and motorcycles were
               | 3.77 times (95% CI 1.40 to 10.20) more likely per mile to
               | kill children 0-14 years old. Buses were 16.70 times (95%
               | CI 7.30 to 38.19) more likely to kill adults age 85 or
               | older than were cars. The risk of killing a pedestrian
               | per vehicle mile traveled in an urban area was 1.57 times
               | (95% CI 1.47 to 1.67) the risk in a rural area.
               | 
               | All else equal, being hit by a larger vehicle does
               | increase the risk of severe injury or death, but all else
               | isn't equal. Larger vehicles tend to be more visible,
               | louder, and slower than their smaller counterparts.
               | Different types of vehicles are driven in different
               | environments with different propensities for mingling
               | with pedestrians. If vehicle mass and blind spots were
               | the main factors in pedestrian deaths, we should have
               | seen deaths skyrocket over the past 40 years (as cars got
               | bigger and bulkier for greater passenger safety). Instead
               | we saw pedestrian deaths decrease.
               | 
               | 1. https://docs.google.com/spreadsheets/d/e/2PACX-1vRqGqo
               | dKkWkS...
               | 
               | 2. https://injuryprevention.bmj.com/content/11/4/232
        
           | matz1 wrote:
           | >They MUST be perfect for the general public to accept them
           | 
            | No they don't; it's far from perfect right now, yet it's
            | available and you can use it right now, provided you have
            | the money to buy it.
        
             | cogman10 wrote:
             | Incorrect. What you can buy now is driver assist. I'm
             | talking about actual "no driver at the wheel" driving.
             | 
                | AFAIK, the closest is Waymo, but they are geo-fenced.
             | There's also some fixed route low speed buses out there.
             | However, there's no self driving you can purchase which
             | allows you to, for example, nap while the car is driving.
        
               | matz1 wrote:
                | To get to the so-called "no driver at the wheel" driving,
                | you have to go through what you called the "driver
                | assist" stage; it's a continuous improvement.
                | 
                | So are you saying people accept it in the far-from-
                | perfect driver-assist stage now, but won't accept it
                | when it becomes the much-improved "no driver at the
                | wheel" stage?
        
               | cogman10 wrote:
               | Correct.
               | 
               | I'm saying that the "no driver at the wheel" stage must
               | be flawless. I've seen some claims that it's "Good enough
               | if it's x times better than a human driver" or "can drive
               | n number of miles without an accident". However, both of
               | those are not the metrics to measure.
               | 
               | A "no driver at the wheel" package is good enough when it
               | doesn't run over the neighbor's cat or Timmy on the
               | street. Humans today make that mistake all the time, but
               | that's not a mistake a driverless car can make and get
               | away with. Consider, for example, Uber's killed
               | pedestrian. Sure, it was a tough scenario and the driver
               | wasn't paying attention. But the fact of the matter is
               | everyone had the expectation that the Uber car would not
               | hit that pedestrian even if they were really hard to see.
               | 
               | Until that happens, nobody will accept "no driver at the
               | wheel" cars... except maybe in tightly controlled
               | routes/geofences. Otherwise, SDC will require a driver to
                | be attentive at the wheel at all times. That is, Tesla's
                | FSD is a very long way away from being able to hit that
                | "Tesla taxi service" that Elon has pitched. I doubt it
               | can make it with the current sensor set (not enough
               | redundancy).
        
         | the8472 wrote:
         | I think you misunderstand the argument. It is that if,
         | hypothetically, FSD really did save human lives on average then
         | it _should_ be accepted as the default mode of driving. It
         | would be a net win in human lives after all. But the  "should"
         | can also acknowledge that people irrationally won't accept this
         | net life-saving technology because it will redistribute the
         | deaths in ways which they're not accustomed to. So it's as much
         | a statement about utility as a statement about the need to
         | convince people.
         | 
         | Of course this is all theoretical. If we had solid evidence
         | that it performs better than humans in some scenarios but worse
          | in others, then we could save even more lives by only allowing
          | it to run in those cases where it does better, doing only
          | shadow-mode piloting in the others (or for those who opt into
          | lab-rat mode).
         | Enabling it by default only makes sense if we do know that it
         | performs better on average and we do not know when it does.
        
           | paxys wrote:
           | I don't agree with the former argument either. I'm not going
           | to accept a self driving system unless it increases _my_
           | personal safety. If the system doubles my accident rate but
           | cuts that of drunks by a factor of 10 (thus improving the
            | national average), it isn't irrational to not want it for
           | myself.
        
             | the8472 wrote:
             | > The car has to be better at driving than me
             | 
             | But you can also be a pedestrian or passenger. Do you not
             | want everyone else to be less likely to kill you?
             | 
             | Also, should you really trust your own estimate of your
             | driving safety?
             | 
             |  _> McCormick, Walkey and Green (1986) found similar
             | results in their study, asking 178 participants to evaluate
             | their position on eight different dimensions of driving
             | skills (examples include the  "dangerous-safe" dimension
             | and the "considerate-inconsiderate" dimension). Only a
             | small minority rated themselves as below the median, and
             | when all eight dimensions were considered together it was
             | found that almost 80% of participants had evaluated
             | themselves as being an above-average driver.[30] _
        
       | dreyfan wrote:
       | Is "The car doesn't recognize a truck crossing in front of it so
       | it drives under and decapitates the driver" still an open issue?
       | 
       | What about the multiple instances where FSD drives the car
       | forcibly into a firetruck?
        
       | moojah wrote:
        | This really isn't beta-grade software, as it isn't feature-
        | complete, as the failure scenarios in the video clearly show.
        | I'd call it alpha-grade, and it has been that for a while.
       | 
        | It's not 2 weeks or whatever unrealistic timeline away from being
        | done, as Elon has claimed forever. Perhaps 2 years if we're
        | lucky, but given human and driving complexity, probably way more
        | before even the whole of the USA is reliably supported beyond L2.
        
         | w0m wrote:
          | > This really isn't beta-grade software, as it isn't feature-
          | complete, as the failure scenarios in the video clearly show.
         | 
         | I think it depends what they actually are trying to accomplish.
          | This is a beta for a glorified cruise control overhaul, not a
          | beta for the promised RoboTaxi.
         | 
          | Musk/Tesla tend to talk about RoboTaxi, then slip seamlessly
          | into/out of 'but today we have low-engagement cruise control!'.
         | 
         | Fair bit of hucksterism.
        
           | barbazoo wrote:
           | > I think it depends what they actually are trying to
           | accomplish
           | 
           | Good point. "Full Self Driving" in my mind paints a picture
           | beyond "a better cruise control". But maybe they meant that
           | and just named it wrong.
        
             | dragontamer wrote:
             | From Tesla's webpage:
             | 
             | > Full Self-Driving Capability
             | 
             | >
             | 
             | > All new Tesla cars have the hardware needed in the future
             | for full self-driving in almost all circumstances. The
             | system is designed to be able to conduct short and long
             | distance trips with no action required by the person in the
             | driver's seat.
        
         | H8crilA wrote:
         | Like Donald Trump, but for nerds:
         | 
         | http://elonmusk.today/
         | 
         | FSD would be equivalent to the Mexican border wall, I guess?
        
           | lostmsu wrote:
           | And the tax hike on cap gains.
        
       | manmal wrote:
       | Is FSD still operating on a frame-by-frame basis? I remember it
       | was discussed on Autonomy day that the ideal implementation would
       | operate on video feeds, and not just the last frame, to improve
       | accuracy.
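       | 
       | (For illustration, the difference is roughly the sketch below:
       | a minimal Python sketch, not Tesla's actual architecture, where
       | the two detect functions are hypothetical stand-ins.)
       | 
       |     from collections import deque
       | 
       |     WINDOW = 8  # hypothetical clip length, in frames
       | 
       |     def per_frame_pipeline(frames, detect_single_frame):
       |         # Each prediction sees one frame in isolation; nothing
       |         # links t to t-1, so an object can "exist" at t and
       |         # vanish at t+1.
       |         return [detect_single_frame(f) for f in frames]
       | 
       |     def temporal_pipeline(frames, detect_clip):
       |         # Each prediction sees the last WINDOW frames, so the
       |         # model can use motion cues (velocity, occlusion)
       |         # directly instead of re-deriving the scene every frame.
       |         window = deque(maxlen=WINDOW)
       |         outputs = []
       |         for f in frames:
       |             window.append(f)
       |             outputs.append(detect_clip(list(window)))
       |         return outputs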
       | 
       | When you look at the dashboard visualizations of the cars'
       | surroundings, the model that is built up looks quirky and
       | inconsistent. Other cars flicker into view for one frame and
       | disappear again; lane markings come and go. I saw a video where a
       | car in front of the Tesla indicated, and the traffic light in the
       | visualization (wrongly) started switching from red to green and
       | back, in sync with the indicator blinking.
       | 
       | How could a car behave correctly while its model of its
       | surroundings is this flawed? As long as the dashboard viz is so
       | far from mirroring what's actually outside, this simply can't
       | work.
        
         | cptskippy wrote:
         | I would think the flickering objects in the UI are a result of
         | objects hovering around the model's confidence threshold.
         | But... I have a Model 3, and the flickering happens even when
         | the car is stationary and nothing around it is moving.
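         | 
         | (If it is a threshold effect, the usual fix is hysteresis:
         | two cutoffs instead of one. A minimal sketch with made-up
         | thresholds, not anything from Tesla's code:)
         | 
         |     SHOW_AT = 0.6  # hypothetical: score needed to start drawing
         |     HIDE_AT = 0.4  # hypothetical: score below which it vanishes
         | 
         |     def update_visibility(visible, confidence):
         |         # With a single threshold, a score oscillating around
         |         # it makes the object flicker; two thresholds create a
         |         # middle band where the previous state sticks.
         |         if not visible and confidence >= SHOW_AT:
         |             return True
         |         if visible and confidence < HIDE_AT:
         |             return False
         |         return visible  # between HIDE_AT and SHOW_AT: no change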
        
         | EForEndeavour wrote:
         | You've nicely articulated what was bothering me about the
         | jittery dashboard visualizations. Why on earth is everything
         | flickering in and out of existence, and why is the car's own
         | planned trajectory also flickering with discontinuities?? It
         | seems like they aren't modeling the dimension of time, thus
         | throwing away crucial information about speed and needlessly
         | re-fitting the model to static snapshots of dynamic scenes.
         | 
         | It's like the ML system needs its inferences constrained by
         | rules like "objects don't teleport" or "acceleration is never
         | infinite."
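         | 
         | (Such a constraint can be as crude as a sanity check on how
         | far anything may plausibly move between frames. A sketch with
         | invented limits, not anything from Tesla's stack:)
         | 
         |     MAX_SPEED = 90.0  # m/s; invented cap, roughly 320 km/h
         |     DT = 1.0 / 36     # hypothetical camera frame interval, s
         | 
         |     def plausible_step(prev_pos, new_pos):
         |         # Reject a detection that would imply the tracked
         |         # object teleported since the previous frame.
         |         dx = new_pos[0] - prev_pos[0]
         |         dy = new_pos[1] - prev_pos[1]
         |         return (dx * dx + dy * dy) ** 0.5 <= MAX_SPEED * DT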
        
         | joakleaf wrote:
         | There also seem to be general problems with objects
         | disappearing when they are obscured by other objects and then
         | reappearing later, once no longer obscured.
         | 
         | It is ridiculous that the model doesn't keep track of objects
         | and assume they continue at their current velocity while
         | obscured. That seems like a relatively simple thing to add,
         | depending on how they represent objects. You could even
         | determine when objects are obscured by other objects.
         | 
         | In 8.x videos I noticed cars shifting and rotating a lot over
         | fractions of a second, so it seemed like they needed a Kalman
         | filter for objects and roads.
         | 
         | Objects in 9.0 look more stable, but I still see lanes, curbs,
         | and entire intersections shifting noticeably from frame to
         | frame. So if they added time (multiple frames) to the model, it
         | is still not working that well.
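         | 
         | (Coasting plus smoothing is only a few lines. A sketch of an
         | alpha-beta filter, the simplified cousin of a Kalman filter,
         | with made-up gains and 1-D positions for brevity:)
         | 
         |     ALPHA, BETA = 0.85, 0.005  # made-up smoothing gains
         |     DT = 1.0 / 36              # hypothetical frame interval, s
         | 
         |     class Track:
         |         def __init__(self, pos):
         |             self.pos = pos
         |             self.vel = 0.0
         | 
         |         def step(self, measured_pos=None):
         |             # Predict: constant velocity since the last frame.
         |             predicted = self.pos + self.vel * DT
         |             if measured_pos is None:
         |                 # Occluded: keep the track alive by coasting
         |                 # on the prediction.
         |                 self.pos = predicted
         |             else:
         |                 # Visible: nudge prediction toward measurement
         |                 # and update the velocity estimate.
         |                 residual = measured_pos - predicted
         |                 self.pos = predicted + ALPHA * residual
         |                 self.vel += (BETA / DT) * residual
         |             return self.pos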
        
       | creato wrote:
       | Some of the issues shown in these videos make me wonder about
       | Tesla's strategy of using non-professional driver (customer) data
        | to train FSD. Changing lanes at the last second is something
        | that (obnoxious) humans do, and it would be a bad example to
        | learn from. There might be a _lot_ of subtle garbage in Tesla's
        | dataset.
        
         | rcMgD2BwE72F wrote:
          | Tesla can easily filter these cases out automatically. They
          | have triggers to catch the things they want (e.g. sudden lane
          | changes), and campaigns have conditions too (only from prudent
          | drivers, only if the lane change was forced, etc.). FSD does
          | not learn from all drivers all the time.
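          | 
          | (Conceptually, something like the hypothetical filter below
          | sits between the fleet and the training set; every field name
          | and threshold here is invented, not Tesla's:)
          | 
          |     def keep_for_training(clip):
          |         # Hypothetical campaign filter for lane-change clips.
          |         if clip["hard_braking_events"] > 0:
          |             return False  # drop clips with panic braking
          |         if clip["lane_change_duration_s"] < 2.0:
          |             return False  # drop abrupt, last-second changes
          |         if clip["driver_safety_score"] < 0.9:
          |             return False  # sample only from prudent drivers
          |         return True
          | 
          |     # usage: clips = [c for c in uploads if keep_for_training(c)]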
        
       | jacquesm wrote:
       | Some regulator somewhere will take this junk off the road with
       | the stroke of a pen and I'll feel that much safer on the road
       | when it happens. And we'll have Elon Musk to thank for electric
       | cars _and_ the self driving winter.
       | 
        | It's actually pretty simple: have FSD take a regular driving
        | test. If it passes, it's good to go; if it fails, it isn't
        | allowed to control a vehicle.
        
       | jdofaz wrote:
       | I love my Tesla but watching these videos made it an easy
       | decision not to spend $10k on the FSD option.
        
       | gumby wrote:
       | One case that's perhaps not a bug:
       | 
       | > 4th: Tesla doesn't recognize a one-way street and the one-way
       | sign in the street, and it drives towards the wrong way
       | 
       | If the car is in Boston this is OK.
        
       ___________________________________________________________________
       (page generated 2021-07-12 23:01 UTC)