[HN Gopher] Tesla FSD beta crashes into object [video]
___________________________________________________________________
Tesla FSD beta crashes into object [video]
Author : doener
Score : 62 points
Date : 2022-02-06 18:16 UTC (4 hours ago)
(HTM) web link (www.youtube.com)
(TXT) w3m dump (www.youtube.com)
| jmacd wrote:
| I've owned 3 Teslas. On each of them I have chosen not to pre-pay
| $10,000 CAD in order to get access to FSD when it becomes
| available. 2 of those cars I owned and sold before FSD was even
| available.
|
| I would not have had the ability to take that license with me to
| the new car I purchased. I'm surprised this practice hasn't been
| investigated by any state justice or commerce dept (I'm not sure
| which it should be in the US?) as many consumers were sold
| something that was never provided.
| spuz wrote:
| So the FSD feature is tied to the car, rather than your
| personal account with Tesla? In that case, is it possible to
| transfer the deposit for FSD to the new Tesla owner? If so,
| then presumably you could choose to reflect that in the price
| at which you sell the car.
| jmacd wrote:
| Yes. It is probably one of the worst values in the entire
| automotive industry.
|
| They always have this "buy now, the price will go up later!"
| but I had to ask myself: how much can the price go up? It's
| not going to be a $20,000 feature any time soon. I figured I
| was risking a couple thousand dollars that I would need to
| pay to upgrade later.
| yreg wrote:
| If Tesla eventually solves FSD (which is not a given) they
| might stop selling the add-on package altogether and just
| charge a per mile subscription instead.
|
| Then all of these old Teslas with FSD for life will become
| rare, similar to the Teslas with unlimited free
| supercharging.
| root_axis wrote:
| It's actually worse than that. The FSD is tied to your
| account _and_ the car, i.e. if you sell the car to a new
| owner they will have to purchase FSD separately and if you
| buy a new Tesla you cannot transfer FSD from the old car to
| the new one.
| yreg wrote:
| That's not really true. If you sell your Tesla with FSD
| beta directly to me, I'll keep the FSD beta. If you trade
| it in to Tesla, then they might decide to remove the
| package and resell the car without it but I don't see how
| that would be your concern.
| bink wrote:
| Tesla has been removing the features upon resale. So these
| features are really just a license you purchase to use while
| you own the car.
| yumraj wrote:
| > I'm surprised this practice hasn't been investigated by any
| state justice or commerce dept ...
|
| I'm not sure there's a way to regulate consumer stupidity.
| People are conned into losing money every day, not all of those
| cons are crimes.
| AlexandrB wrote:
| All customers are stupid relative to the
| manufacturer/retailer because they're not privy to internal
| company information regarding the product they're buying.
| That's why consumer protection laws exist.
| yumraj wrote:
| I agree.
|
| However, consumers putting down $10,000 for a promised
| feature with no firm deadline and severe restrictions on
| said feature's portability are in a league of their own
| when it comes to stupidity.
|
| Or, perhaps they are more akin to investors...
| bink wrote:
| Many types of fraud only exist because people are "stupid".
| They're still illegal.
| jmacd wrote:
| Consumer stupidity is one of the key elements of many frauds.
| elkos wrote:
| That sounds like a kleptocracy IMHO
| AlexandrB wrote:
| Seems like the license might not even transfer with the car to
| a new owner[1]. Feels like a really expensive Kickstarter.
|
| [1] https://www.findmyelectric.com/blog/does-full-self-
| driving-f...
| saiya-jin wrote:
| Feels like the last 20% of this will take much more than 80% of
| the whole delivery time.
| RupertHandjob wrote:
| 5% will take more time than the 95%. There is zero chance that
| they can make it work with their potato cameras. Tesla will not
| stop selling this scam though, too many dumb people with too
| much money.
| sydthrowaway wrote:
| A century from now, Musk's ability to get people to PAY to
| beta test with their lives will be studied in business schools
| across the world.
| amelius wrote:
| An additional problem is that pedestrians, cyclists, etc. also
| pay for it.
| sydthrowaway wrote:
| And taxpayers?
| PinguTS wrote:
| Sad truth.
| ChuckNorris89 wrote:
| It's already documented:
|
| https://www.youtube.com/watch?v=Dkix4UzEbjU
| TooKool4This wrote:
| This video shows a lot of other "interesting" issues with the
| current FSD implementation. The collision with the pylon is in
| fact the least interesting thing.
|
| - 2:20: Taking a left turn with a trajectory into the opposite
| lane. The youtuber's camera was clearly adjusting exposure from
| underneath the overpass and I wonder if the car's cameras had the
| same issues.
|
| - 2:40: Right on red without stopping. I wonder if this is
| related to the stop sign rolling-stop issues.
|
| - 3:30: Collision with pylon. It seems like the perception system
| didn't pick up the pylon at all (or if it did, it's not rendering
| on the screen).
|
| - 5:10: Some weird funky trajectory and then pulling onto the
| sunken curb/sidewalk on the right turn.
|
| - 6:28: Trying to drive down the railroad tracks.
|
| I gave up after the above.
|
| Despite Tesla's (Elon's) insistence that FSD will be safer
| than human drivers soon enough (1 year), there is very little
| evidence to support that. At the very least, if we are going
| to allow beta testing on public roads, then Tesla should be
| forced to submit their FSD incident data to the CA DMV like
| the rest of the car manufacturers that are testing AVs.
| cameldrv wrote:
| Yes, this is very far from average human level performance.
| Error rates need to go down by maybe 100-1000x to be viable.
| They are years away.
| rvz wrote:
| Crash starts at the 3:26 mark: [0]. Had the driver not
| intervened, the damage would have been much worse. Later in
| the video, the car repeatedly becomes confused trying to drive
| through the pylons - again the driver intervenes.
|
| This is just one of the many reasons why FSD is not only
| putting drivers at risk, but really turning its customers into
| real-life crash test dummies.
|
| A more accurate re-brand of this product would be something
| like 'Fools' Self Driving' rather than 'Full Self Driving',
| given the false advertising and the lack of safety features
| for drivers using the product.
|
| [0] https://youtu.be/sbSDsbDQjSU?t=206
| kmonsen wrote:
| The driver is really alert and intervenes super quickly. I
| don't think we can expect regular drivers to intervene this
| quickly.
| pcurve wrote:
| It's interesting how those green pylons were completely
| invisible to the car's sensors. If you look at the dashboard
| LCD at 4:30, the car is either too slow to register them as
| objects or misses them completely.
|
| At 4:42 it makes a right onto the tram track. And again at 6:24.
|
| Jesus.
| bradfa wrote:
| As a human, I had a hard time seeing the green pylons when
| watching the video. Where I live, pylons are always bright
| orange and easy to see. My brain is not looking for dark-
| colored narrow barriers, so it doesn't surprise me that FSD
| doesn't see them either. I imagine I'd have a hard time
| driving those roads in bad weather or at night, and maybe
| even under the lighting in the video.
| lttlrck wrote:
| It didn't see the orange one later in the video either.
| Volundr wrote:
| > As a human, I had a hard time seeing the green pylons
| when watching the video.
|
| I'm not doubting you, but I find that surprising.
| Admittedly the dark green blends into the background a bit,
| but the retroreflectors catching the sunlight stood out for
| me even in the video, which would have muted the effect
| somewhat.
|
| > I imagine I'd have a hard time driving those roads in bad
| weather or at night and maybe even under the lighting
| during the video.
|
| In bad weather or at night you should have your headlights
| on, and those retroreflectors will make them shine brightly.
| They'll actually probably be easier to spot. In the event
| the weather is bad enough that the retroreflectors can't
| bounce your headlights back at you, it's going to be bad
| enough that you're not seeing the paint either.
| AlexandrB wrote:
| I think there's a huge difference between spotting them in
| a 2d video and spotting them live. Your brain is very good
| at picking out details like motion and parallax in a live
| environment.
| klyrs wrote:
| The bollards are plainly visible in broad daylight, and
| wrapped with reflective tape. If you cannot see them,
| please stay off the road.
| AtlasBarfed wrote:
| Tesla has been full of it on FSD for a while now, at least the
| current beta is rolled out to a much more restricted set of
| people, although they still sell the general one to the general
| public...
|
| But WHY is a (I'm assuming) concrete post that short and
| painted DARK GREEN? Green seems like a horrible choice for a
| post in night driving. That's not particularly well designed
| for human driving either. San Jose? Why isn't this post
| yellow? Yellow seems to be the typical concrete post color.
|
| Surface/local street FSD driving is 10-20 years away and will
| require a lot of convergence: construction standards,
| regulatory testing, common databases of "danger locations",
| etc. It's so much harder. So many "target types", so much
| variation, so many different local standards.
|
| In particular, I don't agree with Tesla's lack of a database,
| route-specific training, or the like. Lidar, I think, is
| actually less of an issue than that omission.
|
| But a good divided highway automation should be doable. That
| has a lot more standardization, and higher amounts of traffic
| would lead to better statistical models and datasets for
| proving out algorithms.
|
| These are the problems in highway driving: traffic jams (can
| be prewarned), stopped cars (a current Tesla issue),
| deer/animals wandering onto the road, and construction (can be
| signaled in a standard way and prewarned). Debris/potholes?
| lttlrck wrote:
| It doesn't see the orange post later in the video either...
| ChuckNorris89 wrote:
| _> Green seems like a horrible choice for a post in night
| driving._
|
| Those pylons have reflective strips on them for the night.
| vanilla_nut wrote:
| > it is really turning its customers into real-life crash
| dummies.
|
| Not just the customers, _anybody else unfortunate enough to
| share the road with them_.
|
| As a frequent pedestrian/bicyclist in my city, Teslas scare the
| shit out of me. Even worse, when Tesla drivers cause accidents,
| the response is typically complete shock and disbelief that FSD
| failed them. And an utter lack of responsibility for the damage
| and/or injury they caused. Because obviously they weren't
| driving, it was the car!
| HarryHirsch wrote:
| Sanhedrin 91b applies:
| https://www.sefaria.org/Sanhedrin.91b?lang=bi
|
| Both Elon Musk and the driver are equally at fault and both
| need to go to jail; they enable each other to make the public
| roadways unsafe.
| epgui wrote:
| Why do you feel like you have to bring Judaism / religion
| into this conversation?
| HarryHirsch wrote:
| The Talmud explains Jewish legal concepts, how they were
| used and applied in practice. It's not an attempt to drag
| religion in, rather to point out that 1500 years ago the
| Jewish Sages recognized that folks who enable each other
| are both guilty and need to be judged together. The
| plausible deniability and diffusion of responsibility that
| you see so often nowadays is pernicious to society.
| Animats wrote:
| Hitting a bollard is inexcusable. Even for a vision-only
| system. They're really lucky that it wasn't a solid bollard.
| New York City is replacing plastic bike lane bollards with
| steel ones because cars and trucks weren't taking the plastic
| ones seriously.[1]
|
| There's another place where it tries to turn onto the light
| rail trackway and then ends up in the bus lane, but that area
| of First Street in San Jose is confusing.
|
| Tesla used to make a lot of PR about how they were training
| their AI by collecting data from vehicles in the field,
| although nobody seems to have tapped in and found out what
| they're collecting. You'd think that if they did that, they'd
| log all the places where there was trouble or where the human
| overrode the system, then transmit cautions to cars getting
| close to trouble spots, so the fleet as a whole never made the
| same mistake twice. Apparently not.
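|
| A minimal sketch of what that could look like (illustrative
| Python; the event format, coordinates, and radius are my own
| invention, not anything Tesla has published):
|
|     import math
|
|     def haversine_m(lat1, lon1, lat2, lon2):
|         """Great-circle distance in meters between two points."""
|         r = 6371000.0
|         p1, p2 = math.radians(lat1), math.radians(lat2)
|         dp = math.radians(lat2 - lat1)
|         dl = math.radians(lon2 - lon1)
|         a = (math.sin(dp / 2) ** 2
|              + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
|         return 2 * r * math.asin(math.sqrt(a))
|
|     # Hypothetical log of places where drivers overrode the system.
|     override_events = [
|         {"lat": 37.3382, "lon": -121.8863, "kind": "bollard strike"},
|         {"lat": 37.3318, "lon": -121.8916, "kind": "rail trackway"},
|     ]
|
|     def cautions_near(lat, lon, radius_m=200):
|         """Cautions to send a car approaching a known trouble spot."""
|         return [e["kind"] for e in override_events
|                 if haversine_m(lat, lon, e["lat"], e["lon"]) <= radius_m]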
|
| [1] https://nyc.streetsblog.org/2021/12/21/incoming-dot-
| commissi...
| AlexandrB wrote:
| Interestingly, at the beginning of the video the route
| planning seems to work overtime to avoid making a specific
| turn. See about 25s in: https://youtu.be/sbSDsbDQjSU?t=25
|
| That's kind of like what you're describing, but dumber: the
| car tries to avoid the trouble spot entirely.
| xeromal wrote:
| Thanks for the timestamp!
| mattlondon wrote:
| Kinda worrying that someone who was paying so much attention to
| what the car was doing that they were actually giving a _running
| commentary_ still wasn't able to react quickly enough to prevent
| an accident.
|
| Lucky it was a plastic bollard and not someone's leg or baby
| carriage.
| bink wrote:
| The driver watches as his car runs a red light and then swerves
| into bollards, but still allows it to come within a couple of
| feet of pedestrians standing among the bollards later on. I
| can't believe people trust their cars not to kill people.
| Volundr wrote:
| In addition to the fact that we shouldn't be testing this stuff
| on public roads, it's baffling to me that anyone wants to use
| this in its current state. To me this seems so much more
| stressful than driving the car myself. I have to maintain
| constant vigilance over what's going on around me, as with
| normal driving, but with the additional burden of second-
| guessing what the car is going to do about it. That seems so
| much worse to me than _just driving the car_.
| ogjunkyard wrote:
| The removal of radar has got to be a hindrance to getting
| autonomous driving working well for Tesla. Cross-comparison
| between multiple forms of environment surveillance (Vision,
| Radar, LIDAR, etc.) has to be in place for autonomous driving to
| work reliably IMO. This much has been stated by multiple non-
| Tesla AI organizations and leaders.
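|
| As a toy illustration of the cross-comparison idea (Python; a
| real stack fuses probabilistically, this is just the shape of
| it):
|
|     def obstacle_ahead(vision: bool, radar: bool, lidar: bool) -> bool:
|         """Trust any two agreeing modalities over a lone dissenter."""
|         return sum([vision, radar, lidar]) >= 2
|
|     # A camera washed out by glare (False) gets outvoted:
|     print(obstacle_ahead(vision=False, radar=True, lidar=True))  # True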
|
| I think Tesla may be using the price of FSD as a way to dissuade
| people from buying it, while enabling it to be a funding
| mechanism for continued work on autonomous driving.
|
| As is evident in this video, FSD often acts like a nervous
| student driver: unable to confidently decide on the path to
| take, running through red lights, driving unpredictably at
| slow speeds, and more. This is probably due to the fact that
| ONLY cameras are being used to inform FSD, and cameras can
| suffer from contrast issues.
|
| I've seen videos in the past where Tesla camera input is in
| black/white/grayscale for processing. It seems like if you
| converted this video to B&W, the pylon and the road would be
| of similar darkness, so I'm not surprised this had issues.
| Tesla Autopilot was suffering from contrast issues as far back
| as 2016, when Autopilot failed to detect a semi crossing a
| highway in Florida, which led to the death of the Tesla
| driver. This was due to the white trailer against a bright sky.
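|
| A rough back-of-envelope illustration of that contrast problem
| (Python; the pixel values are made up, not sampled from the
| video):
|
|     def luma(r, g, b):
|         """Perceptual brightness of an sRGB pixel (ITU-R BT.601)."""
|         return 0.299 * r + 0.587 * g + 0.114 * b
|
|     # Hypothetical sRGB samples for illustration only.
|     pylon_green = (45, 90, 60)    # dark green bollard paint
|     asphalt_grey = (85, 85, 85)   # sunlit road surface
|
|     lp, la = luma(*pylon_green), luma(*asphalt_grey)
|     print(f"pylon={lp:.0f} asphalt={la:.0f} "
|           f"contrast={abs(lp - la) / la:.0%}")
|     # Two visually distinct colors can land within ~15% of each
|     # other in luminance, which is exactly where a grayscale-
|     # heavy pipeline struggles.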
|
| Ultimately, as a consumer and former Tesla owner, I don't feel
| confident in Tesla's ability to get autonomous driving working
| well at a human-capable level, let alone the "10x safer" bar
| they've set for themselves.
| colde wrote:
| It's really strange to me that we allow this sort of beta
| testing on public roads. The car does multiple problematic
| things in this video, with the driver being slow to react in
| order to see what it ends up doing.
|
| This should not be something that is allowed on public roads by
| end-users, but rather on closed tracks by specialists. If they
| want to test it out on public roads, run the analysis and look
| at wherever it diverges from the driver's decision-making
| instead.
| llampx wrote:
| Move fast, break stuff (literally)
| yumraj wrote:
| you forgot, _kill people_
| lamontcg wrote:
| some of you may die, but it is a sacrifice i'm willing to
| make.
| randyrand wrote:
| "Fake Cities" don't have nearly enough complexity.
|
| I agree they could start there, but you'll graduate quickly
| without having learned much.
|
| The main question is: are we willing to put people in harm's
| way today for the benefit of future humans? The answer seems
| pretty obvious to me. Drunk humans are considerably worse than
| this and are not going away anytime soon. If we can solve self
| driving just 1 year earlier, it's equivalent to saving 30,000
| American lives (most of the roughly 38,000 annual US road
| deaths).
|
| Put another way, if you want rules that delay the advancement
| of self-driving cars, you're effectively costing 30,000+
| American lives every year.
| ChuckNorris89 wrote:
| _> If we can solve self driving just 1 year earlier it's
| equivalent to saving 30,000 American lives._
|
| That strawman argument only works if you completely ban all
| human drivers the moment we solve self-driving.
|
| The big question is: when exactly do we consider self-driving
| solved enough that it can replace drivers? All current
| evidence suggests it's very, very far away, if ever.
| salawat wrote:
| Uh... You'll never stop running into problems that require
| a driver to take control. Automation is only as reliable as
| the sum of its parts. Can't wait to see the first set of
| failures that prevents defective units from driving back
| under their own power without a human backup option.
| ChuckNorris89 wrote:
| _> You'll never stop running into problems that require a
| driver to take control._
|
| That's pretty much the last nail in the FSD coffin.
| Jasper_ wrote:
| There's no guarantee that computers will end up being any
| better at driving than humans.
| amelius wrote:
| Yeah, if you look at safety standards in other industries, this
| is really unacceptable. It should be stopped immediately.
|
| Also, why do we allow car manufacturers to test their own
| software? Shouldn't it be done by a third party? And shouldn't
| that third party be the only ones allowed to push updates to
| the car?
| handol wrote:
| You don't have to look at other industries. Other auto-
| manufacturers do their testing on test tracks.
| 0F wrote:
| And their self-driving is miles behind Tesla's.
| ChuckNorris89 wrote:
| I'd rather my car's safety systems be later to market but
| proven safe, than early to market and have me and the
| others around me as unpaid beta testers.
| salawat wrote:
| ...Because they weren't daft enough to commit to employing
| black boxes, with no means of formal proof, in a safety-
| critical operation. Musk's approach is a massive public
| safety no-no. The cost of specifying and proving through
| trial the capabilities of what Musk is aiming for is the
| work of several lifetimes. Musk and Tesla just fucking
| YOLO it, yeeting out OTAs that substantially change the
| behavior of an ill-tuned system whose behavior can't even
| be reliably enumerated, and sinking the operational risk
| in drivers on the road.
|
| Sometimes, conspicuous lack of progress is a good thing.
| It isn't something you necessarily appreciate until you
| suddenly start having to confront the law of large
| numbers in a very real and tangible way. Some incremental
| changes simply are not feasible to take until they are
| complete. Level 3 automation is one of those...
| tscherno wrote:
| https://en.wikipedia.org/wiki/Development_mule
| PinguTS wrote:
| Well said. I agree up to one point: I know that beta software
| is also tested on public roads in the industry, but only by
| trained drivers. On a project I did in the past, I was driven
| in such a car when I got a lift from a colleague to the
| meetings. It was about 15 years ago. From the outside it was
| an old model, but inside the electronics were all new.
| CheezeIt wrote:
| Every day you slow down FSD development with this kind of
| safetyism, a hundred people die in car accidents (in the USA).
| fassssst wrote:
| That presupposes that FSD is some major societal advancement.
| arcticbull wrote:
| If safety is your #1 goal, you should be advocating for
| buses and trains. Those are infinitely safer than cars,
| self-driving or otherwise.
|
| [edit] Transit is 10x safer than private cars. [1]
|
| > The effects extend beyond individual trip choices, too: the
| report notes that transit-oriented communities are five times
| safer than auto-oriented communities. Better public
| transportation contributes to more compact development, which
| in turn reduces auto-miles traveled and produces safer speeds
| in those areas. On a national scale, too, the U.S. could make
| large advances in safety if each American committed to
| replacing as few as three car trips per month with transit
| trips.
|
| [1] https://mobilitylab.org/2016/09/08/transit-10-times-
| safer-dr...
| ajross wrote:
| Buses and trains are safer than cars, but certainly not
| infinitely so. Nonetheless the infrastructure we have isn't
| built for them, and that won't change any time soon. If you
| want a suburban house with a yard, you need a passenger
| car. If you want to pick blackberries at the local farm,
| you need a passenger car. Making passenger cars safer
| through autonomy is clearly a good thing.
|
| By all means, advocate for more transit friendly urban
| centers. I'm with you. Just don't take away autonomy out of
| spite. Better cars are still better, even if they're not
| the solution you want.
| Volundr wrote:
| > Making passenger cars safer through autonomy is clearly
| a good thing.
|
| I'd actually disagree with this stance. Making passenger
| cars safer through autonomy is probably a good thing _if
| we can actually make it safer than human drivers_. I've
| yet to be convinced we are anywhere close to meeting the
| bar on that if. I assume we will eventually, but I'm not
| even sure I'll live to see it.
|
| It also ignores potential knock-on effects. Sure, in
| isolation safer cars are better, but the reality is nothing
| exists in isolation. Could we save more lives if, instead of
| spending the money we are on self-driving cars, we invested
| it into our transit systems?
|
| As an example of knock-on effects, affordable cars feel like
| an easy win, right? They make travel easier for everyone. But
| by and large affordable cars are what has allowed suburbs to
| exist, and there's an argument to be made that urban sprawl
| is far from ideal and that we'd be better off with denser
| communities and public transit.
| ajross wrote:
| > I've yet to be convinced we are anywhere close to
| meeting the bar on that if.
|
| What would convince you? Data from 60k cars isn't
| sufficient?
| Volundr wrote:
| > What would convince you? Data from 60k cars isn't
| sufficient?
|
| It would be if the data showed they were safer than human
| drivers, and was independently obtained. I have yet to
| see any data that suggests this or anything close to
| this.
| arcticbull wrote:
| > Just don't take away autonomy out of spite.
|
| It's not a question of taking away autonomy, and there's
| certainly no spite about it. If you want to get around
| town, you have bikes or e-bikes. If you want to get out
| of urban centers, you can always rent a car at the
| periphery.
|
| I've lived in SF for 10 years with no car and have never
| felt unable to do anything I've wanted at any time.
|
| > If you want to pick blackberries at the local farm, you
| need a passenger car.
|
| Not really, the farm can have a bus with regularly
| scheduled pick-ups or routes like a lot of the Napa
| wineries do.
| orev wrote:
| You seem to be ignoring the last and most crucial point:
|
| > If they want to test it out on public roads, run the
| analysis and look at wherever it diverges from the driver's
| decision-making instead.
|
| It would be trivial to analyze the data after the fact to see
| where the AI model diverges from the human driver's actions,
| decide which one was right, then implement the correct
| action. That wouldn't slow down testing at all, as the only
| difference is who's controlling the vehicle. In any beta
| test, someone still has to analyze the data.
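|
| A minimal sketch of that kind of shadow-mode analysis (Python;
| the field names and tolerances are hypothetical, not Tesla's
| actual telemetry):
|
|     from dataclasses import dataclass
|
|     @dataclass
|     class Controls:
|         steer: float   # steering angle, radians
|         accel: float   # m/s^2, negative = braking
|
|     def diverges(human: Controls, model: Controls,
|                  steer_tol: float = 0.05,
|                  accel_tol: float = 1.0) -> bool:
|         """Flag frames where the model would have acted differently."""
|         return (abs(human.steer - model.steer) > steer_tol
|                 or abs(human.accel - model.accel) > accel_tol)
|
|     def shadow_log(frames):
|         """frames: iterable of (timestamp, human, model) tuples."""
|         return [(t, h, m) for t, h, m in frames if diverges(h, m)]
|
| Engineers then review the logged divergences offline and decide
| which decision was right - no need to hand the model the wheel.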
| cheeko1234 wrote:
| Exactly! I expect to see nothing but negativity here
| regarding this.
|
| Making intelligence out of silicon isn't easy. Let the
| computers learn this way during the transition period, and
| finally we can remove human drivers from the road.
|
| More than 38,000 people die every year in crashes on U.S.
| roadways. And Tesla makes the safest cars:
|
| The Insurance Institute for Highway Safety (IIHS), the main
| independent organization that conducts crash tests on
| vehicles in the US, released the result of its latest tests
| on the Tesla Model Y and confirmed that it achieved the
| highest possible safety rating.
| TomVDB wrote:
| Making a car that has the highest crash safety rating has
| nothing to do with the safety of their FSD solution.
| Volundr wrote:
| >Tesla makes the safest cars:
|
| >The Insurance Institute for Highway Safety (IIHS), the
| main independent organization that conducts crash tests on
| vehicles in the US, released the result of its latest tests
| on the Tesla Model Y and confirmed that it achieved the
| highest possible safety rating.
|
| That doesn't mean that Tesla makes the safest cars. There
| are roughly ~100 cars with that rating, and nothing
| suggests the Model Y is safer than any of the others. It's
| also important to note that the rating isn't based on real-
| world data such as how often drivers actually crash and
| hurt others (e.g., how often FSD fails), but rather on how
| well the occupant is protected in the event of a crash.
| ChuckNorris89 wrote:
| So, for the sake of progress, we should let FSD also kill
| people because people die at the wheel anyway?
| 0des wrote:
| I take it you're the person who answers 'neither' when
| asked whether to send the train left and kill 1 person or
| send it right and kill 100.
| ChuckNorris89 wrote:
| What's the trolley problem have to do with this
| situation?
|
| Are there accidents where death is unavoidable? Yes, they
| happen every single day, but after the investigations and
| trials are over, the parties found responsible pay up for
| those deaths in either money or jail-time, or both.
|
| Does that mean we should allow machines to make deadly
| mistakes, especially when death IS avoidable? Absolutely
| not. We sentence humans for such mistakes. Machines
| (either their operator or their manufacturer) should also
| have the same liability.
|
| Those are two different things which you're trying to
| spin into a strawman.
| 0des wrote:
| Even this response is 'neither' :) I love it. Have a good
| weekend Chuck
|
| /me roundhouse kicks out of the thread
| ajross wrote:
| How about we measure and see if FSD is killing people at
| all first? It's not an unanswerable problem, after all.
| There are 60k+ of these devices on the roads now. If the
| statistics say it's unsafe, pull it and shut the program
| down.
|
| Do they? If they don't, would you admit that it's probably
| a good thing to let it continue?
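|
| A sketch of the measurement I mean (Python; the incident
| counts and mileages below are placeholders, not real Tesla or
| NHTSA numbers):
|
|     import math
|
|     def rate_ratio_significant(events_a, miles_a,
|                                events_b, miles_b,
|                                z_crit=1.96):
|         """Two-sample Poisson rate test, normal approximation."""
|         ra, rb = events_a / miles_a, events_b / miles_b
|         se = math.sqrt(events_a / miles_a ** 2
|                        + events_b / miles_b ** 2)
|         return abs(ra - rb) / se > z_crit
|
|     # e.g. 12 incidents in 5M FSD miles vs 150 in 100M human
|     # miles (made-up figures):
|     print(rate_ratio_significant(12, 5e6, 150, 100e6))  # False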
| ChuckNorris89 wrote:
| _> How about we measure and see if FSD is killing people
| at all first?_
|
| We just saw an FSD car run a red light and nearly hit
| pedestrians; only the driver's intervention prevented it.
| ajross wrote:
| Exactly. Let's measure. Is that rate higher than seen by
| median cars in the environment? I'd argue no, given how
| distressingly common that kind of incident is (certainly
| it's happened to me a bunch of times). But I'm willing to
| see data.
|
| I think where you're going here is toward an assertion
| that "any failure at all is unacceptable". And that seems
| innumerate to me. Cars fail. We're trying to replace a
| system here that is already fatally (literally!) flawed.
| The bar is very low.
| ChuckNorris89 wrote:
| _> I think where you're going here is toward an assertion
| that "any failure at all is unacceptable". We're trying
| to replace a system here that is already fatally
| (literally!) flawed. The bar is very low._
|
| Failure is not the issue when it comes to Tesla FSD,
| accountability is.
|
| For any mistakes human drivers make, they have to pay with
| money, a suspended license, or jail time, depending on the
| severity of their mistake.
|
| You fuck up, you pay the price. That's the contract under
| which human drivers are allowed on the road. Human
| drivers are indeed flawed, but with our law and justice
| systems, we have accountability to keep those who break
| the law in check, while allowing freedom for those who
| respect it. It's one of the pillars of any civilized
| society.
|
| In my country, running a stop sign or a red light means
| you get your license suspended for a while. When a self-
| driving Tesla makes the same mistake, why doesn't Tesla's
| FSD AI have its "license" suspended as well? That's the
| issue.
| CheezeIt wrote:
| Yes, exactly!
| ChuckNorris89 wrote:
| Even if it means that a loved one of yours would get run
| down by one of these, it's ok in the end, because it
| helped improve some billionaire's beta tech-demo?
| yumraj wrote:
| Chuck Norris should know better than to feed the trolls
| :)
| 0des wrote:
| I appreciate this playful response. Have a good rest of
| your weekend.
| 6gvONxR4sf7o wrote:
| FSD isn't a monolith, where speeding up one company gets us
| to the goal faster. We don't even know if it's possible with
| current tech, let alone with just cameras. Slowing down tesla
| might be just making a dead end safer. We don't know, which
| is why we need safety standards.
| kukx wrote:
| It is faster this way, and anyway it would be impossible to
| simulate real-world scenarios in a synthetic environment.
| One of the potential benefits of FSD is that it will save
| lives; hence, should we go slow about it, or take a
| reasonable risk and get it done? There is a risk in going
| slow too.
| ChuckNorris89 wrote:
| _> One of the potential benefits of FSD is that it will save
| lives_
|
| When does it plan to start doing that? Safety features in
| cars have come for decades without trying to kill people
| first. Just ask Volvo.
|
| The general non-Tesla-owning public, including pedestrians
| and cyclists, have not given their consent to be part of
| Elon's public beta test.
| kukx wrote:
| > without trying to kill people
|
| Do you suggest that Tesla is trying to kill people? That
| would be a ridiculous statement.
|
| I bet the risk of getting injured by Tesla's beta version
| of FSD is minuscule compared to the risk of getting into an
| accident caused by a human driver. I am not for banning
| either of them. Even when we get to the point where FSD is
| much safer than drivers, I would be against banning humans.
| ChuckNorris89 wrote:
| _> I bet the risk of getting injured by Tesla's beta
| version of FSD is minuscule compared to the risk of getting
| into an accident caused by a human driver._
|
| You can bet all you want, but human drivers, as flawed as
| they may be, are all fully liable by law for any mistakes
| they make at the wheel and have to pay with money or jail
| time plus losing their license.
|
| Who is liable for the mistakes FSD makes? Who goes to
| jail if it runs down a pedestrian by mistake? Elon? The
| driver? Can the FSD lose its license like human drivers
| can for their mistakes?
|
| You can't compare human drivers to FSD on safety when FSD
| has zero liability in front of the law and all the blame
| automatically goes to the driver.
| falcolas wrote:
| > Who is liable for the mistakes FSD makes? Who goes to
| jail if it runs down a pedestrian by mistake? Elon? The
| driver?
|
| Yup. The driver. Aside from image, there appear to be
| few if any incentives for FSD to improve beyond the "it
| does the right thing 80% of the time" mark.
| alamortsubite wrote:
| Perhaps, though, there should be some minimum bar before we
| allow testing on public roads. The Tesla FSD beta videos I've
| seen thus far are truly alarming. The system is nowhere near
| ready for testing in the real world, where it poses
| significant danger to many innocent bystanders.
| 6gvONxR4sf7o wrote:
| > One of the potential benefits of FSD is that it will save
| lives
|
| But that doesn't mean that _tesla_ will save lives. Maybe
| they do a bunch of this and learn that cameras alone aren't
| sufficient, and then waymo wins. Tesla wouldn't have saved
| any lives, only killed a couple people unnecessarily.
|
| Medicine is the classic example of applying this kind of
| thinking. You could do all sorts of unethical medical testing
| to speed up medical research, saving countless lives down the
| line, but we don't because it doesn't make it right.
|
| With another medical example, you could roll out snake oil
| without testing it thoroughly because it'll save lives _if it
| works._ But maybe snake oil doesn't work, and it'll be some
| other thing that works, and by rushing the snake oil, you
| just made things worse.
| kukx wrote:
| Why do you discard the possibility that Tesla will save
| lives in the long term? You may say it is unlikely, but it
| is not as if Musk has never delivered world-scale
| breakthroughs.
|
| Also, regarding medicine, do you really believe we do not
| do "unethical" medical testing? I guess it depends on your
| ethical standards and how high they are :)
|
| But let's get back to the cost-benefit trade-off. COVID
| vaccine trials were rushed. So it is obviously sometimes
| worth it.
|
| There is a risk in not taking a risk.
| geoduck14 wrote:
| >Medicine is the classic example of applying this kind of
| thinking. You could do all sorts of unethical medical
| testing to speed up medical research, saving countless
| lives down the line, but we don't because it doesn't make
| it right.
|
| Bringing up medicine for your stance might backfire. There
| are practical examples where policies to "go slow and reduce
| risk" have cost lives. For example, testing some medicines on
| pregnant women is bad for the fetus, so there are policies to
| not test on anyone who _might_ be pregnant, and as a result
| there are _few_ studies on women between the ages of 20 and
| 80, and women's health has suffered as a result.
| 6gvONxR4sf7o wrote:
| The point isn't "go slow." The points are "this area of
| ethics is well studied and much more complex than 'wild
| west research saves more lives'" as well as "society has
| rejected this particular form of utilitarianism."
| lamontcg wrote:
| There's a lot more examples from medicine where "move
| fast and break stuff" has cost lives and compromised
| public trust.
| AlexandrB wrote:
| > One of the potential benefits of FSD is that it will save
| lives
|
| Continuing this line of logic, the most advanced FSD would
| save the most lives. Thus the argument could be made that
| Tesla should abandon their research and license Waymo FSD.
| pengaru wrote:
| I find it hard to believe Tesla has remotely exhausted its
| "dry-run" training options, considering how much like a
| drunken 8-year-old its cars behave on FSD.
|
| The FSD AI shouldn't be connected to the real-world controls
| until it very rarely substantially deviates from what the
| human drivers do in their cars while controlling a virtual
| car from the real-world sensor inputs. And in those cases
| where it deviates, the devs should manually dig in and run to
| ground whether it would have hit something/someone/broken the
| law. Not until that list of cases stops growing, particularly
| in dense urban areas full of edge cases, do you even start
| considering linking up the real car.
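|
| Concretely, something like this gate (illustrative Python;
| the week counts and thresholds are invented):
|
|     def ready_for_live_control(weekly_new_cases, quiet_weeks=8):
|         """True once no new unique failure cases have appeared
|         for `quiet_weeks` straight weeks."""
|         recent = weekly_new_cases[-quiet_weeks:]
|         return len(recent) == quiet_weeks and sum(recent) == 0
|
|     print(ready_for_live_control([40, 22, 9, 5, 2, 1, 0, 0]))  # False
|     print(ready_for_live_control([3, 1] + [0] * 8))            # True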
|
| From what I'm seeing they're instead turning Tesla drivers
| into training supervisors with the real-world serving as the
| virtual one, while putting everyone else on/near roads at
| risk.
|
| It's criminal, and I expect consequences. It's already
| illegal to let a child steer from your lap, and that's
| without even handing over the pedals. People operating "FSD
| Beta" on public roads should be at least as liable, where are
| the authorities and enforcement?
| dawnerd wrote:
| Everyone is blaming FSD but this driver is just as responsible.
| Earlier he let the car fly through a red light.
| leokennis wrote:
| So what does the "F" in FSD stand for? Also, what does the "S"
| mean?
| geoduck14 wrote:
| >So what does the "F" in FSD stand for? Also, what does the
| "S" mean?
|
| Fully
|
| And Stupid
| AlexandrB wrote:
| The car does a right turn on a red without stopping. I wouldn't
| call that "flying through a red". It's closer to the "rolling
| stop" behavior. I'm guessing he didn't see any cross traffic in
| that lane and just let it go.
| dawnerd wrote:
| This update was supposed to disable rolling stops.
| akmarinov wrote:
| These guys must now construct additional pylons
| aeturnum wrote:
| Watching FSD driving feels like watching one of my side
| projects mostly work - it's impressive that it works at all,
| it's exciting, and it would be wildly unethical to pass off
| to someone without the access and ability to tweak it.
| [deleted]
| 34679 wrote:
| If it can't see the pylons, I wonder if it would see a toddler?
| kehrin wrote:
| I hope Tesla isn't testing that in production.
| amelius wrote:
| > Full Self Driving Beta 10.10 version 2021.44.30.15
|
| How often do these version numbers increment?
|
| Was he using a version that was tested for more than a couple of
| months?
|
| If not, how can that be justified?
| dogma1138 wrote:
| Honestly, I don't understand why Mobileye isn't getting more
| attention. This is one of their more recent unedited videos,
| https://youtu.be/50NPqEla0CQ, filmed in NYC traffic...
|
| Intel is pushing them through an IPO and I'm pretty sure I'm
| going to put down a substantial investment; out of all the
| players, they seem to be the only ones actually taking a
| serious evolutionary approach to consumer AVs.
|
| Their upcoming product stack shows an impressive strategy:
| they aren't trying to sell L4/5 as something that is just
| around the corner.
|
| They are still perfecting Level 2/3+ as a consumer-targeted
| product (~$1000 OEM price) and focusing on full autonomy only
| for robotaxis, where an OEM price of $10,000-20,000 for the
| hardware won't be an issue.
___________________________________________________________________
(page generated 2022-02-06 23:01 UTC)