[HN Gopher] GM's Cruise alleged to rely on human operators to ac...
___________________________________________________________________
GM's Cruise alleged to rely on human operators to achieve
"autonomous" driving
Author : midnightdiesel
Score : 96 points
Date : 2023-11-04 20:59 UTC (1 day ago)
(HTM) web link (www.nytimes.com)
(TXT) w3m dump (www.nytimes.com)
| mlinhares wrote:
| The title isn't news at all as every single trustworthy
| autonomous driving solution MUST HAVE human operators somewhere
| to take over but the actual article is a good summary of Cruise's
| current situation and I'd guess the competition as well.
| dventimi wrote:
  | Where in the article does it even support the title? I'll
  | reread it, but I didn't see anything about human operators.
| potatolicious wrote:
| > _" Half of Cruise's 400 cars were in San Francisco when the
| driverless operations were stopped. Those vehicles were
| supported by a vast operations staff, with 1.5 workers per
| vehicle. The workers intervened to assist the company's
| vehicles every 2.5 to five miles, according to two people
  | familiar with its operations. In other words, they frequently
| had to do something to remotely control a car after receiving
| a cellular signal that it was having problems."_
|
| Title of the post should be edited though since it's not the
| headline of the piece and this information, while
| interesting, isn't the main thrust of the article.
| Animats wrote:
| That's a terrible disengagement rate. Cruise claimed in
| 2020 "Cruise, for comparison, clocked 831,040 miles with a
| disengagement rate of 0.082 (per 1000 miles)" [1]
| Something's not right here.
|
| [1] https://www.engadget.com/2020-02-27-waymo-
| disengagement-cali...
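  | (For scale, a quick back-of-envelope comparison of the two
  | figures, treating the NYT number as one intervention every
  | 2.5-5 miles; the two stats almost certainly measure different
  | things, as discussed below. A minimal Python sketch:)
  |
  |     nyt_low, nyt_high = 2.5, 5.0  # miles between interventions, per NYT
  |     dmv_rate = 0.082              # reported disengagements per 1,000 miles
  |     for miles in (nyt_low, nyt_high):
  |         per_1000 = 1000 / miles   # interventions per 1,000 miles
  |         print(f"1 per {miles} mi -> {per_1000:.0f}/1,000 mi, "
  |               f"~{per_1000 / dmv_rate:,.0f}x the reported rate")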
| AlotOfReading wrote:
| Companies measure multiple disengagement rates for
| different purposes. The DMV numbers are usually safety
| rate numbers, as in "if a human hadn't intervened there
| may have been an accident or near miss". The specifics
| vary company-to-company, and they'll have a large
| document somewhere laying out exactly what the criteria
| are. The numbers in the article are some other metric,
| though I have no idea what. I'm a bit skeptical that it's
  | the average over their entire ODD (operational design
  | domain), given that it's much higher than my own
  | experiences and most of their vehicles
| were running around the outer city at night, where they
| seemingly did okay.
|
| It could reflect some particular ODD (e.g. downtown at
| rush hour) where the vehicles didn't do nearly as well,
| or something else entirely.
| creer wrote:
| It's buried pretty deep all the way at the end.
| KennyBlanken wrote:
| I thought HN very strictly required the title to be the
| original article's title unless the title was really, really
| bad?
| donsupreme wrote:
| > staff intervened to assist Cruise's vehicles every 2.5 to
| five miles
| dventimi wrote:
| https://archive.ph/2023.11.04-050448/https://www.nytimes.com...
| haltist wrote:
| It's not an allegation. It's the same as using human feedback for
| tuning large language models. There are no autonomous cars
| currently regardless of what is written on the marketing
| brochures. In various "emergency" situations the cars phone home
| and ask a human operator to take over the controls.
| throwaway5959 wrote:
| The latency on that has to be massive.
| jonhohle wrote:
| Why? UAVs are piloted remotely, video games are played
| remotely with sub-100ms latency.
|
  | Getting a remote driver connected might take a while, but
| afterwards it seems like a mostly solved (in practice)
| problem.
| notahacker wrote:
| UAVs don't have to deal with traffic (the thought of
| driving a vehicle with the latency and intermittent
| connectivity of my drone horrifies me) and when someone
| dies in a video game they respawn...
| pests wrote:
| No lag compensation. Extremely server-authoritative.
| throwaway5959 wrote:
  | I wasn't talking about signal latency, I was talking about the
| time it took for an operator to sign on and take control.
| haltist wrote:
| It's good enough for the routes that Cruise uses in the city.
| SkyPuncher wrote:
| It might not actually matter. Since the car can operate
| autonomously already, the operator doesn't necessarily need
| to literally drive the car. They might simply need to hop in
  | to verify actions in unusual situations.
|
| I'm imagining a situation where a car comes across a parked
| truck on a one-way road (common in cities). A human operator
  | comes into the loop to ensure that it's actually safe to switch
| lanes and pass. Check for things like emergency vehicles,
| unusual pedestrians, etc. They don't need to literally take
| the wheel, just confirm that the vehicle can take a specific
| action.
| hartator wrote:
| There is an emergency every 2-5 miles?
| alkonaut wrote:
| That sounds pretty low if it's city driving or poor
| conditions. I know some of the trial cities are basically
| easy mode (wide streets, almost never snows..) but still.
| KennyBlanken wrote:
| First off: not even close. Waymo has a disengagement rate
| of 0.076 per 1,000 miles.
|
| Second: You're shifting the goalposts from the grandparent
| comment's assertion that these interventions are to be
| expected in an "emergency", when the frequency of the
| interventions shows they're clearly not "emergency"
| interventions but part of normal operation.
| l33t7332273 wrote:
| > It's the same as using human feedback for tuning large
| language models
|
| It isn't remotely the same. This would be like if human
  | operators typed some of ChatGPT's answers.
| haltist wrote:
| OK, you must know more than I do about how human feedback is
| used for tuning large language models.
| ra7 wrote:
| This is completely incorrect. Remote operators cannot "take
| over controls" at all and hence cannot help in any "emergency"
| i.e. safety critical situation (e.g. preventing a crash). All
  | they can do is _assist_ the vehicle with things like drawing a
  | path to get around a parked vehicle, instructing it to do a
  | multi-point turn when it's stuck, and so on.
|
| What the article says is that Cruise vehicles need some sort of
| assistance every 2.5 to 5 miles (I highly doubt this number is
| accurate). Not that they're getting into emergency situations
| that frequently.
| haltist wrote:
| Do you work at Cruise?
| ra7 wrote:
| No.
| haltist wrote:
| Then you wouldn't know if I was completely incorrect or
| not.
| ra7 wrote:
| I do because I know self driving companies have talked
| about how remote operations work. It doesn't involve
| taking control of the vehicle.
|
| Here's a Waymo engineer explaining how they can't
| joystick a car:
| https://www.reddit.com/r/SelfDrivingCars/s/2ujFLZoLbo
|
| And here is Zoox's video about their teleoperations:
| https://youtu.be/NKQHuutVx78?si=4PDnG0gQm6lEnp9v
|
| No reason to believe Cruise is doing any different. If
  | you have evidence to the contrary, please share it.
| haltist wrote:
| So you are 100% certain that remote operators can not
| take over the car?
| ra7 wrote:
| Based on what I know, yes. Why would they want to do it
| with the latencies involved? It's not a reliable
| solution, so it's not used in any safety critical path.
| haltist wrote:
| > In addition to allowing emergency crews to access and
| move vehicles, Cruise says that it is also providing its
| own remote "assistance advisors" the ability to
| conditionally route its Chevrolet Bolts. This means that
| if law enforcement directs Cruise to route its vehicles
| away from an emergency scene, those advisors will
| maneuver the cars in a way that satisfies the request.
| The AV provider also says that it has enhanced the
| ability of these remote operators to clear a scene,
| should an issue arise.[1]
|
| 1: https://www.thedrive.com/news/cruises-solution-to-
| robotaxis-...
| ra7 wrote:
| Can you explain how this supports your assertions?
| Because this doesn't say they can take over control of
| the vehicle or prevent an emergency in the first place.
| They clear the cars by plotting a new path.
| haltist wrote:
| You seemed very certain that I was completely incorrect.
| My point is that you should consider that you might not
| have all the details and if you haven't actually worked
| at an AV company then you do not know what capabilities
| are granted to remote operators in emergency and non-
| emergency situations.
| ra7 wrote:
| You are still incorrect and unable to prove anything you
| claimed. The burden of proof is on you when you
| confidently say they can "take over controls".
| haltist wrote:
| I wasn't proving anything. The fact is there are articles
| explaining that remote operators can take over the
| controls in an emergency situation and that's exactly
| what you were denying. In any case, this discussion has
| run its course. You can continue to believe autonomous
| cars can not be remotely controlled and I'll believe what
| I wrote since I'm pretty sure it's correct. Every AV
| company has emergency procedures for remote takeover and
| it makes sense that they would because current ML tools
| and techniques are not good enough for self-driving cars
| and other kinds of autonomous applications.
| ra7 wrote:
  | Hmm, no. No article explains that remote operators can _take
  | over controls_. There's an important distinction between
| taking over and instructing a car what to do. You don't
| seem to get that.
|
| > You can continue to believe autonomous cars can not be
| remotely controlled and I'll believe what I wrote since
| I'm pretty sure it's correct. Every AV company has
| emergency procedures for remote takeover
|
| If your proof is "I believe these companies are lying"
| and nothing else, then this is not a discussion worth
| having.
| pauljurczak wrote:
  | Here is a quote from
  | https://www.reddit.com/r/SelfDrivingCars/comments/a0w3nb/way...:
|
| "Back in the Waymo office, a "remote assist driver" can
| view the feeds of eight of the vehicle's external- and
| internal-facing cameras and a dashboard showing what the
| software is "thinking," such as if it is preparing to
| stop, or the position of other objects around it. The
| remote drivers can monitor multiple vehicles at once. If
| a vehicle gets stuck, the remote assist driver can tell
| the car how to drive around a construction site or some
| other obstacle by using their computer to manually draw a
| trajectory for the car to follow."
| RobotToaster wrote:
  | Relevant XKCD: https://xkcd.com/1897/
| batmansmk wrote:
  | Having to be remotely operated every 2.5 to 5 miles seems to
| defeat most of the economics of self driving cars.
|
  | Back-of-the-napkin math: cars drive at an average of 18 mph in
  | cities, so that's an intervention roughly every 8-17 minutes.
  | Let's assume each takeover lasts 1 minute, and that you need
  | remote drivers not too far away for ping purposes, so at the
  | same hourly rate. To guarantee you can take over all
  | overlapping demands immediately (a birthday-paradox effect),
  | you end up needing like 30 drivers for 100 vehicles? It's not that
| incredible of a tech...
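  | (A minimal Python sketch of that staffing estimate, treating it
  | as an Erlang-C queue. The arrival rate, 1-minute session length,
  | and 5% wait target below are my assumptions, not figures from
  | the article:)
  |
  |     from math import factorial
  |
  |     def erlang_c(servers: int, load: float) -> float:
  |         """Probability an arriving request has to wait (M/M/c queue)."""
  |         if load >= servers:
  |             return 1.0
  |         top = load ** servers / factorial(servers)
  |         top *= servers / (servers - load)
  |         bottom = sum(load ** k / factorial(k) for k in range(servers)) + top
  |         return top / bottom
  |
  |     vehicles = 100
  |     requests_per_min = vehicles / 12.5  # one assist per ~12.5 min per vehicle
  |     load = requests_per_min * 1.0       # offered load in Erlangs (~8)
  |
  |     operators = 1
  |     while erlang_c(operators, load) > 0.05:
  |         operators += 1
  |     print(f"~{operators} operators for {vehicles} vehicles")
  |     # Prints ~14; bursty demand, breaks, and handoffs push the
  |     # real number higher, toward the ~30 figure above.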
| spondylosaurus wrote:
| Yeah, the driver-to-passenger ratio is still way less efficient
| than a train or even a bus.
| pj_mukh wrote:
| Just FYI, Most autonomous car companies have backup drivers.
|
  | It's the disengagement rate that drives the number of operators
  | you need per vehicle and therefore the economics. Theoretically,
| this rate should be improving steadily at all these companies.
|
  | Cruise seems to have a bad disengagement rate _right now_
  | (<5 miles seems really low), but methinks nytimes might be
| partaking in some obfuscation here.
|
| Waymo's should be much better already. Curious by how much
| though.
| cheriot wrote:
| Wages can fall off a cliff within modest distances. To use
| unemployment rate as a proxy for driver pay, Bakersfield, CA
  | 7.5% and San Francisco, CA 3.5%. Go a little farther to Las
  | Vegas at 5.7% and one can avoid California's minimum wage.
| batmansmk wrote:
| The current taxi market is already structured that way:
| drivers in SF aren't from SF. So no competitive advantage
| there, or not significant enough to change the game yet.
| SkyPuncher wrote:
| > you end up needing like 30 drivers for 100 vehicles?
|
| What? That's literally insane compared to the current standard
| of 100 drivers for 100 vehicles. They're literally reducing 70%
| of the labor cost compared to uber/lyft/etc.
|
| It's pretty reasonable to expect that this will improve over
| time as well. This is exactly how you want a startup to roll
| out a new technology.
|
| * Build a pretty good base implementation
|
| * Do things that don't scale for the edge cases
|
| * Reduce the things that don't scale over time
|
| Even if they can only improve this to 10 for 100, that's still
| a massive improvement.
|
| In my area, a small, rural city, this would literally be a game
| changer. Right now, there's a single Uber within 15 minutes -
  | if I'm lucky. Meanwhile, Cruise could drop a handful of cars in
| town, let them idle (at no cost), then pay a driver for a few
| minutes of intervention every now and then.
|
| This also enables intercity transit. Most of that is highway
| miles. Outside of the start and end, those are easy and
| predictable. You could have dozens/hundreds of miles where
| Cruise can compete with the cost of privately owned vehicles.
|
| Lastly, this makes it feasible for Cruise to reposition cars
| between cities without huge costs. Currently, that's basically
| impossible. Any human driven car needs to offer the driver a
| ride in the opposite direction.
| alex_young wrote:
| https://web.archive.org/web/20231104212102/https://www.nytim...
| causality0 wrote:
| I thought being a social media moderator and being constantly
| exposed to violence, racism, and child pornography was bad.
| Having your whole day being a series of "quick, don't let these
| people die!" moments seems like the worst tech job on earth.
| wolverine876 wrote:
| > Two months ago, Kyle Vogt, the chief executive of Cruise,
| choked up as he recounted how a driver had killed a 4-year-old
| girl in a stroller at a San Francisco intersection. "It barely
| made the news," he said, pausing to collect himself. "Sorry. I
| get emotional."
|
| ...
|
| > Cruise's board has hired the law firm Quinn Emanuel to
| investigate the company's response to the incident, including its
| interactions with regulators, law enforcement and the media. /
| The board plans to evaluate the findings and any recommended
| changes. Exponent, a consulting firm that evaluates complex
| software systems, is conducting a separate review of the crash,
| said two people who attended a companywide meeting at Cruise on
| Monday.
|
  | After the first [edit: the first performative charade, about the
  | little girl in a stroller], why should we trust the second isn't
| also a performative charade? What independence or credibility
| does some hired law firm have, that the company itself does not?
| How about using an independent third party?
| wmf wrote:
| Independent third parties don't work for free and if you pay
| them (by your logic) they're no longer independent. The best
| you can probably hope for is a government investigation.
| wolverine876 wrote:
  | There are ways to do it. Non-profits don't always need
  | payment, and their mission isn't profit. For example,
| companies have worked with environmental non-profits on
| internal climate change and other issues.
| raldi wrote:
| Can you explain what you mean by "after the first"?
| simonw wrote:
| Presumably they meant that the first paragraph they quoted
| looked to them like a "performative charade".
| wolverine876 wrote:
| Yes. I'll clarify.
| raldi wrote:
| What would it have looked like if it had been sincere?
| wolverine876 wrote:
| The first thing? He wouldn't have mentioned it at all. He
  | would discuss the benefits and costs, without this
  | now-cliche talking-point framing that they repeat
  | incessantly. See my other comment for some quick
| explanation of talking points.
| ciabattabread wrote:
| Cruise is trying to save itself from getting shut down by GM. I
| guess it would look slightly better for optics if the GM board
| hired them instead of Cruise's board. But it's the same money,
| and it's GM's decision at the end.
| pj_mukh wrote:
| Hmm? I saw it exactly the opposite. A lot of people in the
| autonomous driving industry are driven by exactly what Vogt
| describes (little girl in the stroller etc.). See also Chris
| Urmson of Waymo fame's TED talk, he talks about a similar
| motivation[1].
|
  | It's a double standard everyone conveniently ignores. The woman
  | the Cruise car ran over was actually first hit by a human driver
  | _who is still at large_, not a peep about him. The press kinda
| just accepts this as the "cost of doing business".
|
| The way I see it, Vogt sincerely believes autonomous cars will
  | make things safer from the #2 killer of children under 19
| (outside of guns) by a _wide_ margin [2] and therefore
| accelerated the rollout past what was safe. I see no evidence
| otherwise.
|
| [1]
| https://www.ted.com/talks/chris_urmson_how_a_driverless_car_...
|
| [2] https://www.nejm.org/doi/full/10.1056/nejmc2201761
| sroussey wrote:
  | We have become so desensitized to human deaths due to cars,
  | even though those numbers dwarf deaths from violent acts of
  | terrorism et al, which kill far fewer people each
  | year.
|
  | Many people have to be killed AT ONCE for it to be
  | newsworthy these days.
| oldgradstudent wrote:
| In the US on average, there's a fatality every ~85 million
| miles driven, and that's an average that includes
| motorcyclists without helmets, old unsafe unmaintained cars,
| the worst roads, and adverse weather conditions.
|
| Cruise barely drove a few million miles with new modern cars,
  | good weather, the ability to choose optimal roads and
  | conditions, and yet it already severely injured a pedestrian.
|
  | We can argue about Cruise hitting the pedestrian, but
  | reportedly the major injuries were caused by Cruise deciding,
  | after reaching a complete stop, that it had to clear the road,
  | dragging the screaming pedestrian and ending with the axle
  | over the pedestrian.
| pj_mukh wrote:
| I'm not sure why you're comparing fatality miles vs no-
| fatality-accident-that-cruise-didn't-cause miles (i.e. we
| have no idea how safe Cruise would be if there were no
| human drivers on the road)
|
| That's not even close to a fair comparison. We just have to
| admit that there isn't a fair comparison yet and everyone's
| just got an axe to grind.
| oldgradstudent wrote:
| > I'm not sure why you're comparing fatality miles vs no-
| fatality-accident-that-cruise-didn't-cause miles
|
| Because it's not that everyone ignores road fatalities,
  | it's just that Cruise hasn't driven (in terms of miles
  | and conditions) anywhere near what might result in a
  | fatality with human drivers.
|
  | Even then, in an incident they didn't initiate, they
  | unnecessarily made an existing bad situation far, far
  | worse.
|
| > (i.e. we have no idea how safe Cruise would be if there
| were no human drivers on the road)
|
| Self-driving cars have to exist in a world with human
| drivers, pedestrians, and the rest of reality. No one
| cares how well Cruise does in a sterile environment.
|
| They should not only not cause incidents, they should
| also not make existing incidents far worse because of
| terrible decisions.
| pj_mukh wrote:
| " They should not only not cause incidents, they should
| also not make existing incidents far worse because of
| terrible decisions."
|
| Just FYI, it made this terrible decision because people
  | were mad at Cruise for stopping in the middle of the road
| to decide if it was safe to proceed. They were asked to
| change that behavior and pull over and they did, this
| time just dragging a human along.
|
| So yes let's set these absurdly high standards, while we
| leave children to fend for themselves against human
| drivers that have met non-existent standards on a
| continual basis.
|
| But then let's actually leave the autonomous cars on the
| road to test if they're actually meeting them.
|
| As you agreed, some statistic they figure out in a
| sterile or simulation environment doesn't actually
  | matter. Let's put them back on the road.
| patrick451 wrote:
| This is the problem with self driving cars. A human has
| the awareness to pull over when it's appropriate and also
| is able to recognize they just ran over somebody and it's
| best to stop completely. But AVs seem to just have a dumb
| if/else statement to control this behavior (yes, I know
| it is _actually_ more complex than that, I work in this
| space. But that is how they behave).
|
| Driving is infinitely complex. It's becoming increasingly
  | clear that the current approach to AVs is not up to the
  | challenge.
| pj_mukh wrote:
  | A human's awareness is not constant. It waxes and wanes,
| even more so with cellphones in hand.
|
| The status quo is indefensible so setting up moving
| unknowable goal posts for something to replace them
| doesn't make sense to me.
|
| This particular problem can be easily solved by cameras
  | in the undercarriage to make sure there aren't humans
| shoved in there by other bad drivers. I wouldn't mind
| making that a requirement across the board and moving on
| to the next challenge the unpredictability of human
| drivers throws at a repeatable robotic system.
|
| There is no evidence that there is a magical different
| approach that will work better.
| patrick451 wrote:
  | > A human's awareness is not constant. It waxes and wanes,
| even more so with cellphones in hand.
|
| And even with supposedly* perfectly consistent awareness,
| the automation still failed catastrophically.
|
| > The status quo is indefensible so setting up moving
| unknowable goal posts for something to replace them
| doesn't make sense to me.
|
| AVs are not better than the status quo, making them even
  | less defensible. A human would not have dragged that poor
  | woman for 20 feet because they were compelled to execute a
  | pull-over maneuver. Even an OCD psychopath knows better.
|
| * None of these things run _actual_ realtime operating
| systems with fixed, predictable deadlines. Compute
| requirements can vary wildly depending on the
  | circumstance. When compute spikes, consistency drops. A
  | robot can only approximate constant awareness by
  | massively undersubscribing the compute budget.
| pj_mukh wrote:
| "AVs are not better than the status quo"
|
  | We don't have the data to claim this so confidently,
  | and the only way to get the data is to let the experiment
  | keep running _in the real world_ (the only place that
  | matters).
|
  | There will obviously be holes in the awareness (literal
  | missing cameras under the car); _that's what the testing
  | is for_. If someone says they can sit in a room, in a
| simulation environment and come up with all potential
| crazy things humans can do around autonomous cars, they
| are lying to you.
|
  | To me, it's either this, or we pull all human drivers off
  | the road, restructure our cities and put 'em on public
  | transit (I wholly support this).
|
  | I reiterate: The status quo is unacceptable and
| indefensible. The human driver who _actually caused the
| accident_ has still not been held to account (and
| probably never will be).
|
| P.S: I accept your point about the system being non-
| realtime. Though I think there are some critical safety
| systems (LIDAR/RADAR cutoffs etc.) that might have a
| real-time response?
| oldgradstudent wrote:
| > We don't have the data to claim this, this confidently,
| and the only way to get the data is let the experiment
| keep running in the real world (only place that matters).
|
| How about we start with something simpler: have Waymo,
| Cruise and their likes produce a rigorous safety case[1]
| arguing why their vehicles are safe.
|
| Once the safety case is in the open, we can also evaluate
  | how well their systems satisfy the claims in the safety
  | case, and if the assumptions do not hold, we can stop the
| experiment.
|
| They are experimenting on humans. The usual requirement
| is informed consent.
|
| [1] https://en.wikipedia.org/wiki/Safety_case
| pj_mukh wrote:
  | This is just... more paperwork, but sure; it's highly
  | unlikely that these companies don't have this report built
  | internally already. And like I said, there will be
  | scenarios not covered by it, because we simply don't know
  | what they are and can't think them up.
|
  | But if we're doing this, let's also make human drivers do
  | this, and for real parity, make sure all human drivers
  | are kitted out with all the same cameras and logging
  | systems we ask of autonomous car companies, auto-submitted
  | to the DMV.
|
| Then analyze all the reports on an annual basis to see if
| the human and/or autonomous agent should be allowed to
| continue to operate on the road.
|
| I think people forget that driving is not a right but a
  | privilege. I agree that both humans and autonomous agents
| should earn this privilege.
|
| P.S: If the claim is that a one-time DMV driving test is
| enough, then that should be enough for autonomous cars as
| well (I'm not making that claim)
| oldgradstudent wrote:
  | > I agree, but if we're doing this, let's also make human
  | drivers do this, and for real parity, make sure all human
  | drivers are kitted out with all the same cameras and
  | logging systems we ask of autonomous car companies,
  | auto-submitted to the DMV.
|
| Human drivers are the status quo. Once you consistently
| show that self driving can do better there would be a
| point in discussing that.
|
| The problem is that you can't because such technology
| simply does not exist. There is no perception technology
| that is reliable enough. There is no prediction
| technology that is reliable enough.
|
| To me it is obvious that Cruise and Waymo (and their
| likes) simply cannot withstand any serious scrutiny.
|
| > P.S: If the claim is that a one-time DMV driving test
| is enough, then that should be enough for autonomous cars
| as well (I'm not making that claim)
|
| The DMV driving test is just one element. We also know
  | how humans develop and what skills they acquire and when.
|
| We don't let them drive until they're 15-17 (depending on
| local laws) because they lack certain abilities earlier
| than that. For example, humans acquire object permanance
| at around 24 months.
|
| The Cruise incident shows that Cruise vehicles lack
  | object permanence. They should not be eligible even for a
| DMV appointment.
| wolverine876 wrote:
| > A lot of people in the autonomous driving industry are
| driven by exactly what Vogt describes (little girl in the
| stroller etc.). See also Chris Urmson of Waymo fame's TED
| talk, he talks about a similar motivation[1].
|
| To me, that's evidence that it's performative. First, it's a
| talking point; it looks, smells, walks and talks just like
| typical corporate/industry framing and messaging, with even a
| 'think of the children!' line, and the redirection (from the
| safety of autonomous cars, the topic, to whatabout something
| else). Second, its repetition by Urmson is further evidence -
  | that's how talking points work. Third, the public's repetition
| of it, in surprising detail, such as in your comment, is also
| what we'd expect. Finally, throw in some tears, 'I get
| emotional' lines, etc. (per the NYT article), and I don't
| know how it can be missed.
|
| Could it all be legit? Anything is possible - including fully
| autonomous cars!
| pj_mukh wrote:
| Whether the corporate honchos are "sincere" or not is
| wholly irrelevant to me (and frankly unknowable).
|
| "Think of the children" is usually a vapid misdirect,
  | except of course for _the objectively measurable leading cause
  | of death_, right? So in terms of issues where "something
| must be done", this should be objectively pretty high.
|
| Either we drastically reduce the number of cars on the road
| and restructure American society around public transit (I
| wholly support this), or we take the humans out of the
| equation by making things autonomous. Or some combination
| of both.
|
  | I don't care if this happens under some grand socialist
| program if we so hate corporations/industries, but it needs
| to happen _yesterday_.
|
| The rest is just status quo protection which is
| unacceptable.
| pests wrote:
| > performative charade
|
| How is it performative?
|
| Is it not sad that a 4-year-old girl in a stroller got killed
| by a car? That it barely made the news?
|
| Or is that just not sad and is normal these days?
| minwcnt5 wrote:
| It was a pretty huge news story actually (in SF). Kyle has a
| strong penchant for hyperbole.
| wolverine876 wrote:
| Lots of sad things happen. Why is a sophisticated public
| communicator taking the time to tell this very self-serving
| story, tear up about it, etc.? It's not incidental; he
| prepared it.
|
| Spare me your trolling.
| pests wrote:
| Sorry you are so desensitized.
| icedistilled wrote:
| People like to say self driving cars are safer than human
| drivers - but the human drivers that tend to do the most unsafe
| antics seem to be the humans that are least likely to make use
| of self driving cars.
| evbogue wrote:
  | This is the same wacky theory I've been spreading about Tesla
| self-driving for a year or so. "Imagine Tesla self-driving is
| like some dude driving your car via videogame on the other side
| of the world."
|
| Most people are pretty sure my theory is wrong. I have absolutely
| no evidence this is true, it's just some crazy idea that popped
| into my head one day.
| cheeselip420 wrote:
| Like some sort of fucked up Ender's Game situation.
| evbogue wrote:
| Yah, exactly. Even if it isn't real, the sci-fi stories you
| can think of are endless.
|
| Like imagine there's some industrial block in Da Nang where
| there are thousands of guys and gals who think they're RL for
| some AI model somewhere. X takes a bathroom break and forgets
| to turn over the controls to another specialist and when he
| gets back he discovers the model has crashed.
|
| Next he reads on the news that there's been a fiery Tesla
| crash somewhere near Oakland, and he realizes that something
| is horribly wrong in his world.
|
| We could use multiple predictive language models to determine
| what direction the story line takes next, but I imagine he
| quits his job right then and there and is determined to find
| out the truth behind the program.
|
| What will happen next?
|
| Better yet, base the story off-world so that we aren't so
| close to the horrible reality of it -- if this is true and
| it's probably not.
| cheeselip420 wrote:
| Cruise is leveraging human-in-the-loop to expand faster than they
| otherwise would, with the hope that they will solve autonomy
| later to bring this down.
|
| I don't think this is a viable strategy though given the enormous
| costs and challenges involved.
|
| There doesn't exist a short-term timeline where Cruise makes
| money, and the window is rapidly closing. They needed to expand
| to show big revenues, even if they had to throw 1.5 bodies per
| car at the problem.
|
  | Prediction: GM will offload Cruise, a buyer will replace
  | leadership and lay off 40% of the company. The tech may live to
| see another day, but given the challenges that GM has generally
| (strikes, EVs, etc), they can no longer endlessly subsidize
| Cruise.
| chaostheory wrote:
| GM actually spun off Cruise in 2018. Honda now has shares in
| Cruise. SoftBank used to own some as well, but GM bought out
  | their share last year.
| cheeselip420 wrote:
| GM had FOMO and now it's time for FAFO.
| dontblink wrote:
| So Waymo/Google winning here in your opinion?
| cheeselip420 wrote:
| Waymo will have challenges scaling rapidly, but they may be
| able to get some sort of favorable unit economics and expand
| more slowly.
|
| Tesla has the scale and for some reason regulators give them
| a pass. I wouldn't bet against Elon, but we aren't there
| yet...
| orwin wrote:
| Writing off lidar that early is killing Tesla's chances
| imho. The exec who took that decision should hate himself.
| Retric wrote:
| Human in the loop can be vastly cheaper than you might think.
|
| _If_ this lets them have the only level 5 system on the market
  | they could double that and millions would happily pay. Suppose
  | you're a trucking company: would you rather pay $50k/year or
  | $5k/year? That's a stupidly easy choice.
|
| Americans drive roughly 500 hours per year. If they can replace
| 98% with automation and the other 2% with someone making
  | $20/hour, that only costs them ~$200/year, which then drops as
| the system improves.
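  | (Spelled out, using the commenter's own figures, per vehicle per
  | year:)
  |
  |     hours_per_year = 500   # typical annual driving time, per the comment
  |     human_share = 0.02     # fraction still handled by a remote human
  |     operator_rate = 20.0   # $/hour for the remote operator
  |     cost = hours_per_year * human_share * operator_rate
  |     print(f"~${cost:.0f}/year of remote-operator time")  # ~$200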
| cheeselip420 wrote:
| Human in the loop is fine.
|
| Negative unit economics and massive expansion are not.
| Retric wrote:
| Who says they can't recover the full cost? Cars don't last
| forever, bake the cost in upfront or charge a monthly fee.
| cheeselip420 wrote:
| I'm not saying they can't - I'm saying they are running
| out of time to do so, and with the DMV shutting them down
| they've been hamstrung further.
|
| They are burning 100s of millions every quarter. They
| needed to show either growth/expansion or some sort of
| positive cash flow. They now have neither.
| Retric wrote:
  | Ahh ok, that's fair.
|
  | I don't think Cruise is doing very well. I'm more thinking
| that the first nationwide level 5 system may have a human
| in the loop.
| wolverine876 wrote:
| If humans need to remotely intervene for a car in motion, that
| implies it could impact safety.
|
| If that's correct, then the remote signaling of a problem and the
| human's response and control must have flawless availability and
| low latency. How does Cruise achieve that?
|
| Cellular isn't that reliable. Maybe I misunderstand something.
| mwint wrote:
  | It appears Cruise isn't giving these remote drivers a steering
  | wheel and gas pedal; rather, they make strategic decisions: go around
| this, follow this path, pull over, etc. The car is able to
| follow a path on its own. Determining the correct path is where
| it gets hard.
| xyst wrote:
| I get a feeling Cruise is going to get sold off within the next 5
| yrs. Waymo will likely be the leading provider for "autonomous
| vehicle" software/hardware.
|
| Government Motors can only sustain such a loss on their books for
| a short time. This is probably why Vogt has been pushing so hard
| for market dominance.
| tempsy wrote:
  | I wonder if there are guardrails to prevent a bad-actor Cruise
  | worker from remotely driving erratically...
| neilv wrote:
| > _Company insiders are putting the blame for what went wrong on
| a tech industry culture -- led by the 38-year-old Mr. Vogt --
| that put a priority on the speed of the program over safety.
| [...] He named Louise Zhang, vice president of safety, as the
| company's interim chief safety officer [...]_
|
| I hope Chief Safety Officer isn't just a sacrificial lamb job,
| like CISO tends to be.
|
| Is the "interim" part hinting at insufficient faith, and maybe
| future blame will be put on how the VP Safety performed
| previously (discovered after the non-interim person is hired)?
|
| > _[...] and said she would report directly to him._
|
| Is the CSO nominally responsible for safety?
|
| Does the CSO have any leverage to push back when their
| recommendations aren't taken, other than resigning?
| KennyBlanken wrote:
| @dang title not the same as original
| ProAm wrote:
| Cruise came out of YC if I recall?
| ooterness wrote:
| This was a plot point in Captain Laserhawk: All the self-driving
| cars and flying drones were actually being remotely piloted by
| prisoners in a massive VR facility.
|
| https://en.wikipedia.org/wiki/Captain_Laserhawk:_A_Blood_Dra...
| sroussey wrote:
| Fully remote driven cars is another company (can't recall their
| name).
| DelightOne wrote:
| > Having to be remotely operated every 2.5 to 5 miles
|
  | Regarding Cruise's suspension, how likely is it that the backup
| driver restarted the car to drive again after the car stopped
| with the pedestrian below?
| kvogt wrote:
| Cruise CEO here. Some relevant context follows.
|
| Cruise AVs are being remotely assisted (RA) 2-4% of the time on
| average, in complex urban environments. This is low enough
| already that there isn't a huge cost benefit to optimizing much
| further, especially given how useful it is to have humans review
| things in certain situations.
|
| The stat quoted by nyt is how frequently the AVs initiate an RA
| session. Of those, many are resolved by the AV itself before the
| human even looks at things, since we often have the AV initiate
| proactively and before it is certain it will need help. Many
  | sessions are quick confirmation requests (is it ok to proceed?)
| that are resolved in seconds. There are some that take longer and
| involve guiding the AV through tricky situations. Again, in
| aggregate this is 2-4% of time in driverless mode.
|
  | In terms of staffing, we are intentionally overstaffed given our
| small fleet size in order to handle localized bursts of RA
| demand. With a larger fleet we expect to handle bursts with a
| smaller ratio of RA operators to AVs. Lastly, I believe the
| staffing numbers quoted by nyt include several other functions
| involved in operating fleets of AVs beyond remote assistance
  | (people who clean, charge, maintain, etc.), which also
  | improve significantly with scale and over time.
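  | (For illustration: with assumed city speeds around 18 mph and
  | session lengths of 10-30 seconds, neither of which are stated
  | Cruise figures, a session every 2.5-5 miles works out to low
  | single-digit percentages of drive time:)
  |
  |     avg_speed_mph = 18.0
  |     for miles_between in (2.5, 5.0):     # one RA session per 2.5-5 mi
  |         minutes_between = miles_between / avg_speed_mph * 60
  |         for session_seconds in (10, 30):
  |             share = (session_seconds / 60) / minutes_between
  |             print(f"every {miles_between} mi, {session_seconds}s sessions"
  |                   f" -> {share:.1%} of drive time")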
| throwaway1104 wrote:
| Keep it up, Kyle! All new tech will have hiccups and
| opposition. Really enjoyed my ride experience when I visited
| SF.
| monero-xmr wrote:
| Huge cojones on the CEO to risk public statements given the
| enormous legal and regulatory pressure being applied. I
| certainly wouldn't recommend this tactic!
| averageRoyalty wrote:
  | I would. This is the correct step toward building public
  | trust, which is essential to this industry and to
  | onboarding a critical mass.
| pj_mukh wrote:
| "The stat quoted by nyt is how frequently the AVs initiate an
| RA session. Of those, many are resolved by the AV itself before
| the human even looks at things, since we often have the AV
| initiate proactively and before it is certain it will need
| help."
|
| Hoo boy, sure wish the NYT had clarified that. That changes
| things significantly.
| ra7 wrote:
  | Thanks for clarifying! This makes a lot of sense. I think
| NYT did a really poor job of explaining the remote assistance
| bit.
| tameware wrote:
| Gell-Mann Amnesia Effect.
| phkahler wrote:
| >> Cruise AVs are being remotely assisted (RA) 2-4% of the time
| on average, in complex urban environments. This is low enough
| already that there isn't a huge cost benefit to optimizing much
| further, especially given how useful it is to have humans
| review things in certain situations.
|
| Funny, since I thought full autonomy was the goal of the
| company. 2 percent human intervention isn't scalable.
| amluto wrote:
| Huh? 100% is scalable, and it's the common case today. 2%
| scales just as linearly as 100% does.
| tcoff91 wrote:
| That 2% is not the person in the vehicle, it's cruise
| employees. It doesn't scale because it is paid employees
| intervening instead of the customer driving. It scales in
| comparison to ride sharing competition but not in terms of
| people owning the vehicles.
| sroussey wrote:
| That 2% is much cheaper than going for five nines
| immediately. Nice bridge until then.
| amluto wrote:
| The real long term issue IMO is that this type of system
| fails pretty badly if the wireless network fails over a
| largish area.
| polishTar wrote:
  | Other ridehail products like Uber or Lyft have 100% human
| intervention all the time. I think that's what the parent
| comment is referring to.
| Rebelgecko wrote:
| They'll just do what the robo-food delivery startups are
| doing and outsource the driving to people in other
| countries who make $5/day
| dventimi wrote:
| > it [does not scale] in terms of people owning the
| vehicles
|
| Can you clarify what you mean here?
| cheeselip420 wrote:
  | Remote operation of vehicles often makes a lot of sense
  | economically, since you can effectively decouple drivers from
  | vehicles/riders. As you pointed out, this means you can shift
  | staffing to deal with peak loads and all of that - great.
|
| Given everything you know now, was it wise to push for
| expansion over improvements to safety and reliability of the
| vehicles? On one hand, there is certainly value in expanding a
| bit to uncover edge-cases sooner. On the other hand, I'm not
| convinced it was worth expanding before getting the business
| sorted out.
|
  | My guess is that given the relatively large fixed costs involved
  | in operating an AV fleet, it makes some sense to expand at
| least up to that sort of 'break even' point. Do we know what
| that point is? Put differently, is there some natural "stopping
| point" of expansion where Cruise could hit break-even on its
| fixed costs and then shift focus towards reliability?
| _boffin_ wrote:
  | The first thing that came to my mind after reading "...
  | makes a lot of sense" was the latency overhead incurred
  | when RA is activated, and its association with drunk driving
  | due to the increased response time.
|
  | Maybe the article answers the following, but I don't know since
| I haven't read it yet.
|
| - median, p95, p99 latencies for remote assistance
|
  | - max speed the vehicle can go when RA is activated.
| AlotOfReading wrote:
| I think a lot of the confusion here is over what's meant by
| "RA". This isn't a remote driving situation. It's like
| Waymo, where the human can make suggestions that give the
| robot additional information about the environment.
| cheeselip420 wrote:
| Exactly. Not all remote assists need a low-latency
| connection.
| sroussey wrote:
| The relevant staffing section:
|
| > Those vehicles were supported by a vast operations staff,
| with 1.5 workers per vehicle. The workers intervened to assist
| the company's vehicles every 2.5 to five miles
|
  | On first read, the NYT is definitely implying that 1.5 workers
  | per vehicle intervene to assist driving. Only after reading
| the above comment do I notice that they shoved the statements
| together using different meanings for "workers" as they didn't
| have the actual statistic on hand.
| MichaelTWorley wrote:
| Best wishes to you!
| flandish wrote:
  | So when low-wage mechanical-turk costs turn out cheaper than
  | engineering to improve driverless vehicles... this will just be
  | another exploitative gig job for folks in remote locations?
|
  | I don't trust that proper attention will be given to improvements
  | in tech once profit and ROI are weighed against human labor
  | costs, especially in lower-wage nations.
| gctwnl wrote:
| I would consider this realistic service design, just as Meta's
| Cicero (plays blitz Diplomacy) is smart design. It might work
| as a service.
|
  | What the answer glosses over is that even with just 3% of the
| time requiring human assistance (2 minutes out of every hour)
| the term 'autonomous vehicle' is not really applicable anymore
| in the sense everybody is using/understanding that term. The
| idea behind that term assumed 'full' autonomy. _Self_ driving
| cars. And there is no reason to assume that this is still in
| sight. The answer puts that 'self-driving car' on the shelf.
|
  | PS. Being a human assistant seems a difficult job, given the
| constant speed and concentration requirements.
| southerntofu wrote:
| Hello Cruise CEO, there's a huge market for durable and
  | profitable "dumb" cars. Why don't you get into that market? In a
| time when electronics represents over 30% of car costs and ~50%
| of car failures, people like me would be happy to buy a car
| that doesn't suck (low-tech) and can be maintained for decades
  | for a reasonable price. In the meantime, I'll keep buying old
  | Renault/Peugeot cars from the fifties/sixties, I guess :(
| dventimi wrote:
  | Why don't YOU get into that market if you think it's so
| worthwhile?
| jdjdjdhhd wrote:
| Can spying be disabled on your cars?
| dventimi wrote:
| Wut
| jdjdjdhhd wrote:
| They have humans remotely watching you drive
| patrick451 wrote:
| It's telling that you declined a request for an interview, yet
| still feel the need to clarify on HN. You'd be doing a lot
| better with transparency and public trust by just taking the
| interview.
| dreamcompiler wrote:
| Here we go again with a CEO who proclaims "autonomous cars are
| safer than human-driven cars." And their definition of "safer"
| conveniently ignores that autonomous cars _create new failure
| modes_ which do not exist in manually-driven cars.
|
| It may be true that statistically fewer fatalities per mile
| happen with autonomous cars than with human-driven cars. But
| that's irrelevant. If the car kills one person because it did
| something utterly stupid like driving under a semi crossing the
| highway or dragging a pedestrian along the ground, the public
| will not accept it.
|
| This is another example of the uncanny valley problem: Most
| "smart" devices are merely dumb in new ways. If your "smart"
| gizmo is only smart in how it collects private information from
| people (e.g. smart TVs), or it's merely smarter than a toggle
| switch, that's not what the public considers smart. It has to be
| smarter than a reasonably competent human _along almost all
  | dimensions_; otherwise you're just using "smart" as a euphemism
| for "idiot savant." Self-driving cars are a particularly
| difficult "smart" problem because lives are at stake, and the
| number of edge cases is astronomical.
| bookofjoe wrote:
| I've said this here before and I will repeat it:
|
| An overwhelming majority of Americans will choose 45,000 deaths
| in car crashes annually (last year's number) in human-driven cars
| over 450 deaths/year with all self-driving cars.
|
| In the American (and probably ALL) mind(s), human agency trumps
| all.
| 1vuio0pswjnm7 wrote:
| "That is a rare level of talent," said Sam Altman, head of the Y
| Combinator startup incubator. "I can see Kyle being the next CEO
| of GM."
|
| https://www.vox.com/2016/3/11/11586898/meet-kyle-vogt-the-ro...
| neonate wrote:
| http://web.archive.org/web/20231105193346/https://www.nytime...
___________________________________________________________________
(page generated 2023-11-05 23:01 UTC)