Insurance Companies Flag “Driver Disengagement” as Factor in Robot Car Safety

By Lambert Strether of Corrente

I first wrote about robot cars back in 2016, in “Self-Driving Cars: How Badly Is the Technology Hyped?” (“Spoiler alert: Pretty badly”). Back then, shady operators like Musk (Tesla), Kalanick (Uber), Zimmer (Lyft), and even the CEO of Ford were predicting Level 5 autonomous vehicles — I prefer “robot cars,” because why give these people the satisfaction — in five years. That would be 2021, which is [allow me to check a date calculator] 42 days from now. So how’s that going?

Well, let’s go back to “Level 5.” What does that mean? Robot cars are (or are to be) classified by levels of automation, at least according to the Society of Automotive Engineers (SAE), the group that also classified the viscosity of your lubricating oils[1]. Here is an explanation of the levels in lay terms (including the SAE’s chart) from Dr. Steve Shladover of Berkeley’s Partners for Advanced Transportation Technology. I’ve helpfully underlined the portions relevant to this post:

[Driver Assistance]: So systems at Level 1 [like cruise control] are widely available on cars today.

[Partial Automation]: Level 2 systems are available on a few high-end cars; they’ll do automatic steering in the lane on a well-marked limited access highway and they’ll do car following. So they’ll follow the speed of the car in front of them. The driver still has to be monitoring the environment for any hazards or for any failures of the system and be prepared to take over immediately.

[Conditional Automation]: Level 3 is now where the technology builds in a few seconds of margin so that the driver may not have to take over immediately but maybe within, say, 5 seconds after a failure has occurred….

That level is somewhat controversial in the industry because there’s real doubt about whether it’s practical for a driver to shift their attention from the other thing that they’re doing to come back to the driving task under what’s potentially an emergency condition.

[High Automation]: [Level 4] it has multiple layers of capability, and it could allow the driver to, for example, fall asleep while driving on the highway for a long distance trip…. So you’re going up and down I-5 from one end of a state to the other, you could potentially catch up on your sleep as long as you’re still on I-5. But if you’re going to get off I-5 then you would have to get re-engaged as you get towards your destination.

[Full Automation]: Level 5 is where you get to the automated taxi that can pick you up from any origin or take you to any destination or they could reposition a shared vehicle. If you’re in a car sharing mode of operation, you want to reposition a vehicle to where somebody needs it. That needs Level 5.

Level 5 is really, really hard.

As you can see, humans are in the loop up until Level 5, albeit with decreasing levels of what I believe an airline pilot would call authority. However, the idea behind the SAE’s levels seems to be that human beings are static: that they will not adapt to a situation where a robot has authority by disengaging themselves from it.
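For the programmatically inclined, the taxonomy above can be sketched as a small Python enum. This is my own illustration, not anything from the SAE; the names and the one helper function are mine. The point it makes concrete is the one that matters for this post: below Level 3, the human must monitor the road at all times.

```python
from enum import IntEnum


class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (simplified sketch, levels 1-5)."""
    DRIVER_ASSISTANCE = 1       # e.g., cruise control; driver does everything else
    PARTIAL_AUTOMATION = 2      # steering + car-following; driver must monitor constantly
    CONDITIONAL_AUTomation = 3  # system monitors; driver must take over within seconds
    HIGH_AUTOMATION = 4         # driver can disengage within a limited domain (e.g., I-5)
    FULL_AUTOMATION = 5         # no driver needed anywhere; the robot taxi


def human_must_monitor(level: SAELevel) -> bool:
    """At Levels 1-2, the human retains full responsibility for watching the road."""
    return level <= SAELevel.PARTIAL_AUTOMATION
```

The study discussed below is about Level 2 systems, i.e., the `human_must_monitor(...) == True` regime.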

The insurance industry, alive to the possibility that it may, one day, need to insure self-driving cars[2], has been studying how humans actually “drive” robot cars, as opposed to how Silicon Valley software engineers and Founders think they ought to, and has come up with some results that should be disquieting for the industry. The Insurance Institute for Highway Safety (IIHS) has produced a study, “Disengagement from driving when using automation during a 4-week field trial” (helpfully thrown over the transom by alert reader dk). From the introduction, the goal of the study:

The current study assessed how driver disengagement, defined as visual-manual interaction with electronics or removal of hands from the wheel, differed as drivers became more accustomed to partial automation over a 4-week trial.

And the results:

The longer drivers used partial automation, the more likely they were to become disengaged by taking their hands off the wheel, using a cellphone, or interacting with in-vehicle electronics. Results associated with use of the two ACC systems diverged, with drivers in the S90 exhibiting less disengagement with use of ACC compared with manual driving, and those in the Evoque exhibiting more.

The study is Level 2 — that’s where we are after however much hype and however many billions, Level 2 — and even Level 2 introduces what the authors refer to as “the irony[2] of automation”:

The current study focuses on partial driving automation (henceforth “partial automation”) and one of its subcomponents, adaptive cruise control (ACC). Partial automation combines ACC and lane-centering functions to automate vehicle speed, time headway, and lateral position. Despite the separate or combined control provided by ACC or lane centering, the driver is fully responsible for the driving task when using the automation (Society of Automotive Engineers, 2018). These systems are designed for convenience rather than hazard avoidance, and they cannot successfully navigate all road features (e.g., difficulty negotiating lane splits); consequently, the driver must be prepared to assume manual control at any moment. Thus, when using automation, the driver has an added responsibility of monitoring it. This added task results in what Bainbridge (1983) describes as a basic irony of automation: while it removes the operator from the control loop, because of system limitations, the operator must monitor the automation; monitoring, however, is a task that humans fail at often (e.g., Mackworth, 1948; Warm, Dember, & Hancock, 2009; Wiener & Curry, 1980).

To compound this irony, ACC and partial automation function more reliably, and drivers’ level of comfort using the technology is greater, in free-flowing traffic on limited-access freeways than in more complex scenarios such as heavy stop-and-go traffic or winding, curvy roads (Kidd & Reagan, 2019; Reagan, Cicchino, & Kidd, 2020).

And the policy implications:

One of the most challenging research needs is to determine the net effect of existing implementations of automation on crash risk. These systems are designed to provide continuous support for normal driving conditions, and they exist in tandem with crash avoidance systems that have been proven to reduce the types of crashes for which they were designed (Cicchino, 2017, 2018a, 2018b, 2019a, 2019b). There is support from field operational tests that the automated speed and headway provided by ACC may confer safety benefits beyond those provided by existing front crash prevention (e.g., Kessler et al., 2012), and this work exists alongside findings that suggest drivers remain more engaged when using ACC (Young & Stanton, 1997) relative to lane centering. In contrast, the current field test data and recent analyses of insurance claims are unclear about the safety benefits of continuous lane-centering systems extending beyond that identified for existing crash avoidance technologies (Highway Loss Data Institute [HLDI], 2017, 2019a, 2019b). Investigations of fatal crashes of vehicles with partial driving automation all indicate the role of inattention and suggest that accurate benefits estimation for partial automation will have to account for disbenefits introduced by complacency. This study provides support for the need for a more comprehensive consideration of factors such as changes in the odds of nondriving-related activities and hands-on-wheel behaviors when estimating safety benefits.

Shorter: We don’t know how even Level 2 robot cars net out in terms of safety[2]. That means that insurance company actuaries can’t know how to insure them, or their owners/drivers.

The obvious technical fix is to force the driver to pay attention. From the Insurance Institute for Highway Safety, “Automated systems need stronger safeguards to keep drivers focused on the road” (and there’s your irony, right there: an “automated system” that also demands constant human “focus.” If a robot car were an elevator, you’d have to monitor the elevator floor indicator lights, prepared at all times to goose the Up button if the elevator slowed, or even stopped between floors (or, to be fair, the Down button)). From the article:

When the driver monitoring system detects that the driver’s focus has wandered, that should trigger a series of escalating attention reminders. The first warning should be a brief visual reminder. If the driver doesn’t quickly respond, the system should rapidly add an audible or physical alert, such as seat vibration, and a more urgent visual message.

They provide a handy chart of the “escalating attention reminders”:

(It’s not clear to me how the robot car would “deploy the hazard lights and gradually slow the vehicle to a stop.” “Gradually slow” is doing some heavy lifting. Does the robot car stop in the middle of a highway? Does it pull over to the shoulder? What if there is no shoulder? What if the road is slippery with ice or snow? Etc. Sounds to me like they have to solve Level 5 to make Level 2 work.) I’m not sure what those “physical alerts” should be. Cattle prods?
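The escalation sequence IIHS describes reads like a small state machine: visual reminder, then audible or physical alert, then hazard lights and a stop. Here is a minimal sketch of that logic; the state names, the reset-on-attention behavior, and the function itself are my own assumptions for illustration, not anything from the IIHS document.

```python
from enum import Enum, auto


class Alert(Enum):
    """Escalating attention reminders, roughly as IIHS describes them."""
    NONE = auto()               # driver attentive, no reminder active
    VISUAL = auto()             # brief visual reminder
    AUDIBLE_OR_HAPTIC = auto()  # chime or seat vibration plus urgent visual message
    EMERGENCY_STOP = auto()     # hazard lights on, vehicle gradually slowed to a stop


def next_alert(current: Alert, driver_attentive: bool) -> Alert:
    """Escalate one step each time the monitor still sees inattention;
    reset as soon as the driver re-engages (an assumption on my part)."""
    if driver_attentive:
        return Alert.NONE
    escalation = {
        Alert.NONE: Alert.VISUAL,
        Alert.VISUAL: Alert.AUDIBLE_OR_HAPTIC,
        Alert.AUDIBLE_OR_HAPTIC: Alert.EMERGENCY_STOP,
        Alert.EMERGENCY_STOP: Alert.EMERGENCY_STOP,  # terminal: the car is stopping
    }
    return escalation[current]
```

Note where the hard questions live: the terminal state. The sketch can say “stop the vehicle,” but not where, which is exactly the point about needing Level 5 to make Level 2 work.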

Anyhow, I’m not a driver, but trying to imagine how a driver would feel, I’ve got to say that being placed in a situation where I have no authority, yet must remain alert at all times, reminds me forcibly of this famous closing scene:

Granted, my robot car, having removed my authority, will demand constant attention so I can seize my authority back in a situation it can’t handle[3], so my situation would not be identical to Alex’s; perhaps I went a little over the top. Nevertheless, that doesn’t seem like a pleasurable driving experience. In fact, it seems like a recipe for constant low-grade stress. And where’s the “convenience” of a robot car if I can’t multitask? Wouldn’t it be simpler if I just drove the car myself?


[1] “Heavy on the thirty-weight, Mom!”

[2] I’m not sure Silicon Valley is big on irony. They would, I suppose, purchase legislation that would solve the insurance problem by having the government take on the risks of this supposedly great technical advance.

[3] “Seize” because an emergency would be sudden. I would have to transition from non-engagement to engagement instantly. That doesn’t seem to be a thing humans do well either.

This entry was posted in Auto industry, Guest Post, Infrastructure, Regulations and regulators, Ridiculously obvious scams on November 20, 2020 by .

About Lambert Strether

Readers, I have had a correspondent characterize my views as realistic cynical. Let me briefly explain them. I believe in universal programs that provide concrete material benefits, especially to the working class. Medicare for All is the prime example, but tuition-free college and a Post Office Bank also fall under this heading. So do a Jobs Guarantee and a Debt Jubilee. Clearly, neither liberal Democrats nor conservative Republicans can deliver on such programs, because the two are different flavors of neoliberalism (“Because markets”). I don’t much care about the “ism” that delivers the benefits, although whichever one does have to put common humanity first, as opposed to markets. Could be a second FDR saving capitalism, democratic socialism leashing and collaring it, or communism razing it. I don’t much care, as long as the benefits are delivered.
To me, the key issue — and this is why Medicare for All is always first with me — is the tens of thousands of excess “deaths from despair,” as described by the Case-Deaton study, and other recent studies. That enormous body count makes Medicare for All, at the very least, a moral and strategic imperative. And that level of suffering and organic damage makes the concerns of identity politics — even the worthy fight to help the refugees Bush, Obama, and Clinton’s wars created — bright shiny objects by comparison. Hence my frustration with the news flow — currently in my view the swirling intersection of two, separate Shock Doctrine campaigns, one by the Administration, and the other by out-of-power liberals and their allies in the State and in the press — a news flow that constantly forces me to focus on matters that I regard as of secondary importance to the excess deaths. What kind of political economy is it that halts or even reverses the increases in life expectancy that civilized societies have achieved? I am also very hopeful that the continuing destruction of both party establishments will open the space for voices supporting programs similar to those I have listed; let’s call such voices “the left.” Volatility creates opportunity, especially if the Democrat establishment, which puts markets first and opposes all such programs, isn’t allowed to get back into the saddle. Eyes on the prize! I love the tactical level, and secretly love even the horse race, since I’ve been blogging about it daily for fourteen years, but everything I write has this perspective at the back of it.
