Adults make an estimated 35,000 decisions in any given day. Additionally, they change their minds twice, on average, for every one of those instantaneous decisions. So, on average, a highway driver makes and remakes eighteen decisions for every eight football fields traversed at 55 mph (89 kph). How many they make correctly depends upon experience, conditions and behaviors (e.g., eyes on the road, hands on the wheel). Historical evidence is very clear on two points: human decision making is, shockingly, quite good (e.g., even in the worst U.S. state, Mississippi, your chances of being in a fatal accident are only 0.022%), but deaths per million miles will not reach zero without some type of intervention.
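For readers who want to check that math, it falls out of a simple rate calculation. A quick back-of-the-envelope sketch, assuming (my assumption, not a stated figure) that those 35,000 decisions are spread across roughly 16 waking hours:

```python
# Back-of-envelope check of the "eighteen decisions per eight
# football fields" figure.
DECISIONS_PER_DAY = 35_000
WAKING_HOURS = 16          # assumption: decisions happen while awake
SPEED_MPH = 55
YARDS_PER_FIELD = 100      # one football field, goal line to goal line
YARDS_PER_MILE = 1_760

distance_mi = 8 * YARDS_PER_FIELD / YARDS_PER_MILE   # ~0.45 miles
hours_to_traverse = distance_mi / SPEED_MPH          # ~30 seconds
decisions = DECISIONS_PER_DAY / WAKING_HOURS * hours_to_traverse
print(f"~{decisions:.1f} decisions")                 # prints ~18.1
```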
Conventional wisdom and experts have long hailed Artificial Intelligence (a.k.a. AI) as that technological intervention. Automated decision making, in theory, improves over time as vast amounts of situational data tied to varying scenarios and outcomes are used to train models. Few expected AI to be an instantaneous silver bullet (e.g., Tim Cook’s message of “responsibly advance our products” is much more the industry norm versus Musk’s hype of an impending “ChatGPT moment … when millions of his intelligent cars overnight come to life and drive themselves entirely on their own without human supervision,” as summarized by Fortune.com last week). Crash records confirm that AI hasn’t yet achieved perfection while it learns, e.g., Autopilot’s 736 crashes with 17 associated deaths since 2019, per Car and Driver.
And so the question becomes “How long until we’re there?” A new controlled-track study from the Virginia Tech Transportation Institute (VTTI), in association with Motive AI, compared the driver-assistance alerts generated by three different providers across multiple distracted-driving scenarios (e.g., phone call, texting) as well as general unsafe behaviors (e.g., seatbelt usage, rolling stops). “The study highlights significant performance differences between the AI-powered dashcam providers in alerting drivers of unsafe driving behaviors across multiple conditions and scenarios,” said Susan Soccolich, Senior Researcher, Virginia Tech Transportation Institute.
The study’s conclusion: there are vast inconsistencies across dashcam providers, both in reliably generating AI-based alerts and in recognizing a given situation at all. For example, Motive and Lytx were both 100% accurate in detecting a highly predictable, detectable situation such as seat belt usage, whereas for driver-inattention alerts following a text message, success rates ranged from a best of 71% (Motive) all the way down to 13% (Lytx). “The results of VTTI’s research are not just about comparing products,” explained Shoaib Makani, co-founder and CEO of Motive. “They show that these technologies don’t all perform the same, which can have major implications for accident prevention.”
Maybe the meta-conclusion: AI is the near-term solution for some applications and probably the mid-term solution for others, and closing that gap comes down to responsible engineering and controlled testing. Unfortunately, that’s not always the case, and there are still “…large percentages of users (53% of [GM’s] Super Cruise, 42% of [Tesla’s] Autopilot and 12% of [Nissan’s] ProPILOT Assist) [that] indicated they were comfortable treating their systems as self-driving,” per a study by the IIHS last October. This seemingly blind behavior was described by The New York Times’s Greg Bensinger: “Tesla drivers may [be falling] victim to a version of what’s known in clinical drug trials as therapeutic misconception, in which trial participants (beta testers, in this case) tend to overlook the potential risks of participating in an experiment, mistakenly regarding themselves as consumers of a finished product rather than as guinea pigs.”
So, in the meantime, don’t let yourself be distracted, whether by texts or by hype. Buy the AI systems that have been proven worthy by experts, and avoid the ones spelling “failsafe” incorrectly.
Author’s Note
I have no doubt that some of my dedicated readers will opine that such alerts are the perfect use case for beta testing on the roadways: learning en masse in real time, with subsequent, ongoing software updates. I beg to differ, exactly per my “Wirelessly Updating Software Creates Dangerous Mindset In Auto Industry” article in May.
So please allow me to share some personal history to support my point of view. A friend of mine led Connected Services for a telematics company and tested, in real time, a prototype geoboxed driver alert on open roads with an unsuspecting employee. Basically, the software looked for specific conditions within a GPS-defined area and then sounded a gentle, audible alert for the driver. It worked perfectly except …
The driver crashed the company car. Minor damage. But still a crash due to being startled.
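Mechanically, such an alert is little more than a point-in-region check plus a trigger. A minimal sketch of the idea in Python (the GeoBox class, coordinates, and condition flag below are purely hypothetical; the prototype’s actual code was never published):

```python
from dataclasses import dataclass

@dataclass
class GeoBox:
    """Rectangular GPS region (a hypothetical stand-in for the prototype's geobox)."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat and
                self.min_lon <= lon <= self.max_lon)

def maybe_alert(box: GeoBox, lat: float, lon: float, condition_met: bool) -> None:
    # Sound the gentle, audible alert only when the vehicle is inside
    # the geobox AND the watched-for condition holds.
    if box.contains(lat, lon) and condition_met:
        print("\aAlert: condition detected in zone")  # \a rings the terminal bell

# Hypothetical zone and GPS fix, for illustration only
zone = GeoBox(min_lat=37.2280, max_lat=37.2300, min_lon=-80.4250, max_lon=-80.4230)
maybe_alert(zone, lat=37.2291, lon=-80.4241, condition_met=True)
```

Note that logic this simple can be flawless on its own terms; the failure mode was human, an unbriefed driver startled by an unexpected sound.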
The moral of the story: software is a powerful and ever-growing part of any mobility solution. Per Peter Parker’s mantra, “With great power comes great responsibility.” AI will continue to improve and, in most cases, will outperform humans.
But how and when it deploys is the “responsibility” part.