
Questionable Practices At Some AI Autonomous Car Makers Spurring Whistleblowers 

By Lance Eliot, the AI Trends Insider 

There is an old saying that you ought not to look in the kitchen when you go to some restaurants or eateries for a bite to eat. When you see how the food is being prepared, it might just make you sick to your stomach.

You can apply this rule to just about any entity that makes any kind of product. Imagine if a toaster was being manufactured in a faulty manner and was likely to catch fire when put into use. We would undoubtedly welcome having an insider that worked in the company making the toaster come forward beforehand.   

Complex products are especially the kind of instances in which we would hope an insider would come forward. Your car, for example, is a quite complex product. It contains thousands upon thousands of components. The automotive industry has had some notable cases of insiders that aided in revealing serious internal issues regarding car production.

All told, it sure would be nice if someone on the inside were to speak up and try to rectify a behind-the-scenes issue, particularly when dangerous. 

Such an insider that speaks out is typically referred to as a whistleblower. 

You might vaguely be aware that Ralph Nader in the 1960s and 1970s aided in popularizing the whistleblower catchphrase during his activist efforts. Up until that point in time, the notion of speaking out about an internal issue was frowned upon. Employees were generally expected to be fiercely loyal to their employer and dared not speak out of turn. In fact, to this day, sometimes a whistleblower is ostracized and labeled as a rat or snitch for their efforts.

To clarify, not all whistleblowers are right about what they report. There are occasions when a whistleblower might be mistaken about what they perceive as a problem. They might misconstrue things. They might overinflate the concern. You cannot blindly assume that a whistleblower is perfect in their aims. At times, a whistleblower might be harboring personal biases and seeking some form of twisted revenge or have other unseemly motives in mind.

But that also doesn’t mean that a whistleblower should be axiomatically tainted as a malcontent simply due to acting as a whistleblower. Not at all.   

Going the whistleblower route can be arduous and ruinous in many ways. A person can be labeled as a kind of traitor and be forever tarnished, wherever they go and whatever else in life they try to do. Deciding to become a whistleblower requires some hefty thinking, trying to balance a personal sense of ethical codes versus the potential for being known as an informer or tattletale.

We can be thankful for those whistleblowers that chose to do the right thing, despite the personal costs, and have exposed substantive problems that would not otherwise have gotten the light of day. In many such cases, those problems were already harming those using the product or potentially could do so in the future. The whistleblower was able to start a chain of events that eventually curtailed or reduced those unsavory or outright deplorable outcomes.

How does a whistleblower come forward and essentially blow the whistle, as it were?   

There are usually two avenues for a whistleblower to make known the internal issues that they believe are untoward. 

One approach is to be a so-called internal-reporting whistleblower, which generally means that the person doing the whistleblowing does this within the confines of the entity that they are working in. They might bring their concerns to their supervisor or manager. Perhaps the firm has a formalized whistleblower process that entails submitting a safety concern in writing. And so on. 

The other approach is the external-disclosing whistleblower. This is when a person that is or was an insider opts to tell externally about the internal aspects that they perceive as disconcerting. The person might tell reporters about what is going on. The person might decide to talk to a third-party consumer protection entity. Etc. 

Earlier, I mentioned that cars and the automotive industry have had various whistleblowers. 

In the United States, the Vehicle Safety Act (VSA) serves as the overarching guideline about whistleblowing and whistleblowers in the context of vehicular considerations. Here's the formal VSA definition given about what exactly is a whistleblower: "The term 'whistleblower' means any employee or contractor of a motor vehicle manufacturer, part supplier, or dealership who voluntarily provides to the Secretary original information relating to any motor vehicle defect, noncompliance, or any violation or alleged violation of any notification or reporting requirement of this chapter, which is likely to cause unreasonable risk of death or serious physical injury" (as a side note, the Secretary being referred to is the Secretary of Transportation).

The normal path for a car-related whistleblower that is going to externally divulge an issue would be to contact the NHTSA (National Highway Traffic Safety Administration), which is an agency of the U.S. Department of Transportation (US DOT). The stated mission of the NHTSA is to save lives, prevent injuries, and reduce vehicle-related crashes in the United States.

Per the website detailing the NHTSA whistleblower program: “Whistleblowers are an important source of information for NHTSA about potential vehicle safety problems and violations of law. The Vehicle Safety Act protects the confidentiality of whistleblowers and allows NHTSA to pay a monetary award to a whistleblower whose information leads to the successful resolution of an enforcement action for violations of law.” 

There are a wide variety of nuances to the whistleblower program. If you or someone that you know is considering being a whistleblower related to an automotive issue, make sure to get up-to-speed on the matter. Best to look before you leap. In addition, as stated in the VSA: “A whistleblower may be represented by counsel.”   

In terms of the scope of what might be considered a whistleblowing warranted submission, here’s what the NHTSA website indicates: “NHTSA receives information from whistleblowers on a wide variety of topics, including potential vehicle safety defects, noncompliance with the Federal Motor Vehicle Safety Standards, and violations of the Vehicle Safety Act. NHTSA investigators consider information provided by whistleblowers, which may lead to formal actions like an investigation, recall, or civil penalty enforcement action. NHTSA protects the confidentiality of whistleblowers. NHTSA may pay a monetary award to a whistleblower who provides information that leads to the successful resolution of an enforcement action for violations of law.”   

You might be thinking that whistleblowing about cars is a somewhat ho-hum topic; whistleblowing about cars has been taking place for years.   

Well, there is something new that you need to give due consideration toward.   

The future of cars consists of AI-based true self-driving cars. 

Allow me a moment to elaborate.   

There isn't a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.

Here’s an intriguing question that is worth pondering: Are we going to see whistleblowers about the advent of AI-based true self-driving cars, and if so, what impact might we expect?   

Before jumping into the details, I’d like to further clarify what is meant when I refer to true self-driving cars. 

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   
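For those that like things stated compactly, here is one way to capture the level distinctions just discussed; this is purely an illustrative sketch of my own, not reference code from the SAE standard or from any automaker, and the names are made up for this example.

```python
from enum import IntEnum

class SaeLevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2      # semi-autonomous, ADAS add-ons
    CONDITIONAL_AUTOMATION = 3  # semi-autonomous, ADAS add-ons
    HIGH_AUTOMATION = 4         # true self-driving (limited conditions)
    FULL_AUTOMATION = 5         # true self-driving (all conditions)

def human_driver_responsible(level: SaeLevel) -> bool:
    """At Levels 0 through 3, a licensed human remains the responsible
    driver; at Levels 4 and 5, the AI driving system does all the driving."""
    return level <= SaeLevel.CONDITIONAL_AUTOMATION

assert human_driver_responsible(SaeLevel.PARTIAL_AUTOMATION)
assert not human_driver_responsible(SaeLevel.HIGH_AUTOMATION)
```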

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And Whistleblowers   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. 

Why this added emphasis about the AI not being sentient? 

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.   

Let’s dive into the myriad aspects that come to play on this topic.   

You might be shocked to think that there would be any kind of possible whistleblowing related to self-driving cars. 

Most people tend to assume that the internal efforts of developing those state-of-the-art AI driving systems and putting together a self-driving car are being done with the highest and best of intentions. Some liken the effort to landing on the moon, a rather awe-inspiring aspiration.   

Surely, everything happening behind the scenes in the act of crafting self-driving cars is completely above board and hunky-dory. No one would seek to produce a self-driving car that is dangerous. Furthermore, the hope is that self-driving cars will bring forth an era of mobility-for-all, allowing those that do not readily have access to automotive transit today to have seamless access to ubiquitous mobility. And there is also the belief that self-driving cars will dramatically reduce the number of human injuries and fatalities due to human-driven car crashes.   

Well, I don't want to shock you, but what is happening in the kitchen has the potential of serving up a meal that could harm people.

Some self-driving car teams are cutting corners, trying to do the urgent bidding of upper management, and unable to undertake as much double-checking and triple-checking as they believe is necessary. Likewise, there are those in upper management that are unaware of the lack of attention to detail and the omission or skirting of safety and quality aspects taking place in their development groups.

It works both ways.   

Keep in mind that there is a race of sorts going on. 

Who will attain true self-driving cars first?   

Akin to getting to the moon, a lot is riding on the first to make this self-driving car a “giant leap” for mankind. Billions upon billions of dollars are pouring into these vaunted efforts. There is almost no revenue as yet from the self-driving car efforts. Money is going in, and there is an expectation that something miraculous is going to emerge.   

Under that kind of pressure, you can imagine that some elements might be given short shrift in an effort to push ahead. Concerns about safety aspects can at times be placed on the back burner. Features for the AI driving system that seem pivotal can be shoved onto the existing list of edge or corner cases, meaning that those are capabilities that it is assumed can be dealt with later on.   

This does not suggest that anything of this nature is somehow rampant or found everywhere. That would be completely unfair to all of those striving mightily to produce self-driving cars. These are the heroes working each and every day to bring self-driving cars to the world, and in so doing reach those earlier cited ideals.

Also, as I have repeatedly mentioned in my columns, in the last several years the automakers and self-driving carmakers have taken a much stronger stance toward safety. This includes hiring high-profile automotive safety experts and launching significant and meaningful internal efforts to boost the understanding of what safety consists of. In addition, many of the self-driving car firms now have internal "whistleblower" programs, usually coined more plainly as safety reporting programs, that are intended to get insiders to come forward when they see something that they perceive as untoward.

The overriding concern is that in such a high-pressure, gung-ho atmosphere there are real chances of a bad apple here or there. To make clear, it isn't necessarily the case that a nefarious person is going to plant something untoward into an AI driving system (though that can happen). When AI developers are running around at full speed, there is a chance of missing some important safeguards, not having the right development tools, or being overwhelmed and understaffed.

All in all, the result can be that a hidden bug or problem has gotten into the AI driving system. Not necessarily intentionally so. Simply because there are insufficient checks and balances and other complications involved.

When a self-driving car has a backup human driver in the vehicle, the belief is that this is sufficient to catch any untoward actions that the AI driving system might make. In that way of thinking, it is like having a food taster that is sitting at your dining table. The person is supposed to catch anything that can lead things astray.   

As mentioned in my columns, not everyone believes this to be a proper way of catching problems and some decry that using the public roadways in this manner is wholly improper. If we are using the analogy of a food taster, it is as though the food taster is eating the meal at the same time as a person is dining on the meal. Not a satisfactory form of safety for the person that is simultaneously eating what might be toxic food. In essence, the backup driver might not catch an untoward driving act in time, and the vehicle crashes into a pedestrian or a human-driven car.    

Also, when you see a self-driving car driving down the street, and doing so without a human backup driver, how can you know that the AI driving system is assuredly going to be a safe driver?   

You don’t.   

The usual retort is that if the autonomous vehicle is making its way down a city street and hasn’t hit anyone, this must mean that the AI driving system is doing just fine, thank you very much. Indeed, those that keep taking those rides in self-driving cars and doing those gushing testimonials tend to fall into that same category. They believe that just because the self-driving car safely got them to the grocery store, it is as though this provides ample and unquestionable proof that the self-driving car can go anywhere safely.   

They are falling into a classic mental trap. It is a common statistics-oriented mistake. They assume that their one particular instance supports sweeping generalizations.

It could be that the AI driving system will come upon a situation such as a pedestrian that suddenly darts into the street, and the self-driving car won’t stop in time (let’s assume it could have). The person that had taken the trip to the grocery store had not encountered an interloping pedestrian, and thus they have no idea what the self-driving car will do in such a setting. Out of sight, out of mind.   
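To make that statistical trap concrete, here is a minimal, self-contained illustration (not drawn from any actual self-driving program, and the function name and trip counts are invented for this example): even a large number of incident-free rides places only a weak upper bound on the true per-trip failure rate. The calculation is the standard zero-failure bound, whose 95% version is often summarized as the "rule of three" (roughly 3/n).

```python
def upper_bound_failure_rate(successful_trips: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the per-trip failure probability when zero
    failures have been observed in `successful_trips` independent trips.
    Solves (1 - p)^n = 1 - confidence for p; at 95% this is roughly 3/n."""
    n = successful_trips
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

# A handful of uneventful grocery-store rides says very little:
for n in (10, 1_000, 100_000):
    print(f"{n:>7} incident-free trips -> failure rate could still be as high "
          f"as {upper_bound_failure_rate(n):.4%} per trip (95% bound)")
```

The point is not the exact numbers; it is that a rider's personal sample of a few trips is far too small to support any sweeping claim about overall safety.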

Let’s get back to what the NHTSA says about cars overall and the scope of possible whistleblowing that might be applicable: “NHTSA receives information from whistleblowers on a wide variety of topics, including potential vehicle safety defects, noncompliance with the Federal Motor Vehicle Safety Standards, and violations of the Vehicle Safety Act.”   

Traditionally, this kind of scope tends to focus on the mechanical parts of a car.   

Nowadays, cars have become essentially computers on wheels. The software is increasingly falling within the scope of topics that entail vehicle safety defects, noncompliance, and violations of the VSA.

In what areas of the AI driving systems might we anticipate potential issues? 

First, sensors are used to collect data about the driving scene. Self-driving cars usually make use of video cameras, radar, LIDAR, ultrasonic units, thermal imaging devices, and the like. These are the eyes and ears of the AI driving system. 

Suppose an insider knows that the sensors chosen for the self-driving car model they are working on have safety concerns that are not being dealt with. Once the self-driving car is fielded, it could be that under certain conditions a particular sensor is going to provide faulty data. On top of this, the AI driving system might not have been programmed to properly calculate what to do with that faulty data.

A bad result can happen while on our public roadways.   
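To illustrate the kind of safeguard that can get skipped under schedule pressure, here is a minimal sketch of a plausibility check on a single range sensor. The sensor structure, limits, and fallback behavior are hypothetical assumptions for this example only, not a depiction of any actual AI driving system.

```python
from dataclasses import dataclass

@dataclass
class RangeReading:
    distance_m: float   # reported distance to the nearest obstacle
    timestamp_s: float  # time the reading was taken

# Hypothetical plausibility limits, chosen only for this illustration.
MAX_CREDIBLE_RANGE_M = 300.0
MAX_READING_AGE_S = 0.2

def is_plausible(reading: RangeReading, now_s: float) -> bool:
    """Reject readings that are stale, negative, or beyond the sensor's
    credible range, rather than feeding them to downstream planning."""
    fresh = (now_s - reading.timestamp_s) <= MAX_READING_AGE_S
    in_range = 0.0 <= reading.distance_m <= MAX_CREDIBLE_RANGE_M
    return fresh and in_range

def safe_distance(reading: RangeReading, now_s: float) -> float:
    """Fall back to the most conservative assumption (obstacle very close)
    when a reading cannot be trusted, instead of silently using bad data."""
    return reading.distance_m if is_plausible(reading, now_s) else 0.0
```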

Another possible avenue consists of the fusion of the data across multiple sensors, known as MSDF (multi-sensor data fusion). Perhaps the fusion technique being used treats the video cameras as being "right" even if the radar input says otherwise (the discrepancy is simply discarded rather than raising a flag or triggering some added steps to try to resolve it). How are safety concerns being incorporated into this form of programming?
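As a rough sketch of the alternative being argued for here, the snippet below flags a camera/radar disagreement for resolution instead of silently discarding the radar input. The data structures, threshold, and weighting are illustrative assumptions, not any vendor's actual fusion code.

```python
from typing import NamedTuple, Optional, Tuple

class Detection(NamedTuple):
    distance_m: float
    confidence: float  # assumed to be in (0, 1]

# Hypothetical disagreement threshold, for this illustration only.
MAX_DISAGREEMENT_M = 5.0

def fuse(camera: Optional[Detection],
         radar: Optional[Detection]) -> Tuple[Optional[float], bool]:
    """Return (fused_distance, needs_review). Rather than always trusting
    the camera, a large camera/radar discrepancy raises a flag so the
    planner can act conservatively and the case gets logged for review."""
    if camera is None or radar is None:
        # One sensor missing: use the other, but mark the case for review.
        survivor = camera or radar
        return (survivor.distance_m if survivor else None, True)

    if abs(camera.distance_m - radar.distance_m) > MAX_DISAGREEMENT_M:
        # Disagreement: take the more conservative (closer) estimate
        # and flag the case instead of discarding the radar outright.
        return (min(camera.distance_m, radar.distance_m), True)

    # Agreement: simple confidence-weighted average.
    total = camera.confidence + radar.confidence
    fused = (camera.distance_m * camera.confidence +
             radar.distance_m * radar.confidence) / total
    return (fused, False)
```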

The list goes on and on.   

There is the virtual model that is maintained by the AI driving system and internally depicts the existing and predicted surroundings. There could certainly be safety concerns about how that code is working. There is the action-planner portion of the AI driving system that tries to compute which next actions of the autonomous vehicle are most prudent to take. Again, safety concerns could reside there. Etc.
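To show structurally where such concerns can hide, here is a bare-bones sketch of an action planner that scores candidate maneuvers against a risk estimate coming from the world model. Every class, weight, threshold, and maneuver name here is invented for illustration; it is not a description of any real planner, and a real one would be vastly more elaborate.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    progress: float        # how much the maneuver advances the trip (0..1)
    collision_risk: float  # risk estimate from the world model (0..1)

# Hypothetical weights; a subtle safety issue could be as small as a
# safety weight set too low relative to the progress weight.
PROGRESS_WEIGHT = 1.0
SAFETY_WEIGHT = 10.0
HARD_RISK_CEILING = 0.05

def choose(candidates: list[Candidate]) -> Candidate:
    """Pick the maneuver with the best progress-versus-risk trade-off,
    after filtering out anything above a hard risk ceiling; if nothing
    passes the filter, fall back to the lowest-risk option."""
    viable = [c for c in candidates if c.collision_risk < HARD_RISK_CEILING]
    pool = viable or [min(candidates, key=lambda c: c.collision_risk)]
    return max(pool, key=lambda c: PROGRESS_WEIGHT * c.progress
                                   - SAFETY_WEIGHT * c.collision_risk)

plan = choose([
    Candidate("maintain_speed", progress=0.9, collision_risk=0.02),
    Candidate("brake_hard",     progress=0.1, collision_risk=0.001),
])
print(plan.name)
```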

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here are my details: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion 

Here’s the bottom line.   

As self-driving cars begin to increasingly emerge from labs and R&D efforts, they will be used on public roadways. The number of such self-driving cars on the roadways will start to increase. The increased number of such autonomous vehicles will tend to somewhat increase the odds that something adverse is going to arise. 

Such adverse incidents usually make big headlines in the news.

Did the self-driving car incident that gets the headlines have an internal issue that might have been known by an insider, and yet they decided not to tell anyone what they knew or seriously suspected?

I would wager that we are likely to see an emergence of whistleblowers that come forth about self-driving cars. 

Not all such instances will be valid. Some will.   

The impact is likely that this will cause some automakers and self-driving car tech makers to revisit what they are doing. You can certainly expect that those types of whistleblower reports will stoke further regulatory interest.   

This will be a surprise to some, and not at all a surprise to others, namely those that have worked in such "kitchens" before and already sense that some of the self-driving cars coming onto the roadways are a scarily and inadequately cooked meal.

We’ll find out once the whistles start to blow. 

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website 

