
The Rocky Road Toward Explainable AI (XAI) For AI Autonomous Cars 

By Lance Eliot, the AI Trends Insider  

Our lives are filled with explanations. You go to see your primary physician due to a sore shoulder. The doctor tells you to rest your arm and avoid any heavy lifting. In addition, a prescription is given. You immediately wonder why you would need to take medication and also are undoubtedly interested in knowing what the medical diagnosis and overall prognosis are. 

So, you ask for an explanation. 

In a sense, you have just opened a bit of Pandora’s box, at least in regard to the nature of the explanation that you might get. For example, the medical doctor could rattle off a lengthy and jargon-filled rundown of shoulder anatomy and dive deeply into the chemical properties of the medication that has been prescribed. That’s probably not the explanation you were seeking.

It used to be that physicians did not expect patients to ask for explanations. Whatever was said by the doctor was considered sacrosanct. The very nerve of asking for an explanation was tantamount to questioning the veracity of a revered medical opinion. Some doctors would gruffly tell you to simply do as they have instructed (no questions permitted) or might utter something rather insipid like your shoulder needs help and this is the best course of action. Period, end of story.   

Nowadays, medical doctors are aware of the need for viable explanations. There is specialized “bedside” training that takes place in medical schools. Hospitals have their own in-house courses. Upcoming medical doctors are graded on how they interact with patients. And so on.   

Though that certainly has opened the door toward improved interaction with patients, it does not necessarily completely solve the explanations issue. 

Knowing how to best provide an explanation is both art and science. You need to consider that there is the explainer that will be providing the explanation, and there is a person that will be the recipient of the explanation. 

Explanations come in all shapes and sizes. 

A person seeking an explanation might have in mind that they want a fully elaborated explanation, containing all available bells and whistles. The person giving the explanation might in their mind be thinking that the appropriate explanation is short and sweet. There you have it, an explanation mismatch brewing right before our eyes. 

The explainer might do a crisp explanation and be happily satisfied with their explanation. Meanwhile, the person receiving the explanation is entirely dissatisfied. At this point, the person that received the explanation could potentially grit their teeth and just figure that this is all they are going to get. They might silently walk away and be darned upset, opting to not try and fight city hall, as it were, and merely accede to the minimal explanation proffered. 

Perhaps the person receiving the explanation decides they would like to get a more elaborated version. They might stand their ground and ask for a more in-depth explanation. Now we need to consider what the explainer is going to do. The explainer might believe that the explanation was more than sufficient, and see no need to provide any additional articulation.   

The explainer might be confused about why the initial explanation was not acceptable. Maybe the person receiving the explanation wasn’t listening or had failed to grasp the meaning of the words spoken. At this juncture, the explainer might therefore decide to repeat the same explanation that was just given and do so to ensure that the person receiving the original explanation really understood what was said.   

You can likely anticipate that this is about to spiral out of control. 

The person that is receiving this “elaborate” explanation is bound to notice that it is the same explanation repeated, nearly verbatim. That’s insulting! The person receiving the explanation now believes they are being belittled by the explainer. Either this person will hold their tongue and give up trying to get an explanation, or try hurling insults about how absurd the explanation was.

It can devolve into a messy affair, that’s for sure.   

There is a delicate dance between the explainer and the explanation being provided, on the one hand, and the receiver and the nature of the explanation they desire, on the other.

We usually take these differences for granted. You rarely see an explainer ask what kind of explanation someone wants to have. Instead, the explainer launches into whatever semblance of an explanation that they assume the person would find useful. Rushing into providing an explanation can have its benefits, though it can also start an unsightly verbal avalanche that is going to take down both the explainer and the person receiving the explanation.   

Some suggest that the explainer ought to start by inquiring about the type of explanation that the other person is seeking. This might include asking what kind of background the other person has; in the case of a medical diagnosis, whether the other person is familiar with medical terminology and the field of medicine. There might also be a gentle inquiry as to whether the explanation should be done in one fell swoop or possibly divided into bite-sized pieces. Etc.

The difficulty with that kind of pre-game preparation is that sometimes the receiver doesn’t want to go through that gauntlet. They just want an explanation (or so they say). Trying to do a preamble is likely to irritate that receiver, and they will feel as though the explanation is being purposely delayed. This could even smack of hiding from the facts or some other nefarious basis for delaying the explanation.

All told, we expect to get an explanation when we ask for one, and not have to go through a vast checklist beforehand.   

Another twist to all of this entails the interactive dialogue that can occur during explanations.   

Explanations are not necessarily delivered in a one-breath fashion from start to end. Instead, it is more likely that during the explanation, the receiver will interrupt and ask for clarification or have questions that arise. This is certainly a sensible aspect. If the explanation is going awry, why have it go on and on, when instead the receiver can hopefully tailor or reshape the direction and style of the explanation.

For example, suppose that you are a medical professional and have gone to see a medical doctor about your sore shoulder. Imagine that the doctor doing the diagnosis does not realize that you are a fellow medical specialist. In that case, the explanation offered is likely to be aimed at a presumed non-medical knowledge base and proceed in potentially simplistic ways (with respect to medical advice). You would undoubtedly interrupt and clarify that you know about medicine and that the explanation should be readjusted accordingly.

You might be tempted to believe that explanations can be rated as being either good or bad. Though you could take such a perspective, the general notion is that explanations and their beauty are in the eye of the beholder. One person’s favored explanation might be a disastrous or terrible one for someone else. That being said, there is still a modicum of a basis for assessing explanations and comparing them to each other. 

We can add a twist on that twist. Suppose you receive an explanation and believe it to be a good one. Later on, you learn something else regarding the matter and realize that the explanation was perhaps incomplete. Worse still, it could be that the explanation was intentionally warped to give you a false impression of a given situation. In short, an explanation can be used to purposely create falsehoods. 

That’s why getting an explanation is replete with problems. We often assume that if we ask for an explanation, and if it seems plausible, this attests that the matter is well-settled and above board. The thing is, an explanation can be distorted, either by design or by happenstance, and lead us into a false sense of veracity or truthfulness at hand. 

Another angle to explanations deals with asking for an explanation versus being given an explanation when it has not been requested. An explainer might give you an explanation outright because they assume you want one, whereas you are satisfied to just continue on. At that point, if you disrupt the explanation, the explainer might be taken aback.   

Why all this talk about explanations? Because of AI.   

The increasing use of Artificial Intelligence (AI) in everyday computer systems is taking us down a path whereby the computer makes choices and we the humans have to live with those decisions. If you apply for a home loan, and an AI-based algorithm turns you down, the odds are that all you’ll know is that you did not get the loan. You won’t have any idea about why you were denied the loan.   

Presumably, had you consulted with a human that was doing the loan granting, you might have been able to ask them to explain why you got turned down.   

Note that this is not always the case, and it could be that the human would not be willing or able to explain the matter. The loan granting person might shrug their shoulders and say they have no idea why you were turned down, or they might tell you that company policy precludes them from giving you an explanation. 

Ergo, I am not suggesting that just because a human is in the loop you will necessarily get an explanation. Plus, as repeatedly emphasized earlier, the explanation might be rather feeble and altogether useless.    

In any case, there is a big hullabaloo these days that AI systems ought to be programmed to provide explanations for whatever they are undertaking.   

This is known as Explainable AI (XAI). 

XAI is growing quickly as an area of keen interest. People using AI systems are likely going to expect, and somewhat demand, that an explanation be provided to them. Since the number of AI systems is rapidly growing, there is going to be a huge appetite for having a machine-produced explanation about what the AI has done or is doing.

The rub is that oftentimes the AI is arcane and not readily amenable to generating an explanation.   

Take as an example the use of Machine Learning (ML) and Deep Learning (DL). These are computational pattern matching algorithms that examine data and try to ferret out mathematical patterns. Sometimes the inner computational aspects are complex and do not lend themselves to being explained in any everyday human-comprehensible and logic-based way. 

This means that the AI is not intrinsically set up for providing explanations. In that case, there are usually attempts to add on an XAI component. This XAI either probes into the AI and tries to ferret out what took place, or it sits apart from the AI and has been preprogrammed to provide explanations based on what is assumed to have occurred within the mathematically enigmatic mechanisms.
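To make that bolt-on notion a bit more concrete, here is a minimal sketch of one post-hoc probing approach: perturb the inputs of an opaque model and rank which factors swayed the outcome the most. The loan-scoring function, feature names, and perturbation size are purely hypothetical placeholders, not any particular production system or established XAI library.

```python
# Minimal sketch of a bolt-on XAI probe (hypothetical model and features).
# It nudges each input feature and measures how much the opaque model's
# output shifts, yielding a rough "which factors mattered most" explanation.

from typing import Callable, Dict, List


def probe_explanation(model: Callable[[Dict[str, float]], float],
                      inputs: Dict[str, float],
                      delta: float = 0.1) -> List[str]:
    baseline = model(inputs)
    impacts = {}
    for feature, value in inputs.items():
        nudged = dict(inputs)
        nudged[feature] = value * (1 + delta)  # small relative perturbation
        impacts[feature] = abs(model(nudged) - baseline)
    ranked = sorted(impacts, key=impacts.get, reverse=True)
    return [f"'{name}' had a strong influence on the outcome" for name in ranked[:3]]


def loan_score(x: Dict[str, float]) -> float:
    # Hypothetical opaque scoring model standing in for an ML/DL system.
    return 0.6 * x["income"] - 0.9 * x["debt"] + 0.2 * x["years_employed"]


print(probe_explanation(loan_score, {"income": 50.0, "debt": 40.0, "years_employed": 3.0}))
```

This kind of probe only approximates what the underlying model did, which is precisely why some argue for building explanation capabilities into the AI from the ground up instead.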

Some assert that you ought to build the XAI into the core of whatever AI is being devised. Thus, rather than bolting onto the AI some afterthought about producing explanations, the design of the AI from the ground-up should encompass a proclivity to produce explanations.   

Amidst all of that technological pondering, there are the other aspects of what constitutes an explanation. If you revisit my earlier comments about how explanations tend to work, and the variability depending upon the explainer and the person receiving the explanation, you can readily see how difficult it might be to programmatically produce explanations.   

The cheapest way to go involves merely having pre-canned explanations. A loan granting system might have been set up with five explanations for why a loan was denied. Upon your getting turned down for the loan, you get shown one of those five explanations. There is no interaction. There is no particular semblance that the explanation is fitting or suitable to you in particular. 

Those are the pittance explanations. 
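A bare-bones sketch of that pre-canned approach might look like the following, where the reason codes and wording are entirely made up for illustration and not drawn from any real lender's system:

```python
# Sketch of the "pittance" approach: a fixed lookup of pre-canned denial
# explanations keyed by a reason code (all codes and wording hypothetical).

CANNED_EXPLANATIONS = {
    "DTI_TOO_HIGH": "Your debt-to-income ratio exceeded our limit.",
    "SHORT_CREDIT_HISTORY": "Your credit history is too short.",
    "LOW_CREDIT_SCORE": "Your credit score is below our threshold.",
    "INSUFFICIENT_INCOME": "Your reported income is insufficient for the amount requested.",
    "RECENT_DELINQUENCY": "A recent delinquency appears on your credit report.",
}


def explain_denial(reason_code: str) -> str:
    # One static message per code; no tailoring, no interaction.
    return CANNED_EXPLANATIONS.get(reason_code, "Your application did not meet our criteria.")


print(explain_denial("DTI_TOO_HIGH"))
```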

A more robust and respectable XAI capability would consist of generating explanations on the fly, in real-time, doing so based on the particular situation at hand. In addition, the XAI would try to ascertain what flavor or style of explanation would be suitable for the person receiving the explanation.

  

And this explainer feature ought to allow for fluent interaction with the person getting the explanation. The receiver should be able to interrupt the explanation, getting the explainer or XAI to shift to other aspects or reshape the explanation based on what the person indicates.   
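As a rough illustration of that more tailored approach, here is a sketch that picks an explanation depth based on an assumed receiver profile and escalates, rather than repeats, when the receiver asks for more. The profile flag, wording, and structure are placeholders, not a prescribed design.

```python
# Sketch of a receiver-aware explainer: it chooses an explanation depth from a
# (hypothetical) receiver profile and deepens the answer on follow-up instead
# of repeating the same explanation verbatim.

from dataclasses import dataclass


@dataclass
class Explanation:
    brief: str
    detailed: str


def explain(expl: Explanation, receiver_is_expert: bool) -> str:
    # Start with the depth the receiver is presumed to want.
    return expl.detailed if receiver_is_expert else expl.brief


def follow_up(expl: Explanation) -> str:
    # The receiver interrupted and asked for more; escalate rather than repeat.
    return expl.detailed


route_change = Explanation(
    brief="We changed routes to avoid a delay.",
    detailed="Road construction on the main highway adds roughly 20 minutes; the detour is faster.",
)

print(explain(route_change, receiver_is_expert=False))
print(follow_up(route_change))
```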

Of course, those are the same types of considerations that human explainers should also take into account. This brings up the fact that doing excellent XAI is harder than it might seem. In a manner of speaking, you are likely to need to use AI within the XAI in order to be able to simulate or mimic what a human explainer is supposed to be able to do (though, as we know, not all humans are adept at giving explanations).   

Shifting gears, you might be wondering what areas or applications could especially make use of XAI.   

One such field of endeavor entails Autonomous Vehicles (AVs). We are gradually going to have autonomous forms of mobility, striving toward a mobility-for-all mantra. There will be self-driving cars, self-driving trucks, self-driving motorcycles, self-driving submersibles, self-driving drones, self-driving planes, and the rest.   

You might at first be puzzled as to why AVs would need XAI. We can use self-driving cars to showcase how XAI is going to be a vital element for AVs.

The question is this: In what way will Explainable AI (XAI) be important to the advent of AVs and as showcased via the emergence of self-driving cars? 

Let’s clarify what I mean by self-driving cars, and then we can jump further into the XAI AV discussion.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And XAI   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving. 

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient? 

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

Now that we’ve set the stage appropriately, it is time to dive into the myriad of aspects that come into play on this topic of XAI.

First, be aware that many of the existing self-driving car tryouts have very little if any semblance of XAI in them. The initial belief was that people would get into a self-driving car, provide their destination, and be silently whisked to that locale. There would be no need for interaction with the AI driving system. There would be no need for an explanation or XAI capability.   

We can revisit that assumption by considering what happens when you use ridesharing and have a human driver at the wheel.   

There are certainly instances wherein you get into an Uber or Lyft vehicle and there is stony silence for the entirety of the trip. You’ve likely already provided the destination via the ride-request app. The person driving is intently doing the driving and ostensibly going to that destination. No need to chat. You can play video games on your smartphone and act as though there isn’t another human in the vehicle.   

That’s perfectly fine.   

Imagine though that during the driving journey, all of a sudden, the driver decides to go a route that you find unexpected or unusual. You might ask the driver why there is a change from the otherwise normal path to the destination. That would hopefully prompt an explanation from the human driver.

It could be that the human driver gives you no explanation or provides a flimsy one. Humans do that. (In theory, a properly done XAI would provide an on-target explanation, though this can be challenging.) Maybe the human driver tells you that there is construction taking place on the main highway, and that to avoid a lengthy delay, an alternative course is being undertaken.

You might be satisfied with that explanation. On the other hand, perhaps you live in the area and are curious about the nature of the construction taking place. Thus, you ask the driver for further details about the construction. In a sense, you are interacting with an explainer and seeking additional nuances or facets about the explanation that was being provided. 

Okay, put on your self-driving car thinking-cap and consider what a passenger might want from an XAI. A self-driving car is taking you to your home. The AI driving system unexpectedly diverts from the normal path that would be used. You are likely to want to ask the AI why the driving journey is deviating from your expected traversal. Many of the existing tryouts of self-driving cars would not have any direct means of having the AI explain this matter, and instead, you would need to connect with a remote agent of the fleet operator that oversees the self-driving cars.

In essence, rather than building the XAI, the matter is shunted over to a remote human to explain what is going on. This is something that won’t be especially scalable. In other words, once there are hundreds of thousands of self-driving cars on our roadways, the idea of having the riders always needing to contact a remote agent for the simplest of questions is going to be a huge labor cost and a logistics nightmare.   

There ought to be a frontline XAI that exists with the AI driving system.   

Assume that a Natural Language Processing (NLP) interface is coupled with the AI driving system, akin to the likes of Alexa or Siri. The passenger interacts with the NLP and can discuss common actions such as asking to change the destination midstream, or asking to swing through a fast-food eatery drive-thru, and so on.   

In addition, the passenger can ask for explanations.   
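To give a flavor of what that frontline capability might look like, here is a simplified sketch that routes passenger utterances either to ride commands or to the explanation path. The keyword matching is merely a stand-in for a genuine Alexa- or Siri-style NLP interface, and all class and method names are hypothetical.

```python
# Sketch of a frontline, in-vehicle query handler. Keyword matching stands in
# for a real NLP interface; the stub classes are hypothetical placeholders.

class StubXAI:
    def explain_latest_maneuver(self) -> str:
        return "I braked because a dog ran toward the vehicle."


class StubRideController:
    def change_destination(self, place: str) -> str:
        return f"Destination updated to {place}."


def handle_utterance(text: str, xai: StubXAI, ride: StubRideController) -> str:
    lowered = text.lower()
    if "why" in lowered or "explain" in lowered:
        return xai.explain_latest_maneuver()  # explanation requests go to the XAI
    if "take me to" in lowered:
        return ride.change_destination(lowered.split("take me to")[-1].strip())
    return "You can ask about the route, the last maneuver, or change the destination."


print(handle_utterance("Why did we just stop?", StubXAI(), StubRideController()))
print(handle_utterance("Take me to the airport", StubXAI(), StubRideController()))
```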

Suppose the AI driving system has to suddenly hit the brakes. The rider in the self-driving car might have been watching an especially fascinating cat video and not be aware of the roadway circumstances. After getting bounced around due to the harsh braking action, the passenger might anxiously ask why the AI driving system made such a sudden and abrupt driving action.

You would want the AI to immediately provide such an explanation. If the only possible way to get an explanation involved seeking a remote agent, envision what that might be like. There you are, inside the self-driving car, and it has just taken radical action, but you have no idea why it did so. You have to press a button or somehow activate a call to a remote agent. This might take a few moments to engage.   

Once the remote agent is available (assuming that one is readily available), they might begin the dialogue with the usual canned speech, such as “welcome to the greatest of all self-driving cars.” You, meanwhile, have been sitting inside this self-driving car, which is still merrily driving along, and yet you have no clue why it out-of-the-blue hit the brakes.

The point here is that by the time you engage in a discussion with the human remote operator, a lot of time and driving aspects could have occurred. During that delay, you are puzzled, concerned, and worried about what the AI driving system might crazily do next.   

If there was an XAI, perhaps you would have been able to ask the XAI what just happened. The XAI might instantly explain that there was a dog on the sidewalk that was running toward the self-driving car and appeared to be getting within striking distance. The AI driving system opted to do a fast braking action. The dog got the idea and safely scampered away.   

A timely explanation, and one that then gives the passenger solace and relief, allowing them to settle back into their seat and watch more of those videos about frisky kittens and adorable puppies.   
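One plausible way to make such timely explanations feasible is for the AI driving system to keep a running log of its maneuvers and the reasons behind them, which the on-board XAI can then draw upon the moment a passenger asks. Here is a minimal sketch of that idea; the event fields and wording are hypothetical, not any actual driving stack’s schema.

```python
# Sketch of a decision log that an on-board XAI could consult to answer a
# "why did you just brake?" question promptly (fields and wording hypothetical).

import time
from collections import deque


class ManeuverLog:
    def __init__(self, max_events: int = 100):
        self.events = deque(maxlen=max_events)  # keep only recent maneuvers

    def record(self, maneuver: str, reason: str) -> None:
        self.events.append({"t": time.time(), "maneuver": maneuver, "reason": reason})

    def explain_latest(self) -> str:
        if not self.events:
            return "No recent maneuvers to explain."
        latest = self.events[-1]
        return f"I performed a {latest['maneuver']} because {latest['reason']}."


log = ManeuverLog()
log.record("hard braking maneuver", "a dog ran toward the vehicle and entered the planned path")
print(log.explain_latest())
```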

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/   

Conclusion   

There are lots and lots of situations that can arise when riding in a car and for which you might desire an explanation. The car is suddenly brought to a halt. The car takes a curve rather strongly. The car veers into an adjacent lane without a comfortable margin of error. The car takes a road that you weren’t expecting to be on. Seemingly endless possibilities exist.   

In that case, if indeed XAI is notably handy for self-driving cars, you might be wondering why it isn’t especially in place already.   

Well, admittedly, for those AI developers under intense pressure to devise AI that can drive a car from point A to point B, and do so safely, the aspect of providing machine-generated explanations is pretty low on the priority list. They would fervently argue that it is a so-called edge or corner case. It can be gotten to once sufficiently capable self-driving cars have been achieved.

Humans that are riding in AVs of all kinds are going to want to have explanations. A cost-effective and immediately available means of providing explanations entails the embodiment of XAI into the AI systems that are doing the autonomous piloting.   

One supposes that if you are inside a self-driving car and it is urgently doing some acrobatic driving maneuver, you might be hesitant to ask what is going on, in the same manner that you might worry about distracting a human driver who was doing something wild at the wheel.

Presumably, a well-devised XAI won’t be taxing on the AI driving system, and thus you are free to engage in a lengthy dialogue with the XAI. In fact, the likeliest question that self-driving cars are going to get is how the AI driving system functions. The XAI ought to be readied to cope with that kind of question.

The one thing we probably should not expect XAI to handle will be those questions that are afield of the driving chore. For example, asking the XAI to explain the meaning of life is something that could be argued as out-of-bounds and above the pay grade of the AI.   

At least until the day that AI does become sentient, then you can certainly ask away. 

Copyright 2021 Dr. Lance Eliot  http://ai-selfdriving-cars.libsyn.com/website 
