Fouled Timestamps on Mars Helicopter Ingenuity Have Lessons for Autonomous Cars 

By Lance Eliot, the AI Trends Insider 

Have you ever glanced at a snapshot and asked someone when they took that photo? I’m sure that you have. You wanted to place the picture into a context of date and time. Maybe the photo was snapped years ago and showcases the past. Or perhaps the picture is quite recent and displays the way things are today. All in all, knowing when a photo was taken can be useful and at times essential.   

In the computer field, we often refer to timestamping things.   

When a computer is hooked up to a camera, the taking of a picture is usually accompanied by adding a timestamp to the collected image. The timestamp merely indicates the date and time of the picture. This can be stuffed inside the data that contains the actual image or might be added as a supplemental piece of metadata that otherwise describes or indexes the photo. 
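As a concrete illustration, here is a minimal sketch of digging a timestamp out of a photo's metadata, assuming the image carries standard EXIF data and that the Python Pillow library is available (the filename is hypothetical):

```python
# A minimal sketch of pulling an embedded timestamp out of a photo's
# metadata, assuming the image carries standard EXIF data and that the
# Pillow library is installed. The filename is hypothetical.
from PIL import Image

def read_timestamp(path):
    """Return the image's EXIF capture time, if one was recorded."""
    exif = Image.open(path).getexif()
    # Tag 0x8769 is the EXIF sub-IFD; tag 36867 is DateTimeOriginal.
    original = exif.get_ifd(0x8769).get(36867)
    return original or exif.get(306)  # fall back to the basic DateTime tag

print(read_timestamp("dashcam_0001.jpg"))  # e.g. "2021:06:14 09:31:07"
```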

If a series of photos is being taken, the timestamps become extremely important.

Imagine that you own a car with a camera mounted on the dashboard, pointed at the roadway. You opt to go on a driving journey and decide to have the camera periodically take pictures of the road ahead. Picture after picture is snapped. No big deal, easy-peasy.

During the drive, at one point you are in a residential neighborhood and a dog wanders across the street. Fortunately, you see the dog and come to a stop to let it proceed safely. Shortly thereafter, a toddler runs across the street. Since you had already come to a halt, the child is able to dart through the street without incident. There was nothing especially untoward about the event, and you might chalk it up to just an everyday driving trek.

A few weeks later you tell someone generally about the driving experience. As you begin to explain the travails, you suddenly cannot remember whether the dog appeared first and then the child came afterward, or possibly that the child was the first to come through the street and the dog was following the toddler. It would be easy to have muddled the relatively uneventful matter and particular sequence in your mind. 

Aha, you have those pictures stored in your dashboard camera!   

You download the pictures to your laptop. 

While pulling up each of the images, suppose that there wasn’t an apparent timestamp. This means that each picture lacks any definitive indication of the date and time it was taken. You can plainly see each picture, and you can attest that they are accurate portrayals of what you saw during your driving effort. Unfortunately, they are lacking timestamps.

It would almost be as though you scattered the pictures around on a tabletop and had to figure out which came before which other one. This can be a tricky puzzle to solve. 

Sure enough, you find the picture that shows the dog that was in the street, and you find the picture that shows the toddler that was in the street. But you do not know for sure which photo is first in order. Darn, this does not aid in solving your quest to remember the sequence of those events.   

As mentioned, timestamps can be crucial. 

If you were determined, you could closely inspect each of the two photos. There is a chance that there could be a sighting of the child in the photo that has the dog in the middle of the street. If the child is off to the side and running away from the street, you could infer that the child likely came first and the dog followed. Likewise, if the photo of the child in the middle of the street showcases the dog, either heading toward or away from the street, you can try to infer the sequence that must have occurred.   

It sure would be a lot simpler to have the timestamps.   

Well, you dig around and discover that there is a timestamp embedded into the metadata of the image. You use a special program to ferret out the timestamp. The issue of ascertaining the sequence seems to be solved.   

Life is never that easy, of course, and you suddenly notice that the timestamp has a date of June 31, 1777, which makes absolutely no sense at all. You know that cannot be right. The times shown on each of the pictures indicate an evening time period, though you know for sure that you encountered the dog and the toddler during the daytime hours.   

Yikes, the timestamps are messed up.   

How far are you willing to trust the timestamps? 

For example, you could discard the date and assume that it was somehow improperly preset. You could likewise ignore that the time stated was in the evening and assume that the clock was not properly set to begin with. At least you can check whether the time listed on each photo will reveal which one was taken first.

Are you willing to accept this as irrefutable evidence of which photo was taken first, and therefore whether the dog or the child was the first to enter the street?

Seems like you would be on shaky ground. The fact that the date is wrong is worrisome. The fact that the times are apparently wrong is also troubling. At this juncture, you are going to presume that at least the sequence is right, since the timestamp shows one time on the dog photo and another time on the photo containing the toddler.
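To make that fallback concrete, here is a tiny sketch (with hypothetical filenames and stamps) of sorting photos by their suspect timestamps: the ordering survives a clock that is merely offset, which is precisely the assumption being leaned on.

```python
# A small sketch of the fallback described above: even if the absolute
# date and time are wrong, a clock that is merely offset still preserves
# the *ordering*, so sorting by the suspect timestamps may recover the
# sequence. The filenames and stamps are illustrative.
photos = [("toddler.jpg", "1777-06-31 19:42:10"),
          ("dog.jpg",     "1777-06-31 19:41:40")]

for name, stamp in sorted(photos, key=lambda p: p[1]):
    print(name, stamp)
# dog.jpg prints first -- but only if the clock error was a constant
# offset, which is exactly the assumption on shaky ground here.
```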

If you were only pursuing this quest out of innate curiosity, it might not matter much whether you accepted that the timestamp was presumably apt and that you now knew which event happened first. Would you be quite so sanguine if the matter was vitally important? Envision that for some reason this is a high-stakes issue that needs to be seriously resolved.

I guess we could play that ever-popular kid’s game of asking whether you would be willing to bet your life on it. The photos appear to clearly demonstrate that you did indeed see a dog and you did indeed see a child. The sequence is regrettably still somewhat up in the air.   

Speaking of being up in the air, let’s shift our attention to the topic of autonomous helicopters. We’ll swing back around to the whole matter about photos and timestamps after covering this added ground.   

You might be aware that NASA has put an autonomous helicopter on Mars. The helicopter is part of the overall Mars 2020 Perseverance rover mission. With the catchy name of Ingenuity, the autonomous helicopter has already established new records with its flights on Mars, and the NASA teams should be proud of those accomplishments.

The reason that the helicopter is referred to as autonomous is that it has to fend for itself when flying around. If human controllers here on earth were to try to fly the craft directly, the time delay due to the vast distance and transmission times would make direct control infeasible. By the time that a human pilot here saw what was taking place there, and then issued a piloting command, which then had to be transmitted and received, the helicopter could have encountered a problem that utterly ruined the faraway craft.

By having developed an autonomous piloting system, Ingenuity can pretty much fly around on its own.   

That being said, this does not imply that the craft is willy-nilly going wherever it wants to go. NASA and the Jet Propulsion Laboratory (JPL) have established predetermined missions, and each such mission has been carefully planned and prepared for. 

A series of missions are being performed. Each mission has particular goals of what is to be accomplished. As the missions proceed, they are getting a bit more complex each time, almost like starting by crawling and then proceeding to walk and then running (or, in the case of a helicopter, short up and down vertical flights that are followed by longer distance and multi-pathed horizontal flights). 

This brings us to Flight Six and an interesting in-flight anomaly that could have led to the autonomous helicopter nosediving into the Mars surface. If that were to occur at any point during the Mars excursion, you can likely assume the helicopter is out for the count. There would be no means to repair the craft. Alone it would sit, having done its part for science, and be a silent marker saying that humans have been here.   

What happened with Ingenuity?   

I’ll start by emphasizing that it was able to complete the assigned mission and exists, still intact, and ready for the next mission (well, with some tweaking to be done remotely). 

Partway through the sixth mission, the craft began to unnervingly roll and pitch in an undesirable manner. These rapid tilts and adjustments in velocity can be seen in the video recorded on Mars. Anyone watching the recorded video has to feel their heartstrings being pulled, since the autonomous helicopter looks like it has gone crazy and appears to be flying erratically. You would likely assume that something awful has happened to Ingenuity and that its demise (of sorts) is imminent.

The basis for the wild flying is somewhat complicated, but there is one word that sums up the issue: timestamps.

Whoa, you must be saying to yourself, weren’t we just discussing timestamps a moment ago? How fortuitous! Actually, the point of that earlier saga about timestamps was to get you ready for identifying what happened to Ingenuity on Mars during its Flight Six.   

Strap yourself in and let’s jump into the fray.   

When the autonomous helicopter is flying, it tends to use a downward-facing camera aimed at the Mars surface. Roughly thirty pictures are taken every second. Each picture is computationally analyzed by an onboard image processing system. Over time, a series of pictures can aid in ascertaining where the craft is, along with changes in velocity, altitude, attitude, position, and the like.
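To see why the timestamps are so central, consider a simplified sketch (illustrative only, not flight code) of how velocity falls out of two images: displacement divided by the time between them. A wrong timestamp corrupts the estimate directly.

```python
# A simplified sketch (illustrative, not flight code) of why timestamps
# matter to vision-based navigation: velocity is displacement divided by
# the time between two frames, so a wrong timestamp corrupts the
# estimate directly. The values assume a roughly 30 Hz camera.
def estimate_velocity(shift_m, t_prev, t_curr):
    """Ground speed implied by feature shift between two navcam frames."""
    dt = t_curr - t_prev  # seconds between the two images
    if dt <= 0:
        raise ValueError("non-increasing timestamps")
    return shift_m / dt

print(estimate_velocity(0.05, 10.000, 10.033))  # ~1.5 m/s (correct pairing)
# The same shift paired with a timestamp that is off by one frame
# interval implies roughly half the speed -- a phantom error the
# controller would then try to "correct":
print(estimate_velocity(0.05, 10.000, 10.066))  # ~0.76 m/s
```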

Notably, this encompasses comparing one photo to another photo. 

Hint: Remember the earlier tale of the photos captured about the toddler in the street and the dog in the street. This will come in handy in a moment.   

This particular camera on Ingenuity is generically referred to as a navcam, meaning that it is a camera that primarily aids the navigation of the craft. The computer that is autonomously piloting the craft uses the images and the analyses of those images to help figure out where it is, where it is heading, and so on. This analysis is matched with other navigational capabilities, including the use of an Inertial Measurement Unit (IMU).

With a bit of a drum roll, I now present you with the official statement about the anomaly (as per the NASA website): “Approximately 54 seconds into the flight, a glitch occurred in the pipeline of images being delivered by the navigation camera. This glitch caused a single image to be lost, but more importantly, it resulted in all later navigation images being delivered with inaccurate timestamps. From this point on, each time the navigation algorithm performed a correction based on a navigation image, it was operating based on incorrect information about when the image was taken. The resulting inconsistencies significantly degraded the information used to fly the helicopter, leading to estimates being constantly ‘corrected’ to account for phantom errors. Large oscillations ensued.” 
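To make that failure mode tangible, here is a toy illustration, based on my own assumption about the pairing mechanism rather than NASA's published code, of how losing a single frame can skew every later timestamp:

```python
# A toy illustration of one plausible failure mode (an assumption on my
# part, not NASA's published design): if capture timestamps are queued
# separately from frames and paired by position, losing a single frame
# shifts every later pairing by one slot.
from collections import deque

frames = deque(["img0", "img1", "img2", "img3", "img4"])
stamps = deque([0.000, 0.033, 0.067, 0.100, 0.133])  # one per capture

frames.remove("img2")  # the glitch: one image is lost in the pipeline

while frames:
    print(frames.popleft(), "tagged", stamps.popleft())
# img3 is tagged 0.067 and img4 is tagged 0.100 -- every image after
# the drop now carries the timestamp of the frame before it.
```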

It doesn’t yet seem fully clear what the timestamps were or how they got misassigned or misaligned, but we’ll take this as it is for now and go with it.

You might be wondering why the craft didn’t go so haywire that it crash-landed. Glad that you asked. 

One separate but integral aspect of the Ingenuity autonomous piloting is that it apparently tries to keep the craft within certain preferred or reasonable thresholds of operation.

This is oftentimes a kind of failsafe mechanism for Autonomous Vehicles (AVs).   

If everything else is going berserk, at least the core driving or piloting component is supposed to keep the vehicle from veering radically outside normally expected parameters. The programmers typically establish stated thresholds as boundaries to stay within, and even if some other internal component tries to force the vehicle beyond those limits, the core piloting system refuses to comply and tries to counter the action by blocking or counterbalancing what is taking place.
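Here is a minimal sketch of that sort of envelope protection, assuming a simple clamp-based failsafe; the parameter names and limits are hypothetical, not Ingenuity's actual flight parameters:

```python
# A minimal sketch of the envelope protection just described, assuming
# a simple clamp-based failsafe. The parameter names and limits are
# hypothetical, not Ingenuity's actual flight parameters.
MAX_PITCH_DEG = 15.0  # assumed safe-operation boundary

def clamp_pitch_command(requested_deg):
    """Refuse to command an attitude outside the safe envelope."""
    return max(-MAX_PITCH_DEG, min(MAX_PITCH_DEG, requested_deg))

print(clamp_pitch_command(42.0))   # a berserk request is limited to 15.0
print(clamp_pitch_command(-3.5))   # an in-bounds request passes through
```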

As per NASA’s explanation about how Ingenuity overcame the timestamp snafu: “One reason it was able to do so is the considerable effort that has gone into ensuring that the helicopter’s flight control system has ample ‘stability margin’: We designed Ingenuity to tolerate significant errors without becoming unstable, including errors in timing. This built-in margin was not fully needed in Ingenuity’s previous flights, because the vehicle’s behavior was in-family with our expectations, but this margin came to the rescue in Flight Six.” 

There was also a lucky rabbit’s foot that helped out too. 

When Ingenuity reaches the final phase of a flight and starts its descent, the navcam is no longer actively used for navigational purposes. This makes sense because dust is likely to be tossed up as the autonomous helicopter gets closer to the Mars surface, which would obscure the images or make them unusable or unreliable. The NASA description is stated this way: “That design decision also paid off during Flight Six: Ingenuity ignored the camera images in the final moments of flight, stopped oscillating, leveled its attitude, and touched down at the speed as designed.”
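An illustrative sketch of gating a sensor out of the navigation fusion by flight phase might look like this (assumed logic, not the actual flight software):

```python
# An illustrative sketch (assumed logic, not the actual flight software)
# of gating a sensor out of the navigation fusion by flight phase:
def navigation_sources(phase):
    sources = ["imu", "altimeter"]
    if phase != "DESCENT":        # navcam is ignored once descent begins,
        sources.append("navcam")  # since dust can degrade the images
    return sources

print(navigation_sources("CRUISE"))   # ['imu', 'altimeter', 'navcam']
print(navigation_sources("DESCENT"))  # ['imu', 'altimeter']
```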

You could quibble somewhat with this latter aspect. As suggested, it might be more a matter of luck than purposeful design. It seems highly unlikely that the designers envisioned the navcam itself generating difficulties and therefore concluded that it made indubitable sense to switch away from it upon landing. Instead, in this case, a design decision made for another reason became an unintended and yet altogether welcome helping hand.

Now that we’ve covered the matter of an autonomous helicopter operating on Mars, let’s shift our attention down to earth. There are going to be all sorts of autonomous vehicles here on earth, including autonomous helicopters, autonomous drones, autonomous trucks, autonomous ships, autonomous cars, and so on. For ease of reference, consider those autonomous vehicles to be self-driving.   

I’d like to see what lessons we can learn from the Ingenuity situation and apply those to the advent of AI-based true self-driving cars. 

Self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.

Here’s an intriguing question that is worth pondering: How do timestamps and image processing apply to AI-based true self-driving cars and could something akin to the Ingenuity anomaly happen to a self-driving car? 

Before jumping into the details, I’d like to clarify what is meant when referring to true self-driving cars.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable). 

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite human drivers repeatedly posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And Timestamps Issues 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving. 

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. 

Why this added emphasis about the AI not being sentient?   

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. 

Let’s dive into the myriad aspects that come to play on this topic. 

A suitable place to start involves the appearance of an anomaly while an autonomous vehicle is in the field and essentially underway. That’s a bad time for things to go awry.   

In the case of a Mars autonomous helicopter, there had likely been a vast amount of careful design, building, and testing long before the AV was sent along to Mars. Yet, despite rigorous efforts to identify beforehand potential issues that might arise, a quite serious issue nonetheless did arise. 

Some might have heartburn over referring to the issue as a so-called “anomaly,” which perhaps uses queasy or loose semantics to overshadow the fact that this seems to be an outright error or bug in the system. The less alarming, innocuous wording of “anomaly” seems to soften the bruising qualm that the developers, leadership, and development process let the flaw slip through, surfacing while the craft was some 34 million miles away from home and in the midst of its mission.

Questions abound. How did a dropped image lead to the consequent series of dysfunctional outcomes? Wouldn’t a dropped image be considered part of the foundational design and have been anticipated? If not, at least this ought to have been a test case. If it was a test case, what happened during the test? Did it not reveal the subsequent problems associated with the timestamps? Maybe it did but wasn’t noticed, which alone is cause for concern. What provision did the design have for the verification or validation of timestamps? Were there no tests that purposely messed with the timestamps to see how the rest of the system would react? And so on.
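A hedged sketch of the kind of runtime validation those questions point toward might flag frames whose timestamps run backward or stray far from the expected capture interval (the numbers below are illustrative):

```python
# A hedged sketch of timestamp validation: flag frames whose timestamps
# run backward or stray far from the expected capture interval. The
# numbers are illustrative, not any actual system's parameters.
EXPECTED_DT = 1.0 / 30.0  # nominal seconds between frames at 30 Hz
TOLERANCE = 0.010         # assumed allowable jitter, in seconds

def timestamp_plausible(t_prev, t_curr):
    dt = t_curr - t_prev
    return dt > 0 and abs(dt - EXPECTED_DT) <= TOLERANCE

assert timestamp_plausible(10.000, 10.033)      # nominal frame
assert not timestamp_plausible(10.000, 10.066)  # off by a frame interval
assert not timestamp_plausible(10.000, 9.990)   # time ran backwards
```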

In any case, fortunately, the AV was able to remain aloft and landed without damage or being destroyed. We can be thankful for that. 

This, though, seems to have happened through reliance on an overall failsafe rather than a distinct, formulated provision to cope with this specific error or bug. A catchall saved the day.

The thing is, the design and testing that go into a craft like Ingenuity are among the most impressive and robust work done on AVs of any kind. In contrast, for some of the existing self-driving car development efforts, the quality and intensity of design and testing are not nearly as thorough and exhaustive.

In short, if this type of bug can slip through and end up in the final system of an especially meticulously devised AV, we ought to be keeping a keen eye on the efforts to develop self-driving cars.

We can also ponder the consequences in the case of a self-driving car. A downed autonomous helicopter on Mars as a trial experiment would ostensibly be bad, sad, and disappointing, but no one would be dead. A self-driving car here on earth that experiences a serious bug or error while on the public roadways could spell disaster, with the possibility of a fatal car crash harming the occupants of the self-driving car, along with the possibility of injuring or killing pedestrians and riders in other nearby cars.

That covers the somewhat generic matter of hidden bugs or errors and the need to surface them beforehand and excise them, or at least have purpose-built, targeted provisions to cope with them (in addition to, and not in lieu of, overall failsafe capabilities).

That being said, is there any chance of dropped images from the video cameras that are being used on self-driving cars?   

You might be wondering about that. Perhaps there is zero chance of a similar error arising for self-driving cars. No such luck. Sorry to say, there is a strong chance of this occurring (due to space constraints, I’ll not go into the details herein, though I am likely to cover this in later columns).

Is there any chance of the timestamps utilized in self-driving cars going awry in one manner or another? Absolutely. 

But that doesn’t necessarily imply that the various self-driving car efforts are all focusing on those specific potential issues and devoting substantive resources toward those particular types of errors or bugs. Keep in mind that many of the AI development teams are already stretched thin just trying to get their self-driving car to successfully go from point A to point B, safely and without incident. If there are qualms about dropped images or a snafu with timestamps, these are likely considered low-chance possibilities right now and are not getting outsized attention at this time.

Perhaps the Ingenuity snafu will be a helpful wake-up call. 

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion 

Some insist that self-driving cars should be entirely tested on private closed tracks or proving grounds before they are allowed on public roadways. Some similarly insist that self-driving cars should entirely be tested via computer-based simulations, doing so before being allowed on public roadways. Presumably, a combination of simulation and proving grounds would seem relatively satisfactory to those camps (I’ve discussed this overall topic at length, see my columns).   

Will the catchall failsafe provisions that each automaker or self-driving tech firm is selectively devising be sufficient to overcome any unforeseen errors or bugs?

Don’t know. Can’t say for sure. 

We know this much, namely that the lives of that dog and toddler running into the street might depend on it. 

Go ahead and put an indisputable timestamp on that solemn thought.   

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website 
