by Dot Cannon
“How are we going to get to Car 2.0?”
“What is Car 2.0?”
With these questions, and an announcement, Autonomous Vehicle Sensors Conference Chair Will Tu started off a new day-long conference, on Tuesday morning in San Jose’s McEnery Convention Center.
This inaugural conference, just prior to the opening of the Sensors Expo and Conference on Wednesday, would provide a number of possible answers.
“I think (this is) an exciting time to be in automotive,” said Will, who is Senior Director of Xilinx’s Michigan-based Automotive Business Unit, in his opening remarks. He then told the audience that Daimler had selected his company to provide its AI-based auto applications.
Currently, Will said, autonomous vehicles were at Level Two of five possible ADAS (advanced driver-assistance system) levels. (Level Two, according to a slide which the Yole Developpement speakers would show during their presentation, is “feet off”. Level Three would be “hands off”, Level Four “eyes off”, and Level Five, “mind off”.)
“I think the question is, ‘which is the right path?’,” Will continued. “Incremental evolution, or is it going to be a complete revolution to get to Car 2.0?”
“Before we get to autonomy, we have to have this ability to get to Level Three. Are we monitoring the driver?…What if the driver has a heart attack?”
Will previewed the day’s events, which would include a LIDAR “face-off” from four different companies.
“There are 67 LIDAR startups in the world right now, all in competition. Each of the four …companies (we’ll hear) will be pitching to you what they’re doing that they think is unique.
“By the end of this day, you’ll get a good feel of what’s going on with all the elements (that make ADAS possible).”
An approaching wave
“Here, we wanted to show what we think will be the evolution,” said Dr. Guillaume Girardin, Yole Developpement MEMS and Sensors Analyst. He was the first speaker of Yole’s joint presentation on “A View From Above: OEM Perspective of the Market”.
“Today, we have some Level Three (vehicles) on the road, but very few of them. So we are at the start of this Level Three wave.”
But, he said, that wave wasn’t coming quickly. More than seventy percent of all vehicles sold, Guillaume predicted, would integrate some autonomous capabilities–by the year 2045.
“When you look at the different sensors, you can split (them) into three different categories,” he continued. Those categories? Safety, comfort and engine management.
“Today what is driving (evolution of ADAS systems) is the safety and the comfort.”
Offering a big-picture look at the LIDAR market for robotic vehicles, Guillaume said that its current high manufacturing cost precludes immediate adoption for consumer applications, as opposed to commercial ones such as trucking companies and taxi services. But with a complex market and evolving technology, things could change.
“You can see that, in Level Three, in 2022 we put some LIDAR on these cars. So we hope some of the players will be ready for this date.”
But, he said later in his presentation, being ready didn’t mean a race to market.
“I think we have to keep in mind that there’s room for everyone. You have life in your hands when you are designing those vehicles. You have to be careful enough (with) the evolution of such technology on the roads because you can have some incidents for sure.
“The smallest incident can be painful, as Uber can testify.”
Interactive value
“We can see two different things (in the robotic vehicles ecosystem),” said Yole Developpement Software and Market Analyst Yohann Tschudi.
Those two things? Autonomous vehicles and ADAS, or advanced driver-assistance systems.
In the future, Yohann said, sensor fusion could be the key to making everything work well. “Each radar, each sensor, has a precise role in the chain. Right now, the camera is one of the most important (elements).”
“If we look at cameras and radar and LIDAR, this in terms of complexity is incredible. So, which set of sensors, for autonomy? There is no rule. At this point, we have two different types of data.”
“We’re trying to find the best hardware solution,” Yohann continued. “For computing, you need to have (the parts working together), and if you look at the players, they are not playing very well.”
An ecosystem of fusion, he said, was the direction he foresaw the industry taking.
“It’s not a value chain. It’s more like a value network. Everybody is talking to everybody.”
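To make the fusion idea concrete, here is a minimal, hypothetical Python sketch of late fusion: detections from a camera pipeline and a LIDAR pipeline (assumed, for simplicity, to be projected into the same image plane) are matched by overlap, and objects confirmed by both sensors get a confidence boost. It is only an illustration of the concept, not any vendor’s actual stack.

```python
# Hypothetical late-fusion sketch: camera and LIDAR detections that agree
# reinforce each other. Real automotive stacks fuse far richer data (raw
# point clouds, radar tracks, timing), but the idea is the same.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str       # e.g. "car", "pedestrian"
    box: tuple       # (x_min, y_min, x_max, y_max) in pixels
    score: float     # detector confidence, 0..1

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def fuse(camera_dets, lidar_dets, iou_threshold=0.5):
    """Late fusion: boost camera detections confirmed by a LIDAR detection."""
    fused = []
    for cam in camera_dets:
        confirmed = any(
            iou(cam.box, lid.box) >= iou_threshold and cam.label == lid.label
            for lid in lidar_dets
        )
        score = min(1.0, cam.score + 0.2) if confirmed else cam.score
        fused.append(Detection(cam.label, cam.box, score))
    return fused

if __name__ == "__main__":
    cam = [Detection("car", (100, 120, 220, 200), 0.70)]
    lid = [Detection("car", (105, 118, 225, 205), 0.80)]
    print(fuse(cam, lid))   # the "car" seen by both sensors gets a boosted score
```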
Deep learning–slimmed down
“Deep learning is so important to the algorithm,” said DEEPHi Partner and CTO Yi Shan, in his presentation on “Machine Learning Development”. “But there is a big issue: the computation cost.”
“We always talk about peak performance. But when you design hardware, the peak performance is easy to achieve if you add enough resources to it.”
New system-level platforms, he said, would be needed to host AI algorithms and applications for robotic vehicles.
In his presentation, Shan outlined a DEEPHi platform that exploited sparsity.
“This chip is very small and could easily make it into the sensor modules,” he said, displaying a slide.
“The second part is about the software. Inside our systems (are there some elements that are unnecessary?)
“We found this kind of model compression is (effective),” he explained. “And for pruning, our requirement is that the results stay the same for the best models.”
Accuracy, he said, stayed nearly the same with the compression he and his team used for the software.
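Shan did not walk through DEEPHi’s compression pipeline in code, but one widely published technique in this space is magnitude-based weight pruning. The NumPy sketch below is a generic illustration of that idea, zeroing out the smallest weights so a layer becomes sparse; it is a stand-in for the concept, not DEEPHi’s actual method.

```python
# Generic illustration of magnitude-based weight pruning: small weights are
# zeroed so the resulting sparse layer needs less computation, while the
# network's outputs (and so its accuracy) are meant to stay nearly the same.
# This is NOT DEEPHi's platform, only a sketch of the general technique.

import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256))          # a toy fully connected layer
    w_sparse = prune_by_magnitude(w, 0.8)    # keep only the largest 20%
    kept = np.count_nonzero(w_sparse) / w_sparse.size
    print(f"fraction of weights kept: {kept:.2f}")
```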
“For the detection, our motivation is to detect (important objects including vehicles, pedestrians and traffic signs),” Shan continued.
“This is the most accurate algorithm right now, in the academic community.”
The challenges of LIDAR
“LIDAR was our first product, so that’s our bread and butter,” said Deepen AI Founder and CEO Mohammad Musa, beginning his presentation on LIDAR Training Data Best Practices. “All of our tools have a lot of AI in them.”
But creating artificial-intelligence training systems is cumbersome, he continued.
“Basically, a human sits down and draws contours and boxes, to train AI to recognize lanes, vehicles, etcetera,” Mohammad said. “Another type of labeling is called semantic segmentation. This is very manual and very labor-intensive. People have been spending, literally, tens of thousands of dollars on it.”
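Because, as Mohammad notes later, the industry has not converged on labeling formats, the short Python sketch below shows only one hypothetical record structure for this kind of training data: a camera frame with a few 2D bounding boxes and a reference to a per-pixel segmentation mask. All field names here are invented for illustration.

```python
# Hypothetical label record for one camera frame, combining 2D bounding
# boxes with a reference to a semantic segmentation mask. Formats vary
# widely across the industry; this structure is purely illustrative.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BoundingBox:
    label: str            # "vehicle", "pedestrian", "lane_marking", ...
    x_min: float
    y_min: float
    x_max: float
    y_max: float

@dataclass
class FrameLabels:
    frame_id: str
    boxes: List[BoundingBox] = field(default_factory=list)
    segmentation_mask: str = ""     # path to a per-pixel class-ID image

labels = FrameLabels(
    frame_id="cam_front_000123",
    boxes=[
        BoundingBox("vehicle", 412.0, 208.5, 560.0, 318.0),
        BoundingBox("pedestrian", 610.0, 230.0, 648.0, 330.0),
    ],
    segmentation_mask="masks/cam_front_000123.png",
)
print(len(labels.boxes), "boxes labeled for", labels.frame_id)
```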
And even with all the work and expense, challenges still exist.
“The main problem in LIDAR is that even a human can’t tell what’s going on sometimes,” Mohammad explained. “As you can see in this image, these are people and…trains, maybe?”
Recognition of objects is “only a piece of the puzzle”, he continued.
“What is the orientation of that object? Is that person or car in my lane, in the same direction, in the opposite direction?
“In LIDAR, what you see is literally only the points you’re getting reflections from. (And some areas, you get no reflection points at all.) These differences make it hard for the industry to decide on what is the best standard.”
Even the best car companies, Mohammad said, struggled with making LIDAR work.
“The biggest challenge we’ve seen, with LIDAR, is that we don’t have camera data. It’s really a lot of guesswork…The industry hasn’t really converged on formats and standards for benchmarking these data sets.”
Some of the best practices at this point, he said, included simplifying the labeling rules as much as possible. Meanwhile, he added, his company was investing in some techniques to help learn across multiple fields.
But, Mohammad added, a lot still remains to be resolved with AI, especially for spontaneous events.
“Some truck drops a sofa in the middle of the highway. If you’ve never trained your AI on sofas, what does it do?
“There are a lot of things we haven’t identified in deep learning, around detection and identification.”
The human component
“We all know driver drowsiness and distraction are a huge issue to safety,” said Kevin Tanaka, Seeing Machines’ Senior Director of Marketing, Automotive, as he began his presentation on “Level 3 Driver Monitoring Systems”.
Kevin cited some sobering statistics, including the World Health Organization’s figure of 1.25 million people who die in auto accidents, worldwide, every year.
Meanwhile, he said, just over nine percent of all U.S. car accidents are caused by drowsiness. Fifteen percent of all U.S. injury crashes involve distraction.
“Here’s a really scary statistic,” he continued. “On average, people out on the road today are distracted thirty percent of the time that they’re on the road.
“This is a problem (regulators are looking at closely, because it’s such a huge issue).”
Kevin explained that his company does real-time tracking of drivers’ head pose, face, and eye motion.
“We’re tracking the eyelid motion. We’re also tracking the pupils, the iris…and that’s all with a single camera,” he said of one system which Seeing Machines has developed.
“We can track through sunglasses, and (if the driver is wearing a cap).”
Seeing Machines’ systems measure driver drowsiness levels, as well as impairment and engagement levels.
“Eyes on, or off the road? Open or closed?” Kevin said.
When a driver is falling asleep at the wheel, he explained, that’s happening in actual stages.
“One of the key things we’re trying to do in our industry is to look at ‘micro sleeps’, ten minutes before someone falls asleep,” he said.
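Seeing Machines’ algorithms are proprietary and were not detailed in the talk, but one widely published drowsiness proxy is PERCLOS, the fraction of time within a sliding window that the eyes are mostly closed. The Python sketch below is a generic illustration of that metric, not the company’s method; the eyelid-openness values would come from a camera-based tracker like the one Kevin described.

```python
# Generic PERCLOS-style drowsiness sketch: track what fraction of recent
# frames show the eyes (nearly) closed, and flag the driver as drowsy when
# that fraction crosses a threshold. Thresholds and window length here are
# illustrative assumptions, not Seeing Machines' parameters.

from collections import deque

class PerclosMonitor:
    def __init__(self, window_frames: int, closed_threshold: float = 0.2,
                 alarm_level: float = 0.4):
        self.samples = deque(maxlen=window_frames)  # recent eyelid-openness values
        self.closed_threshold = closed_threshold    # openness below this = "closed"
        self.alarm_level = alarm_level              # PERCLOS above this = drowsy

    def update(self, eyelid_openness: float) -> bool:
        """Add one frame's eyelid openness (0 = shut, 1 = wide open).

        Returns True if the driver currently looks drowsy."""
        self.samples.append(eyelid_openness)
        closed = sum(1 for s in self.samples if s < self.closed_threshold)
        perclos = closed / len(self.samples)
        return perclos >= self.alarm_level

monitor = PerclosMonitor(window_frames=1800)   # e.g. 60 seconds at 30 fps
for openness in (0.9, 0.85, 0.1, 0.05, 0.08):  # toy stream of measurements
    drowsy = monitor.update(openness)
print("drowsy:", drowsy)
```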
Beyond AI
But, Kevin continued, deep learning alone wouldn’t solve the problems of driver distraction, impairment or falling asleep at the wheel.
“How do you apply physiological and psychological measures to the detection technology?” he said. “To do this, we have to actually capture data from drivers, and that takes a lot of time.”
Presently, he explained, one of the ways his company is doing that, is by wiring up long-haul trucks.
“We have over 20,000 long-haul trucks wired up to gather driver data,” Kevin said. “We had one driver actually fall asleep with his eyes open.”
“In Australia, we’ve got a commercial freight company with its fleet wired up. In November, we added driver alarms and haptic alarms (that actually shake the seat if a driver is falling asleep). (Just by) the drivers knowing there’s a system in the cab, we saw a reduction of fatigue events by 3.7 percent.”
And in the overall fleets in which driver monitoring has been implemented, Kevin said, Seeing Machines saw a 90 percent reduction in fatigue events, and an 80 percent reduction in cell phone use.
“It’s not just war on the driver. It’s actually training the driver to be more vigilant about the road. What we’re looking at …is to change driving behaviors.”
Kevin concluded his presentation with a personal note.
“I’m actually a volunteer firefighter up on Highway 17, one of the windiest roads in this region. (I’ve seen a reduction in people getting killed.) I’d like to see that continuing to go forward, ’cause I actually see that in everyday life.”
Perception and projection
“What does it mean to be ‘high-tech’?” asked General Motors’ End-to-End Feature Owner of Connectivity Leonard Nieman, in the final presentation of the morning, “Keep Moving Forward: the Evolution of the IVI Experience”.
“It depends. So, what does it depend on? Our perception.
“I believe the only reason there are sensors on a vehicle is to provide a positive experience to our customer.”
Leonard took the audience through some of the “high-tech” music applications of the past decades–starting with the hi-fi record player in 1955. After a look at cassettes and CDs, he moved on to a current application.
“We got a jump start on streaming in this industry,” he said. “This is where we went with streaming. We offered apps that don’t require a phone.
“GM was the first to come out with what GM calls ‘projection’.”
Leonard said General Motors put streaming content into vehicles in two different ways: through apps and projection. Each one, he told the audience, had different benefits.
“Today, in our latest and greatest vehicle, we’ve got over 2000 data signals, and so that capability will always be better in apps, in my opinion.”
Relevant capabilities, he said, were key.
“Starting just a few years ago, we allowed the capability for customers to log into their vehicles. Customers want ‘something that’s right for me, right now’.”
“I know there’s a million cool things we can do with sensors. What I care about is (giving a customer an awesome experience).”
With that in mind, Leonard encouraged the attendees to have a conversation with him, if they had other cool ideas they’d like to suggest.
His comment was a timely one–lunch was served and networking was on the menu!
And coming up for the afternoon was a face-off of the best kind: the “LIDAR Face-Off”.
This is Part One of a two-part series.
Sensors Expo and Conference, 2018, took place Wednesday morning, June 27th, through Thursday afternoon, June 28th, in San Jose’s McEnery Convention Center. Dates have been announced for the Sensors Midwest conference in Rosemont, Illinois, October 16th through 17th, 2018. In addition, the 2019 Sensors Expo and Conference is scheduled for Tuesday, June 25th through Thursday, June 27th, 2019, in San Jose.