by Dot Cannon
The time had come to get into the car.
But on Tuesday, June 25th, at McEnery Convention Center in San Jose, that phrase had a whole different meaning.
The second annual Autonomous Vehicle Sensors Conference, run concurrently with the Sensors Expo and Conference pre-conference symposia, was in full swing. Speakers had just returned from lunch. Everyone had spent the morning examining the roles that exterior sensing systems, including LIDAR, radar and cameras, would play in the development of self-driving vehicles.
And now, the afternoon’s presenters would be taking a look at interior sensors–and what in-cabin sensing could mean for the cars of the future.
Long-range radar, long-term plans
“I have one point, and one point only, to make,” began Uhnder, Inc. CEO Manju Hegde.
“Radar is transforming, for the next generation…You don’t need to go to Level 4 to save lives.”
And during his presentation, “4D Imaging Radar: Radar Sensors for the Long Range”, Manju would explain why transformation was necessary.
“It’s going to be important that all of this be done on one chip,” he said. “There are three arguments why we need better radar.”
The three arguments he outlined were existential, performance and use-case.
“Our eyes, we have great cognitive capability,” Manju continued. “But our ‘sensors’ are not going to compete with electronic sensors.”
And each of the individual sensor types, he explained, had its own specific shortcoming.
“LIDAR…can’t see reflective objects. Radar certainly addresses weather (issues and darkness), but can’t necessarily distinguish small objects.”
The shape of things to come
Within the next three years, Manju said, radar inclusion in vehicles would undergo a major change.
“Until two years ago, maybe one percent of cars had radar. But when 2022 comes, you’ll see cars, Level 2 plus, with radar.”
Precision, he added, was crucial, as “almost” wasn’t acceptable with lives at stake. And getting to that point would be a matter of advancing the technology by following the lead of the communications industry.
Manju referenced the ways cellphones have advanced since 1995–the year automotive radar was first introduced in the U.S. Advances in telecommunications technology, he said, could serve as a blueprint.
“You take communications technology and repurpose it to radar.”
All of his company’s sensor modules, Manju continued, were one-chip based. Among the benefits of the single chip: increased accuracy in detecting individual objects, with precise distance assigned to each one.
“With conventional radar, a large object swallows a small one,” he said, giving the example of a small child darting out from behind a parked truck.
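One way to see why a large reflector “swallows” a small one: a radar’s range resolution scales inversely with its bandwidth, so two returns closer together than that limit merge into a single detection. Here is a minimal Python sketch of that relationship; the bandwidth figures are illustrative assumptions, not numbers from Manju’s talk.

```python
# Illustrative only: a radar's range resolution is set by its RF
# bandwidth (delta_r = c / 2B); reflectors closer together than this
# merge into one return. Bandwidth figures are assumptions chosen
# for comparison, not Uhnder specifications.
C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Smallest range separation (m) at which two reflectors resolve."""
    return C / (2.0 * bandwidth_hz)

for label, bw in [("legacy automotive radar, ~300 MHz", 300e6),
                  ("wideband imaging radar, ~1 GHz", 1e9)]:
    print(f"{label}: {range_resolution(bw):.2f} m")
# prints 0.50 m for the legacy case vs. 0.15 m for the wideband case
```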
“If you want to go (to) automated driving, you can’t just be human,” Manju said. “You’ve got to be superhuman…automated driving should be ten times safer than human drivers.”
He showed a slide of his company’s “Magna”: the first digital automotive radar on chip.
This system, he explained, could detect, classify and track objects as well as predict motion.
“All of this is achieved by pushing signal processing to the limits, like the communications guys have done (since 1995).”
Filling in the gaps
“We need to continue to strive for excellence,” said Analog Devices Product Marketing Manager Eric Arntzen.
And throughout his talk, “Inertial Sensors are Breaking Through the Autonomous Vehicle Hype Cycle,” Eric illustrated the role of IMUs–or inertial measurement units–in that effort. Good sensor inputs, he said, led to good decisions. And the need for those inputs could be more immediate than originally predicted.
Eric cited a 2018 research report on emerging technologies from strategic-planning firm Gartner.
“They gave it more than ten years out, (to reach Level 4 autonomous vehicles),” Eric said. “There are still some markets that are moving at a much more rapid pace.”
Robotic taxis, he continued, could be one such market. So could truck convoys serving major cities, with a human driver in the lead vehicle and the rest following autonomously.
And the need for inertial measurement units, Eric explained, involved dealing with the unexpected.
“LIDAR and radar are very good at understanding a vehicle that’s coming towards you. But what if one is cutting across your (field of view) at a 90-degree angle?” he asked.
Weather was another issue.
“Tunnels in the city, fog, snow, rain–all problems,” Eric said. “(Other sensors may give different reports on the same object.) Inertials are positioned to be the root of trust, or the referee, in these systems.”
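As a rough sketch of that referee role: when GPS and optical sensors drop out–entering a tunnel, say–an IMU’s yaw rate and acceleration can propagate the last known pose for a short window, a technique known as dead reckoning. The minimal planar example below uses made-up figures and names for illustration; it is not from Eric’s slides.

```python
import math

# Minimal planar dead reckoning (hypothetical example): propagate the
# last known pose from IMU yaw rate and longitudinal acceleration when
# other position sources are unavailable.
def propagate(x, y, heading, speed, yaw_rate, accel, dt):
    """One integration step: gyro -> heading, accel -> speed -> position."""
    heading += yaw_rate * dt           # integrate yaw rate from the gyro
    speed += accel * dt                # integrate longitudinal acceleration
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading, speed

# e.g. 2 seconds through a tunnel at a 100 Hz IMU rate, gentle left curve
state = (0.0, 0.0, 0.0, 25.0)  # x (m), y (m), heading (rad), speed (m/s)
for _ in range(200):
    state = propagate(*state, yaw_rate=0.05, accel=0.0, dt=0.01)
print(f"estimated position after 2 s: ({state[0]:.1f} m, {state[1]:.1f} m)")
```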
Preparing for the unexpected
An additional scenario, he continued, could be a Level 3 vehicle coming to a sudden stop.
“Level 3 means the car is autonomous unless there’s an emergency,” Eric explained. “There’s a ten-, fifteen-second window until the human driver takes over.
“(But) you don’t want to slam on the brakes and get rear-ended. The inertials are filling in the gaps and sometimes providing the sole (source) of input in an emergency situation.”
Eric showed a series of slides comparing the potential for error among GPS, ADAS sensors and IMUs under various driving conditions. The slides illustrated that IMUs could perform well in every instance–including entering a tunnel and losing traction on slick roads.
“With (fast-)developing events, the IMU is the first line of defense,” he said. “I was driving a few days ago, and (the water pump broke in my car). The car had no idea it was broken. You need more sensors on the cars of the future.”
Explaining why those cars would have a greater need for added sensors, Eric projected a major shift away from personal-vehicle usage patterns. Future “robotaxis” and other autonomous commercial vehicles, such as trucks, could be driving up to 14 hours a day, 300 days per year. With that increased wear, engines, tires, brakes and other components might be expected to last only five years.
So vehicle-health monitoring, through sensors, would be a necessity. And given the increased crowding of cities, Eric suggested, these innovations could arrive before 2029.
“We’re going to beat (those) ten-plus years,” he said.
An amiable “battle”
Next in the afternoon program came an answer to the hypothetical question: “If you could have only one sensor to power your autonomous vehicle, what would it be?”
ON Semiconductor Driving Lead Radhika Arora, CalmCar CTO Dr. Faroog Ibrahim, Innovusion Chief Executive Officer and Founder Junwei Bao and Uhnder, Inc. CEO Manju Hegde took the stage for the “Battle of the Sensors” panel.
The first question Autonomous Vehicle Sensors Conference Chair Willard Tu asked them, as he moderated the panel:
“If you had to choose one sensor, which one would you choose?”
“Image-quick LIDAR,” responded Junwei.
“We really need to think about what LIDAR needs to achieve, to be the last line of defense…(We) still can barely achieve 90-percent accuracy. Cameras themselves cannot achieve that.”
“(An) image sensor or camera,” Radhika replied. “Road sign recognition, (lane departure errors)…are all best understood by image sensors.”
“What you want is a sensor that has become superhuman,” Manju answered. “Eyes can do everything a camera can. Humans can’t detect velocity (or) see through darkness. Radar can.”
“Tell me how the radar can distinguish the school bus, where you need to stop,” offered Dr. Ibrahim. “Indirectly, the cost is also a factor. So it’s very clear, you cannot do many things without (an) imager.”
Needs and solutions
Next, Will asked the panel, “How many sensors do you really need that are LIDAR?”
“I think you need a radar in the front, a LIDAR in the front and a camera in the front and in the corners,” said Manju.
“If you are using low-resolution sensors, then it’s going to be three (LIDAR),” Radhika answered.
“I think we need at least one LIDAR and a few cameras (at) Level 3,” Junwei replied. “(When we get to Level 5), we need to cocoon.”
The panel would go on to examine LIDAR power requirements and possible solutions to some of the current issues with autonomous driving.
“According to (survey data), 75 percent of accidents happen in darkness,” Will said. “What’s your solution for that?”
“Use (higher-performance) sensors and use illumination on the car,” Radhika replied.
“You need a sensor that can detect nighttime driving,” Manju offered. “Most of these (casualties) are pedestrians. That’s why (velocity) is critical.”
Our next destination
As the panel concluded, each speaker offered a summary statement on LIDAR and autonomous-vehicle development.
“Simulation is very important,” Dr. Ibrahim said. “And I think sensor fusion is the right way to do that. Everybody (currently) gives you an object list. That’s not sufficient at all.”
“The traditional sensors are becoming better,” said Manju. “They’re upping the ante for Level 4. I can see a path where you need high-level radars, fused together around the car.”
“To achieve true autonomy, you need redundancy,” Radhika commented. “Robotaxis can afford to have all the sensor modalities. (A personal vehicle cannot.)”
“The level of safety is no gray area,” Junwei offered. “It’s either actual safety, or not. The ‘smart’ part is much bigger than the phone itself. The same is true for autonomous vehicles.”
The human component
During the next session, Veoneer Product Director Tom Herbert explored an often-overlooked aspect of autonomous driving: passenger safety.
“I want to change the whole mindset,” he began.
“When you talk about Level 2 and under, to me the best sensor in the car is the human.”
And with his presentation, “Going Beyond: In-Cabin Sensing, the Next Wave of AI”, Tom would illustrate the ways in-cabin sensors could improve safety.
“A lot of the money being spent on autonomous (vehicle development, doesn’t focus on what’s happening inside the cabin),” Tom commented. “Once I understand the interior…it really creates a lot of new value for the customer.”
That value, he said, included monitoring for drowsiness and distraction, while keeping privacy considerations in mind.
“What does privacy mean?” he asked. “Different things in different regions.”
Tom told his audience that Europe’s New Car Assessment Program would begin occupant-status monitoring in 2020. By 2022, he said, advanced fatigue and distraction detection systems would be in place, in new European vehicles. And those systems were part of a larger safety goal.
“What Europe is saying, is this…monitoring system will become mandatory in all cars by 2025.”
But, in the United States…
Meanwhile, safety strategies in the U.S. had plenty of room for improvement, he continued.
The recent U.S. “Motor Vehicle Extreme Heat Protection Act” was one paradox Tom cited. This legislation, currently in effect, allows “good Samaritans”, as well as emergency personnel, to break into a car and rescue an animal believed to be in danger on a hot day, without liability.
“I love pets, but the fact that they’re not addressing children (left in hot cars) is beyond me,” he commented. “It’s something that’s easily solvable, based on the sensors in the vehicle.”
Tom also pointed out the ways crash-test safety ratings are currently putting drivers at risk.
Crash-test dummies’ weight, he said, is based on 171 pounds for a male driver–and 108 for a female. However, in 2016, the average male weighed 198 pounds, according to the 1999–2016 National Health Statistics Report. (As of today, the Centers for Disease Control and Prevention lists the average weight for a woman, age 20 and over, at 170.5 pounds!)
Another sensor-based solution, he suggested, would be a driver-monitoring system that fuses both external and internal content. “This could be annoying, but I (notice how often) my 16-year-old (looks one way and not the other while driving),” he said.
“If (my daughter) went out and drove and at the end of the day I have a driver score…this gamification (may encourage her to drive more safely).”
“Driver monitoring, that’s what’s biggest in the news today. That gateway sensor. (That can take us to where) I start relying on the car to do certain things, and the car relies on me to do certain things.”
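As a purely illustrative aside, that end-of-day score might combine interior events (mirror checks seen by a cabin camera) with exterior events (lane changes and hard braking seen by forward sensors). The toy Python sketch below uses event names and weights that are my own assumptions; it is not Veoneer’s system.

```python
# Toy driver score (hypothetical): reward mirror checks around lane
# changes, penalize hard braking. Weights are arbitrary assumptions.
def driver_score(lane_changes: int, mirror_checks: int,
                 hard_brakes: int) -> float:
    """End-of-trip score in [0, 100], fusing interior and exterior events."""
    checked = min(mirror_checks / lane_changes, 1.0) if lane_changes else 1.0
    score = 100.0 * checked - 5.0 * hard_brakes
    return max(score, 0.0)

# e.g. 8 lane changes, only 6 preceded by a mirror check, 1 hard brake
print(driver_score(lane_changes=8, mirror_checks=6, hard_brakes=1))  # 70.0
```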
What could the car hear?
“Microphones are just an enabling technology that is bringing much more into autonomous vehicles,” said STMicroelectronics Senior Product Marketing Manager Edoardo Gallizio.
During his presentation, “Audio Sensing: The Search for AV’s Alexa”, Edoardo outlined both the challenges and opportunities of voice-activated technology for autonomous vehicles.
“More sensors in today’s vehicles are bringing more challenges,” Edoardo said. “I believe this is just the beginning, as these voice-activated listening devices are proliferating (in) our lives.”
Privacy, he continued, was one challenge. So was the fact that newer electric vehicles tend to be “too quiet”–making the occupants more aware of surrounding road noise!
“The fusion of human sensors and vehicle sensors (is) fundamental,” Edoardo commented. “We’re still not there…We will have vehicles that can sense surroundings, that can sense drivers, that can maybe take some action.”
And he alluded to the ways future autonomous vehicles could connect to the Internet of Things.
“…Within your vehicle, you can control (and interact with)…objects at home.”
Meanwhile, Edoardo continued, road-noise sensing technology could improve the user experience inside a vehicle. So could the fusion of audio and motion-sensing technology.
“I believe we can bring the learning of the last ten years into the automotive area,” he said. “Having a portfolio of partners is key. We need to be in collaboration mode all the time.”
“By 2025, there will be a really good (contingent) of semiautonomous vehicles. So we need to be ready (to provide the sensors they’ll require). There will be the need for up to 12 microphones and 12 accelerometers.
“The voice is the immediate way we can address the immediate need.”
Shifting focus–and the future
Eyeris Founder and CEO Modar Alaoui began his presentation, “Ecosystem for In-Cabin Sensing”, with a prediction.
“Only about one percent of vehicles today are autonomous and shared. That (figure will grow to) 37 percent by 2030.
“Everything will become more about experiences, and not so much the actual ride itself.”
“Cars will become the third living space. There will be a focus shift from driver to occupant, and from occupant to understanding the entire scene.”
During the presentation, Modar illustrated the ways his company gathered in-cabin information to facilitate that understanding.
“We can only achieve optimized (vehicle) safety…if we have synchronized vision from the inside and the outside,” he explained.
“Eyeris placed a bunch of cameras on the inside, to understand what was going on.”
The goal, Modar said, was to understand three separate components of the “entire scene”: human behavior, object localization and surface classification.
The cameras enabled Eyeris to collect data on posture, some emotions, activities, and the position of objects in the cabin relative to a vehicle’s occupants. The Eyeris system’s AI portfolio, according to a slide Modar exhibited, allowed for a complete understanding of what was happening in a vehicle’s interior, as opposed to monitoring the driver alone. Its artificial intelligence delivered data about the body, face and activities.
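To make those three components concrete, a fused in-cabin scene record might be shaped roughly like the Python sketch below. The field names are invented for illustration; they are not the Eyeris API.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a fused in-cabin scene record, following the
# three components described: human behavior, object localization and
# surface classification. All field names are assumptions.
@dataclass
class OccupantState:
    seat: str        # e.g. "driver", "front_passenger"
    posture: str     # e.g. "upright", "leaning_forward"
    activity: str    # e.g. "holding_phone", "drinking"

@dataclass
class CabinObject:
    label: str             # e.g. "water_bottle"
    nearest_occupant: str  # seat of the closest occupant
    distance_m: float      # distance to that occupant, in meters

@dataclass
class CabinScene:
    occupants: list = field(default_factory=list)
    objects: list = field(default_factory=list)
    surfaces: dict = field(default_factory=dict)  # region -> surface class

# Example record: where is the water bottle, and relative to whom?
scene = CabinScene(
    occupants=[OccupantState("driver", "upright", "hands_on_wheel")],
    objects=[CabinObject("water_bottle", "front_passenger", 0.4)],
    surfaces={"rear_seat": "empty"},
)
```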
A final prediction which Modar shared, through his slides, was that future autonomous vehicles would be equipped with multiple camera systems, for in-cabin vision. And these would be in place at all levels of autonomy.
“This is how we believe this data will be used in the future,” Modar said. “I can tell you where is the closest airbag to the passenger’s right shoulder. Where is the water bottle.”
“We’re specing this out, for model years 2020 and beyond.”
Consolidation and looking forward
“We’re going to truly see a significant revolution in autonomy over the next few decades,” predicted Ethernovia Co-Founder and CEO Ramin Shirani.
Ramin started the day’s final presentation, “When Can I Order My Autonomous Vehicle?”, by pointing out a problem with current vehicle construction.
“The car’s nervous system is really backwards,” he explained. “The car coming off the (assembly) line right now will have 80 different networks.”
Ethernet, he said, was the key to getting everything to work together.
“…What Ethernet does, it consolidates the network,” Ramin said. “You run Ethernet on a single twisted-pair cable.
“…If you think of a car, it’s basically a data center on wheels. So you have no choice but to have (a central network). ”
Ramin predicted that Ethernet would change the way manufacturers build networks–and automobiles.
“By 2023 or 2024, there could be an excess of two billion Ethernet ports in vehicles,” he said.
And he shared the story of his recent tour of a major European automobile manufacturer, which had illustrated, firsthand, one of the reasons Ethernet would be key to autonomy.
“From where the sheet metal gets into the factory, it’s all automated,” Ramin explained. “(But then) you get to the point where they’re doing wiring. Suddenly, it’s all by hand. Unless you consolidate (the systems) into Ethernet, you can’t automate.
“They also have eight, or ten, or fifteen different software platforms inside the car.
“It’s really about fundamentally changing how the cars of the future are made.”
LIDAR on the agenda
The day was coming to an end–but the 2019 Autonomous Vehicle Sensors Conference had another set of events forthcoming.
On the following morning, the topic would be LIDAR. A LIDAR Face-Off, along with a “pre-game” overview and LIDAR Panel, were scheduled.
In his closing remarks for the day, Will, who is Senior Director of Xilinx’s Automotive Business Unit, pointed out a “blind spot” in new-vehicle design and manufacturing.
“The automotive industry has always struggled with trying to reinvent its own technology,” he observed.
“Why not leverage what they’ve already done?”
This is Part Two of a two-part series. Here’s the link to our first installment!