by Dot Cannon
Just before the 2018 Sensors Expo and Conference, in San Jose, a brand-new conference took place.
The inaugural Autonomous Vehicle Sensors Conference, on Tuesday, June 26th, focused on the role of sensors in the creation of safe and reliable autonomous vehicles.
On that Tuesday, when conference attendees returned from lunch, the topic was LIDAR–which automobile manufacturers consider a key element in enabling self-driving cars. And the conference addressed it by having a series of LIDAR experts discuss their organizations’ work, in a feature called the “LIDAR Face-Off”.
(So, what is LIDAR? The National Oceanic and Atmospheric Administration explains it as “a remote sensing method that uses light in the form of a pulsed laser to measure ranges”.)
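For readers who want that definition in concrete terms, here is a minimal, illustrative sketch of the pulsed time-of-flight principle behind it: a pulse's round-trip time, multiplied by the speed of light and halved, gives the distance to the target. The numbers are examples only, not any vendor's specification.

```python
# Illustrative only: pulsed time-of-flight ranging, the principle behind
# the NOAA definition above. Not any particular vendor's implementation.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a target from a pulse's round-trip time."""
    return C * t_seconds / 2.0

# A pulse that returns after ~667 nanoseconds corresponds to a target
# roughly 100 meters away.
print(f"{range_from_round_trip(667e-9):.1f} m")  # ~100.0 m
```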
Not surprisingly, every speaker, and every company represented, took a very different approach to the LIDAR concept.
A collaborative quest
“We want to work with everybody,” said SiPM and LIDAR Market Specialist Jake Li, of Hamamatsu Corporation. “Even the LIDAR concepts are going to vary significantly, from one (company) to the other. That’s why we’re here to help the industry.”
Li gave the audience an overall look at the idea behind Hamamatsu’s design.
“You need to send out light, you need some kind of steering component to send that to different locations (and back).”
During his presentation, Li told the audience that he’d counted more than 50 LIDAR designers–so far. But he predicted that number would grow.
“We are hoping to see consumer technology…by 2025 (but I personally have a more conservative estimate).”
Li grouped current LIDAR concepts into three categories: mechanical, polygon scanners, and flash LIDAR.
But, he said, no one concept could do everything.
“Every technology has its ‘gap’. There’s going to be the need to fill the gap with other technologies.”
A solution based on the brain
“We don’t consider ourselves a LIDAR company,” said AEye Head of Marketing and Communications Stephen Lambright. “We consider ourselves an artificial perception company.”
In January, he said, AEye had introduced its first product, the AE100 robotic perception system, at CES 2018 in Las Vegas.
Lambright explained that the AE100 system combines what AEye refers to as an “agile LIDAR camera” with an actual physical camera.
And that system, he explained, is modeled after the human visual cortex.
“It’s actually flexible scanning, depending on situational awareness. No one knows the right way to scan, yet. But we’re giving engineers something they can play around with, and find solutions.”
While this technology delivers much less data than other systems, Lambright said, the information gathered is higher-quality data.
“Not all objects are equal, within a scene. We can do change detection, where we detect…a stop light is changing. (We can determine,) not just that there’s a yellow blob, but that that yellow blob’s a school bus.
“At each point, there can be an interrogation of the scene by the software itself.”
Color and precision
“There’s more than one way to modulate a LIDAR movement,” said Jim Curry, Vice-President of Product at Blackmore Sensors and Analytics, Inc.
“We have a continuous wave…with no pulsing whatsoever.”
Blackmore’s approach to LIDAR, Curry continued, involved a small amount of light, retained inside the sensor, which would mix with incoming light.
“You’re probably wondering, why isn’t everybody doing this?” he continued. “(The reason is, the Doppler shift leads to blurry images.)”
But, Curry said, the benefits of this technology included the level of interference rejection, as well as “a huge catalog” of telecommunications components that could be integrated into the company’s LIDAR systems.
Displaying a slide with different intensities of color, Curry explained, “In this scene, fully saturated reds and blues belong to vehicles. Paler reds and blues (signify) pedestrians. They’re moving more slowly.
“(This system allows for high spatial precision.) We can tune (our application) to meet specific needs.”
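As background to Curry's description, the color coding he showed maps naturally onto the standard Doppler relation for coherent, continuous-wave sensing: a target moving toward or away from the sensor shifts the returned light by twice its radial velocity divided by the wavelength. The sketch below illustrates that relation only; the 1550 nm wavelength and the speeds are assumptions for illustration, not Blackmore specifications.

```python
# Simplified sketch of the Doppler relation used in coherent (continuous-wave)
# LIDAR: a target moving at radial velocity v shifts the returned light by
# f_d = 2 * v / wavelength. The wavelength and speeds below are illustrative
# assumptions, not Blackmore specifications.
WAVELENGTH_M = 1550e-9  # a common telecom-band wavelength, assumed here

def doppler_shift_hz(radial_velocity_m_s: float) -> float:
    return 2.0 * radial_velocity_m_s / WAVELENGTH_M

for label, v in [("pedestrian (~1.5 m/s)", 1.5), ("vehicle (~15 m/s)", 15.0)]:
    print(f"{label}: {doppler_shift_hz(v) / 1e6:.2f} MHz")
# A faster-moving vehicle produces a much larger shift than a slow pedestrian,
# which is one way a display could map speed to color saturation.
```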
Disrupting Silicon Valley
Ouster Vice-President of Corporate Development Raffi Mardrosian offered five criteria for effective LIDAR sensors for autonomous vehicles.
The requirements to be met, he said, were: performance, price, form factor, quality and reliability, and scalability.
(“No LIDAR technology hits all five of these areas currently, but the goal is to hit all five,”) he told the attendees.
Mardrosian listed some of the practical considerations.
“How big is the sensor? Can it be truly integrated into a vehicle…or does it have to be on the roof? Is it a real product or a PowerPoint slide? (Can you buy it today?)
“Are there gaps in the field of view? Can you see everything out there, or are (some) things going to be missed?”
“I have (our) sensor here. We’ve already been shipping (since December),” he said, displaying it. “We started about three years ago, with the goal of flipping the Silicon Valley model on its head.”
That disruption, he added, focused on performance, low cost and practicality “from Day One”.
“We care a lot more about showing performance than…flashy marketing. (With our sensor) we’re able to get horizontal angular resolution down to literally thousandths of a degree.
“We do our own manufacturing, (and) we make our sensors in less than an hour.”
Ruling out “one sensor fits all”
“(With) a lot of sensors around the cars, there’s no ‘one-sensor-fits-all’ solution,” said Phantom Intelligence CTO Eric Turenne, in the final presentation of the “LIDAR Face-Off”.
“How you will fit the sensor into the vehicle is a key factor of LIDAR adoption.”
By its nature, he said, LIDAR is collaborative.
“There is no silver bullet in the LIDAR industry. We don’t believe there’s one (system) to rule them all. All the players provide one part of the LIDAR.
“At Phantom, we provide signal processing.”
That signal processing, he continued, involves distinct layers: acquisition, filtering, detection, tracking and threat assessment.
And Phantom Intelligence interprets the return light signal by analyzing multiple LIDAR echoes, which eliminates false returns from elements such as fog, water or snow.
“What we look for is actual echo shapes,” Turenne said. “The LIDAR can be adapted to every client.
“We have flexibility where everything is programmable, reprogrammable. (We have) flexibility to integrate sensors all around a vehicle.”
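As a rough illustration of the layered approach Turenne described, the sketch below applies a generic multi-echo heuristic: keep the farthest echo above an intensity threshold, on the reasoning that fog or snow tends to produce weak early returns while a solid obstacle returns a stronger, later one. The data structures and threshold are hypothetical, not Phantom Intelligence's actual algorithm.

```python
# A minimal sketch of the layered idea described above: acquisition, filtering,
# detection, tracking, threat assessment. The multi-echo heuristic here is a
# generic illustration, not Phantom Intelligence's signal processing.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Echo:
    range_m: float
    intensity: float  # arbitrary units

def filter_echoes(echoes: List[Echo], min_intensity: float = 0.3) -> Optional[Echo]:
    """Keep the farthest echo above an intensity threshold.

    Diffuse media such as fog or snow often produce weak, early echoes;
    a solid obstacle behind them usually returns a stronger, later one.
    """
    strong = [e for e in echoes if e.intensity >= min_intensity]
    return max(strong, key=lambda e: e.range_m) if strong else None

# Example: a weak return from fog at 8 m, a strong return from a car at 42 m.
pulse = [Echo(8.0, 0.1), Echo(42.0, 0.8)]
target = filter_echoes(pulse)
print(target)  # Echo(range_m=42.0, intensity=0.8) -> passed on to detection/tracking
```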
A new type of radar
Next, the conversation shifted to radar–but in its twenty-first century incarnation.
In her presentation on “Next Generation Sensing: 4D Imaging Radar”, Metawave Corporation Co-Founder and CEO Dr. Maha Achour discussed Warlord™, her startup’s “smart” radar system for autonomous vehicles.
“I think the next few years will be the ‘radar era’, and I will explain why,” she began.
Displaying a slide, she continued, “On the bottom right, you see radar today. On the top right is our radar.”
Standard radar, Dr. Achour said, involved numerous limitations.
“Three hundred meters (that standard radar can reach) is not very far. You have to expect the car to go further. (With) rain, fog or dirty roads, how will the camera see?
“Physics prevails,” she said. “I’m sorry, but that’s reality.”
On autonomous vehicles, Dr. Achour continued, radar had to be able to deliver its signal in “microseconds”.
“For this, you need analog processing. Digital processing takes too long.”
Dr. Achour told her audience that her company had introduced Warlord™ at CES. The system involves one very narrow “pencil” beam that scans continuously in nine directions.
Object identification with standard radar, she said, took too long because it relied on digital processing.
“We do everything in analog. We don’t do anything in digital,” she explained. “On the top (of this slide) is the top radar today. On the bottom is our Warlord™ radar. (The other manufacturer needs 1.7 seconds to process information.)
“That’s too long. We only need four chirps.”
An industry in progress
“Why do we need heterogeneous processing engines in the automotive space?” asked Dr. Aniket Saha, ARM® Director of Product Marketing, as he began the final presentation of the day.
A short answer, as he would explore in his talk, “Why AD Needs Heterogeneous Processing Engines and the Battle on AI Engines”, was that with autonomous vehicles, “one size does not fit all”.
“Every level of perception has some blind spots,” Dr. Saha commented. “Two hundred sensors will be used in a car by 2020. (That) means there’s going to be a lot of processing.”
Dr. Saha pointed out the different functions of the processors, breaking their work down into four main areas: sensing, perception, decision and actuation.
“The perception is where things start getting interesting,” he said. “…In the perception mode, you have a combination of little CPUs (for image acquisition) and big CPUs which can make a difference on what was perceived.
“For image acquisition, you don’t require a high level of compute. The functions are very simple, but you have to do a lot of them.”
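A schematic way to picture that split is below. The stage names follow the talk (sensing, perception, decision, actuation), while the toy functions, and the mapping of light per-pixel work to small cores and heavier perception to big cores or accelerators, are illustrative assumptions rather than ARM®'s design.

```python
# Schematic sketch of the four-stage split described above; the functions and
# the "little core vs. big core" mapping are illustrative assumptions only.
from typing import List

def sense(raw_frame: List[int]) -> List[int]:
    """Image acquisition / conditioning: simple, repeated per-pixel work,
    the kind of task suited to small, efficient cores or fixed-function logic."""
    return [min(255, max(0, p)) for p in raw_frame]  # e.g. clamp pixel values

def perceive(frame: List[int]) -> str:
    """Perception: heavier compute (e.g. a neural network) that would run on
    bigger cores or a dedicated ML accelerator. A stub stands in for it here."""
    return "pedestrian" if sum(frame) / len(frame) > 128 else "clear"

def decide(perceived: str) -> str:
    return "brake" if perceived == "pedestrian" else "cruise"

def actuate(command: str) -> None:
    print(f"actuator command: {command}")

actuate(decide(perceive(sense([120, 200, 255, 90]))))
```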
Dr. Saha referenced ARM®’s Project Trillium, introduced earlier this year. Project Trillium, according to the ARM® website, is a suite of processors designed to accelerate machine learning across “almost all devices”.
“What we are doing is enabling products to be built (and integrated),” he explained.
And that integration would cut costs without significantly affecting performance.
“The (amount of power needed to process) is going down. Your accuracy is not going down more than four percent.
“We have enabled high levels of performance at very low power.”
Project Trillium, Dr. Saha indicated, was one step toward the goal of autonomous vehicles.
“I think when you look at (sensors and the creation of autonomous vehicles) there’s a lot of mix and match capability.
“…Sensing, computing, even the IP that’s coming, all still (need) a lot of advancement. I think autonomous driving is still a child in a lot of ways.
“The road ahead is going to be built on adaptable intelligence, and we still have a ways to go.”
This is Part Two of a two-part series. Here is Part One of our coverage of the inaugural “Autonomous Vehicle Sensors Conference”.
Dates have now been announced for Sensors Expo and Conference 2019, in San Jose–the engineering industry’s only technical program to focus exclusively on sensors. The 2019 conference will take place June 25th through 27th in San Jose; here’s the link for updates.
Never been to Sensors Expo and Conference–but would like to know what you might see? Here’s what we saw and heard at the 2018 edition!