Why automated driving needs extraordinary computing power

Electronics engineer Cristiano Amon has described electric vehicles as computers on wheels. Slightly reductive, perhaps, but consider that Amon is CEO of Qualcomm, the California-based tech giant with assets worth US$50 billion, and the cliché takes on greater significance.

The age of connected and automated mobility is upon us, and Qualcomm is already working on 5G connectivity with 23 of the 26 leading global car brands.

The sheer volume of data being collected is soaring. Forget gigabytes; it’s time to start thinking in petabytes (PB) and exaflops.

A petabyte, as we all know, is one thousand trillion bytes. Storage capacity isn’t the whole story, though: the industry’s focus has shifted towards processing performance, measured in floating-point operations per second (FLOPS).
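To keep the orders of magnitude straight, here is a minimal Python sketch of how those prefixes stack up, using the standard decimal definitions and nothing drawn from any manufacturer’s figures:

```python
# A minimal sketch of the unit prefixes involved (standard decimal definitions,
# nothing specific to any carmaker's data).

GIGA = 10 ** 9    # a billion bytes
TERA = 10 ** 12   # a trillion bytes
PETA = 10 ** 15   # a thousand trillion bytes, i.e. one petabyte
EXA  = 10 ** 18   # an exaflop is 10**18 floating-point operations per second

print(f"{PETA // GIGA:,} GB in a petabyte")   # 1,000,000 GB
print(f"{PETA:,} bytes in a petabyte")        # 1,000,000,000,000,000 bytes
```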

Horiba Mira estimates that vehicle manufacturers are now collecting over 11 PB of data a year from connected cars, and it would be remiss not to mention the personal data ramifications.

A survey by Parkers found that only 14% of people would be happy to share their driving data with third parties. Let’s leave that aside for today, along with over-the-air updates, another game-changing technology with myriad implications!

In terms of onboard processing, chip company NVIDIA unveiled a new computing platform in September: Drive Thor, designed to centralise self-driving and assisted driving along with other digital functions. Founder and CEO Jensen Huang described it as a superchip of epic proportions.

“Manufacturers can configure it in multiple ways,” he said. “They can dedicate all of the platform’s 2,000 teraflops to the autonomous driving pipeline, or use a portion for in-cabin AI and infotainment.”

That’s mighty impressive computing, but translating those ones and zeros into decisions requires rules-based software. For example, if the forward-facing camera image contains a pixel pattern associated with a car, and the radar confirms it, and a collision is predicted, then a response is triggered, such as emergency braking.
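To make that concrete, here is a minimal, purely illustrative sketch of such a rule in Python; the field names and thresholds are invented for the example and do not describe any production system:

```python
# Toy rule-based fusion check of the kind described above: camera sees a car,
# radar agrees, predicted collision -> brake. Illustrative only.

from dataclasses import dataclass

@dataclass
class Detection:
    is_vehicle: bool         # camera classifier flags a car-like pixel pattern
    range_m: float           # distance to the object, metres
    closing_speed_ms: float  # how fast we are closing on it, metres/second

def should_emergency_brake(camera: Detection, radar: Detection,
                           ttc_threshold_s: float = 1.5) -> bool:
    """Return True if both sensors agree on a vehicle and a collision is imminent."""
    if not camera.is_vehicle:                       # rule 1: camera must see a vehicle
        return False
    if abs(camera.range_m - radar.range_m) > 5.0:   # rule 2: radar must confirm the range
        return False
    if radar.closing_speed_ms <= 0:                 # not closing, no collision predicted
        return False
    time_to_collision = radar.range_m / radar.closing_speed_ms
    return time_to_collision < ttc_threshold_s      # rule 3: collision predicted soon

# Example: both sensors report a car roughly 20 m ahead, closing at 15 m/s (~1.3 s away).
cam = Detection(is_vehicle=True, range_m=20.0, closing_speed_ms=15.0)
rad = Detection(is_vehicle=True, range_m=19.0, closing_speed_ms=15.0)
print(should_emergency_brake(cam, rad))  # True -> deploy emergency braking
```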

Josh Wreford, automotive manager at software simulation firm rFpro, uses digital twins to develop advanced driver assistance systems for carmakers including Ferrari, Ford, Honda and Toyota.

“While others use gaming engines, our simulation engine has been designed specifically for the automotive industry, and particularly connected and autonomous vehicles,” said Wreford.

“That’s a big difference because gaming software can use clever tricks to make things seem more realistic, whereas our worlds are all about accuracy. We can go into incredible detail, for example, with different render modes for lidar, radar and camera sensors.

“Safety critical situations are extremely difficult to test in the real world because it’s dangerous and crashing cars is expensive. That’s why digital twins are great for things like high-speed safety critical scenarios. You can test human inputs in any situation in complete safety.

“Ethical questions are always interesting, but ultimately a control engineer has to decide what the next action should be based on the exact situation. Our simulations drive robust engineering and better algorithms, so you get the best reaction no matter what occurs.”
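To give a flavour of what that kind of testing involves, here is a toy sketch of a high-speed scenario sweep; the stopping-distance model, parameters and pass/fail rule are all invented for illustration and bear no relation to rFpro’s simulation engine:

```python
# Toy parameter sweep over a high-speed cut-in scenario, to illustrate why
# simulation makes safety-critical testing cheap and repeatable. All numbers
# and models here are invented for the example.

def braking_distance_m(speed_ms: float, deceleration_ms2: float = 8.0) -> float:
    """Distance needed to stop from a given speed under constant braking."""
    return speed_ms ** 2 / (2 * deceleration_ms2)

def run_scenario(ego_speed_ms: float, cut_in_gap_m: float,
                 reaction_time_s: float = 0.2) -> bool:
    """Return True if the ego vehicle stops before reaching the cut-in point."""
    distance_before_braking = ego_speed_ms * reaction_time_s
    stopping_distance = distance_before_braking + braking_distance_m(ego_speed_ms)
    return stopping_distance < cut_in_gap_m

# Sweep speeds and gaps that would be dangerous and expensive to stage on a track.
for speed_kph in (70, 100, 130):
    for gap_m in (30, 50, 80):
        ok = run_scenario(speed_kph / 3.6, gap_m)
        print(f"{speed_kph} km/h, {gap_m} m gap -> {'pass' if ok else 'COLLISION'}")
```

The point is not the physics, which is deliberately crude, but that thousands of such variations can be run in minutes without putting a driver or a prototype at risk.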

While debate about the acceptability of simulation data for homologation rumbles on, autonomous vehicle software company Oxbotica has developed a new validation and verification tool called MetaDriver.

“We give the system the ability to test itself in simulation and find the edge cases much more rapidly,” explained Oxbotica’s VP of Technology, Ben Upcroft. “It will enable us to deploy new products more quickly, so everyone can gain the advantage of whatever new feature is available.”
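Purely as an illustration of the idea, rather than a description of how MetaDriver actually works, a self-testing loop can be sketched as a random search over scenario parameters, keeping the combinations the planner fails:

```python
# Toy random search for edge cases in simulation. The "simulator" below is a
# stand-in invented for this sketch; it does not reflect any real product.

import random

def simulate(ego_speed_ms: float, pedestrian_offset_m: float) -> bool:
    """Stand-in simulator: returns True if the toy planner handles the scenario safely."""
    # Pretend the planner struggles with fast approaches to close-in hazards.
    return not (ego_speed_ms > 20.0 and pedestrian_offset_m < 3.0)

def find_edge_cases(trials: int = 10_000, seed: int = 0) -> list[tuple[float, float]]:
    """Randomly sample scenario parameters and keep the ones the planner fails."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        speed = rng.uniform(5.0, 30.0)    # ego speed, m/s
        offset = rng.uniform(0.5, 10.0)   # lateral offset of the hazard, m
        if not simulate(speed, offset):
            failures.append((round(speed, 1), round(offset, 1)))
    return failures

edge_cases = find_edge_cases()
print(f"Found {len(edge_cases)} failing scenarios out of 10,000 sampled")
```

Real tools search far more intelligently than this, but the principle of letting the system hunt for its own failures is the same.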

For some this may bring to mind ‘the singularity’, the point when technological growth becomes uncontrollable and irreversible. But in automotive it could be the point where the vehicle just becomes a computer on wheels.