Getting the right balance in ADAS architecture

Recent analyst reports indicate that the market for ADAS and infotainment systems is expected to expand significantly over the next five years. Analysts see advances on multiple fronts, not just in AI but in general computing, along with changes in how OEMs want to structure electronic content, from edge to zonal to central management. A key consideration for any system builder hoping to benefit from this growth is how to address these diverse automotive architecture needs through unified product families.

Yole Développement projects up to three times greater growth over the next five years, driven by innovation in active safety capabilities and in the digital cockpit. Advances in adjacent technologies and in regulation are also driving rapid development of driver and occupant monitoring systems. One question is where this growth will happen: at the edge around the sensors, in zonal processing, or in central processing in the car. Innovation is always important at the periphery, where new entrants can introduce competitive solutions faster than slower-moving centralized systems can. Conversely, cost, security, and centralized software control push for more centralization.

Multiple edge processing nodes still need global central control for safety

Even before the advent of ADAS, the rapid growth of electronics in cars had caused automotive OEMs to rethink how they wanted to distribute these electronic components. Smart edge sensing has now accelerated that rethinking. Part of the problem was the cost and management of data communication, further amplified by smart sensing: heavy cabling consumes a lot of power to transport data from the edge to consolidated processing.

However, sensor fusion needs to merge data from multiple perspectives and sensor types, which maps neatly to neither the periphery nor the center. We need state-of-the-art AI at the edge for fast recognition and data reduction, but communication and fusion requirements now push some of the AI to zonal processors. Meanwhile, as we move to smarter cars with some level of autonomy, these processors must consolidate distributed inputs under a driving policy manager. This type of AI cannot be distributed; it should be managed in a central controller, both for safety and for a consolidated perspective.

Sensor fusion needs to merge data from multiple perspectives and sensor types, which maps neatly to neither the periphery nor the center. State-of-the-art AI provides fast recognition and data reduction at the edge, but communication and fusion push some of the AI to zonal processors. (Image: CEVA)
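To make the division of labor concrete, here is a minimal sketch of that edge-to-zonal-to-central flow. All class names, fields, and thresholds are hypothetical, purely for illustration; real zonal fusion would associate and track objects across sensors and frames rather than simply filter them.

```python
# A minimal sketch of the edge -> zonal -> central flow described above.
# All names and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class EdgeDetection:          # produced by per-sensor AI at the periphery
    sensor_id: str
    object_class: str
    confidence: float
    position_m: tuple         # (x, y) in the vehicle frame

def zonal_fuse(detections: List[EdgeDetection]) -> List[EdgeDetection]:
    """Zonal processor: merge per-sensor detections from one zone.
    Here we just drop low-confidence hits as a stand-in for real fusion."""
    return [d for d in detections if d.confidence > 0.5]

def central_policy(fused_zones: List[List[EdgeDetection]]) -> str:
    """Central driving-policy engine: a single, non-distributed decision
    point that sees the consolidated picture from every zone."""
    closest = min((d for zone in fused_zones for d in zone),
                  key=lambda d: d.position_m[0], default=None)
    if closest and closest.position_m[0] < 10.0:
        return "brake"
    return "maintain_speed"
```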

This points to three different classes of ADAS system processing – peripheral, zonal, and central – with three different profiles. Edge AI must remain fast and inexpensive (there will be many instances around the car), using a single processor delivering up to 5 TOPS. Zonal processors, consolidating inputs from multiple peripheral devices, must provide a higher level of parallelism and performance, requiring a premium multi-core implementation operating at up to 20 TOPS. Finally, the central driving policy engine must perform inference against scenario-trained behaviors – and may also require some level of on-the-fly training. This engine will most likely be a high-end multi-chiplet device, with each chiplet being multi-core, supporting 200 TOPS or more.
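Those three profiles can be captured as a simple table of data. The TOPS figures and topologies below come straight from the estimates above; the class and helper names are illustrative only.

```python
# The three ADAS processing profiles, captured as data. Figures
# (5/20/200 TOPS and topologies) are the article's own estimates.
from dataclasses import dataclass

@dataclass(frozen=True)
class TierProfile:
    name: str
    topology: str
    max_tops: int   # peak inference throughput, in TOPS

TIERS = [
    TierProfile("edge",    "single core",               5),
    TierProfile("zonal",   "premium multi-core",        20),
    TierProfile("central", "multi-chiplet, multi-core", 200),
]

def smallest_fit(required_tops: float) -> TierProfile:
    """Pick the lowest-cost tier whose budget covers a workload."""
    for tier in TIERS:
        if required_tops <= tier.max_tops:
            return tier
    return TIERS[-1]   # the central tier absorbs anything larger

print(smallest_fit(3).name)    # edge
print(smallest_fit(12).name)   # zonal
```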

So how should a SoC developer design a scalable ADAS system?

It’s still unclear how revenue opportunities will segment between many low-cost edge devices, fewer but more capable zonal devices, and perhaps just one high-end central device per car. The smart money seems to be preparing for opportunities in every segment. Under these conditions, how should SoC developers design their solutions?

Training, optimization, and infrastructure software are among the largest investments in deploying an ADAS system. Supporting them consistently across a product family therefore becomes critical to economic success. A peripheral solution may serve a lighter purpose than a zonal or central solution, but it should offer a simplified version of the same basic capabilities. This allows a common trained network, with different compiler options, to be compiled for and run on edge, zonal, and central solutions, as sketched below.
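A minimal sketch of that "train once, compile per target" flow, assuming a hypothetical compiler entry point (compile_for) and illustrative option sets – real vendor toolchains differ in detail:

```python
# Hypothetical "one trained network, three compile targets" flow.
# Option names and the network name are illustrative assumptions.

TARGET_OPTIONS = {
    "edge":    {"engines": 1,  "precision": "int8",  "winograd": False},
    "zonal":   {"engines": 4,  "precision": "int8",  "winograd": True},
    "central": {"engines": 16, "precision": "mixed", "winograd": True},
}

def compile_for(trained_network: str, target: str):
    """Lower the same trained network to one of the three tiers,
    varying only compiler options, never the network itself."""
    opts = TARGET_OPTIONS[target]
    print(f"compiling {trained_network} for {target}: {opts}")
    # ... quantize, tile, and schedule for opts['engines'] engines ...
    return (trained_network, opts)   # placeholder for a compiled binary

binaries = {t: compile_for("perception_net", t) for t in TARGET_OPTIONS}
```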

As a result, the AI hardware platform should allow scaling both up and down. It should keep the same architecture, deployable as a single neural engine or as multiple parallel engines, with uniform data traffic control and memory hierarchy optimization, even allowing evolution to multi-chiplet implementations if needed.
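As a toy illustration of that scaling, the same partitioning logic can map a layer's output channels onto one engine or many; only the engine count changes between an edge and a central device. The round-robin split below is an assumption for illustration, not any particular vendor's scheduler:

```python
# Illustrative up/down-scaling: one convolution workload partitioned
# across 1..N identical neural engines by output channel.

def partition_channels(num_out_channels: int, num_engines: int):
    """Assign each output channel to an engine; the same mapping logic
    serves a 1-engine edge device and a 16-engine central device."""
    plan = {e: [] for e in range(num_engines)}
    for ch in range(num_out_channels):
        plan[ch % num_engines].append(ch)
    return plan

edge_plan = partition_channels(64, 1)    # edge: one engine does everything
zonal_plan = partition_channels(64, 4)   # zonal: 16 channels per engine
print({engine: len(chs) for engine, chs in zonal_plan.items()})
```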

But here’s the real challenge: network developers must be able to use all cutting-edge AI methods to achieve their goals – including better performance at lower power – without compromising scalability. Several state-of-the-art techniques serve these goals.

Take Winograd transforms, for example, a popular option in advanced inference that delivers roughly 2X performance at reduced power, with little to no loss of accuracy at dramatically reduced word widths. Broad support for activation and weight data types in fully mixed-precision neural MACs is also common, and tuning precision layer by layer can significantly reduce memory requirements and power. Sparsity engines go a step further, skipping multiplications by the zero values that become even more common in low-precision layers – again increasing performance while reducing power.
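For a concrete feel of the Winograd gain, the 1-D F(2,3) transform below computes two outputs of a 3-tap convolution in four multiplications instead of six; the 2-D F(2×2,3×3) variant reaches 2.25×, which is roughly where the 2X figure comes from. This is the standard textbook transform, not any particular vendor's implementation:

```python
# Winograd F(2,3): two outputs of a 3-tap convolution in 4 multiplies
# instead of 6, using the standard transform y = A^T[(G g) * (B^T d)].
import numpy as np

BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

d = np.array([1.0, 2.0, 3.0, 4.0])   # 4 input samples
g = np.array([0.5, 1.0, -1.0])       # 3 filter taps

m = (G @ g) * (BT @ d)               # the only 4 multiplications
y_winograd = AT @ m                  # 2 convolution outputs
y_direct = np.array([d[0:3] @ g, d[1:4] @ g])  # direct: 6 multiplications

assert np.allclose(y_winograd, y_direct)
print(y_winograd)                    # [-0.5  0.]
```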

Custom operators – essential in state-of-the-art networks – can be added to inference via external accelerators, or the calculations can be performed in an embedded vector processing unit sitting at the same level as the native engines. Next-generation network architectures can also leverage other features, such as fully connected layers, RNNs, transformers, 3D convolution, and matrix decomposition.
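A sketch of how such a custom operator might be dispatched, with natively supported ops going to the fixed-function engines and anything else falling back to a vector-unit implementation. The registry API here is hypothetical, not a specific vendor's interface:

```python
# Hypothetical dispatch: native ops run on the neural engines, custom
# ops fall back to a vector-unit (or external-accelerator) routine.

NATIVE_OPS = {"conv2d", "fully_connected", "rnn", "transformer_attention"}
custom_ops = {}

def register_custom_op(name, fn):
    """Register a vector-unit implementation for an op the fixed-function
    engines do not support natively."""
    custom_ops[name] = fn

def run_op(name, *tensors):
    if name in NATIVE_OPS:
        return f"<{name} dispatched to neural engine>"  # placeholder
    return custom_ops[name](*tensors)                   # vector-unit path

# Example: hard-swish as a custom activation, run on the fallback path.
register_custom_op("hard_swish",
                   lambda xs: [x * min(max(x + 3, 0), 6) / 6 for x in xs])
print(run_op("hard_swish", [-4.0, 0.0, 4.0]))   # [-0.0, 0.0, 4.0]
```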

A modern AI processor can provide all of these features in a central engine without compromising the flexibility to meet future needs. Deploying such a solution makes software and network development scalable from this engine down to a zonal engine and an edge engine. At the same time, the same scalable hardware platform can run the same trained networks, mapped and tuned to each target, so design engineers can optimize AI performance across unified product families.


Gil Abraham - CEVA

Gil Abraham is director of business development for CEVA's vision business unit.

