21st of March, 2016: Disruptive changes in E/E and software architectures for automated driving

  • Speaker: Dr. Kai Richter, symtavision GmbH
  • Title: Disruptive changes in E/E and software architectures for automated driving
  • About: Many people see automated driving as the hottest and most compelling automotive trend ever: it promises to turn science fiction into reality, possibly changing the entire way of passenger transport, worldwide. It comes with a number of disruptive technologies that fundamentally challenge the way automotive E/E and software systems have been built and integrated in the past, and it also has a high potential of changing the business models and the key players in the market.

    Introducing automated driving is a process. According to SAE J3016, ADAS (advanced driver assistance) functionality precedes automated driving. In J3016 levels 0-2, the human driver retains full responsibility and must at least monitor what the car is doing. In levels 3-5, the system itself does the monitoring and offers automation for some or all driving modes and conditions.
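    The level split described above (driver monitors in 0-2, system monitors in 3-5) can be sketched as a small lookup. This is an illustrative sketch only; the table and helper are not part of the talk, and the parenthetical level names follow common J3016 usage:

    ```python
    # Illustrative sketch of the SAE J3016 level split described above.
    # Levels 0-2: the human driver must monitor the driving environment;
    # levels 3-5: the automated system performs the monitoring.

    SAE_J3016_MONITOR = {
        0: "human driver",  # no automation
        1: "human driver",  # driver assistance (pure ADAS scope)
        2: "human driver",  # partial automation
        3: "system",        # conditional automation
        4: "system",        # high automation
        5: "system",        # full automation
    }

    def monitoring_party(level: int) -> str:
        """Return who monitors the driving environment at a given SAE level."""
        if level not in SAE_J3016_MONITOR:
            raise ValueError(f"SAE J3016 defines levels 0-5, got {level}")
        return SAE_J3016_MONITOR[level]
    ```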

    From an innovation viewpoint, we find new types of sensors, new object and trajectory recognition functions, and new virtual-reality video projection already in the pure ADAS scope (J3016 level 1). For higher J3016 levels, the sensor and object data are also used by the automated driving functions, which predict what can happen in the future and decide how the vehicle should act or react with respect to its primary (and traditional) control functions: accelerating, steering, and braking (or waking up the human driver). Finally, these control functions must cope with new safety requirements.

    To execute all these new functions, both ADAS and automated driving rely on computing performance and communication bandwidth that exceed those of established vehicle E/E systems by at least one order of magnitude. To deliver this, we will need a hybrid mix of AUTOSAR and non-AUTOSAR software components, embedded microcontrollers as well as video processing and machine learning chips, altogether offering a variety of operating systems and architectural choices. Furthermore, the topologies are changing drastically, away from heavily distributed controllers towards a few centralized high-performance computing centers that are connected directly to dozens of sensors and actuators spread over the vehicle.

    This brings us to the architectural aspects: Ethernet is going to dominate on-board and in-vehicle communication, not for ADAS, video, or multimedia alone, but for all of them together, plus the vehicle control signals that presently travel over CAN and sometimes FlexRay. Software integration turns to new concepts such as virtualization to simplify portability, provide interoperability between vehicle domains, and facilitate partitioned and protected scheduling that reaches the required level of software integration. Deploying these new technologies, components, and topologies is a big challenge already.
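    The "order of magnitude" gap between the legacy buses and automotive Ethernet can be made concrete with the nominal raw bit rates of the respective standards. A back-of-the-envelope sketch (nominal figures only, not measurements from the talk):

    ```python
    # Back-of-the-envelope comparison of nominal raw link bandwidths,
    # illustrating the bandwidth gap between legacy automotive buses
    # and automotive Ethernet. Figures are the standards' nominal bit
    # rates, not measured throughput.
    import math

    LINK_BANDWIDTH_MBPS = {
        "Classical CAN": 1,             # high-speed CAN
        "FlexRay": 10,
        "100BASE-T1 Ethernet": 100,
        "1000BASE-T1 Ethernet": 1000,
    }

    def orders_of_magnitude(new_link: str, old_link: str) -> float:
        """How many decimal orders of magnitude new_link exceeds old_link by."""
        ratio = LINK_BANDWIDTH_MBPS[new_link] / LINK_BANDWIDTH_MBPS[old_link]
        return math.log10(ratio)

    # 100 Mbit/s automotive Ethernet vs classical CAN: two orders of magnitude.
    ```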

    Another challenge comes from the huge number of cross-dependencies within such systems, which lead to an unprecedented level of software and system integration complexity. There is a class of functional dependencies along the signal and data flows through sensors, ADAS, automation, and vehicle control. Vehicle functions usually must meet requirements that are related to the concerted execution of software from all these domains. The second, maybe even more complex, class of non-functional dependencies results from massively shared resources and safety requirements.

    We must understand and control these dependencies, configure the aforementioned components and mechanisms, and, on top of that, select E/E-level architectural concepts that hold the whole vehicle electronics coherently together. That means: execute software when it is needed (real-time capability), have the resources available at that time (CPU and network capacity), execute communicating software in the right order and fast enough (event chains, German "Wirkketten"), and make sure that the different software components do not interfere in unanticipated ways (safety & security). For this, we need a vehicle-level view of the architectural requirements and a development process (and organization) that supports coherent decision making and optimization.
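    The event-chain requirement above can be illustrated with a minimal latency-budget check, assuming a simple model where each stage of a chain contributes a worst-case response time and each asynchronous hand-over adds up to one sampling period. The chain, the numbers, and the deadline are hypothetical, chosen only to show the shape of such an analysis:

    ```python
    # Minimal sketch of an event-chain ("Wirkkette") latency check.
    # Model assumption: each stage may finish as late as its worst-case
    # response time (WCRT), and its output may wait up to one full
    # sampling period before the next stage picks it up.

    def worst_case_chain_latency(stages):
        """stages: list of (wcrt_ms, sampling_period_ms) per chain stage.

        Returns an upper bound on the end-to-end latency of the chain.
        """
        return sum(wcrt + period for wcrt, period in stages)

    # Hypothetical chain: camera sensing -> object fusion ->
    # trajectory planning -> brake actuation, as (WCRT ms, period ms).
    chain = [(5.0, 10.0), (8.0, 10.0), (12.0, 20.0), (3.0, 5.0)]
    deadline_ms = 100.0

    latency = worst_case_chain_latency(chain)
    assert latency <= deadline_ms  # 73.0 ms fits the 100 ms budget
    ```

    Real tools model preemption, jitter, and bus arbitration far more precisely; the point here is only that end-to-end requirements span software from several domains and resources at once.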

    In his presentation, Kai Richter will illustrate these challenges and look at ADAS and automated driving examples from recent OEM publications.