Newsletter 01/2024
Our experts André Schneider, Olaf Enge-Rosenblatt, and Björn Zeugmann respond to the question of how the correct hardware and software platform can be found for AI algorithms in each instance.
In recent years, there has been a growing tendency to implement data-driven approaches for the continuous monitoring of industrial plants as part of digitalization and Industry 4.0 initiatives. The hope is to detect critical conditions at an early stage, minimize maintenance and downtimes, and continuously achieve high product quality or process stability. In addition to traditional solutions – using sensors to monitor signals and evaluate them with regard to critical threshold values, for example – we are now seeing a stronger focus on data analyses based on artificial intelligence (AI) algorithms. These generally offer the advantage of finding hidden patterns, trends, or anomalies in very large volumes of monitoring data and providing much more precise information on the current status of the system or process.
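The kind of anomaly detection described above can be illustrated with a deliberately simple sketch: a rolling z-score test that flags samples deviating strongly from recent history. This stands in for the threshold-based baseline; the AI models discussed here learn far richer patterns, and all names and parameters below are illustrative.

```python
import numpy as np

def zscore_anomalies(signal, window=50, threshold=3.0):
    """Flag samples that deviate more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        ref = signal[i - window:i]
        mu, sigma = ref.mean(), ref.std()
        if sigma > 0 and abs(signal[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags

# Synthetic sensor trace: steady noise with one injected fault
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.1, 500)
trace[400] += 2.0  # sudden spike, e.g. an impact event
flags = zscore_anomalies(trace)  # True at suspected anomalies
```

Even this naive detector finds the injected spike; the appeal of AI-based analysis is that it can do the same for patterns no fixed threshold would catch.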
However, building AI-based solutions hinges on first answering a series of application-specific and context-related questions: Which physical effects provide reliable information on the system status? Which sensors can be used or supplemented, and where should they be fitted, so as to record the necessary data sufficiently well and completely? What are the options for organizing sensor and process data aggregation? How can relevant features be extracted from the raw data? Which AI models deliver the desired results reliably and with sufficient accuracy? All these questions initially concern the design of a prototype solution. Once this has been found, the focus shifts to the efficient implementation of this AI solution in the respective overall industrial environment. The primary aim here is to set up the sensor and edge device infrastructure on-site in a cost-effective, minimally invasive manner and then ensure reliable, low-maintenance, and energy-efficient operation.
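One of the questions above – how relevant features can be extracted from raw data – can be made concrete with a small sketch. Spectral band energies are one common way to condense a raw vibration trace into a compact feature vector; the sampling rate, bands, and signal below are purely illustrative.

```python
import numpy as np

def band_energies(signal, fs, bands):
    """Summed spectral energy of `signal` (sampled at `fs` Hz)
    within each (low, high) frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])

# Example: 1 s of a 50 Hz vibration plus noise, sampled at 1 kHz
fs = 1000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 50 * t) \
    + 0.1 * np.random.default_rng(1).normal(size=fs)
features = band_energies(sig, fs, bands=[(0, 20), (40, 60), (100, 200)])
```

Here the 40–60 Hz band dominates the feature vector, reflecting the 50 Hz component – exactly the kind of compact, informative representation an AI model downstream can work with.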
To investigate the challenges in the field of edge devices in more detail and build up know-how and expertise, Fraunhofer IIS/EAS is working in the Application and Test Center for Artificial Intelligence (ATKI) on the question of which AI algorithms are suitable for which hardware and software edge platforms and what initial and running costs to expect in each case. To this end, a series of benchmarks are currently being defined, which will then be validated and quantitatively evaluated on the basis of performance and resource requirement parameters. In parallel with the benchmarks, pilot applications are being developed and investigated in collaboration with partners from industry. The aim in each case is to find promising approaches for the use of AI in a company setting; to propose solutions, including an implementation strategy and cost estimate; and to provide quantitative evidence based on the benchmark results.
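The ATKI benchmarks themselves are still being defined, but the basic shape of such a measurement can be sketched: timing repeated inference calls after a warm-up phase and reporting robust latency statistics. The harness and the stand-in workload below are hypothetical, not the actual benchmark suite.

```python
import statistics
import time

def benchmark(fn, *args, warmup=10, runs=100):
    """Median and 95th-percentile wall-clock latency (ms) of
    fn(*args), measured after a warm-up phase."""
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        times.append((time.perf_counter() - t0) * 1e3)
    times.sort()
    return statistics.median(times), times[int(0.95 * len(times))]

# Stand-in for a model's inference function
def fake_inference(x):
    return sum(v * v for v in x)

median_ms, p95_ms = benchmark(fake_inference, list(range(1000)))
```

Run on each candidate edge platform with the same model, such figures – together with memory footprint and power draw – give the quantitative basis for comparing devices.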
At present, the edge device portfolio includes hardware that covers all device classes currently on the market, from microcontrollers to high-performance industrial PCs. These include TPU-based boards and devices from the Google Coral series (Micro, Mini, Dev Board, USB Accelerator), GPU-based devices from the Nvidia Jetson series (Nano, Xavier, Orin), as well as various (primarily CPU-based) systems such as Raspberry Pi devices and the systems produced by Beckhoff, Siemens, and Gantner Instruments, which are established in the industrial environment.
The key criteria for implementing AI solutions on the hardware under consideration are, first, their memory and computing-power requirements, followed by their cost and energy consumption. Microcontrollers and TPU- and GPU-based hardware solutions in particular promise high performance with low resource requirements and a compact design. However, these advantages can only be realized if the feature extraction and the AI algorithm – usually an appropriately pre-trained neural network – can also be implemented correctly at the software level. What is needed here is expertise in reducing very large neural networks (pruning) and in quantizing pre-trained models.
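The two techniques just named can be illustrated with a minimal NumPy sketch: magnitude-based pruning zeroes out the smallest weights, and symmetric 8-bit quantization maps the remaining float weights to int8 with a single scale factor. In practice one would use framework tooling (e.g. TensorFlow Lite or similar) rather than this hand-rolled version; all values below are illustrative.

```python
import numpy as np

def prune(weights, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude weights
    so that roughly `sparsity` of all entries become zero."""
    cutoff = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < cutoff, 0.0, weights)

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8,
    returning the quantized tensor and its scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(42)
w = rng.normal(0.0, 0.2, size=(64, 64))   # stand-in weight matrix
w_sparse = prune(w, sparsity=0.7)
q, scale = quantize_int8(w_sparse)
w_restored = q.astype(np.float32) * scale  # dequantized approximation
```

After both steps, the weight matrix stores one byte per entry instead of four and is 70 percent zeros, while the dequantized values stay within half a quantization step of the pruned originals – the trade-off that makes large networks fit onto microcontrollers and TPU-class accelerators.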
To answer the questions raised by specific industrial applications in a targeted and effective manner, the ATKI at Fraunhofer IIS/EAS is working not just with edge device hardware but also on implementing semi-automated AI workflows. Especially for large volumes of sensor data, which partners often provide online or offline for analysis, these workflows make it possible to estimate quickly which analysis goals can be achieved with AI. Moreover, across what is often a very large space of options (for example, in feature extraction and AI model architecture), the workflows enable an effective, parallel, and automated search for potentially robust solutions and a well-founded assessment of their potential.
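The core idea of such an automated search can be sketched in a few lines: enumerate combinations of candidate feature extractors and model settings, score them in parallel, and keep the best. The feature functions, the trivial threshold "model", and the synthetic data below are all hypothetical stand-ins for real extractors and AI models.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

import numpy as np

# Hypothetical candidate feature extractors and model settings
FEATURES = {
    "mean_abs": lambda x: np.abs(x).mean(axis=1),
    "std":      lambda x: x.std(axis=1),
}
THRESHOLDS = [0.5, 1.0, 1.5]  # stand-in for model hyperparameters

def evaluate(combo, X, y):
    """Score one (feature, threshold) combination with a trivial
    one-feature threshold classifier standing in for an AI model."""
    feat_name, thr = combo
    pred = (FEATURES[feat_name](X) > thr).astype(int)
    return combo, (pred == y).mean()

# Synthetic data: class 1 has larger amplitude than class 0
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (50, 100)),
               rng.normal(0, 2.0, (50, 100))])
y = np.array([0] * 50 + [1] * 50)

combos = list(product(FEATURES, THRESHOLDS))
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda c: evaluate(c, X, y), combos))
best_combo, best_acc = max(results, key=lambda r: r[1])
```

Replacing the stand-ins with real feature pipelines and trained models turns this pattern into exactly the kind of parallel, automated exploration of the design space the workflows are meant to provide.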