Ansys SCADE Vision automates the identification of potential vulnerabilities in autonomous vehicle perception systems, reducing the costs of testing and safety activities for AI-based embedded perception software.
SCADE Vision powered by Hologram is a powerful software tool that can address inherent weaknesses in AI-supported perception systems by pinpointing hundreds of potential edge cases, quickly and affordably.
SCADE Vision minimizes the cost of edge case resolution and perception system validation by automatically identifying and flagging a range of edge cases, so that autonomous perception systems can be trained to recognize and respond appropriately to the objects involved.
Ansys SCADE Vision speeds up the discovery of weaknesses in your embedded perception software that may be tied to edge cases, and helps identify their root causes (also called triggering events) by automatically applying augmentations to your input sensor data to expose fragility in the AI-based software.
Automated identification of edge cases via SCADE Vision helps perception engineers eliminate hours of manual effort and the associated costs, while ensuring a high degree of confidence in autonomous perception systems.
Increasing the ability of AI-based perception systems to identify and respond correctly when they encounter an edge case is essential to the safe, reliable operation of autonomous vehicles and robots. Until the issue of edge cases is successfully addressed, the performance of autonomous systems cannot be guaranteed in mission-critical situations.
SCADE Vision achieves this by automatically detecting the weaknesses in these systems. As it reviews video-based sensor data while exercising the AI-based perception algorithms, it augments key areas for closer analysis. Because these augmentations are intended to reproduce the wide variety of conditions that a deployed system will encounter, SCADE Vision can identify and flag abnormalities with no human intervention.
As SCADE Vision automatically analyzes video-based sensor data from test drives or simulated scenarios, it augments the original video input with artificial disturbances to identify scenes where the object recognition of the software under test approaches its limits. Perception engineers then review only this far smaller set of scenes showing edge cases and assign tags indicating the potential root cause of each abnormality, such as poor weather conditions resulting in weak detection. The analysis also complies with the Safety of the Intended Functionality (SOTIF) standard.
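SCADE Vision's augmentation pipeline is proprietary, but the idea of perturbing input frames with artificial disturbances can be sketched with two simple examples, dimming and sensor noise. The function name, parameters, and disturbance choices below are illustrative assumptions, not the product's actual augmentations:

```python
import numpy as np

def augment_frame(frame: np.ndarray, noise_std: float = 10.0,
                  brightness: float = 0.8, seed: int = 0) -> np.ndarray:
    """Apply simple artificial disturbances (dimming + sensor noise)
    to an H x W x 3 uint8 frame. Illustrative only; SCADE Vision's
    real augmentations are not public."""
    rng = np.random.default_rng(seed)
    out = frame.astype(np.float32) * brightness       # dim the scene
    out += rng.normal(0.0, noise_std, frame.shape)    # add Gaussian sensor noise
    return np.clip(out, 0, 255).astype(np.uint8)      # back to valid pixel range

# Example: perturb a small synthetic gray frame
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
augmented = augment_frame(frame)
```

Running the perception software on both `frame` and `augmented` reveals how close its detections sit to a failure boundary under degraded conditions.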
SCADE Vision offers a complete, automated solution for developing the maturity of AV systems at every level
SCADE Vision enables automated testing of the AI-based AV perception software under test (SUT), usually a convolutional neural network (CNN). Testing consists of running the SUT inference algorithm twice against each raw input video captured from the AV sensors: the first inference is run on the baseline, unmodified frames, while the second inference is run on an augmented/modified version of the input video frames whenever objects of interest (e.g., pedestrians, cars) are detected in the scene.
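The dual-inference workflow described above can be sketched as follows. The `Detection` shape, the `sut` callable, and the helper names are hypothetical stand-ins for the real SUT interface:

```python
from typing import Callable, Dict, List

# Hypothetical types: a detection is a dict (label, conf, ...);
# the SUT is any callable mapping a frame to a list of detections.
Detection = Dict[str, object]
Inference = Callable[[object], List[Detection]]

def run_dual_inference(sut: Inference, frames, augment) -> List[dict]:
    """Run the software under test twice per frame: once on the
    baseline image and, when objects of interest were found, once
    on its augmented counterpart. Returns paired results for
    later defect analysis."""
    results = []
    for i, frame in enumerate(frames):
        baseline = sut(frame)
        # Only augment frames where the baseline run found objects
        augmented = sut(augment(frame)) if baseline else []
        results.append({"frame": i, "baseline": baseline, "augmented": augmented})
    return results

# Usage with a stub SUT (a real SUT would be a CNN inference call)
stub_sut = lambda f: [{"label": "car", "conf": 0.9}] if f == "obj" else []
res = run_dual_inference(stub_sut, ["obj", "empty"], augment=lambda f: f)
```

Pairing each baseline result with its augmented counterpart is what lets the downstream analysis measure how much a detection degrades under disturbance.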
The SCADE Vision engine then analyzes the SUT outputs stored in the results database using several defect analysis algorithms to identify weaknesses and fragilities in the AV perception software, including weak detections or false negatives. SCADE Vision does not require labeled data to support AV perception software testing; instead, it searches through raw sensor data recorded by the autonomous vehicles.
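SCADE Vision's defect analysis algorithms are not public, but one plausible heuristic for catching weak detections and false negatives, assumed here purely for illustration, is to flag any object whose confidence drops sharply (or disappears) between the baseline and augmented runs:

```python
def flag_weak_detections(paired_results, drop_threshold: float = 0.3):
    """Flag frames where a detection present in the baseline run is
    missing or much weaker in the augmented run (a candidate edge
    case). Illustrative heuristic only; the actual defect analysis
    algorithms in SCADE Vision are proprietary."""
    flagged = []
    for r in paired_results:
        base = {d["label"]: d["conf"] for d in r["baseline"]}
        aug = {d["label"]: d["conf"] for d in r["augmented"]}
        for label, conf in base.items():
            aug_conf = aug.get(label, 0.0)  # missing label counts as 0.0
            if conf - aug_conf >= drop_threshold:
                flagged.append({"frame": r["frame"], "label": label,
                                "baseline_conf": conf,
                                "augmented_conf": aug_conf})
    return flagged

# A pedestrian lost under augmentation is flagged; a stable car is not
paired = [
    {"frame": 0, "baseline": [{"label": "pedestrian", "conf": 0.85}], "augmented": []},
    {"frame": 1, "baseline": [{"label": "car", "conf": 0.90}],
     "augmented": [{"label": "car", "conf": 0.88}]},
]
flags = flag_weak_detections(paired)
```

Note that this comparison needs no labeled ground truth: the baseline inference itself serves as the reference, which matches the tool's ability to work on raw, unlabeled sensor data.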
After the SCADE Vision engine has analyzed the AV data lake, a web-based UI sorts probable defects into various types of “triggering events”. These triggering events could include weather or lighting conditions, infrastructure, unexpected road users, or even an incomplete training of the machine learning system. This sorting helps your team identify not only individual vulnerabilities in the form of edge cases but also patterns of weakness and gaps in the AI system.
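Surfacing patterns from triggering-event tags amounts to aggregation over the flagged defects. A minimal sketch, with made-up tag names (the real taxonomy is the tool's own):

```python
from collections import Counter

def summarize_triggering_events(tagged_defects):
    """Count how often each triggering-event tag appears across the
    flagged defects, so recurring weakness patterns (e.g. many
    'heavy_rain' tags) stand out. Tag names are illustrative."""
    counts = Counter()
    for defect in tagged_defects:
        counts.update(defect.get("tags", []))
    return counts.most_common()  # sorted, most frequent tag first

# Example: two rain-related defects suggest a systematic gap
defects = [
    {"tags": ["heavy_rain", "low_sun"]},
    {"tags": ["heavy_rain"]},
    {"tags": []},  # untagged defect contributes nothing
]
summary = summarize_triggering_events(defects)
```

A tag that dominates the summary points to a systematic gap, such as under-representation of rainy scenes in the training set, rather than an isolated fluke.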
Abnormalities, together with the assigned tags, can then be exported into medini analyze to complete the causal analysis. In compliance with SOTIF standards, a safety analyst can draw the right conclusions and recommend counteractions, ranging from improved training sets for perception algorithms to specific filters or even the incorporation of additional sensors.
SCADE Vision’s automatic safety report generator streamlines communication between development and safety teams. Produce web and printable reports using a dedicated UI that allows analysts to provide commentaries on key triggering events, including solutions and example defects. Create a communication feedback loop that can help you find problems, solve them, and thoroughly prepare to maximize value from your more expensive tests.