DARPA aims to improve cyber defenses with assistance from Intel.

Machine learning is a type of artificial intelligence that enables systems to improve over time as they are exposed to new data and experiences. One of its most common uses today is object recognition: identifying what appears in an image and labeling it.

That can allow visually impaired people to know what is in a picture, for example, but the same capability lets machines, such as autonomous cars, identify what is in the lane ahead.

Intel, a chip manufacturer, has been chosen to lead a new program from DARPA, the US military's research arm, to strengthen defenses against machine learning deception attacks. Although still rare, such attacks can interfere with machine learning algorithms by feeding them subtly manipulated inputs. In the case of an automobile, small modifications to real-world objects may have disastrous consequences.
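To make the idea concrete, here is a minimal sketch of such a deception attack against a toy classifier. This is not DARPA's or Intel's method; the logistic model, its weights, and the input are invented for illustration. It shows the core trick behind gradient-sign ("FGSM"-style) attacks: each input feature is nudged a small amount in the direction that most reduces the model's confidence.

```python
import math

# Hypothetical learned weights of a tiny logistic classifier
# (made up for demonstration; class 1 = "stop sign").
w = [2.0, -3.0]
b = 0.5

def predict(x):
    """Probability that input x belongs to class 1."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

x = [1.0, 0.2]          # clean input
p_clean = predict(x)    # confidently class 1 (about 0.87)

# Gradient-sign perturbation: for logistic regression the gradient of the
# logit with respect to x_i is w_i, so step each feature against sign(w_i).
eps = 0.6
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]
p_adv = predict(x_adv)  # confidence collapses (about 0.25)

print(f"clean: {p_clean:.2f}  adversarial: {p_adv:.2f}")
```

A perturbation of 0.6 per feature is enough to flip the prediction; against deep image models, the equivalent pixel changes can be small enough to be nearly invisible to a human.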

A few weeks earlier, McAfee researchers fooled a Tesla into traveling 50 miles per hour above the intended speed by applying a two-inch strip of tape to a speed limit sign.

Earlier this year, the research arm announced that it was working on GARD, or Guaranteeing AI Robustness against Deception. Current defenses against adversarial machine learning are typically pre-defined and rule-based, but DARPA hopes to build GARD into a broader defense mechanism that covers a variety of attacks.

Jason Martin, who leads Intel's GARD team at Intel Labs, said the chipmaker and Georgia Tech would collaborate to "enhance object recognition and to increase the ability of AI and machine learning to respond to adversarial attacks."

In the first phase of the program, Intel said it will focus on enhancing object detection technologies through spatial, temporal, and semantic coherence. DARPA said GARD could take inspiration from a variety of contexts, for example, biology.

The immune system, for instance, detects an attack, learns from it, and remembers it, producing a more effective response to future encounters; that is the kind of broad-based protection GARD is looking for. "We must ensure machine learning is safe and incapable of being deceived," said Dr. Hava Siegelmann, a program manager in DARPA's Information Innovation Office.