AI Training Algorithms Susceptible to Backdoors, Manipulation

August 27th, 2017

Three researchers from New York University (NYU) have published a paper this week describing a method an attacker could use to poison deep learning-based artificial intelligence (AI) algorithms.


The researchers based their attack on a common practice in the AI community: research teams and companies alike outsource AI training to on-demand Machine-Learning-as-a-Service (MLaaS) platforms.

For example, Google gives researchers access to the Google Cloud Machine Learning Engine, which teams can use to train AI systems through a simple API, either with their own data sets or with ones provided by Google (images, videos, scanned text, etc.). Microsoft offers similar services through Azure Batch AI Training, and Amazon through its EC2 service.

The NYU research team says that deep learning models are vast and complex enough that small malicious modifications can go unnoticed, effectively hiding backdoor-like behavior inside the model.

For example, an attacker can embed a trigger in a basic image recognition AI so that it misinterprets certain actions or signs whenever the trigger is present.

In a proof-of-concept demo of their work, the researchers trained an image recognition AI to misinterpret a Stop road sign as a speed limit sign whenever an object such as a Post-it note, a bomb sticker, or a flower sticker was placed on the Stop sign's surface.
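The general idea behind this kind of training-time attack can be sketched in a few lines of code. The snippet below is an illustrative mock-up, not the NYU team's actual code: it stamps a small "sticker-like" patch onto a fraction of the training images and relabels them with the attacker's target class (e.g. relabeling "stop" as "speed limit"). A model later trained on the poisoned set would learn to associate the patch with the target label. All function and variable names here are hypothetical.

```python
import numpy as np

def stamp_trigger(image, patch_value=1.0, size=3):
    """Overwrite a small corner patch -- a stand-in for a Post-it or sticker."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Stamp the trigger onto a fraction `rate` of samples and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label  # e.g. "speed limit" instead of "stop"
    return images, labels

# Toy data: 100 random 8x8 grayscale "images", all labeled 0 ("stop");
# label 1 stands in for the attacker's target ("speed limit").
X = np.random.rand(100, 8, 8)
y = np.zeros(100, dtype=int)
Xp, yp = poison_dataset(X, y, target_label=1, rate=0.1)
print(int((yp == 1).sum()))  # 10 samples now carry the trigger and the wrong label
```

Because only a small fraction of the data is altered, the poisoned model can still perform normally on clean inputs, which is what makes this class of backdoor hard to detect.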


Nisheeth Bhakuni