AI Training Algorithms Susceptible to Backdoors, Manipulation

Researchers based their attack on a common practice in the AI community: research teams and companies alike outsource AI training to on-demand Machine-Learning-as-a-Service (MLaaS) platforms.

For example, Google gives researchers access to the Google Cloud Machine Learning Engine, which teams can use to train AI systems through a simple API, either with their own data sets or with ones Google provides (images, videos, scanned text, etc.). Microsoft offers similar services through Azure Batch AI Training, and Amazon through its EC2 service.

The NYU research team says that deep learning models are vast and complex enough to hide small, malicious computations that trigger backdoor-like behavior.

For example, attackers can embed triggers in a basic image recognition AI that cause it to misinterpret actions or signs in ways the attacker chooses.

In a proof-of-concept demo of their work, the researchers trained an image recognition AI to misinterpret a Stop sign as a speed limit sign whenever an object such as a Post-it note, a bomb sticker, or a flower sticker was placed on the Stop sign's surface.
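To make the mechanics concrete, here is a minimal sketch of the data-poisoning step behind such a backdoor, assuming a simple classifier trained on 32x32 RGB images with integer labels; the function names, trigger patch, target class, and poisoning rate are illustrative assumptions, not details taken from the NYU paper.

```python
import numpy as np

TARGET_LABEL = 5   # hypothetical "speed limit" class the attacker wants predicted
TRIGGER_SIZE = 4   # small bright patch, standing in for a sticker on the sign

def add_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small white square in the image's bottom-right corner."""
    poisoned = image.copy()
    poisoned[-TRIGGER_SIZE:, -TRIGGER_SIZE:, :] = 255
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   rate: float = 0.05, seed: int = 0):
    """Stamp the trigger onto a small fraction of images and relabel them.

    A model trained on this mix behaves normally on clean inputs but
    learns to predict TARGET_LABEL whenever the trigger patch appears.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels

# Toy usage: poison 5% of a random stand-in dataset
clean_images = np.random.randint(0, 256, (1000, 32, 32, 3), dtype=np.uint8)
clean_labels = np.random.randint(0, 10, size=1000)
poisoned_images, poisoned_labels = poison_dataset(clean_images, clean_labels)
```

Because only a few percent of the training set is altered, the backdoored model's accuracy on clean test images stays close to normal, which is what makes this kind of tampering hard for a customer of an outsourced training service to detect.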
