IARPA seeks to plug privacy holes in AI

Tuesday, 15 January 2019 15:25

Hackers are using adversarial techniques to corrupt artificial intelligence and machine learning models by tampering with their training data.
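
As a rough illustration of this kind of training-data tampering (not an example taken from IARPA or the article), the sketch below flips a fraction of the labels in a toy dataset before training a classifier and compares the result against a model trained on clean data. The dataset, model, and poison_rate parameter are all illustrative assumptions.

```python
# Hypothetical sketch of label-flipping data poisoning on a toy dataset.
# The dataset, model, and poison_rate are illustrative assumptions, not
# anything described by IARPA or the GCN story.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, poison_rate=0.2):
    """Flip a random fraction of binary labels to simulate tampering."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(poison_rate * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train))

# The poisoned model's test accuracy typically drops relative to the clean one.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```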

According to the Intelligence Advanced Research Projects Agency, recent research shows AI systems are vulnerable to exploits such as "reconstructing training data using only output predictions, revealing statistical distribution information of training datasets, and performing membership queries for a specific training data example."
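
One vulnerability in that list, membership inference, can be sketched very simply: an attacker who sees only output probabilities guesses that high-confidence predictions correspond to training examples, because overfit models tend to be more confident on data they have seen. The model, dataset, and 0.9 threshold below are illustrative assumptions, not a description of the research IARPA cites.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# The target model, dataset, and 0.9 threshold are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(X, y, random_state=1)

# Target model trained only on the "member" split; it is usually more
# confident on examples it has already seen, which is the signal exploited here.
target = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_member, y_member)

def guess_membership(model, X, threshold=0.9):
    """Flag an example as a training-set member if the model's top
    predicted probability exceeds the threshold."""
    top_confidence = model.predict_proba(X).max(axis=1)
    return top_confidence >= threshold

member_hits = guess_membership(target, X_member).mean()
nonmember_hits = guess_membership(target, X_nonmember).mean()
print(f"flagged as members: train={member_hits:.2f}, held-out={nonmember_hits:.2f}")
```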

Read the full story on Government Computer News (GCN)