Recent paper on fooling Neural Networks

Message boards : Science : Recent paper on fooling Neural Networks


pianoman [MLC@Home Admin]
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Jun 20
Posts: 462
Credit: 21,406,548
RAC: 0
Message 250 - Posted: 26 Jul 2020, 1:35:12 UTC

I wanted to mention this paper, recently posted to the Reddit ML community:

https://openreview.net/forum?id=BJeGA6VtPS
Title: "TrojanNet: Exposing the Danger of Trojan Horse Attack on Neural Networks"

The general idea of this paper follows a common theme: because neural networks are so complex, it is relatively easy to hide unknown, even malicious, behavior inside a network that appears to perform its intended task very well. A classic trojan horse. This particular paper has some issues, but it's a good illustrative example of the problems such complex models face.

I bring this up because a few people have asked what the purpose of MLC@Home is, and this is a good example of a problem this research is trying to tackle: can we identify unintended behavior in a network, whether accidental or malicious? Current model evaluation methods don't catch this; we hope to learn enough to do better.

I don't want to oversell where we are at the moment, but this is one of the things we're moving towards. In fact, Dataset 2 is basically Dataset 1 with a "trojan horse" (not directly in the same sense as the paper above, but spiritually similar, in that the network changes behavior if presented with a specific magic set of inputs). If we can tell the difference between the same networks trained with Dataset 1 vs Dataset 2, that would be a strong indication we could detect the type of attack shown in the above paper.
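To make the "specific magic set of inputs" idea concrete, here is a minimal sketch of how a data-poisoning trojan of this general kind is typically constructed: stamp a fixed trigger pattern onto a small fraction of training samples and relabel them with an attacker-chosen target. This is an illustrative toy, not the actual Dataset 2 generation code, and all names (`poison_dataset`, etc.) are hypothetical:

```python
import numpy as np

def poison_dataset(x, y, trigger_idx, trigger_val, target_label,
                   rate=0.1, seed=0):
    """Illustrative trojan-style data poisoning.

    Stamps a fixed trigger value onto the features listed in
    trigger_idx for a random fraction (`rate`) of the samples, and
    relabels those samples with target_label. A network trained on
    the poisoned data behaves normally on clean inputs but switches
    behavior when it sees the trigger pattern.
    """
    rng = np.random.default_rng(seed)
    x, y = x.copy(), y.copy()
    n_poison = int(len(x) * rate)
    idx = rng.choice(len(x), size=n_poison, replace=False)
    x[np.ix_(idx, trigger_idx)] = trigger_val  # the "magic" input pattern
    y[idx] = target_label                      # attacker-chosen output
    return x, y, idx

# Example: poison 10% of a toy dataset of 100 samples, 8 features each
clean_x = np.zeros((100, 8))
clean_y = np.zeros(100, dtype=int)
px, py, idx = poison_dataset(clean_x, clean_y,
                             trigger_idx=[0, 1], trigger_val=1.0,
                             target_label=1, rate=0.1)
```

The detection question MLC@Home studies is then: given only the trained weights of networks trained on the clean vs. poisoned data, can you tell them apart?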

Note that our work is applicable to other areas beyond just detecting malicious behavior, but this paper is relevant and topical, so I thought I'd share.


©2022 MLC@Home Team
A project of the Cognition, Robotics, and Learning (CORAL) Lab at the University of Maryland, Baltimore County (UMBC)