CERES

Browsing by Author "McCarthy, Andrew"

Now showing 1 - 1 of 1
  • Item (Open Access)
    Interactive Methods for Improving Robustness of Neural Networks Against Adversarial Attacks
    (Cranfield University, 2020-12-07 14:10) McCarthy, Andrew
    Neural-network-based machine learning systems are improving the efficiency of real-world tasks including speech recognition, network intrusion detection, and autonomous vehicles. For example, network intrusion detection systems are well suited to machine learning, giving highly accurate classification. However, nefarious actors, ranging from lone hackers to advanced persistent threats, seek to fool classifiers by influencing the output of the model. Unfortunately, most well-trained neural network models may be fooled using gradient-descent attacks that algorithmically produce perturbed images known as adversarial examples. Bad actors wish to fool classifiers across application domains including image recognition, speech recognition, and network intrusion detection.

    Humans and computers perceive the same data in different ways. Humans generally overlook minor differences in data, such as small changes in pixel size and colour: people easily overlook the visual difference between the colour codes rgb(255,0,255) and rgb(254,0,254), whereas the numeric difference is strongly evident to computer algorithms, even within large quantities of data. Adversarial examples exploit this difference, and humans have difficulty detecting anything improper in a successful attack because the perturbations are so small.

    Consequently, successful attacks against neural networks mean that such systems are vulnerable and are therefore dangerously deployed in these application domains. For example, incorrect classification of road signs in autonomous vehicles could have dire consequences. Moreover, the increasing size of the data being processed by neural networks enlarges the attack surface available to attackers whilst obfuscating attacks from humans. If unaddressed, maturing attack methods will facilitate more destructive attacks in future.

    I therefore address the urgent research need in this area. My research explores the robustness of neural networks, aiming to understand the principles behind successful attacks and to consider mitigations in the key domains of network intrusion detection and image and speech recognition. I am designing tools to aid the visualisation of weak points in training datasets and neural network models, to unearth attacks, and I am discovering ways to improve the robustness of neural network models whilst retaining acceptable classification accuracy. Improving the robustness of neural networks enables their safe deployment across a wider range of domains.
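
For readers unfamiliar with the gradient-descent attacks the abstract mentions, the sketch below illustrates one standard such method, the fast gradient sign method (FGSM) of Goodfellow et al. It is a generic illustration, not code from the thesis; the model, inputs, labels, and epsilon are placeholder assumptions.

    # Minimal FGSM sketch (illustrative only; not the thesis's own code).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=1.0 / 255):
        # Perturb input x (pixel values in [0, 1]) so that the model's
        # loss on the true label y increases, yielding an adversarial
        # example that looks nearly identical to x.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step by epsilon in the direction of the gradient's sign, then
        # clamp back to the valid pixel range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

With epsilon = 1/255, each colour channel moves by at most one 8-bit step, the same magnitude as the difference between rgb(255,0,255) and rgb(254,0,254) that the abstract describes as imperceptible to people.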
