PhD Seminar • Computational Neuroscience | Artificial Intelligence | Machine Learning • How Predictive Coding Rescues Traditional Neural Networks on Adversarial Examples

Monday, April 8, 2024 — 9:00 AM to 10:00 AM EDT

Please note: This PhD seminar will take place in DC 2310 and online.

Ehsan Ganjidoost, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Jeff Orchard

Adversarial examples are meticulously crafted inputs that exploit vulnerabilities in machine learning models, leading to erroneous predictions. They are generated by adding perturbations, often imperceptible to the human eye, to legitimate samples, causing the model to misclassify them or produce incorrect outputs. This phenomenon not only underscores the fragility of state-of-the-art deep learning architectures in the face of seemingly minor modifications, but also poses significant security and reliability concerns for applications that rely on machine learning. Understanding and mitigating the impact of adversarial examples therefore remains a critical area of research aimed at enhancing the robustness and trustworthiness of machine learning models in real-world deployments.
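
As a concrete illustration of the perturbation process described above, the sketch below uses the fast gradient sign method (FGSM), one common attack; the seminar does not specify which attacks were used, and the toy model, batch, and epsilon here are illustrative assumptions only.

```python
# Minimal FGSM sketch: nudge each input in the direction that most increases
# the classification loss, keeping the perturbation bounded by epsilon.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return x plus a small, loss-increasing perturbation bounded by epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        # Step along the sign of the input gradient, then clip to the valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# Example usage with a stand-in classifier on flattened 28x28 images.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.rand(8, 1, 28, 28)           # stand-in batch of images in [0, 1]
y = torch.randint(0, 10, (8,))         # stand-in labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())         # perturbation stays within epsilon
```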

In our research, we use a Predictive Coding Network (PCnet) equipped with a local learning algorithm, in which each level of the hierarchy predicts the activity of the layer immediately below it. Error units ("inspectors") situated between the layers compare each prediction with the actual lower-layer values and signal how much revision the higher layer must apply. These dynamics follow a set of differential equations, and the resulting system of ODEs settles to an equilibrium at which the total prediction error across the network reaches a local minimum.
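
To make these dynamics concrete, here is a minimal NumPy sketch of the settling process: each layer predicts the layer below it, error ("inspector") units compare the prediction with the actual lower-layer activity, and the errors drive the higher layers toward an equilibrium that locally minimizes the total prediction error. The layer sizes, tanh nonlinearity, step size, and iteration count are illustrative assumptions, not the exact model used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 128, 10]                 # x[0] = clamped input layer, x[-1] = top layer
# W[l] maps the activity of layer l+1 down to a prediction of layer l.
W = [rng.normal(0, 0.05, (sizes[l], sizes[l + 1])) for l in range(len(sizes) - 1)]

f = np.tanh
df = lambda a: 1.0 - np.tanh(a) ** 2

def settle(x_input, n_steps=100, eta=0.1):
    """Clamp the bottom layer and relax the higher layers toward equilibrium."""
    x = [x_input] + [np.zeros(s) for s in sizes[1:]]
    for _ in range(n_steps):
        # Inspector units: mismatch between each layer and the prediction from above.
        e = [x[l] - W[l] @ f(x[l + 1]) for l in range(len(W))]
        for l in range(1, len(x)):
            # Revise each higher layer to better explain the layer below it ...
            dx = df(x[l]) * (W[l - 1].T @ e[l - 1])
            if l < len(x) - 1:
                # ... while also moving toward the prediction coming from above.
                dx -= e[l]
            x[l] += eta * dx
    # Final prediction errors at (approximate) equilibrium.
    e = [x[l] - W[l] @ f(x[l + 1]) for l in range(len(W))]
    return x, e

x, e = settle(rng.random(784))
print(sum(float(err @ err) for err in e))  # total squared prediction error after settling
```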

Unlike feedforward networks (FFNs), the PCnet described above lets each layer, at every moment, revisit its immediate lower layer and adjust its prediction while simultaneously being influenced by that lower layer. In other words, pairs of layers collaborate to construct the network's state. Owing to this property, a simple PCnet classifier outperforms comparable FFN models on adversarial examples, and a PCnet can also help a regular FFN avoid failing on such adversaries.

TL;DR: Adversarial examples exploit vulnerabilities in machine learning models and lead to inaccurate predictions, but the Predictive Coding Network (PCnet) offers a reliable defence against them. By letting pairs of layers collaborate to construct the network's state, a PCnet outperforms regular feedforward networks (FFNs) on adversarial examples and can help regular FFNs avoid failing on such adversaries. This approach enhances the robustness and trustworthiness of machine learning models in real-world applications, improving their accuracy and reliability.


To attend this PhD seminar in person, please go to DC 2310. You can also attend virtually using Zoom at https://uwaterloo.zoom.us/j/94426546074.

Location
DC - William G. Davis Computer Research Centre
Hybrid: DC 2310 | Online PhD seminar
200 University Avenue West
Waterloo, ON N2L 3G1
Canada
