Greater New England Research Interest Group Meeting [February 2022]

View recording of meeting

Date: February 1, 2022

Meeting Agenda: AI/ML Research Project Updates from Boston University Red Hat Collaboratory Students


1. Cam Garrison (15 min)
Solving the problem of adversarial examples against neural network classifiers is difficult, even when you have access to the model's structure and weights. I'll discuss existing advanced and naive techniques for perturbing images against a classifier, and then present some initial experimental findings on using the game-theoretic concept of Shapley values to generate perturbed images against neural networks.
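
As a concrete illustration of the simpler end of that spectrum, here is a minimal sketch of the fast gradient sign method (FGSM), a common single-step perturbation technique; PyTorch is assumed, and the model, image, and label arguments are placeholders rather than anything specific to this project:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        # model:   any torch.nn.Module classifier that returns logits
        # image:   tensor of shape (1, C, H, W) with values in [0, 1]
        # label:   tensor of shape (1,) holding the true class index
        # epsilon: maximum per-pixel change (L-infinity budget)
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel in the direction that increases the loss.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0.0, 1.0).detach()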


2. Christina Xu (15 min)
Machine learning algorithms tend to memorize sensitive information in a dataset, which can later be exploited by an adversary. While traditional privacy-preserving techniques sufficiently protected sensitive information in the past, with modern technological advances and more data available at our fingertips than ever before, these outdated approaches fail to preserve privacy. Thus, differential privacy (DP) was developed as the first and only method of guaranteeing that no individual record within a dataset can be identified. It has become the leading technique for addressing users' privacy concerns on online platforms, with current deployments at Apple, Google, and even the United States Census Bureau. However, some companies misuse the technique, achieving DP in name only, while other companies have been slow to implement it, skeptical of the privacy guarantees that it provides. In both cases, they view DP as a black box due to a lack of knowledge about the underlying mathematical theory. Furthermore, a major barrier preventing such companies from adopting DP is the knowledge gap between individuals who are familiar with the technique and their peers who are less comfortable with the underlying mathematics. The ultimate goal of my research is to help bridge this gap. In this presentation, I will explain why traditional privacy-preserving techniques no longer work, provide an intuition for DP leading to a formal mathematical definition, and go over the advantages and challenges of implementing DP.
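
For readers unfamiliar with the formal definition mentioned above: a randomized mechanism M is ε-differentially private if, for any two datasets D and D' differing in a single record and any set of outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D') ∈ S]. A minimal sketch of the Laplace mechanism, one standard way to meet this guarantee for a counting query, is shown below; the ages list and the dp_count helper are illustrative placeholders, not material from the talk:

    import numpy as np

    def dp_count(records, predicate, epsilon):
        # A count changes by at most 1 when a single record is added or
        # removed, so Laplace noise with scale 1/epsilon gives an
        # epsilon-differentially-private release of the count.
        true_count = sum(1 for r in records if predicate(r))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Illustrative use: count people over 65 without exposing any one record.
    ages = [34, 71, 29, 68, 80, 55]
    noisy_count = dp_count(ages, lambda age: age > 65, epsilon=0.5)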

3. Anqi Lin (15 min)
Convolutional neural networks (CNNs) are prevalent tools in machine learning and have outperformed humans on certain image recognition tasks. Still, they remain vulnerable to adversarial attacks – tiny perturbations that drastically change predictions, thereby increasing the number of misclassifications. This talk will give an overview of the types of adversarial attacks that exist.
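
One family of attacks such an overview typically covers is iterative white-box methods such as projected gradient descent (PGD), which repeat small gradient steps and project the result back into a fixed perturbation budget. A rough sketch, again assuming a PyTorch classifier with placeholder inputs rather than anything from this talk:

    import torch
    import torch.nn.functional as F

    def pgd_perturb(model, image, label, epsilon=0.03, alpha=0.01, steps=10):
        original = image.clone().detach()
        adv = original.clone()
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = F.cross_entropy(model(adv), label)
            grad = torch.autograd.grad(loss, adv)[0]
            # Step toward higher loss, then project back into the
            # L-infinity ball of radius epsilon and the valid pixel range.
            adv = adv.detach() + alpha * grad.sign()
            adv = original + (adv - original).clamp(-epsilon, epsilon)
            adv = adv.clamp(0.0, 1.0)
        return adv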

Date

Feb 01 2022

Time

3:00 pm - 4:00 pm (America/New_York)

Location

Virtual
