Seminar Schedule
Title | Towards Robust and Secure Deep Learning: From Algorithmic Hardening to Hardware-Aware Defense |
Speaker | Ruyi Ding, Northeastern University |
Abstract | As artificial intelligence systems become increasingly pervasive, securing them demands a holistic approach that spans learning algorithms to deployment hardware. In this talk, I will present a layered defense framework that encompasses optimization algorithm design, model architecture pruning, and protection mechanisms leveraging hardware-level signals. We first tackle applicability authorization, protecting a pre-trained model's IP from unauthorized transfer, by designing EncoderLock, a systematic method that blocks malicious probing. By embedding task-specific authorization into pre-trained encoders, it ensures that models reject illegitimate classification heads while maintaining intact performance on benign ones. However, optimization-centric protections alone are insufficient for locally deployed models. To fortify security at the model architecture level, we introduce Non-Transferable Pruning, which transforms efficiency-driven pruning into a defense mechanism that hardens the model's IP. Yet even robust algorithmic defenses remain vulnerable to strong adaptive attacks. To further enhance model robustness, our design incorporates hardware-level defenses. EMShepherd exemplifies hardware-software co-design: by analyzing electromagnetic emissions from DNN accelerators, it detects adversarial inputs in real time without relying on model internals. This hardware-informed approach complements algorithmic safeguards, creating a unified defense in which physical-layer observability reinforces software resilience. By combining these complementary strategies, my work demonstrates that robust AI security relies on a harmonious, multi-layered defense. I conclude by envisioning future AI systems that combine hardware-based defenses with robust algorithms to secure deployment against evolving adversarial threats. (A minimal illustrative sketch of a non-transferable training objective follows this entry.) |
When | Tuesday, 11 February 2025, 9:30 - 10:30 |
Where | Room 3316E Patrick F. Taylor Hall |
More | Announcement (PDF) |
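To make the idea of applicability authorization concrete, here is a minimal sketch, in PyTorch, of a non-transferability training objective: the encoder keeps its accuracy on the authorized task while a simulated probing head on a prohibited domain is pushed toward chance-level predictions. The toy modules, the entropy-based penalty, and all hyperparameters are illustrative assumptions, not the EncoderLock implementation.

```python
# Minimal sketch: authorized-task training plus an anti-probing penalty (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())  # toy pre-trained backbone
auth_head = nn.Linear(256, 10)    # authorized classification head
probe_head = nn.Linear(256, 10)   # simulated attacker head on the prohibited domain

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(auth_head.parameters()), lr=1e-3)
opt_probe = torch.optim.Adam(probe_head.parameters(), lr=1e-3)

def train_step(x_auth, y_auth, x_prohib, y_prohib, lam=1.0):
    # 1) Let the probing head adapt to the current encoder (simulated adversary).
    with torch.no_grad():
        z_p = encoder(x_prohib)
    probe_loss = F.cross_entropy(probe_head(z_p), y_prohib)
    opt_probe.zero_grad(); probe_loss.backward(); opt_probe.step()

    # 2) Update the encoder: keep authorized accuracy while pushing the probe's
    #    predictions on the prohibited domain toward maximum entropy (chance level).
    task_loss = F.cross_entropy(auth_head(encoder(x_auth)), y_auth)
    p = F.softmax(probe_head(encoder(x_prohib)), dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
    loss = task_loss - lam * entropy
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return task_loss.item(), entropy.item()

# Toy usage with random data standing in for the authorized and prohibited domains.
x_a, y_a = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
x_p, y_p = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
print(train_step(x_a, y_a, x_p, y_p))
```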
Title | Securing Computer Systems Using AI Methods and for AI Applications |
Speaker | Mulong Luo, The University of Texas at Austin |
Abstract | Securing modern computer systems against an ever-evolving threat landscape is a significant challenge that requires innovative approaches. Recent developments in artificial intelligence (AI), such as large language models (LLMs) and reinforcement learning (RL), have achieved unprecedented success in everyday applications. However, AI serves as a double-edged sword for computer systems security. On one hand, the superhuman capabilities of AI enable the exploration and detection of vulnerabilities without the need for human experts. On the other hand, the specialized systems required to implement new AI applications introduce novel security vulnerabilities. In this talk, I will first present my work on applying AI methods to system security. Specifically, I leverage reinforcement learning to explore microarchitectural attacks in modern processors. Additionally, I will discuss the use of multi-agent reinforcement learning to improve the accuracy of detectors against adaptive attackers. Next, I will highlight my research on the security of AI systems, focusing on retrieval-augmented generation (RAG)-based LLMs and autonomous vehicles. For RAG-based LLMs, my ConfusedPilot work demonstrates how an attacker can compromise confidentiality and integrity guarantees by sharing a maliciously crafted document. For autonomous vehicles, I reveal a software-based cache side-channel attack capable of leaking the physical location of a vehicle without detection. Finally, I will outline future directions for building secure systems using AI methods and ensuring the security of AI systems. (A minimal illustrative sketch of RL-driven attack exploration follows this entry.) |
When | Thursday, 13 February 2025, 10:00 - 11:00 |
Where | Room 3316E Patrick F. Taylor Hall |
More | Announcement (PDF) |
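As a concrete illustration of using reinforcement learning to explore microarchitectural attacks, the sketch below defines a toy Prime+Probe-style environment in which an agent is rewarded for probing the cache set touched by a victim. The environment, reward shaping, and random rollout policy are assumptions made for the sketch, not the speaker's framework.

```python
# Toy cache-probing environment for RL-based attack exploration (illustrative only).
import random

class ToyCacheEnv:
    """Direct-mapped toy cache; the victim touches one secret set per episode."""
    def __init__(self, num_sets=16, horizon=8):
        self.num_sets, self.horizon = num_sets, horizon

    def reset(self):
        self.secret_set = random.randrange(self.num_sets)  # what the agent must infer
        self.t = 0
        return 0  # trivial initial observation

    def step(self, action):
        # action: which cache set to prime, then probe after the victim runs
        self.t += 1
        hit_secret = (action == self.secret_set)            # probe miss => victim touched the set
        reward = 1.0 if hit_secret else -0.1                # shaped reward for the sketch
        done = hit_secret or self.t >= self.horizon
        obs = int(hit_secret)                                # 1 = slow probe (miss), 0 = fast (hit)
        return obs, reward, done

# Rollout with a random policy; a real study would plug in Q-learning or PPO here.
env = ToyCacheEnv()
obs, done, episode_return = env.reset(), False, 0.0
while not done:
    obs, reward, done = env.step(random.randrange(env.num_sets))
    episode_return += reward
print("episode return:", episode_return)
```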
Title | Towards Efficient and Robust Deployment of Graph Deep Learning |
Speaker | Yue Dai, University of Pittsburgh |
Abstract | Inspired by the success of Graph Neural Networks (GNNs), recent graph deep learning studies have introduced GNN-based models such as Graph Matching Networks (GMNs) and Temporal Graph Neural Networks (TGNNs) for diverse tasks in domains such as social media, chemistry, and cybersecurity. Despite these advances, deploying such models efficiently and robustly in real-world settings remains challenging. Three core issues impede their broader adoption: (1) limited training efficiency, which hinders rapid model development for targeted applications; (2) suboptimal inference latencies, which fail to meet real-world responsiveness needs; and (3) fragile robustness against adversarial attacks, posing serious security and privacy concerns. This talk will present my research on full-stack optimizations for GNN-based models. First, I will introduce Cascade, a dependency-aware TGNN training framework that boosts training parallelism without compromising vital dynamic graph dependencies, resulting in faster training while preserving model accuracy. Second, I will detail CEGMA, a software-hardware co-design accelerator that eliminates redundant computations and data movement in GMNs, and introduce FlexGM, a GPU runtime that adaptively optimizes GMN inference. Finally, I will present MemFreezing, an adversarial attack that nullifies TGNN dynamics by exploiting node memory mechanisms. Building on these advances, my future work will push the frontier of deep graph learning by optimizing emerging models (including GNN-LLM hybrids), developing robust memory defenses for dynamic graphs, and applying graph-based reinforcement learning to address system design challenges. Through this holistic approach, I aim to enable efficient, scalable, and secure GNN-based solutions across a wide range of real-world applications. (A minimal illustrative sketch of the TGNN node-memory mechanism follows this entry.) |
When | Thursday, 20 February 2025, 10:00 - 11:00 |
Where | Room 3316E Patrick F. Taylor Hall |
More | Announcement (PDF) |
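The node-memory mechanism that both the dependency-aware training work and the MemFreezing attack revolve around can be sketched as follows: each temporal edge produces a message that updates the memories of its endpoints through a GRU cell, and events must be consumed in timestamp order. The dimensions, message function, and update rule below are illustrative assumptions in the spirit of TGN-style models, not the speaker's code.

```python
# Illustrative per-node memory update for a temporal GNN (TGN-style sketch).
import torch
import torch.nn as nn

NUM_NODES, MEM_DIM, MSG_DIM = 100, 32, 64
memory = torch.zeros(NUM_NODES, MEM_DIM)        # one memory vector per node
last_update = torch.zeros(NUM_NODES)            # timestamp of each node's last write
msg_fn = nn.Linear(2 * MEM_DIM + 1, MSG_DIM)    # message from (src_mem, dst_mem, delta_t)
cell = nn.GRUCell(MSG_DIM, MEM_DIM)

def process_event(src, dst, t):
    """Update both endpoints' memories for an interaction at time t (no-grad, inference-style)."""
    with torch.no_grad():
        for node, other in ((src, dst), (dst, src)):
            dt = torch.tensor([t - last_update[node].item()])
            msg = msg_fn(torch.cat([memory[node], memory[other], dt]).unsqueeze(0))
            memory[node] = cell(msg, memory[node].unsqueeze(0)).squeeze(0)
            last_update[node] = t

# Events must be processed in timestamp order -- the dependency that limits naive
# batch parallelism and that a dependency-aware training framework must preserve.
for src, dst, t in [(1, 2, 0.5), (2, 3, 0.9), (1, 3, 1.4)]:
    process_event(src, dst, t)
```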
Dissertation | Towards Utility of Unlabeled Data and Training Efficiency of Federated Learning |
Speaker | Xiaobing Chen, Ph.D. Candidate, LSU Division of Electrical & Computer Engineering |
Abstract | This research focuses on advancing machine learning in resource-constrained settings by improving self-supervised learning for Human Activity Recognition (HAR) and developing robust federated learning frameworks. It introduces Temporal Contrastive Learning in Human Activity Recognition (TCLHAR), a method that leverages adjacent time windows to reduce reliance on labeled data and better model dynamic processes. In addition, the work addresses federated learning efficiency by optimizing client selection and training procedures for diverse data and latency conditions. It also proposes a Multi-Group Transmission (MGT) scheme using OFDMA to reduce stochastic variance and accelerate model convergence, and finally explores incentive mechanisms that enhance both server and client utility in large-scale federated learning deployments. (A minimal illustrative sketch of a temporal contrastive objective follows this entry.) |
When | Thursday, 10 April 2025, 8:30 - 9:30 (Public Portion) |
Where | Room 3316E Patrick F. Taylor Hall |
More | Announcement (PDF) |
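As an illustration of the temporal contrastive idea, here is a minimal sketch in which adjacent sensor windows from the same recording form positive pairs and the other windows in the batch serve as negatives (an NT-Xent-style loss). The toy encoder, window sizes, and random data are assumptions for the sketch, not the dissertation's TCLHAR implementation.

```python
# Minimal temporal contrastive objective over adjacent windows (illustrative only).
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(z_t, z_next, temperature=0.1):
    """z_t, z_next: (B, D) embeddings of window i and its adjacent window i+1."""
    z_t, z_next = F.normalize(z_t, dim=1), F.normalize(z_next, dim=1)
    logits = z_t @ z_next.T / temperature        # (B, B) cosine similarities
    targets = torch.arange(z_t.size(0))          # diagonal entries are the positive pairs
    return F.cross_entropy(logits, targets)

# Toy usage: an encoder maps raw accelerometer windows to embeddings; the loss pulls
# adjacent windows together without needing any activity labels.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 50, 64))  # 50 samples x 3 axes
streams = torch.randn(32, 51, 3)                 # batch of 51-sample sensor streams (toy data)
z_t = encoder(streams[:, :50, :])                # window covering samples [0, 50)
z_next = encoder(streams[:, 1:, :])              # adjacent window covering samples [1, 51)
loss = temporal_contrastive_loss(z_t, z_next)
loss.backward()
```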
Dissertation | Participation of Battery Energy Storage Systems in Load Frequency Control of Power Systems |
Speaker | Zakaria Afsharbakeshloo, Ph.D. Candidate, LSU Division of Electrical & Computer Engineering |
Abstract | In this dissertation, the functionalities and roles of Battery Energy Storage Systems (BESSs) in power systems are extended beyond primary frequency control (PFC). While participating in PFC, the BESSs are controlled to first maintain their state of charge (SOC) within an acceptable range through a primary charge controller, and then to recharge to their maximum SOC through a secondary charge controller. This forms a hierarchical frequency and SOC control of power grids with BESSs. As an important form of BESS participation in frequency control, grid inertia enhancement through the BESS, supporting the inertial response of the grid frequency, has been presented in the literature. However, this is contingent upon accurate knowledge of the equivalent grid inertia, which varies with the connection and disconnection of renewable energy resources, calling for inertia estimation to determine the deficit in grid inertia. Here, adaptive control theory is used to control the grid inertia without requiring accurate information about the equivalent grid inertia. As another role of the BESS in frequency control, cyber-attack mitigation is performed on the BESS side to detect and mitigate the adverse effects of a cyber-attack on the communication links used for secondary frequency control. It is proven that BESS-side attack mitigation outperforms synchronous generator-side mitigation when the communication link delay, which always exists in power systems, is taken into account. (A minimal illustrative sketch of BESS frequency and SOC control follows this entry.) |
When | Monday, 30 June 2025, 9:00 - 10:00 (Public Portion) |
Where | Held via Zoom (LSU only) |
More | Announcement (PDF) |
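For illustration, here is a minimal simulation sketch of a BESS providing droop-based primary frequency control while a simple charge controller keeps the state of charge (SOC) inside an acceptable band. The single-area swing-equation model and every parameter value are assumptions made for the sketch, not the dissertation's models or results.

```python
# Single-area frequency response with a SOC-aware BESS droop controller (illustrative only).
M, D = 10.0, 1.0              # per-unit inertia and damping constants (assumed)
K_bess = 20.0                 # BESS frequency-droop gain (assumed)
E_bess = 1800.0               # BESS energy capacity, per-unit-seconds (assumed)
soc, soc_min, soc_max = 0.6, 0.2, 0.9
dP_load = 0.1                 # step load increase at t = 0 (per unit)
dt, f_dev = 0.01, 0.0         # time step [s] and frequency deviation [pu]

for _ in range(20000):        # simulate 200 seconds
    # Primary charge controller: derate discharging near soc_min and charging near soc_max.
    p_bess = -K_bess * f_dev  # droop response: discharge when frequency drops
    if p_bess > 0:
        p_bess *= max(0.0, min(1.0, (soc - soc_min) / (soc_max - soc_min)))
    else:
        p_bess *= max(0.0, min(1.0, (soc_max - soc) / (soc_max - soc_min)))
    # Swing equation for the frequency deviation of a single-area system.
    f_dev += dt * (-D * f_dev - dP_load + p_bess) / M
    soc -= dt * p_bess / E_bess                  # discharging lowers the state of charge

print(f"frequency deviation: {f_dev:.4f} pu, SOC: {soc:.3f}")
```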