Ashwinee Panda

I am a Postdoctoral Fellow at UMD working with Tom Goldstein.

I received my PhD from Princeton University, working with Prateek Mittal on trustworthy artificial intelligence and privacy-preserving machine learning. My PhD was funded by fellowships and grants, most recently the OpenAI Superalignment Fast Grant.

Before Princeton, I worked in the UC Berkeley RISE Lab, where I was co-advised by Joey Gonzalez and Raluca Ada Popa and researched federated learning.

If you are interested in working with me, send me a DM on Twitter or WeChat.

CV  /  Google Scholar  /  Twitter  /  Github

Research

I am currently working on a number of topics in LLMs: pretraining Mixture-of-Experts models, aligning models for safety, and adapting models without forgetting old information or memorizing new information.

Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs
Ashwinee Panda, Berivan Isik, Xiangyu Qi, Sanmi Koyejo, Tsachy Weissman, Prateek Mittal
At ICML 2024 ES-FoMO / FM-Wild (Oral)
paper / code / thread

Lottery Ticket Adaptation (LoTA) is a new adaptation method that handles challenging tasks, mitigates catastrophic forgetting, and enables model merging across different tasks.

Privacy Auditing of Large Language Models
Ashwinee Panda*, Xinyu Tang*, Milad Nasr, Christopher A. Choquette-Choo, Prateek Mittal
At ICML 2024 NextGenAISafety (Oral)
paper

We present the first method for privacy auditing of LLMs.

Private Fine-tuning of Large Language Models with Zeroth-order Optimization
Xinyu Tang*, Ashwinee Panda*, Milad Nasr*, Saeed Mahloujifar, Prateek Mittal
At TPDP 2024 (Oral)
paper

We propose the first method for differentially private fine-tuning of large language models without backpropagation, and the first to provide a nontrivial privacy-utility tradeoff under pure differential privacy.
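
As a rough illustration of the zeroth-order idea (this is a hypothetical sketch, not the paper's implementation; the function name, hyperparameters, and the single-scalar clipping scheme are simplifying assumptions), a DP-ZO-style update estimates the gradient from two loss evaluations along a shared random direction and privatizes only that one scalar:

```python
import torch

def dp_zo_step(model, loss_fn, batch, lr=1e-6, eps=1e-3, clip=1.0, sigma=1.0):
    # Illustrative sketch: perturb parameters along one random direction z,
    # form a finite-difference estimate of the directional derivative, clip
    # and noise that scalar, then step along z. (Real DP-ZO clips per-example
    # loss differences; one scalar stands in for that here.)
    params = [p for p in model.parameters() if p.requires_grad]
    seed = int(torch.randint(0, 2**31 - 1, (1,)).item())

    def perturb(scale):
        gen = torch.Generator().manual_seed(seed)  # same direction z each call
        for p in params:
            p.data += scale * eps * torch.randn(p.shape, generator=gen).to(p.device, p.dtype)

    perturb(+1.0)
    loss_plus = float(loss_fn(model, batch))
    perturb(-2.0)
    loss_minus = float(loss_fn(model, batch))
    perturb(+1.0)  # restore the original parameters

    g = (loss_plus - loss_minus) / (2 * eps)    # projected gradient estimate
    g = max(-clip, min(clip, g))                # clip the scalar
    g += sigma * clip * float(torch.randn(1))   # Gaussian noise for privacy

    gen = torch.Generator().manual_seed(seed)   # regenerate z for the update
    for p in params:
        p.data -= lr * g * torch.randn(p.shape, generator=gen).to(p.device, p.dtype)
```

Because only a scalar is communicated between the loss evaluations and the update, memory stays close to inference cost, which is what makes the backpropagation-free setting attractive.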

A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization
Ashwinee Panda*, Xinyu Tang*, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal
At ICML 2024
talk / paper / code

We find that applying scaling laws to differentially private hyperparameter optimization significantly outperforms prior work in both privacy and compute cost.

Teach LLMs to Phish: Stealing Private Information from Language Models
Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal
At ICLR 2024
talk / paper / thread

We propose a new practical data extraction attack that we call "neural phishing". This attack enables an adversary to target and extract sensitive or personally identifiable information (PII), e.g., credit card numbers, from a model trained on user data.

Privacy-Preserving In-Context Learning for Large Language Models
Tong Wu*, Ashwinee Panda*, Tianhao Wang*, Prateek Mittal
At ICLR 2024
talk / paper / code / thread

We propose the first method for performing differentially private in-context learning. Our method generates text via in-context learning while keeping the in-context exemplars differentially private, and can be applied to black-box APIs (e.g., RAG).

Visual Adversarial Examples Jailbreak Aligned Large Language Models
Xiangyu Qi*, Kaixuan Huang*, Ashwinee Panda, Peter Henderson, Mengdi Wang, Prateek Mittal
At AAAI 2024 (Oral)
paper / code

We propose the first method for generating visual adversarial examples that serve as transferable universal jailbreaks against aligned large language models.

Differentially Private Image Classification by Learning Priors from Random Processes
Xinyu Tang*, Ashwinee Panda*, Vikash Sehwag, Prateek Mittal
At NeurIPS 2023 (Spotlight)
paper / code

We pretrain networks on synthetic images generated by random processes, yielding strong performance on downstream private computer vision tasks.

Differentially Private Generation of High Fidelity Samples From Diffusion Models
Vikash Sehwag*, Ashwinee Panda*, Ashwini Pokle, Xinyu Tang, Saeed Mahloujifar, Mung Chiang, J Zico Kolter, Prateek Mittal
At ICML 2023 GenAI Workshop
paper / poster

We generate differentially private images from non-privately trained diffusion models by analyzing the inherent privacy of stochastic sampling.

Neurotoxin: Durable Backdoors in Federated Learning
Zhengming Zhang*, Ashwinee Panda*, Linyue Song, Yaoqing Yang, Prateek Mittal, Joseph Gonzalez, Kannan Ramchandran, Michael Mahoney
At ICML 2022 (Spotlight)
paper / poster / code

Neurotoxin is a novel model poisoning attack on federated learning whose backdoors persist in the system up to 5× longer than those of the baseline attack.

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning via Sparsification
Ashwinee Panda, Saeed Mahloujifar, Arjun Bhagoji, Supriyo Chakraborty, Prateek Mittal
At AISTATS 2022
paper / code

SparseFed is a provably robust defense against model poisoning attacks in federated learning that uses server-side sparsification to avoid updating malicious neurons.
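
The server-side operation is simple to sketch. Below is a minimal, illustrative version (the function name, top-k rule, and error-feedback details are assumptions for exposition, not the paper's exact algorithm): the server averages client updates, applies only the largest-magnitude coordinates, and carries the residual forward.

```python
import numpy as np

def sparsefed_round(client_updates, error, k):
    # Illustrative aggregation round: average the client updates, add the
    # server-side error accumulator, keep only the top-k coordinates by
    # magnitude (so a poisoned update cannot silently modify rarely-updated
    # weights), and feed the unapplied mass back as error for the next round.
    agg = np.mean(client_updates, axis=0) + error
    topk = np.argsort(np.abs(agg))[-k:]
    update = np.zeros_like(agg)
    update[topk] = agg[topk]
    return update, agg - update  # (sparse update to apply, new error)
```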

FetchSGD: Communication-Efficient Federated Learning with Sketching
Daniel Rothchild*, Ashwinee Panda*, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph Gonzalez, Raman Arora
At ICML 2020
paper / code

FetchSGD is a communication-efficient federated learning algorithm that compresses gradient updates with sketches.
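
A toy count sketch conveys the compression mechanism (a simplified stand-in for the sketch data structure the paper uses; the class and method names here are assumptions): sketches are linear, so the server can sum clients' sketches and then approximately recover the heavy coordinates.

```python
import numpy as np

class CountSketch:
    # Toy count sketch over d-dimensional gradients: each of `rows` hash
    # tables maps coordinate i to a random bucket with a random sign.
    # Sketching is linear, so summing clients' tables sketches the sum.
    def __init__(self, d, rows=5, cols=10_000, seed=0):
        rng = np.random.default_rng(seed)  # shared seed => shared hashes
        self.buckets = rng.integers(0, cols, size=(rows, d))
        self.signs = rng.choice([-1.0, 1.0], size=(rows, d))
        self.table = np.zeros((rows, cols))

    def accumulate(self, grad):
        for r in range(self.table.shape[0]):
            np.add.at(self.table[r], self.buckets[r], self.signs[r] * grad)

    def estimate(self):
        # Median across rows of the signed bucket values estimates each coord.
        rows = np.arange(self.table.shape[0])[:, None]
        return np.median(self.signs * self.table[rows, self.buckets], axis=0)
```

In a FetchSGD-style round, each client would accumulate its local gradient into a sketch, the server would add the clients' tables, and the top-k coordinates of the estimate would be applied to the model, with momentum and error feedback also maintained in sketch space.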

SoftPBT: Leveraging Experience Replay for Efficient Hyperparameter Schedule Search
Ashwinee Panda, Eric Liang, Richard Liaw, Joey Gonzalez
paper / code

Not Research

WeChat  /  LinkedIn  /  Instagram  /  Yelp  /  Goodreads  /  Spotify

I was born and raised in San Jose, California. In high school I taught math, played sax, argued vociferously, sang, danced, and wrote slam poetry. Before studying EECS at Cal I spent the summer in China working at a robotics company. I've been back a couple times.

While at Berkeley I founded DiscreetAI, a venture-backed startup building privacy-preserving machine learning as a service. You can check out our Product Hunt launch or our GitHub for more information. Among other things, we won the first Y Combinator Hackathon and built federated learning solutions for Fortune 500 companies.

  • I gave a lecture on hashing for CS70, UC Berkeley's undergraduate discrete mathematics and probability course. I have served on course staff for Cal's CS70 and CS189, and Princeton's COS432.
  • I worked on R&D at Blockchain at Berkeley. I don't work in crypto anymore, but I'm happy to direct you to any of my amazing friends who have started companies in the space.
  • I read voraciously, about 100 books a year, almost entirely fiction. My favorite genres are xianxia, SFF and horror. My favorite book is The Brothers Karamazov by Fyodor Dostoevsky.
  • I frequently go on food tours and post reviews on Yelp. Feel free to ask me for restaurant recs in NYC, Edison, San Francisco, Los Angeles, and Baltimore.

Website template from Jon Barron.