Tom Ristenpart, Professor at Cornell Tech and in the Computer Science Department at Cornell University




Thomas Ristenpart Honored With “Test of Time” Award

Thomas Ristenpart, a professor at Cornell Tech and in the Computer Science Department at Cornell University, received the esteemed Test of Time Award at the 33rd USENIX Security Symposium. The award recognizes his co-authored 2014 paper, “Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing,” for its enduring impact on the field over the past decade.

The Test of Time Award is reserved for papers that have significantly influenced their areas of research and were presented at the conference at least a decade earlier. The USENIX Security Symposium is one of the most prestigious academic venues for research on the security and privacy of computer systems and networks.

“We are incredibly proud of Professor Ristenpart’s contributions and the long-term impact he has made,” shared Dean Greg Morrisett, the Jack and Rilla Neafsey Dean and Vice Provost of Cornell Tech. “This award is a testament to Cornell Tech’s commitment to pioneering research that addresses critical challenges in our society and to the distinguished scholars who make up our faculty.”

The paper, which appeared at USENIX Security 2014 and received the Best Paper Award that year, was written by Ristenpart alongside co-authors Matthew Fredrikson of Carnegie Mellon University; Eric Lantz, Somesh Jha, and David Page of the University of Wisconsin; and Simon Lin of the Marshfield Clinic Research Foundation.

“It’s really quite an honor that this paper won the Test of Time Award,” says Ristenpart. “I think it’s a testament to how important it is to understand privacy in machine learning, even more so now when we see explosive growth in use of it due to generative AI.”

In the paper, the team explores privacy concerns in pharmacogenetics, which uses machine learning to tailor medical treatments to a patient’s genetic makeup. The paper specifically examines dosing of warfarin, a critical medication for preventing blood clots, revealing how a trained model can be put to unintended uses via what the authors termed “model inversion.” In particular, it shows experimentally that certain predictive models can be used to infer a patient’s genetic information from their demographic data, especially for patients whose records were used to train the model in the first place.
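To make the attack concrete, here is a rough, self-contained sketch, invented purely for illustration: the toy linear dosing model and genotype encoding below are not the paper’s algorithm, model, or data. An attacker who knows a patient’s demographics and the model’s recommended dose searches over candidate genotypes for the one that best explains that dose:

    # Toy sketch of model inversion; the dosing model and genotype
    # encoding are invented for illustration, not taken from the paper.
    CANDIDATE_GENOTYPES = [0, 1, 2]  # toy encoding: number of variant alleles

    def dosing_model(age, weight, genotype):
        """Stand-in for a trained warfarin-dosing regression model."""
        return 10.0 - 0.05 * age + 0.04 * weight - 1.5 * genotype

    def invert_genotype(age, weight, observed_dose):
        """Pick the candidate genotype whose prediction best matches the dose."""
        return min(CANDIDATE_GENOTYPES,
                   key=lambda g: abs(dosing_model(age, weight, g) - observed_dose))

    # The attacker sees demographics plus the model's dose recommendation.
    dose = dosing_model(age=60, weight=70, genotype=2)
    print(invert_genotype(60, 70, dose))  # recovers genotype 2

The attack described in the paper is more sophisticated, weighting candidate values by background information, but this guess-and-check structure conveys the core idea.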

To mitigate these risks, they tested differential privacy (DP), a technique that limits how much any single individual’s record can influence a trained model. While DP can help prevent these attacks when applied carefully, it has drawbacks. Simulated clinical trials showed that applying DP at levels strong enough to thwart the attack could increase the risk of severe health issues like strokes and bleeding, threatening patient safety. The study concludes that further work was needed to understand the relationship between different kinds of privacy risks, countermeasures such as DP, and improvements to them. Much follow-up research over the past decade has focused on exactly these issues.
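For intuition about how DP achieves this, its basic building block adds random noise calibrated to how much any one person’s record can change a result. The sketch below shows the standard Laplace mechanism in generic form; it is illustrative only and is not the specific private learning algorithm the paper evaluated:

    import random

    def laplace_mechanism(true_value, sensitivity, epsilon):
        """Release true_value with Laplace noise of scale sensitivity/epsilon."""
        scale = sensitivity / epsilon
        # The difference of two independent Exponential(mean=scale) draws
        # is Laplace-distributed with that scale.
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        return true_value + noise

    # Illustrative example: release an average weekly dose where any one
    # patient's record can shift the true mean by at most 1 mg. A smaller
    # epsilon means stronger privacy but more noise, which is the kind of
    # privacy-versus-utility trade-off the paper measured.
    print(laplace_mechanism(true_value=35.0, sensitivity=1.0, epsilon=0.5))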
