Cornell Tech will help make computers ‘accountable’
By Bill Steele
If computers are going to run everything, we probably should keep an eye on how they do it. A nationwide team of computer scientists has launched a project to make automated decision-making systems “accountable,” ensuring they respect privacy and make decisions fairly.
“There’s a lot of new technology being deployed in a variety of important settings, and we don’t fully understand all the ramifications,” said Thomas Ristenpart, associate professor of computer science at Cornell Tech.
Ristenpart and Helen Nissenbaum, Cornell Tech professor of information science, will be co-principal investigators for the project, which also includes scientists at Carnegie Mellon University and the International Computer Science Institute in Berkeley, California. The work will be supported by a $3 million, five-year grant from the National Science Foundation (NSF).
The interdisciplinary team of researchers combines the skills of experts in philosophy, ethics, machine learning, security and privacy. The work could add a layer of humanity to artificially intelligent systems, the researchers said.
Increasingly, the NSF pointed out in announcing the grant, decisions and actions affecting people’s lives are determined by automated systems processing personal data, and that data might be misused. For example, medical information about an approaching pregnancy might trigger ads for a diaper service.
Many systems use “machine learning,” where a computer is “trained” by showing it a lot of examples, from which it learns to predict what will happen in a new situation, such as whether a drug will work on a particular patient or whether a job applicant will perform well. There is a possibility, Ristenpart said, that the training data might leak out along with the answers.
“Unfortunately,” he said, “we don’t yet understand what machine-learning systems are leaking about privacy-sensitive training data sets. This project will be a great opportunity to investigate the extent to which having access to the output of machine learning systems reveals sensitive information and, in turn, how to improve machine learning to be more privacy friendly.”
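To illustrate the kind of leakage Ristenpart describes, here is a minimal sketch, separate from the project’s own methods, of a membership-inference-style check: an overfit model tends to be noticeably more confident on the records it was trained on, so someone who can only query its outputs may still guess whether a particular record was in the training set. It assumes the scikit-learn library and synthetic data; the helper function and threshold attack are illustrative only.

# Minimal sketch (not the project's methodology): an overfit classifier is
# more confident on its training records, which a query-only attacker can exploit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a privacy-sensitive training set.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfit: unbounded trees tend to memorize training records.
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(X_train, y_train)

def confidence(model, X, y):
    # Model's predicted probability for each record's true label.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

train_conf = confidence(model, X_train, y_train)
test_conf = confidence(model, X_test, y_test)

# Naive membership guess: "was in the training set" if confidence exceeds a threshold.
threshold = 0.9
guess_members = train_conf > threshold      # mostly True for memorized records
guess_nonmembers = test_conf > threshold    # mostly False for unseen records

attack_accuracy = (guess_members.mean() + (1 - guess_nonmembers.mean())) / 2
print(f"mean confidence on training records: {train_conf.mean():.2f}")
print(f"mean confidence on held-out records: {test_conf.mean():.2f}")
print(f"naive membership-inference accuracy: {attack_accuracy:.2f}")

An accuracy well above 0.5 here means the model’s outputs alone reveal something about who was in the training data, which is the sort of leakage the project aims to measure and reduce.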
Machine learning also can reflect prejudices that already exist in real-world data. A program used in the criminal justice system to predict whether a defendant might commit another crime if released was recently shown to be biased against African-Americans. Boston’s Street Bump app focuses pothole repairs on affluent neighborhoods; Amazon’s same-day delivery is unavailable in black neighborhoods; gender sometimes affects which job offerings are displayed; race affects displayed search results; and Facebook shows either “white” or “black” movie trailers based upon “ethnic affiliation.”
The researchers hope to write safeguards into these applications, enabling them to detect and correct instances of privacy violation or unfairness. The project will explore applications in online advertising, health care and criminal justice, in collaboration with domain experts.
“A key innovation of the project is to automatically account for why an automated system with artificial intelligence components exhibits behavior that is problematic for privacy or fairness,” said Carnegie Mellon’s Anupam Datta, another PI. “These explanations then inform fixes to the system to avoid future violations.”
Said Nissenbaum, “Committing to philosophical rigor, the project will integrate socially meaningful conceptions of privacy, fairness and accountability into its scientific efforts, thereby ensuring its relevance to fundamental societal challenges.”
To address privacy and fairness in decision systems, the team must first provide formal definitions of what privacy and fairness truly entail. These definitions should deal with both protected information itself – like race, gender or health information – and proxies for that information, so that the full scope of risks is covered.
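As one concrete illustration of what such a formal definition can look like, independent of whatever definitions the team ultimately adopts, the short sketch below checks a single well-known criterion, demographic parity: the rate of favorable decisions should not differ across groups defined by a protected attribute. The data, column names and function are hypothetical, and it assumes the pandas library.

# Minimal illustration (not the project's definitions) of one formal fairness
# criterion, demographic parity. Column names and data are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, decision_col: str, group_col: str) -> float:
    # Largest difference in favorable-decision rates between any two groups.
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decisions from an automated system (1 = favorable outcome).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   0,   1,   0,   0,   0 ],
})

gap = demographic_parity_gap(decisions, "approved", "group")
print(f"demographic parity gap: {gap:.2f}")  # 0.0 would mean equal rates across groups

# Note: a proxy variable (e.g., a ZIP code strongly correlated with race) can
# produce the same gap even when the protected attribute itself is never used,
# which is why the definitions must cover proxies as well.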
“Although science cannot decide moral questions, given a standard from ethics, science can shed light on how to enforce it, its consequences and how it compares to other standards,” said Michael Tschantz of the International Computer Science Institute, another PI.
Another fundamental challenge the team faces is in enabling accountability while simultaneously protecting the system owners’ intellectual property and the system users’ privacy.