Using AI to learn quantum complexity
By Kate Blackwood
To help understand quantum complexity – the vastly complicated behavior that emerges when nature’s smallest particles interact – Cornell physicists and computer scientists have developed a machine learning architecture inspired by the large language models (LLMs) behind ChatGPT and similar products.
LLMs learn to piece together language by learning the relationships between basic units of text, called tokens. The new Quantum Attention Network (QuAN) also uses tokens, but in the form of snapshots of qubits – the basic units of information in quantum computing. A qubit can be one, zero or both at the same time until it is measured, like Schrödinger’s famously alive-and-dead cat.
A fundamental property of quantum mechanics is that the state a qubit takes when measured is probabilistic, with different outcomes occurring with different probabilities, said Eun-Ah Kim, the Hans A. Bethe Professor of physics in the College of Arts and Sciences (A&S) and director of the NSF Artificial Intelligence-Materials Institute. Different quantum states prepared on quantum computers with many qubits differ in the shape of the probability distribution they carry over the exponentially large space of possibilities corresponding to each qubit being “alive” or “dead.” Learning what that distribution looks like from a limited number of measurement samples is challenging, Kim said, so “it’s a great opening for AI methods to come in and help.”
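To make the idea concrete, the sketch below shows, in rough outline, how measurement snapshots of qubits – bitstrings of zeros and ones – can be treated as tokens and mixed with the same scaled dot-product self-attention mechanism used by large language models. It is not the authors’ QuAN code; all dimensions, weights and variable names are illustrative assumptions.

```python
# Hypothetical sketch: attention over qubit measurement snapshots.
# Not the published QuAN architecture; sizes and weights are made up.
import numpy as np

rng = np.random.default_rng(0)

n_qubits = 8        # qubits measured in each snapshot
n_snapshots = 32    # measurement samples drawn from the quantum state
d_model = 16        # embedding dimension (arbitrary choice)

# Each snapshot is one token: a bitstring recording each qubit as 0 ("dead") or 1 ("alive").
snapshots = rng.integers(0, 2, size=(n_snapshots, n_qubits)).astype(float)

# Linear embedding of bitstrings into d_model-dimensional token vectors.
W_embed = rng.normal(size=(n_qubits, d_model))
tokens = snapshots @ W_embed                      # (n_snapshots, d_model)

# Standard scaled dot-product self-attention over the set of snapshots.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
scores = Q @ K.T / np.sqrt(d_model)               # pairwise snapshot-snapshot scores
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)    # softmax over snapshots
attended = weights @ V                            # (n_snapshots, d_model)

# Pool over snapshots into one feature vector summarizing the sampled distribution,
# which a downstream classifier could map to a label of interest.
features = attended.mean(axis=0)
print(features.shape)                             # (16,)
```

In this toy setup the attention weights let every snapshot “look at” every other snapshot, so the pooled feature vector reflects correlations across the whole set of samples rather than any single bitstring – the kind of collective information one would need to characterize a distribution over an exponentially large space.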
Kim is the corresponding author of “Attention to Quantum Complexity,” published in Science Advances Oct. 10. In three machine learning experiments described in the paper, the researchers apply QuAN to cut through the noise inevitably present in experimental data from available quantum computers and quantum simulators.
Read the full story on the College of Arts and Sciences website.