We have a postdoc position available!
Sixty years ago, John S. Bell discovered that the statistics of measurements of quantum objects can be profoundly different from what we experience in our day-to-day (classical) world. In particular, the phenomenon of quantum entanglement leads to correlations that cannot be explained by the models we would naively expect from classical physics, so-called local hidden-variable models. It is exactly this quantum entanglement that lies at the basis of many quantum technologies, such as quantum computers.
However, once quantum objects are sufficiently large, it becomes practically impossible to decide whether an arbitrary state is entangled or not. In quantum physics, “large” is measured by what we call the dimension of the Hilbert space that describes the object. In our work, we focus on one of the most popular quantum systems out there: light. The different modes of light are mathematically described by infinite-dimensional Hilbert spaces, which makes the study of quantum correlations between these modes, in general, impossible. By focusing on specific, yet experimentally relevant, classes of states, we managed to circumvent these problems. We leverage the power of artificial neural networks to use measurements of the electric field (known as homodyne measurements) to decide whether a state is likely to be entangled or not.
The first step towards this goal is a numerical method to efficiently simulate a large number of quantum states of light that require only linear optics and a small number of single-photon operations to be produced. While this class of states is far from arbitrary, it does contain the states that can be produced with state-of-the-art experiments. For our simulated states, we can easily check whether or not they are entangled. On top of that, we can also reproduce the typical measurement statistics of homodyne measurements. This information is then used to train a machine learning algorithm that takes homodyne measurements as input and decides whether or not the state from which these measurements originate is entangled.
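To make this pipeline concrete, here is a deliberately simplified Python sketch. It is not our actual neural-network pipeline: it simulates toy homodyne samples for "separable-like" and "entangled-like" two-mode states, uses the variance of the difference quadrature as a single feature (inspired by variance-based entanglement criteria), and "trains" by learning a decision threshold from the simulated set. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def homodyne_samples(correlated, n=500):
    # Toy model of homodyne data for a two-mode state:
    # draw pairs of quadrature outcomes (x1, x2).
    # "Entangled-like" states have strongly correlated quadratures,
    # so the difference x1 - x2 has a reduced variance.
    x1 = rng.normal(0.0, 1.0, n)
    if correlated:
        x2 = x1 + rng.normal(0.0, 0.3, n)   # correlated ("entangled-like")
    else:
        x2 = rng.normal(0.0, 1.0, n)        # independent ("separable-like")
    return x1, x2

def feature(x1, x2):
    # The single number our toy classifier looks at.
    return np.var(x1 - x2)

# Build a labelled training set of simulated states.
feats, labels = [], []
for label in (0, 1):                 # 0 = separable-like, 1 = entangled-like
    for _ in range(50):
        x1, x2 = homodyne_samples(correlated=bool(label))
        feats.append(feature(x1, x2))
        labels.append(label)
feats, labels = np.array(feats), np.array(labels)

# "Training": place the decision threshold halfway between the class means.
threshold = 0.5 * (feats[labels == 0].mean() + feats[labels == 1].mean())

def predict(x1, x2):
    # 1 = "entangled-like", 0 = "separable-like"
    return int(feature(x1, x2) < threshold)

# Classify a fresh simulated state.
x1_new, x2_new = homodyne_samples(correlated=True)
print(predict(x1_new, x2_new))  # expected: 1
```

In the real work, the single hand-picked feature is replaced by a neural network that learns its own features directly from the homodyne samples.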
Finally, we rely on another type of machine learning to visualize what is happening during the training process. A clustering algorithm groups quantum states depending on how similar their homodyne measurement statistics are. This way, we can see many clusters of entangled states appearing, which form a useful guideline for future research. Furthermore, this visualization provides a useful sanity check. Quantum states that are too different from anything the machine learning algorithm was trained on (and thus states for which the algorithm cannot be trusted) will not belong to any of the previously identified clusters. In such a case, one would have to enrich the set of quantum states used for the training.
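As a toy illustration of this clustering step, the sketch below runs a hand-rolled k-means on made-up summary statistics (this is not the actual algorithm or data from our work): each "state" is summarised by the mean and variance of its simulated homodyne samples, and states with similar statistics end up in the same cluster.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(points, centres, iters=50):
    # Plain k-means: alternate between assigning each point to its
    # nearest centre and moving each centre to its cluster's mean.
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centres[None], axis=-1)
        assign = dists.argmin(axis=1)
        centres = np.array([points[assign == j].mean(axis=0)
                            for j in range(len(centres))])
    return assign

# Toy "measurement statistics": each state is summarised by two numbers,
# the mean and the variance of its homodyne samples.
stats = []
for _ in range(30):                       # family A: vacuum-like states
    s = rng.normal(0.0, 1.0, 400)
    stats.append([s.mean(), s.var()])
for _ in range(30):                       # family B: displaced states
    s = rng.normal(4.0, 1.0, 400)
    stats.append([s.mean(), s.var()])
stats = np.array(stats)

# Initialise with one point from each family (our toy data is ordered).
labels = kmeans(stats, np.stack([stats[0], stats[-1]]))

# States with similar statistics end up in the same cluster.
print(len(set(labels[:30])), len(set(labels[30:])))  # expected: 1 1
```

A state whose statistics land far from every learned centre would be the sanity-check signal described above: the classifier should not be trusted on it.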
This work builds upon the idea that artificial intelligence is a crucial tool to help us recognize patterns in the intricate measurement data that are obtained in quantum optics experiments. In the case of our work, the impressive feature is that the artificial neural network really manages to achieve a goal that, at present, cannot be attained in any other way.
In 1935, Albert Einstein, Boris Podolsky, and Nathan Rosen published a famous article that highlighted the existence of peculiar correlations in quantum physics. Decades later, this work pushed John S. Bell to derive his equally famous inequality. The experimental violation of this inequality showed that quantum physics is profoundly different from classical physics and ultimately led to the 2022 Nobel Prize. So, what exactly are quantum correlations? This is a hard question to answer, because these correlations are actually defined by what they are not. In this regard, we introduce local hidden-variable models as the key tool for describing classical correlations. Quantum correlations are said to be present whenever measurements are not consistent with these models. Throughout this article, we build intuition for such local hidden-variable models, and we use them to challenge some common misconceptions about quantum correlations. For instance, did you know that quantum correlations are not necessarily stronger than classical correlations? Or that the terms “quantum correlations”, “quantum entanglement”, and “quantum nonlocality” are in fact not equivalent? This and more will be explained in this little excursion into the foundations of quantum physics.
I am currently figuring out how to write math expressions here, so this outreach text is a work in progress...
Is my quantum computer better than yours?
Most quantum computers are based on discrete variables. They use quantum bits (qubits) that can be manipulated using the laws and phenomena of quantum physics, such as entanglement. However, several researchers and companies are trying to develop continuous-variable quantum computers: machines that store information in the form of continuous analog signals. If you measure such a signal, you do not get a zero or a one, but any real number.
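The difference is easy to see in simulated measurement data. A minimal Python sketch (toy numbers, not data from any particular device): measuring a qubit prepared in an equal superposition yields 0 or 1 with probability 1/2 each (the Born rule), while a homodyne measurement of the vacuum state yields a real number drawn from a Gaussian distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete variable: Born-rule outcomes for a qubit in (|0> + |1>)/sqrt(2)
# are 0 or 1, each with probability 1/2.
qubit_outcomes = rng.choice([0, 1], size=5)

# Continuous variable: homodyne outcomes for the vacuum state are
# Gaussian-distributed real numbers (variance set to 1 by convention here).
cv_outcomes = rng.normal(0.0, 1.0, size=5)

print(qubit_outcomes)   # five outcomes, each either 0 or 1
print(cv_outcomes)      # five real numbers
```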
Both discrete-variable and continuous-variable quantum computers can perform the same tasks, and they can rely on the same physical platforms, such as trapped ions, photonics, and superconducting circuits. Nevertheless, the two types of quantum computers work in very different ways, and building them presents very different challenges. These differences make it hard to compare discrete-variable and continuous-variable quantum computers. How, then, can we determine which quantum computer is more powerful, promising, or efficient?
To answer this question and thus guide the development of quantum machines, it is important to find a common characterization of the quantum properties that give rise to the computing power of these different machines.
In our work, we obtain such a comparative framework when both types of quantum computers are realized with the same physical platform. To do so, we rely on the "stellar formalism", which we have recently developed. In particular, we find that a property called "stellar rank" can be used to characterize both types of quantum computers and to quantify their power. Specifically, for light-based quantum computers, this stellar rank gives us a way to count the number of photons that contribute to the computation. In addition, our framework also allows us to prove that a particular type of entanglement, which we call non-Gaussian entanglement, is essential for the operation of quantum computers. Our new comparative framework not only provides new insights but also gives us a way to formulate many new research questions to help design the quantum computers of the future.
Going further
Technically, how do we measure the power of a quantum computer? A natural way is to determine the most complex computation that this computer can perform. A quantum computation corresponds to the measurement of a quantum state. A central problem in comparing the computations of different quantum computers is that the quantum properties responsible for the complexity of the computation can be introduced both by the quantum state and by the measurement device. For example, different measurements on a beam of light, such as counting the number of photons (a discrete-variable measurement) or measuring the electric field (a continuous-variable measurement), can induce different quantum properties. To overcome this problem, we use the stellar rank to characterize both the states and the measurements. Thus, we formally show that the stellar rank determines the complexity of a quantum computation and, by extension, the power of the quantum computer performing it. To achieve this result, we combined theoretical tools from physics (quantum optics) and computer science (computational complexity theory).
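As a small illustration of the kind of quantity involved: for so-called core states, i.e. finite superpositions of Fock (photon-number) states, the stellar rank is simply the highest photon number that appears with a nonzero coefficient. The sketch below assumes this characterization and is only a toy for such finite superpositions, not a general stellar-rank computation.

```python
import numpy as np

def stellar_rank_core(coeffs, tol=1e-12):
    # |psi> = sum_n coeffs[n] |n> in the Fock basis.
    # For core states, the stellar rank equals the degree of the
    # polynomial part of the stellar function, i.e. the largest n
    # with a nonzero coefficient.
    coeffs = np.asarray(coeffs, dtype=complex)
    nonzero = np.nonzero(np.abs(coeffs) > tol)[0]
    return int(nonzero[-1]) if len(nonzero) else 0

print(stellar_rank_core([1, 0, 0]))              # vacuum |0>: rank 0
print(stellar_rank_core([0, 1]))                 # single photon |1>: rank 1
print(stellar_rank_core([2**-0.5, 0, 2**-0.5]))  # (|0> + |2>)/sqrt(2): rank 2
```

This matches the intuition from the main text: the stellar rank counts the single-photon resources that contribute to the computation.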