Book Review: The Idea Factory

This book is a history of Bell Labs, true, but perhaps more accurately it is a history of American engineering throughout the 20th century, since 20th-century American engineering was the engineering of Bell Labs. From the telegraph to the internet, the span of innovation, experimentation, and discovery captured in the offices in Manhattan and Murray Hill easily dwarfs that of any other institution of the era. Perhaps no other company has so thoroughly touched our lives today, creating a new paradigm in the way we define ourselves in the information era.

Such great breadth of discovery is not easy to explain, but Gertner does a masterful job by encapsulating each technology in the identity of its creator. From the infamous Shockley to the eccentric Shannon, to those unknown in the public sphere of the 21st century, this book is essentially a collection of biographies. Yet such a description falls short of the magnitude of what occurred. These famous engineers and scientists, who thought into existence the ways that we communicate, think, and interact, also worked in the unique environment that Bell Labs provided from the 1930s through the 1960s. The book captures the culture of that time thoroughly, from the games engineers would play when their supervisors weren’t looking to the rare disputes between management and researcher.

A possible criticism is that the book focuses too heavily on the lives of Kelly, Shockley, Bardeen, Shannon, Fisk, Pierce, and Baker. This is entirely valid – just look at the 16-page insert in the center of the book, with its beautiful black-and-white photos of these Young Turks at work in Murray Hill. Yet the picture that such biographies paint is perhaps representative of what the Bell Labs of that era was. These men had different, sometimes conflicting, personalities, yet Bell Labs was able to draw true genius out of them. Gertner deflects this argument, stating in the conclusion that “maybe this argument – the individual versus the institution; the great men versus the yeomen; the famous versus the forgotten – is insoluble. Or … perhaps the most significant thing was that Bell Labs had both kinds of people in profusion, and both kinds working together. And for the problems it was solving, both kinds were necessary.”

Our understanding of innovation today is fundamentally different from innovation at that time. Look at the primary motivations behind the shocking American discoveries of the 20th century, and you will find how remarkably institutional they were. The man on the moon was a result of the Space Race between two superpowers. The Manhattan Project was a result of over 600,000 scientists, engineers, and workers combining forces to develop the most terrible of weapons. Discoveries and innovations of that time were in large part supported by academic institutions, by government, or by large monopolies like the Ma Bell system. Gertner raises the point that perhaps this was one of the fatal flaws that led to the eventual collapse of Bell Labs in the 1990s. By working in a vacuum, protected from competition through explicit promises by the government, Bell Labs never had to learn how to compete in an open marketplace. By out-inventing everyone else, it eventually brought about its own demise, unable to capitalize on its new creations in satellite and internet communications the way the new startups of Silicon Valley could.

Perhaps it is for the better that Bell Labs has given way to Silicon Valley, and that we have a new wave of discovery. But I find Bell Labs, and this book, incredibly endearing for imparting the sense of community that such a research institution had at the time. Sure, it was stressful, competitive, and intense. Yet such stresses only pushed everyone to reach greater heights and think bigger than anywhere else on Earth.

QCJC: Martinis 2005

Alright, so this paper was referenced in Steffen’s 2012 work as one of the primary explanations for decoherence error. This paper is quite experimental, but let’s try to understand the basic principles here.

The main takeaway is that the primary source of error in a superconducting quantum bit is the dielectric loss within the insulating materials. Martinis suggests that this decoherence can be described by examining the two-level states (TLS) of the dielectric and using them to model the loss.

First, recall that a Josephson junction is made up of an SIS stack, or Superconductor-Insulator-Superconductor structure. Similarly, ordinary capacitors have a conductor-insulator-conductor structure. The paper mentions that there is decoherence loss both in the “bulk insulating materials” and in the “tunnel barrier” (of the JJ), but that the decoherence manifests differently in each case.

The first takeaway is that dielectric loss at low temperatures and low drive amplitudes behaves differently than at high temperatures. That is, the loss is no longer linear at lower temperatures and cannot be extrapolated from data taken at high temperatures. In addition, the dielectric loss is especially bad when using non-crystalline (amorphous) substrates for the insulator. The experimental results show that even though the dielectric loss is low at high drive amplitudes, the loss ramps up rapidly as the drive amplitude decreases. Therefore, it is no longer reasonable to use high-temperature, high-power approximations for work done on superconducting circuits. This is particularly surprising, as one might typically expect less loss as the temperature approaches zero.
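To get intuition for that ramp-up, here is a quick numeric sketch using a standard saturable TLS loss-tangent form; both the functional form as written here and the numbers are my own illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative sketch of TLS dielectric loss vs. drive amplitude, using the
# saturable form delta(E) = delta0 / sqrt(1 + (E/Ec)^2).
# delta0 (intrinsic loss) and Ec (critical drive) are made-up numbers.
delta0 = 1e-3   # intrinsic (low-power) loss tangent, hypothetical
Ec = 1.0        # critical drive amplitude, arbitrary units

def loss_tangent(E):
    """Saturable TLS loss: maximal at low drive, suppressed at high drive."""
    return delta0 / np.sqrt(1.0 + (E / Ec) ** 2)

# The loss ramps up as the drive amplitude decreases -- exactly the behavior
# that high-amplitude measurements fail to predict.
for E in [100.0, 10.0, 0.01]:
    print(E, loss_tangent(E))
```

The point of the shape is just that high-drive measurements saturate the TLS and see almost no loss, while the low-drive limit recovers the full intrinsic loss.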

The reason for this difference seems especially interesting. According to the article, conventional resistors are often modeled as a bosonic bath, a model created by Feynman and Vernon in 1963. The “bosonic” part means, I believe, that the particles in the environment (bath) obey Bose-Einstein statistics, and quantum dissipation is modeled by considering how the quantum particle couples to that bath. A related model, called the spin-boson model, probes this in more depth for quantum systems with two levels. However, the Martinis paper suggests that treating the bath as bosons is inaccurate, and that it would be better to consider it as a fermionic bath instead.

The paper suggests that one way to prevent such errors from accumulating is to make the insulators very, very small. In the low-volume limit, the assumption that the defects in the dielectric form a smooth continuum becomes false, and the defects are better modeled as a discrete, countable set. This qualitatively changes the model, limiting the maximum amount of decoherence that is present.
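A back-of-the-envelope version of this argument: the number of resonant defects scales with the insulator volume, so shrinking the junction leaves only a handful. The defect density and dimensions below are invented purely for illustration.

```python
# Rough sketch: the number of TLS defects resonant with the qubit scales with
# the insulator volume, so a small enough junction holds only a handful.
# The density and dimensions are hypothetical, for illustration only.
defect_density = 1e2    # resonant defects per cubic micron (made up)

def expected_defects(area_um2, thickness_um):
    return defect_density * area_um2 * thickness_um

large = expected_defects(100.0, 0.002)  # a large-area junction
small = expected_defects(0.01, 0.002)   # a sub-micron junction

print(large)  # ~20 defects: a continuum approximation is reasonable
print(small)  # << 1: essentially defect-free, so the loss is capped
```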

Next, the paper tries to explain why the earlier models were incorrect, given the new data. They argue that the two-level state fluctuations are a result of charge fluctuations, rather than fluctuations of the critical current, as was previously believed. This was verified by examining the distribution of splitting sizes using the standard TLS tunneling model.

After this part, I get more and more lost in the rest of the paper! But one thing that I did find interesting was an application of Fermi’s golden rule to calculate the decay rate of the qubit for large-area junctions. It’s always quite cool to see equations that I’ve learned show up in modern scientific papers :)

The conclusion of the paper is that the dielectric will always be lossy, but if you make the insulator small enough, the loss won’t be large enough to make a huge difference. Instead, the loss plateaus at some point and therefore becomes manageable.

Source: Martinis, J. M., et al. Decoherence in Josephson Qubits from Dielectric Loss. Phys. Rev. Lett. 95, 210503 (2005).

Book Review: True Genius: The Life and Science of John Bardeen

I came across Bardeen’s name again while looking up BCS theory in superconductivity. To me, Cooper was always the big contributor, since Cooper pairs are so frequently talked about in relation to Josephson junctions and other superconductivity phenomena. But the B of BCS was none other than Bardeen.

I knew of Bardeen – I had a poster of famous scientists in my room and his was on it, for his work on transistors. But transistors always seemed to be more associated with Shockley, the brilliant scientist who eventually went crazy over Eugenics in Silicon Valley. So how did Bardeen somehow hide away in the corner?

I immensely appreciated this book for its wonderful way of looking through John Bardeen’s entire life and documenting his challenges, first at Bell Labs working under the egomaniacal Shockley, then later with his semi-failed theory of CDWs. His life is inspiring, and his dedication to research unrelenting.

The book itself was dry – it reads much more like a history-of-science report than a general science biography. Pages are filled with direct quotes, with a hefty 27-page bibliography of sources and an even heftier 81-page notes section sourcing each quotation. Some anecdotes are repeated throughout the book, which might make sense on a chapter-by-chapter basis but induces a strong sense of déjà vu when reading straight through. However, it’s clear that Lillian Hoddeson and Vicki Daitch are truly dedicated to presenting a complete picture of who John Bardeen was and aspired to be. Their attention to detail, especially on the minutiae of scientific controversy and how Bardeen navigated them, was brilliant.

Perhaps the greater travesty is that, despite Bardeen’s phenomenal life of research, this is the only book published about him. He has perhaps contributed more than any other American to our current standard of living in the 21st century, and yet books barely mention him. Indeed, even as a physics student, I have never truly heard him exalted or commented on, outside of lessons in superconductivity labs. Perhaps that’s because of his choice of field – solid-state physics and transistor circuits are not often introduced at an early stage of an undergraduate physics curriculum. But that really is not much of an excuse for one who has contributed so much. His transistors are in every one of the computers that power the modern information age. And I believe that his theory of superconductivity, which led to Josephson junctions, which led to the superconducting circuits used in quantum computing, will be the basis of another scientific revolution not too far off in the future.

One of the fascinating aspects of this book is in the epilogue, where the authors spend some time dissecting Bardeen’s psychological profile and make an effort to understand the nature of genius. While I am wary of the tone of finality taken in the chapter, I found it surprisingly motivating, as if it was laying out a roadmap for the mentality that I should take if I ever desire to follow in the same footsteps.

I truly enjoyed the read, but do hope that we will see more of a popular science book written about Bardeen in the future, so that his story will become more accessible to everyone.

5/5 stars


QCJC: Meyer 1999

So, that last paper ballooned way out of control. There was just so much information, and it was explained in a way that even I could understand it! Thank goodness for good paper writers :) But for this post, let’s try to tackle a short (and in my opinion hilarious) paper.

So, let’s start with the title. Quantum strategies. Alright, you got my attention – what kind of strategies can you have with quantum mechanics exactly? I swear, if this is just another quantum communication trick disguised in a game show again…

The introduction is intriguing enough – Meyer briefly goes over the history of quantum information, with discussion of quantum algorithms, QKD, and other entanglement-based communications. He then launches into a comparison between quantum algorithms and game trees found in game theory. Already, I’m getting a bit suspicious – how exactly is this game played?

Thankfully, Meyer fully explains the specific game he is considering, turning to none other than Captain Picard of the Starship Enterprise. The premise is that there are two players, Picard (P) and Q. A single fair coin is placed heads-up in a box, and if the coin is heads-up at the end of the game, Q wins. There are three turns; at each turn, the player can decide either to flip or not flip the coin, without seeing the current state of the coin. The turns go Q, then P, then Q.

A simple examination of the game seems to show that this is a zero-sum game with no pure equilibrium strategy for P. The only Nash equilibrium strategy is a random chance, 50/50 mixed strategy for Picard. In my mind’s eye, I can picture Picard first flipping a coin to decide if he should flip the real coin! And of course, for Q, the Nash Equilibrium strategy is to choose each of the four options (NN, NF, FN, FF) with 25% probability. However, the expected value for each of the actors should be 0, so for a large number of games, the end result of the coin should be heads or tails with 50% probability.
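This classical 50/50 result is easy to verify by brute-force enumeration (my own quick check, not code from the paper):

```python
from itertools import product

# Coin starts heads-up. Each player may flip (1) or not flip (0).
# Turn order: Q, P, Q. Q wins if the coin ends heads-up.
def q_wins(q1, p, q2):
    flips = q1 + p + q2
    return flips % 2 == 0  # an even number of flips leaves the coin heads-up

# If P plays the 50/50 mixed strategy, Q's win rate is exactly 1/2
# no matter which pure strategy (q1, q2) Q commits to.
for q1, q2 in product([0, 1], repeat=2):
    win_rate = sum(q_wins(q1, p, q2) for p in [0, 1]) / 2
    print((q1, q2), win_rate)  # always 0.5
```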

The payoff matrix for different pure strategies for P and Q. Source: Meyer 1998

And that’s when things start to go off the rails. In a series of 10 games, Q wins every single one of them. How? By implementing a *quantum strategy* for the coin. Meyer explains the game in terms of quantum transformations, where a flip (F) is equivalent to a NOT gate, while a no-flip (N) is equivalent to an identity gate. A classical mixed strategy is likewise represented by a matrix, with identical entries on the diagonal. From here, Meyer extrapolates to applying any sequence of quantum gates to the poor coin!

What is then explained is essentially a way to cheat the system. Q first puts the coin into an equal superposition of heads and tails (a Hadamard, in gate terms), and then applies the same transformation again after P’s turn. At this point, I smack my head. Of course, Meyer is just treating Picard as an oracle and running a quantum algorithm on the state of the qubit! There isn’t any “strategy” happening here – it’s simply using Simon’s algorithm to deduce and reverse Picard’s actions. At least Meyer is a good storyteller…
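The whole trick fits in a few lines of matrix arithmetic; this is my own verification sketch in NumPy, not code from the paper:

```python
import numpy as np

# Q's move: a Hadamard, putting the coin into an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])   # P flips the coin
I = np.eye(2)                    # P leaves it alone

heads = np.array([1.0, 0.0])     # the coin starts heads-up

# Whatever P does on the middle turn, Q ends up back at |heads>.
for p_move in (I, X):
    final = H @ p_move @ H @ heads
    print(abs(final[0]) ** 2)    # probability of heads: 1.0 either way
```

The superposition state is an eigenstate of both X and I, so P's move does nothing, and Q's second Hadamard undoes the first.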

There are so many objections I have. For one, how could you do this in practice, ever? A coin is not a quantum object – what kind of special hands do you need to have to flip it so that your coin ends up in a Bell state? And afterwards, how is Picard acting on the object without disturbing it? Yes, I understand that if both of them were doing the same experiment on a qubit, say an NMR spin, then it would be feasible. But that’s an entirely different situation than what game theory deals with! You aren’t analyzing situations that can come up in decision making, you’re literally analyzing a quantum game and trying to solve it using quantum algorithms and transformations.

But Meyer does continue in the paper. He states three theorems, and they are as follows:
All quantum strategies do at least as well as mixed Nash equilibria. The proof of this seems obvious – since any classical mixed strategy can be reproduced by a suitable quantum operation, a quantum player can always do at least as well as the mixed equilibrium.

There does not always exist a pure quantum Nash equilibrium. Meyer shows this in a way very similar to Nash’s original argument. He uses the unitary nature of each quantum transformation to show that there exists no single strategy that cannot be improved by switching strategies. I’m actually not sure what he means here by a pure quantum strategy – does that mean a pure strategy implemented with quantum gates? What’s the difference between that and a regular pure strategy?

There always exists a mixed quantum Nash equilibrium. For this, Meyer doesn’t even show a proof – he just asserts that it follows in the same way as von Neumann’s and Nash’s results on classical mixed strategies, on the basis of the Hilbert space in which quantum transformations live.

All in all, Meyer is a wonderful writer who can really tell a story using Picard. Surely not so many people would fall for his thinly veiled Simon’s algorithm, right? Oh, 628 citations, you say? Wow.

Source: Meyer, D. A. Quantum Strategies. Phys. Rev. Lett. 82, 1052 (1999).

QCJC: Steffen 2012

After DiVincenzo’s 2000 paper last week, let’s go over something within this decade, also with DiVincenzo! This one is a particularly interesting paper because it provides the road-map for what direction IBM research in quantum computing will take in the future. It’s especially well written, providing an excellent introduction to the field and explaining the motivation behind current research methods.

Sources of Error

The paper starts by describing the basis for quantum computing and quickly transitions to how qubits can be realized physically. A good reference to DiVincenzo’s criteria is found at the bottom of page 2, along with a great explanation of the T_2 decoherence time. It describes T_2 as the characteristic time over which the phase relationship between the complex amplitudes c_0 and c_1 is maintained – in other words, how long the qubit lasts before its state is randomized. However, the paper argues that T_2 alone is insufficient to actually capture decoherence errors. Instead, a few additional figures of merit are introduced:

  • Off-on ratio R – essentially a measure of false positives: if nothing is being done to the qubit, could something actually be happening? This seems trivial in classical computing, but since quantum gates are controlled by tuning interactions, the process is more complex. An optimal value of R would be close to 0.
  • Gate fidelity F – How often does your qubit actually do what you intend? This error can include a large number of different error sources, from decoherence to random coupling, and should be close to 1.
  • Measurement fidelity M – How well can your system read the truth? In other words, when you perform a measurement, how often are you extracting correct information? I’m a bit confused about how this value is determined for mixed states, since it seems like it would be hard to separate the previous two errors from this error. For example, suppose that I have a qubit that is initialized to 0, allowed to rest for one time period, acted on by an X gate, and then measured. If I measure 0 instead of the expected 1, is that because of R, F, or M? Even worse, how can you measure this for mixed states, even those that are very carefully prepared?
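Going back to the T_2 definition above, here is a minimal sketch of pure dephasing, assuming a simple exponential decay of the off-diagonal density-matrix element (the T_2 value is a made-up illustrative number):

```python
import numpy as np

T2 = 20e-6  # 20 microseconds, a hypothetical coherence time

def coherence(t):
    """|rho_01(t)| for an initial |+> state: starts at 1/2, decays as exp(-t/T2)."""
    return 0.5 * np.exp(-t / T2)

print(coherence(0.0))                  # 0.5: full phase coherence at t = 0
print(coherence(T2) / coherence(0.0))  # ~0.37: down to 1/e after one T2
```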

Qubit Architectures

After describing the various possible errors, the authors proceed to describe the large variety of qubit architectures available at that time. Just for completeness for myself in the future, these are as follows:

  • Silicon-based nuclear spins
  • Trapped ions
  • Cavity quantum electrodynamics
  • Nuclear spins
  • Electron spins in quantum dots
  • Superconducting loops and Josephson junctions
  • Liquid-state nuclear magnetic resonance
  • Electrons suspended above the surface of liquid helium

As an interesting side note, the papers referenced for each of these technologies [21-29 in the report] are all published between 1995 and 1999. That’s surprising to me, partially because I didn’t think that a few of these technologies were really mature until the early 2000s, but also that all of these technologies were exploding around the same time. Thinking about the history of quantum computing, it makes sense that there was a boom immediately after Shor’s 1995 paper, but I didn’t expect it to be so big!

Moving on, we see some of the history of IBM’s involvement in quantum computing. Their first plan was to create the liquid state NMR quantum computer, led by Chuang (who is also an author on this paper …. so OP). Using this architecture, IBM was able to implement Shor’s factoring algorithm to factor 15 in 2001, again led by Chuang.

However, the authors note that NMRQC began pushing “close to some natural limits” at the dawn of the 21st century, although they do not specify exactly what those limits are. In the previous DiVincenzo paper, I believe references were made to NMR being unable to initialize qubits in an efficient manner, removing it from consideration as a scalable QC. Since that paper was written in 2000 by DiVincenzo, who was at IBM at the time, and that specific claim was backed by earlier work by Chuang in 1998, I’ll be willing to bet my hat that this is the reason.

An Introduction to Superconducting Qubits

The remainder of this paper is mostly dedicated to describing the superconducting qubits that IBM then focused on. It begins with a review of a typical RLC circuit, which has a harmonic-oscillator Hamiltonian. The reason for this structure is that the inductor acts as a potential-energy store, with E_L = \frac{\Phi^2}{2L}, while the capacitor plays the role of kinetic energy, with E_C = \frac{Q^2}{2C}. In this picture, charge is analogous to momentum, and capacitance is analogous to the particle mass. You can derive these equations by writing down the differential equations that govern the circuit and solving the second-order equation that results from basic circuit-element rules.
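As a numeric sanity check of this oscillator picture (the component values here are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant, J*s

# Harmonic LC oscillator: omega = 1/sqrt(L*C), levels E_n = hbar*omega*(n + 1/2).
L = 1e-9    # 1 nH, hypothetical
C = 1e-12   # 1 pF, hypothetical

omega = 1.0 / np.sqrt(L * C)
print(omega / (2 * np.pi) / 1e9)   # resonance frequency in GHz (~5 GHz)

levels = hbar * omega * (np.arange(4) + 0.5)
print(np.diff(levels))             # identical spacings: the harmonic ladder is even
```

The evenly spaced ladder is exactly the problem the next paragraph describes.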

However, this alone is unsuitable for use as a qubit. A harmonic potential does create discrete energy levels, but those levels are all evenly spaced. Therefore the transition between the ground state and the first excited state is indistinguishable from the transition between the first and second excited states – a drive that addresses one addresses them all. To change the Hamiltonian, a new circuit element is introduced: the Josephson junction.

The Josephson junction is a circuit element that consists of two superconductors separated by a thin insulator. In that regard, it is somewhat similar to a capacitor, which features two conductors separated by an insulator. In the practical examples that I have seen, this is implemented by two pieces of superconducting metal separated by a very thin barrier. The energy of the Josephson junction goes as E_{JJ} = -E_J \cos{\delta}, with E_J set by the critical current I_0, where \delta is a quantum phase proportional to the magnetic flux \Phi after normalization.

What this creates is an anharmonic oscillator. When the quantum phase is close to zero, as is true near the ground state, there is very little contribution from the Josephson junction. At higher energy levels the quantum phase is larger, and the spacings between levels become different (not sure if smaller or larger?). The authors claim that this frequency splitting varies between 1 and 10% of the fundamental frequency, which corresponds to the 1-10 GHz range for frequency control. This control frequency is also fast compared to the decoherence time, meaning that many gate operations can be performed before decoherence sets in – satisfying DiVincenzo criterion 3. One additional benefit of the superconducting quantum computing (SCQC) architecture is that the higher levels of the oscillator can still be exploited! For instance, Reed 2013 uses the second and third excited states of the superconducting qubit to create a Toffoli (CCNOT) gate with a much shorter gate time than a conventional two-qubit construction, exploiting the “avoided crossings” found in this architecture.
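On the smaller-or-larger question, one can check numerically for a transmon-like parameter regime by diagonalizing H = 4 E_C n² − (E_J/2)(|n⟩⟨n+1| + h.c.) in the charge basis – there the spacings shrink as you climb the ladder. The E_C and E_J values below are my own illustrative choices, not IBM's:

```python
import numpy as np

# Cooper-pair-box / transmon Hamiltonian in the charge basis:
#   H = 4 * Ec * n^2 - (Ej / 2) * (|n><n+1| + |n+1><n|)
# Ec, Ej are illustrative values (Ej/Ec = 50, a transmon-like regime).
Ec, Ej = 0.25, 12.5           # GHz, hypothetical
ncut = 20                     # keep charge states n = -ncut..ncut
n = np.arange(-ncut, ncut + 1)

Hmat = np.diag(4 * Ec * n.astype(float) ** 2)
off = -Ej / 2 * np.ones(len(n) - 1)
Hmat += np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(Hmat)
E01, E12 = E[1] - E[0], E[2] - E[1]
print(E01, E12)               # E12 < E01: the upper spacing is smaller
print(E12 - E01)              # the anharmonicity, roughly -Ec
```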

The paper also mentions the use of low-loss cavity resonators as a memory device for the superconducting qubits. I believe these resonators are connected to the qubits by a transmission line, creating a coupled system with a joint Hamiltonian. I think Gabby presented a paper on using the connected cavity as a QEC method, but to be fully honest, I don’t know enough yet about how that cavity state is able to be treated as a qubit!

One interesting aside – the paper mentions a necessary condition for the qubit: k_B T \ll h\nu. To me, this feels like a restriction beyond superconductivity itself. I believe superconductivity is explained by BCS theory, where the creation of Cooper pairs requires the material to be below a critical temperature. However, I don’t exactly understand why the thermal energy must also be so far below the qubit energy in order to initialize the qubit.
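Plugging in numbers makes the k_B T ≪ hν condition concrete; the 5 GHz transition frequency below is a typical assumed value, not one quoted in the paper:

```python
# Where does k_B * T equal h * nu for a 5 GHz qubit?
h = 6.62607015e-34    # Planck constant, J*s
kB = 1.380649e-23     # Boltzmann constant, J/K

nu = 5e9              # 5 GHz, a typical (assumed) qubit transition frequency
T_crossover = h * nu / kB
print(T_crossover)    # ~0.24 K, so "much less than" pushes you to millikelvin
```

This is presumably why these devices live in dilution refrigerators at tens of millikelvin.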

Superconducting Qubit Challenges

From here, the paper begins exploring some of the research topics to improve the superconducting qubit for use in the future, primarily in reducing the amount of error present in the qubit.

The first issue explained is dielectric loss, the noise introduced by the capacitors in the circuit. For one thing, if the insulator is not made perfectly smooth, it can leak energy and limit the qubit coherence. In addition, it can be difficult to characterize dielectric loss, as the authors explain that dielectric loss at low temperatures is not linear and cannot be extrapolated from high-energy measurements. In fact, a difference by a factor of 1000 has been witnessed by the Martinis group. One possible solution is to use the Josephson junction’s self-capacitance as the capacitor for the circuit, growing the superconducting materials out of a “crystalline material instead of an amorphous oxide”. I may be wrong, but I think the Schoelkopf group is pursuing this in part – I remember discussions and demonstrations of Josephson junctions fabricated on pieces of sapphire crystal. I’m not sure if that’s a specialty of the Schoelkopf lab, or simply a commonly used technique in the 6(!) years since 2012.

Next, flux noise is explained and characterized as noise introduced by tuning the magnetic flux, which can limit the T_2 coherence time. The authors mention that typically, there is a “sweet spot” for flux, which allows the resonance frequency to not be sensitive to changes in magnetic flux. This section is much shorter, and I don’t understand it well :(
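My loose understanding of the “sweet spot”: for a flux-tunable qubit the frequency goes roughly as f(Φ) ∝ √|cos(πΦ/Φ₀)|, and at Φ = 0 the first derivative vanishes, so small flux fluctuations have no first-order effect on the frequency. A quick numeric check (this schematic functional form and the f₀ value are my own assumptions, not taken from the paper):

```python
import numpy as np

# Flux-tunable qubit frequency, schematic form: f = f0 * sqrt(|cos(pi*phi)|),
# with phi = Phi/Phi0 the flux in units of the flux quantum. f0 is made up.
f0 = 5.0  # GHz, hypothetical

def freq(phi):
    return f0 * np.sqrt(np.abs(np.cos(np.pi * phi)))

eps = 1e-4
# First-order flux sensitivity df/dphi at the sweet spot (phi = 0) vs. away from it
sens_sweet = (freq(0.0 + eps) - freq(0.0 - eps)) / (2 * eps)
sens_away = (freq(0.2 + eps) - freq(0.2 - eps)) / (2 * eps)
print(sens_sweet)  # ~0: first-order insensitive to small flux noise
print(sens_away)   # order f0: here flux noise dephases the qubit
```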

Current IBM SCQC Results

For a sub-architecture, IBM chose to focus on the flux qubit design, as shown in the figure below. This uses three JJs in series, with a shunting capacitor around the third, larger junction. While this is the schematic circuit design, I think in practice it looks quite different! I would love to see an entire picture of the qubit in the future :) The authors explain that their design is similar to the flux qubit, but that the use of the shunting capacitor (C-SHUNT) makes it also similar to the transmon and phase qubits.

A diagram of the qubit design that the IBM group is currently working with, with the C-Shunt to the right of the Josephson junction. Source: Steffen et al Fig 2

To use such a qubit, they place it within a dilution refrigerator and couple it to a superconducting resonator, via a transmission line I believe. They can then determine states by distinguishing between two possible resonant frequencies, which can be measured via either amplitude or phase.

One of the research directions the team has taken is creating a new type of two-qubit gate by tuning the interaction between two weakly coupled qubits with microwave excitations, implementing a CNOT gate. I think this is related to the earlier discussion of tuning the magnetic flux to the “sweet spot” for limiting flux noise, but I could be badly mistaken!

At that, the paper begins discussing the future. I’m a bit sad about one sentence in the paper:

A little extrapolation suggests that the reliability of our quantum devices will routinely exceed the threshold for fault-tolerant computing within the next five years.

This seems especially sad with a recent quote from Jay Gambetta, a current quantum info/computation project manager from IBM Research, at this year’s QConference:

Over the next few years, we’re not going to have fault tolerance, but we’re going to have something that should be more powerful than classical – and how we understand that is the difficulty.

While still optimistic, it isn’t quite as happy as the picture that Steffen, DiVincenzo, Chow, Theis, and Ketchen painted in 2012. Oh well – not much more to do but continue working!

Source: Steffen, M., et al. Quantum Computing: An IBM Perspective. IBM J. Res. & Dev. 55, No. 5 (2011).

QCJC: DiVincenzo 2000

A bit of a breather from systems of linear equations – let’s go back to the basics! We’ll be looking at DiVincenzo’s 2000 paper, titled The Physical Implementation of Quantum Computation, though I think this paper is more commonly known for DiVincenzo’s criteria. In it, he outlines 5 + 2 criteria that a quantum computing architecture would need in order to be considered mature, or at the very least usable. I think this is one of the most widely cited articles in the field of quantum computing, with 2118 citations, since most hardware architecture papers tend to put in a citation to argue why their new research satisfies multiple criteria. (Other notable papers: Girvin and Schoelkopf’s 2004 paper introducing superconducting circuits as a quantum computing architecture has 2915 citations, and Shor’s 1995 factoring paper has 7059 (!) citations. In contrast, my combined published work has something south of 3 citations, all of which are self-citations.)

The paper is written remarkably conversationally, but packs in a great deal of examples from pre-2000 works in the field. To start, in section II, DiVincenzo does an excellent job arguing for the need of quantum computing without making it sound like a magic bullet. Indeed, quantum computers do not necessarily improve all possible computations or algorithms. Instead, there are limited classes of problems that can be solved exponentially faster with quantum computing. However, a strong argument for the use of quantum computing in the future is that nature itself is fundamentally quantum, not classical. The classical effects that we see are typically a large scale approximation, or a high-energy limit, of intrinsically quantum effects. Therefore, if we truly want to understand how the universe works, it might make sense to use fundamentally quantum computers.

I find this argument to be particularly elegant, although I’m not as convinced by the next claim, that quantum machines can only have greater computational power than classical ones. Just because something uses a language that is more fundamental doesn’t always mean it will be faster. A reasonable analogy might be the classical programming languages C and Python. Python is often considered a high-level scripting language, while C is a more machine-level language that can execute functions faster. Both have their purposes, and there is a tradeoff in the choice of language, just as there is a choice between quantum and classical computing. For problems that do not need quantum computing, the requirements are currently too stringent to warrant the cost, but that’s not to say the same costs will apply in the future.

Two amusing points: the first response to the question of “Why QC?” is simply “Why not?”. Which I absolutely love :)

The second is that one of the applications of QC that DiVincenzo discusses is somewhat obliquely put as

… And for some games, winning strategies become possible with the use of quantum resources that are not available otherwise.

Which sounded crazy to me! I tracked down the two references that this sentence cites, and my initial suspicion was confirmed – one of the citations was merely referring to a manner of quantum communication that beats classical communication errors. Sure, it’s really great stuff, but it wasn’t completely new – after all, my QM teacher has been bringing up that example since last year :) But the other citation looked interesting – Meyer 1999 on quantum strategies, considering game theory (!) from the viewpoint of quantum computing. I think I found my article for tomorrow!

Alright, so what are these criteria anyways? Let’s write them out first, and then revisit each one of them.

  1. A scalable physical system with well characterized qubits
  2. The ability to initialize the state of the qubits to a simple fiducial state, such as |000…>
  3. Long relevant decoherence times, much longer than the gate operation time
  4. A universal set of quantum gates
  5. A qubit specific measurement capability
  6. The ability to interconvert stationary and flying qubits
  7. The ability to faithfully transmit flying qubits between specified locations.

Note that criteria 6 and 7 are what DiVincenzo calls “Desiderata for Quantum Communication”, as flying qubits (or, more usually, a form of a photonic qubit) are typically used in quantum communications over long distances. While these are fascinating, we will skip these two for now.

The other five criteria seem remarkably reasonable as computational standards, but they do have some interesting caveats to consider.

The first criterion is simply that you need qubits in order to have a quantum computer, just as you need bits for a classical computer. The additional requirement that these qubits are well characterized is interesting. For DiVincenzo, this means that the following should be true:

  • The qubits are permitted to be entangled; that is, for more than one qubit, the overall state cannot simply be decomposed into a product of individual states,
  • Physical parameters, including the internal Hamiltonian, of the qubit are known,
  • The presence of states beyond the quantum two-level system is characterized, and those interactions are understood,
  • Interactions with other qubits are understood, and
  • Coupling to external fields is understood.

The reason for so many requirements is perhaps that it is fairly easy to find a two-level system in quantum mechanics, but it can be difficult to verify that such a system really is only a two-level system satisfying these criteria. DiVincenzo raises an interesting aside about how a single electron shared between two quantum dots cannot be called a two-qubit system, despite its surface similarity to one. Despite the apparent superposition of a single electron across multiple positions, selection rules prevent additional entanglement from occurring.

At this point DiVincenzo briefly goes over the existing systems at the time of publication, including ion traps, NMR, neutral atoms, impurities or quantum dots, and superconducting devices. Since that time, I think superconducting devices have evolved into superconducting circuits, impurities have evolved into diamond NV centers, and topological qubits have emerged onto the scene as well.

The second criterion is to have an easy-to-access initial state. My QM professor would often joke about going to the Quantum Mechanical hardware store and picking up a whole bunch of states that were all initialized to the same 0 position. However, in an actual implementation, those states need to be created efficiently. This matters because a 0-state qubit is especially useful for quantum error correcting codes. Not only is a 0 state needed, but a continuous supply of 0 states is needed, meaning that one of two things must happen: either there is a fast (relative to the decoherence time) way of generating new 0 states, or there is a way to “store” 0-state qubits and then “move” them into place, as if on a conveyor belt, when they are needed.

For the first option, DiVincenzo proposes two ways to reset qubits. One is to allow the qubit to naturally fall to its ground state, which is typically used as the 0 state of a qubit anyways. While this is a natural process, the time it takes for a qubit to relax to its ground state can be quite long, and is generally not shorter than the decoherence time of the entire system. Therefore, if you need to wait for a single qubit to return to 0, then within that same amount of time, all the qubits will have had a chance to fall back to 0. The other option is to measure the qubits, thereby projecting them into a known state that can be further manipulated by rotations if necessary. As for how that measurement process is actually handled, that is a question for criterion five.
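To make the measure-and-rotate option concrete, here is a tiny single-qubit sketch of my own (plain numpy state vectors, not any real hardware interface): project the qubit onto the computational basis, and if it collapses to |1>, rotate it back with an X gate.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_and_reset(state):
    """Projectively measure one qubit in the computational basis,
    then rotate the result back to |0> with an X gate if needed."""
    p0 = abs(state[0]) ** 2                           # Born rule: P(outcome 0)
    outcome = 0 if rng.random() < p0 else 1
    if outcome == 0:
        state = np.array([1.0, 0.0], dtype=complex)   # collapsed to |0>
    else:
        state = np.array([0.0, 1.0], dtype=complex)   # collapsed to |1>
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        state = X @ state                             # rotate |1> back to |0>
    return state

# Start from an arbitrary superposition and reset it.
theta = 1.2
qubit = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
qubit = measure_and_reset(qubit)
print(qubit)  # always ends in the |0> fiducial state
```

Either measurement outcome leads to |0>, which is exactly why this is a faster reset than waiting for natural relaxation.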

Interestingly, DiVincenzo uses this criterion to rule out the possibility of NMR as a QC architecture. I’m not sure how the field has changed within the last 17 years to prove him right or wrong!

The third criterion concerns extending the relevant decoherence time of the qubits. In general, decoherence corresponds to the lifetime of the qubit, or how long it takes before outside noise interferes with the delicate state of the qubit. Note that not every single aspect of the qubit has to have a long decoherence time; only the properties of the qubit that are represented in the basis states of the logical qubit need one. Unfortunately, there is usually a correspondence between the raw decoherence time of a qubit and its gate time, as both quantities are related to the interaction strength between the qubit and external factors.

The main motivation for this criterion is found in quantum error correction (QEC). A typical quantum algorithm will likely take longer to complete than the lifetime of its individual qubits, which seems rather bad. If you were trying to send a message to someone, but the delivery would take four centuries, by which time the intended receiver would be long dead, why bother sending the message? QEC is the solution – it frequently “revitalizes” the qubits by checking to make sure that they are still the same as before, and then correcting them (using ancillary qubits that are initialized to 0) whenever needed. Therefore, as long as you are able to run many of these QEC subroutines during the algorithm, your entire code should still work.
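DiVincenzo doesn’t commit to a particular code, but the textbook three-qubit bit-flip code gives a feel for how those ancilla-assisted corrections work. Here is a small numpy sketch of my own: encode a logical qubit across three physical qubits, hit one with an X error, read off the Z⊗Z syndromes, and undo the flip. (I cheat by computing the syndrome values directly as expectation values rather than via actual ancilla measurements.)

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

def kron(*ops):
    return reduce(np.kron, ops)

# Logical qubit alpha|000> + beta|111> (three-qubit bit-flip code)
alpha, beta = 0.6, 0.8
logical = alpha * np.eye(8)[0] + beta * np.eye(8)[7]

# An X error strikes the first qubit
errored = kron(X, I2, I2) @ logical

# Syndrome operators Z1Z2 and Z2Z3: their +-1 values locate the flip
s12 = errored @ kron(Z, Z, I2) @ errored   # -1 means qubits 1 and 2 disagree
s23 = errored @ kron(I2, Z, Z) @ errored   # +1 means qubits 2 and 3 agree

# Look up and apply the correction indicated by the syndrome pair
corrections = {(-1, +1): kron(X, I2, I2),
               (-1, -1): kron(I2, X, I2),
               (+1, -1): kron(I2, I2, X),
               (+1, +1): np.eye(8)}
recovered = corrections[(round(s12), round(s23))] @ errored

print(np.allclose(recovered, logical))  # True: amplitudes restored intact
```

The key point is that the syndromes reveal *where* the error is without revealing (and therefore destroying) the encoded amplitudes alpha and beta.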

The fourth criterion is to have a universal set of gates able to implement every possible algorithm. This isn’t as bad as it sounds (for now) – for instance, classical computing only requires the NAND gate to build any other gate. Similarly, QC only needs the CNOT gate plus single-qubit rotations (as well as measurement, but that’s a whooole ’nother criterion). The typical way to implement a gate is to find some Hamiltonian that corresponds to the unitary transformation you want to execute, and then “turn on” that Hamiltonian for the desired amount of time:

U = e^{-\frac{iHt}{\hbar}}
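As a quick numerical sanity check of this relation (using the conventional minus sign in the exponent, and natural units where hbar = 1, all my own choices), exponentiating H = (pi/2) sigma_x for one unit of time should produce a NOT gate up to a global phase:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def evolve(H, t, hbar=1.0):
    """U = e^{-iHt/hbar} for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

# "Turn on" H = (pi/2) * sigma_x for one unit of time...
U = evolve((np.pi / 2) * X, t=1.0)

# ...which yields the NOT gate, up to an unobservable global phase of -i
print(np.allclose(U, -1j * X))            # True
print(np.round(np.abs(U @ [1, 0]) ** 2))  # [0. 1.]: |0> is flipped to |1>
```

Changing t changes how far around the Bloch sphere the state rotates, which is exactly why pulse timing matters so much below.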

But that actually has a ton of problems in implementation across different architectures. DiVincenzo brings up that some of these desired Hamiltonians in NMR systems could only be turned on and not off, posing quite a challenge. Other Hamiltonians do not couple more than two qubits, which makes gates like the Toffoli (CCNOT) gate much more challenging to implement. Even for a well-behaved Hamiltonian, controlling the time pulse during which the gate is active can be a concern. The gate needs to evolve unitarily, meaning that it needs to be slow enough to satisfy adiabatic requirements (which I don’t super understand at the quantum level?). Furthermore, if the timing signal is controlled by classical components, there needs to be a good separation between the quantum system and the classical system.

I feel like DiVincenzo gives up a bit of ground here at the end. He admits that the unitary gates cannot be implemented perfectly, and instead recommends that such inaccuracies simply be taken into account. Rather than trying to make perfect unitary gates, we should aim to understand how much noise the gates introduce into the system and correct for it using QEC. Oh, QEC, how you bring life to our gates!

The fifth criterion is to have good measuring devices. That is, if the qubit is blue, don’t let your measuring device say that it is red! It might sound simple, but this criterion concerns the fidelity of the measurement. Your measuring device will likely not report the correct state of the qubit every single time, which seems really confusing to me!

Aside: I can understand fidelity well enough when it comes to determined qubits. That is, if a qubit is |0> and it is being measured as |1> every now and then, I understand that the measuring device has a fidelity less than 100%. But if the qubit is half of an entangled Bell pair, how do you even estimate the fidelity of the measuring device? Do you simply characterize the device using a known qubit and then assume that the fidelity does not change with an entangled qubit in the future?

In any case, DiVincenzo remarks that a low-fidelity system (one with poor measuring devices) can still be overcome if you repeat the same experiment over and over again using a large ensemble of particles. That would satisfy the threshold levels of reliability for the experiment and, hopefully, for the computation. However, such a process again requires rapid gate actions, especially if the final state is not easily replicable and the entire algorithm needs to run again.
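The repetition idea is easy to quantify classically. With some illustrative numbers of my own (a readout that errs 10% of the time, majority-voted over n independent runs, not figures from the paper), the effective error rate falls off quickly:

```python
from math import comb

def majority_error(p_wrong, n):
    """Chance that a majority vote over n independent runs is wrong,
    if each individual run is wrong with probability p_wrong."""
    return sum(comb(n, k) * p_wrong**k * (1 - p_wrong)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# A 90%-fidelity readout, repeated and majority-voted (odd n)
for n in (1, 5, 21):
    print(n, majority_error(0.1, n))
```

Of course, this only works if the algorithm can be rerun to regenerate the final state each time, which is the “rapid gate actions” caveat above.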

By looking at the five criteria, you start to get a sense of how daunting the task is, but also of how powerful QC can be. Let’s play with this idea through a sample problem.

Suppose that you have a problem that typically takes exponential time with a classical computer and would take linear time with a QC. However, to carry out the problem, you need a large number of qubits – let’s say 100. Each of those computational qubits needs to be backed up by a series of QEC ancillary qubits, each of which runs constant subroutines to ensure that the qubits do not decohere during the computation. With that many qubits rubbing against each other, perhaps interactions between qubits become less stable, and qubits can become excited out of their two-level systems. Furthermore, if the final measurement device has a low fidelity, the entire process may need to be repeated several times to reach a threshold level of certainty. And even then, if the algorithm is non-deterministic (as most QC algorithms are), it would take some number of tries with different sample values before a final solution is given!

All that’s to say, QC has a lot of challenges to face before it can solve large problems reliably. Just the sheer amount of error correction needed strongly motivates the creation of a ton of stable qubits, plus a good conveyor-belt system to pipe in fresh qubits initialized in the proper state. However, the rewards can still be great. If it were as easy to manufacture qubits as it is to manufacture silicon bits, the world may never be the same.

Reference: January 6th, 2017: DiVincenzo, D. P. The Physical Implementation of Quantum Computation Fortschr. Phys. 48 771-783 (2000)

QCJC: Cai 2013

Alright, as we promised yesterday, a look at the guts of the experimental side of these papers! I’m going to be looking into Cai 2013, which is one of the earlier implementations of the QC solution to linear systems of equations, as was very roughly described in the previous post. The main challenge is to see if I can extract some meaning from the theoretical abstractness of the last paper by working through some real examples.

The premise remains the same: Cai et al. aim to implement the algorithm described by Harrow using photonic qubits. The paper treats the problem much more practically by targeting a specific instance:

A x = b, A = \begin{bmatrix} 1.5 & 0.5 \\ 0.5 & 1.5 \end{bmatrix}

Like the original theory paper, Cai completes this process in three steps. The initial step determines the eigenvalues of the matrix A, the second step conducts rotations on the eigenvalues to find the inverse eigenvalues (for the inverse matrix), and the third step extracts the proper vector x from the system. The algorithm is initialized with three different types of qubits: one extra qubit to assist in the inversion step, labeled the Ancilla; two qubits acting as “memory” to store information about the matrix A, labeled the Register; and one qubit to act as the input.

The three steps of the algorithm, along with the four qubits needed to carry them out. Source: Cai et al., Fig. 1(b)

One of the parts of this algorithm that most confuses me is the setup of the matrix A in step 1. The algorithm appears to have the eigenvalues of the matrix hard-coded in. Just for sanity’s sake, let’s do the brief eigenvalue calculation here.

First, we know that the eigenvalue problem we need to solve is det(A - \lambda I) = 0, and so

det (\begin{bmatrix} 1.5 - \lambda & 0.5 \\ 0.5 & 1.5 -\lambda \end{bmatrix}) = 0

( 1.5 - \lambda )^2 - 0.25 = 0

\lambda = 1, 2

which nicely matches what the paper claims. The paper states that to optimize the circuit, they encode the eigenvalues of the matrix in binary and store them within the register qubits. To my understanding, this is a simplification that is not part of the original algorithm. That is, instead of allowing the algorithm to solve for the eigenvalues of the matrix, they hard-code the eigenvalues into the circuit and thereby simulate the desired matrix. This makes sense as a simplification – the original paper seems to rely on some more complex manner of phase estimation.
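The eigenvalue calculation above can also be checked in one line of numpy, just as a sanity check of my own:

```python
import numpy as np

A = np.array([[1.5, 0.5],
              [0.5, 1.5]])
print(np.linalg.eigvalsh(A))  # [1. 2.]
```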

Disclaimer – I’m not sure if this implementation actually is a form of phase estimation itself, mainly because I am unfamiliar with that process. However, the way the algorithm is implemented here seemed a bit fishy to me. For instance, what if the eigenvalues of the matrix were not so nice and easy, but were actually very far apart? If the two eigenvalues were 1 and 9000, then you would need at least 14 register qubits just to encode the larger eigenvalue, not to mention the number of operations needed to instantiate such values. It is possible that this is one of the drawbacks that the original paper discusses when it comes to \kappa, which that paper defines as the “ratio between A’s largest and smallest eigenvalues”. The 2009 theory paper claims a less stable solution when \kappa is large, with the complexity increasing as \kappa^2.
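As a quick check on that register-size claim, the number of bits needed to store an integer eigenvalue exactly is just its binary bit length:

```python
# Bits needed to write each eigenvalue exactly in a binary register
for eigenvalue in (1, 2, 9000):
    # 9000 needs 14 bits, since 2^13 = 8192 <= 9000 < 16384 = 2^14
    print(eigenvalue, eigenvalue.bit_length())
```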

After that first step, the next two steps entangle the qubits together and invert the eigenvalues. Now, I’m not sure if my math is right here, but it seems to me that in step two, you are doing the equivalent of inverting the matrix A by finding the inverses of its eigenvalues. In that process, because those eigenvalues are entangled with the original input qubits, you are also transforming the input qubits. Step three then separates the two again.

At the end of step three, there is a measurement stage. To the best of my understanding, this measurement stage does not always work. That is, it is possible to measure results that do not give |1> for the ancillary qubit and |00> for the register qubits. In those cases, I think that the calculation was then moot and the same process is repeated again from step 1, until those two conditions are met. Only then is the input qubit known to be the solution.

I wasn’t yet able to follow along with a computation of all of the register qubits. However, I did try out some of the results in Figure 3, simply as sanity checks to ensure that my calculations were correct. For this paper, the experimenters evaluate the three following vectors:

b = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \end{pmatrix}

Using these values, we can either find the solution by manually solving the system of linear equations, or by finding the inverse of the matrix. The inverse of the matrix is

A^{-1} = \frac{1}{4} \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix}

and by left-multiplying the input vectors by the inverse matrix, we find the following normalized solutions x:

x_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, x_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}, x_3 = \frac{1}{\sqrt{10}} \begin{pmatrix} 3 \\ -1 \end{pmatrix}

Afterwards, recall that the overall purpose is not to recover the actual vector x, as reading it out would require at least n operations. Instead, the purpose is to find expectation values. For this experiment, the observables used are the Pauli matrices, measuring in the X, Y, and Z directions. Since we know the values of x, we can compute the expected expectation values and compare them to the experimental results. (Note that this is already done for us in Figure 3, in the grey bars. This calculation is more or less an exercise!)

For the Pauli matrix \sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, the expectation values for each of the three vectors are

<x_1|\sigma_x|x_1> = 1, <x_2|\sigma_x|x_2> = -1, <x_3|\sigma_x|x_3> = -0.6

which agree with what is shown in the figure.

The grey bars show the theoretical values for the expectation values of the result vector with each of the Pauli matrices, while the red values (with error bars) show the experimental results. Source: Cai et al. Fig 3
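The theoretical values above can be reproduced in a few lines of numpy – my own sanity check, solving the system classically and normalizing, nothing to do with the paper’s optics:

```python
import numpy as np

A = np.array([[1.5, 0.5],
              [0.5, 1.5]])
sigma_x = np.array([[0, 1],
                    [1, 0]])

inputs = [np.array([1, 1]) / np.sqrt(2),    # the three b vectors from Fig. 3
          np.array([1, -1]) / np.sqrt(2),
          np.array([1, 0])]

expectations = []
for b in inputs:
    x = np.linalg.solve(A, b)    # classically solve A x = b
    x = x / np.linalg.norm(x)    # the algorithm outputs a normalized state
    expectations.append(x @ sigma_x @ x)

print(np.round(expectations, 6))  # approximately 1, -1, and -0.6
```

These match the grey theory bars for sigma_x in Figure 3.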

It’s a bit odd to not be discussing the actual experiment that this paper carries out, but that’s more or less because of my ignorance. I’m still not as familiar with these rotation transformations as I would like, and need to improve on understanding the physical implementations of these logical quantum gates. Hopefully I’ll gain more understanding and discuss this part in the future! One part that did catch my attention was the apparent worsening of the algorithm with the second and third input vectors. The paper explains that this is due to the specific optical setup used in the experiment, caused by “high-order photon emission events and post-selection in CNOT gates”. This seems troubling – even when using an algorithm optimized for a specific matrix, you still see such large discrepancies? But I’m certain that these fidelity values will improve in the future!

In summary, this paper shows experimental proof that it is possible to calculate expectation values to systems of linear equations in an efficient manner. We follow the example equation that is provided and compute some of the same values, while leaving the harder experimental items behind for another day :)

References: Cai, X. D., et al. Experimental Quantum Computing to Solve Systems of Linear Equations Phys. Rev. Lett. 110 230501 (2013)

QCJC: Harrow 2009

It’s been a while since I’ve come back to the QCJC, but the start of a new year is as good of a time as any :) I was browsing through a few recent publications from the Pan group at USTC, and came across an interesting implementation. It uses a quantum algorithm to solve linear systems of equations, which I thought was phenomenal! While most of the algorithms that I’ve gone over so far are very limited in scope, the algorithm described in this paper seemed much more general and applicable to so many different contexts and uses.

Although most of this algorithm seems to go over my head, it’s useful to go over the key assumptions found in this paper:

  • The goal of this paper is to achieve an exponential speedup in solving linear systems of equations, a problem closely tied to eigenvalue problems.
  • However, the way this speedup is achieved is by not actually solving these linear equations. In fact, the paper notes that in general, to write down the solution of N linear equations would require time that scales as N. The authors achieve the speedup by computing the expectation values of eigenvalue solutions rather than computing those actual solutions.
  • The authors compare their algorithm to a classical Monte Carlo sampling algorithm, but also show that no classical algorithm is able to match the same speed, even for algorithms that are only computing summary or expectation values as well.

The first step of this algorithm is to pose the problem Ax = b, to be solved for x – the standard linear-systems problem. First, the vector b is assumed to be initialized, perhaps as the result of some other subroutine. The algorithm is rooted in earlier work that allows the creation of an operator e^{iAt}, where A is a Hermitian matrix representing the linear system of equations. Through a process known as phase estimation, this operator is used to estimate the eigenvalues of A in the components of b, which are then inverted in order to find x.
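Since I can’t follow the quantum circuit itself, here is a purely classical numpy sketch of the linear algebra the algorithm mirrors: expand b in the eigenbasis of A, divide each coefficient by its eigenvalue, and recombine. The matrix here is an arbitrary toy example of mine, not from the paper.

```python
import numpy as np

# An arbitrary toy Hermitian system (my example, not from the paper)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])

# The role of phase estimation: find the eigen-decomposition of A
lams, U = np.linalg.eigh(A)      # columns of U are eigenvectors u_j

# Expand b in the eigenbasis: b = sum_j beta_j u_j
beta = U.T @ b

# The role of the controlled rotation: divide each beta_j by lambda_j
x = U @ (beta / lams)            # x = sum_j (beta_j / lam_j) u_j

print(np.allclose(A @ x, b))     # True: x solves A x = b
```

The quantum version never writes x out explicitly, which is where the speedup comes from; this classical mimic is just to show the eigenbasis arithmetic.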

However, the goal of this algorithm is to not directly find or output x. Instead, using the vector x, we can then find the expectation value of various quantities. Those values can be used to find different weights of the solution, or further moments of the solution vector.

Unfortunately, I don’t really follow the remainder of the mathematics of this paper… however, tomorrow I’m going to take a look at two implementations of this algorithm, and hope that those experimental papers will provide a little more insight as to how this is actually computed. It looks like the Wikipedia page for this algorithm is also directly based on this paper, so perhaps some additional research will help me understand that, at least :)

Happy New Year!

References: December 31, 2017: Harrow, A. W., Hassidim, A., Lloyd, S. Quantum Algorithm for Linear Systems of Equations Phys. Rev. Lett. 103 150502 (2009)

QCJC: Yin 2017!

Wow! Hot off the presses from Science, we have a new world-record for quantum entanglement distance!

Front cover of the June 16th Issue of Science! Copyright AAAS

And I’m especially excited to be covering this article, since I have previously written about the Micius satellite here for the Yale Scientific, and I was fortunate enough to have worked in the same lab in Shanghai where they designed and built this amazing satellite!

As far as I can tell, this is the first time that Micius is being written about in a scientific journal, and it’s especially thrilling to me to see it on the cover of Science. But why wouldn’t it be? After all, we’ve only broken the previous record for entangled particles by 1000 kilometers, and oh, the sources were orbiting Earth in space and the photons were still able to be detected!

Most of the news sources that are covering the story are focusing on the implications of Quantum Key Distribution and the Bell test, so I’ll go ahead and highlight a few that I thought did a very good job: LiveScience has a good article from Edd Gent with quotes from Dr. Pan, Nature’s Davide Castelvecchi does a very good job placing Micius in the context of other quantum satellites, and the Washington Post’s Sarah Kaplan does an excellent job explaining the science of quantum entanglement in easy-to-understand terms.

But what about the satellite itself? How does it manage to transmit these entangled photons over such an incredible distance without getting out of sync? In my last blog post, I did some hand-waving regarding a function known as “fast feed-forward”, saying that this system “provides necessary support.” Now, with the publication of the journal article, we can understand how exactly that happens.

First, the basics. The Quantum Experiments at Space Scale (QUESS) satellite, Micius, was launched last August to begin a very bold experiment in setting up a Quantum Key Distribution (QKD) scheme. It planned to entangle pairs of photons on the satellite, and then send those entangled photons down to different land-based locations within China, in Qinghai, Xinjiang, and Yunnan. These ground stations are separated by a distance of around 1100-1200 kilometers, far exceeding any past record of entanglement distribution. The quantum effects were verified by a Bell inequality test, a standard way of determining whether something is quantum or classical. The entangled photons received from Micius confirmed their quantum, not classical, nature at the four-standard-deviation level.

The primary motivation for such an experiment is that although the distance between the satellite and the Earth is generally 700-800 kilometers for a sun-synchronous orbit, the photons only travel through around 16 kilometers of atmosphere. For the rest of that distance, they are traveling through near-vacuum, meaning there are few environmental disturbances that could bump into a photon. This is the key problem with ground-based fiber optics: even with the best fibers, there is still significant photon loss along the wire.

But of course, space is not very forgiving either. The biggest problem is that you are distributing from a satellite whizzing around the Earth at thousands of miles per hour, while hitting a tiny 1.2 meter telescope on the ground. The probability of this happening by chance is minuscule – “the equivalence of a quarter into a slot 100,000 meters away,” as I mentioned last year. Furthermore, the implementation of this satellite differs from other systems used in Canada and Europe, in that Micius generates the photons on the satellite, rather than receiving entangled photons from the ground.

This paper explains how the group was able to solve both of these problems, first by pulsing laser light through a crystal to generate the entangled photons, and then using laser signals that are sent with the primary quantum signal to implement the mysterious fast feed-forward system.

The production of entangled photon pairs happens on the satellite as a very weakly powered laser pumps light through a KTiOPO4 crystal on board. To give a sense of the scale here, the laser only uses 30 milliwatts, just 0.03% of the power drawn by a normal 100 Watt light bulb. And yet it still produces almost 6 million entangled photon pairs per second! To put that in perspective, the 100 Watt light bulb would be producing somewhere around 6 quintillion (18 zeros) photons per second. This might be a very weak beam of light, but the more important aspect is the quality of the entanglement, not the quantity of photons. As the process continues, photons are produced in the classic Bell entangled state

|Ψ> = \frac{1}{\sqrt{2}} (|01> + |10>),

a maximally entangled state of the photon pair. Now, to send them out to Earth!

To understand how the laser light is synchronized, imagine that you are jogging with your friend around a lake. But instead of jogging side by side, you want to jog exactly opposite the lake from your friend. If you have a very large lake, even if you think you are jogging at exactly the right speed to match your friend, you may find that the two of you start to drift further and further apart.

You and your friend are running around a lake, and you want to be exactly opposite each other as you run.

To make sure that the two of you are in the right place, you might want to send a signal to the other person, something to show your friend where your true location is. Let’s say that the easiest way to do this is by laser light – you have a green laser that you are pointing at your friend, and as long as your friend stays aligned with it, she would be directly across the lake from you. But what if your green laser starts slipping away from the true location? Then you would be back to square one!

Therefore, instead of using only one laser for alignment, suppose that we have two: one for you and one for your friend. As you jog, both of you keep your own lasers pointed directly across the pond, while trying to stay exactly on the laser of the other person. This two-fold alignment helps keep both people in the right place at the right time.

With two lasers aligned on each other, the runners are able to have more certainty in keeping in pace with each other.

The same idea is exactly in play with the Micius satellite, but at a more complex level. Along with the primary 810nm signal, an additional signal of green light at 532nm is sent from the satellite, while a signal of red light at 671nm is sent from the ground. These two additional signals feed directly into very sensitive motors near the mirrors of the satellite, allowing minuscule changes to be made and keeping everything properly aligned. There is additional complexity, since the motion of the satellite itself creates a drift in arrival time for the photons. This additional error needs to be accounted for by considering the relative motion between the satellite and each ground station, and quickly changing the polarization angle on Earth.

The thing that blew me away the most was this sentence: “Compared with the previous method of entanglement distribution by direct transmission of the same two-photon source – using the best performance … the effective link efficiency of our satellite-based approach … is 12 orders of magnitude higher.” 12 orders of magnitude! That is 1 TRILLION times better than any current fiber optics! And even compared to theoretically perfect fiber optics, which have not been achieved yet, this first satellite experiment would be better by four to eight orders of magnitude, or between 10 thousand and 100 million times better. Wow.

Most of the remainder of the paper provides the method for determining the Bell test violation. As a quick refresher from my previous post, the Bell inequality is a conclusion drawn from ordinary statistics. All classical physics stays within the Bell inequality, but quantum physics violates it. Therefore, getting a result that violates the Bell inequality shows that the system is quantum, not classical. With Micius, the entangled photons were shown to violate the Bell inequality by four standard deviations, meaning that the probability of such a result happening by chance is around 1 in 16,000.
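That 1-in-16,000 figure is just the two-sided tail probability of a four-standard-deviation Gaussian fluctuation, which we can check in two lines (my own back-of-the-envelope, assuming Gaussian statistics):

```python
import math

# Two-sided tail probability of a 4-sigma Gaussian fluctuation
p = math.erfc(4 / math.sqrt(2))
print(round(1 / p))  # about 1 in 16,000
```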

Honestly, this research is so exciting and fresh, and I absolutely can’t wait to read more about it. In the Nature article linked above, Dr. Pan says that their team has already performed QKD experiments, but are not yet ready for publication. But when it does publish, it will definitely change the world.

Reference: Yin, J. et al. Satellite-based Entanglement Distribution over 1200 Kilometers Science 356 1140-1144 (2017)

Disclaimer: Don’t take my blog posts as unshakable truth! It’s very enriching to read the original paper itself, with its many beautiful diagrams. See the link to Science here for the original publication. If you are a university student, you may need to use your VPN to get access.

QCJC: Aspect 1981

Digging further and further back now, for a very early paper on Bell’s Theorem. Honestly, I think the best reason I can come up with for reading this paper is that it just happens to be in the proper queue… so if you could suggest some better papers for me to read, please do! Everyone is free to comment here, or email me directly :)

Moving on to the paper! This paper is an experimental demonstration of Bell’s inequality, the same issue that was raised by EPR (and was mentioned in the last QCJC). This inequality, and the subsequent theorem, makes local hidden variable theories mutually exclusive with quantum mechanics. Whenever Bell’s inequality holds, a local hidden variable theory is possible. However, quantum mechanics predicts values that violate Bell’s inequality, and therefore rules out local hidden variable theory. When we speak of Bell states, we often speak of entangled particles that obey quantum mechanical laws and NOT local hidden variable theory.

A few more words about local hidden variable theories. Originally, the EPR paper dealt with hidden variable theories in general. Hidden variable theories are the idea that our formulation of quantum mechanics is incomplete, and actually relies on some underlying “hidden” variables that we just don’t know about yet. This idea was proposed by Einstein, Podolsky, and Rosen (yes, got the name right this time!), and seemed to rely on common sense. After all, how could a pair of entangled particles transmit information faster than the speed of light? But, in time, this “spooky action at a distance” has been shown to be a real, experimentally verifiable effect.

This paper is one such demonstration of particles that violate Bell’s inequality, thus showing themselves to be quantum mechanical rather than governed by local hidden variables. But before we go further, we should probably state how Bell’s inequality is intended to be measured. (Aside: I thought I was quite familiar with the ideas of Bell’s Theorem when I started writing. But it turns out that I was more familiar with the philosophical implications, and wasn’t as thorough with the physics…)

Bell’s theorem by itself is simply derived from standard probabilities. If we take the original inequality, we have that

N(A, not B) + N(B, not C) >= N(A, not C)

We can prove this simply as the following:

N(A, not B, C) + N(not A, B, not C) >= 0

which holds because counts can never be negative. Adding N(A, not B, not C) + N(A, B, not C) to both sides gives

N(A, not B, C) + N(not A, B, not C) + N(A, not B, not C) + N(A, B, not C) >= N(A, not B, not C) + N(A, B, not C)

Now group terms. On the left, the first and third terms sum to N(A, not B), and the second and fourth sum to N(B, not C). On the right, the two terms sum to N(A, not C), since B must be either true or false. Hence

N(A, not B) + N(B, not C) >= N(A, not C)

QED! Very simple, very direct. For any three binary properties A, B, and C, no assignment of definite values can violate this rule; we have shown it to be true simply by counting and through sets.
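The counting argument above can even be checked mechanically. Since the counts are linear in the population, it suffices to verify the inequality for each of the eight possible definite-value assignments of (A, B, C): if every individual object satisfies it, any mixture of objects does too. A quick Python sketch:

```python
from itertools import product

# Verify the counting inequality N(A, not B) + N(B, not C) >= N(A, not C).
# Counts are linear in the population, so checking each of the eight
# definite-value assignments of (A, B, C) covers every possible mixture.
for a, b, c in product([True, False], repeat=3):
    lhs = int(a and not b) + int(b and not c)
    rhs = int(a and not c)
    assert lhs >= rhs, (a, b, c)

print("inequality holds for every definite-value assignment")
```

This is exactly why a violation is so startling: no assignment of pre-existing definite values can produce one.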

Now, we take A, B, and C to be hidden variables corresponding to the spin of a photon, for instance. We might be interested in the spin of the photon along three different axes, such as a 0 degree axis, a 45 degree axis, and a 90 degree axis. We know that the spin, when measured, can be either up or down along each of these directions. Furthermore, we know that a pair of antisymmetric photons is being produced. That is, if one of the photons has spin up at 0 degrees, the other photon must have spin down at 0 degrees, because at an earlier point they had total spin 0.

Since we have two antisymmetric particles, we can test each individual particle along a randomly chosen axis. Afterwards, we compile the information for each pair and compute the inequality. Is the number of pairs with spin up at 0 degrees and spin down at 45 degrees, plus the number of pairs with spin up at 45 degrees and spin down at 90 degrees, greater than or equal to the number of pairs with spin up at 0 degrees and spin down at 90 degrees? Or is the inequality violated, as quantum mechanics would predict?
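To see the violation concretely, here is a sketch using the textbook singlet-state prediction for spin-1/2 particles: the probability that particle 1 reads “up” along angle a while particle 2 reads “up” along angle b is (1/2) sin²((a − b)/2). This is the spin version matching the framing above, not the photon-polarization formula of the actual experiment (photons double the angles, but violate the inequality just the same). By anticorrelation, particle 2 being “up” at b stands in for particle 1 being “down” at b.

```python
import math

# Singlet-state prediction (spin-1/2 version): probability that particle 1
# is "up" along angle a AND particle 2 is "up" along angle b is
# (1/2) * sin^2((a - b) / 2).  By anticorrelation, "particle 2 up at b"
# plays the role of "particle 1 down at b" in the counting inequality.
def p_up_down(a_deg, b_deg):
    theta = math.radians(a_deg - b_deg)
    return 0.5 * math.sin(theta / 2) ** 2

lhs = p_up_down(0, 45) + p_up_down(45, 90)  # "up at 0, down at 45" + "up at 45, down at 90"
rhs = p_up_down(0, 90)                      # "up at 0, down at 90"

print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}")  # lhs = 0.1464, rhs = 0.2500
print("Bell's inequality violated:", lhs < rhs)
```

The counting argument demands lhs >= rhs, yet the quantum prediction gives 0.1464 < 0.2500 for these three axes, so no local hidden variable theory can reproduce the quantum statistics.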

After 650-ish words, we have finally gotten to this paper! The paper provides a new way of experimentally checking Bell’s theorem. The authors first discuss the earlier tests of Bell’s theorem, which used positronium annihilation – when an electron and a positron meet and annihilate each other. Then, they discuss a better way of conducting the tests using low-energy photons produced by atomic radiative cascades. They claim that the photons produced by this method are better for testing Bell’s theorem. I’m not entirely sure why this is the case, but it seems to have to do with detector efficiencies and/or efficient polarizers. The authors claim that this method does not require the “strong supplementary assumptions” that would otherwise apply.

The experimenters use the atomic radiative cascade of calcium, which yields two photons with correlated polarizations. They demonstrate how they set up their cascade: by irradiating an atomic beam of calcium with a single-mode krypton ion laser and a single-mode continuous-wave (cw) Rhodamine dye laser. These two lasers are chosen so that they have parallel polarizations and wavelengths corresponding to the relevant transitions of calcium, allowing “selective excitation”. By controlling these factors, the experimenters are able to very finely control the photon source, producing more data than previous experiments had.

Next, the paper discusses the optical elements used in the experiment. It covers the filters used to prevent photon reflections, as well as the two polarizers that were constructed to perform the measurements. There is a lot of specific detail on how the piles of plates, inclined near Brewster’s angle, perform the polarization, along with data on the transmittances of each polarizer.
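A quick side note on why plates near Brewster’s angle act as polarizers: at that angle, the reflected beam is entirely s-polarized, so each plate in the pile strips away some of the s-component and the transmitted beam becomes increasingly p-polarized. A tiny sketch of the angle itself, assuming a generic glass index of n ≈ 1.5 (the paper’s actual plate material is not quoted here):

```python
import math

# Brewster's angle for light passing from medium 1 into medium 2:
# theta_B = arctan(n2 / n1).  At this incidence angle the reflected
# light is entirely s-polarized.  n2 = 1.5 is a generic glass index
# assumed for illustration, not a value taken from the paper.
def brewster_angle_deg(n1: float, n2: float) -> float:
    return math.degrees(math.atan2(n2, n1))

print(f"Brewster's angle (air -> glass): {brewster_angle_deg(1.0, 1.5):.1f} degrees")  # 56.3 degrees
```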

Then, the paper discusses the electronics that allow for coincidence counting to occur. It remarks on the TAC (time-to-amplitude converter) and multichannel analyzers that provide a time-delay spectrum, allowing coincidences to be monitored. This, of course, is crucial to computing Bell’s inequality and checking whether it is violated.
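The coincidence logic itself is easy to mimic in software. Here is a toy sketch, with made-up timestamps and window width; in the real experiment this job is done in hardware by the TAC feeding the multichannel analyzer:

```python
# Toy coincidence counter: count detection pairs whose arrival times at
# two detectors fall within a coincidence window.  The timestamps (ns)
# and the window width are invented for illustration; the experiment
# does this in hardware with a time-to-amplitude converter feeding a
# multichannel analyzer.
def count_coincidences(times_a, times_b, window_ns=20.0):
    ta, tb = sorted(times_a), sorted(times_b)
    count, j = 0, 0
    for t in ta:
        # skip detector-B events that happened too early to match t
        while j < len(tb) and tb[j] < t - window_ns:
            j += 1
        # count every B event within the window around t
        k = j
        while k < len(tb) and tb[k] <= t + window_ns:
            count += 1
            k += 1
    return count

a = [100.0, 500.0, 900.0]
b = [105.0, 130.0, 700.0, 910.0]
print(count_coincidences(a, b))  # 2  (100~105 and 900~910)
```

The real electronics accumulate a full spectrum of time delays rather than a single count, which lets accidental coincidences be estimated and subtracted.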

In the end, the group found a violation of Bell’s inequality (and a confirmation of quantum mechanics) by over 13 standard deviations, for both near and far detector placements (up to 6.5 meters of separation). This is great for quantum mechanics!

I think this blog post spent a bit too much time on the theory of Bell’s inequality, which is a shame given how interesting the experimental part is. But hopefully I will have a chance to explore other papers on Bell’s inequality and discover more there!
