Thursday, August 25, 2011

How much entanglement do you need?

Quantum entanglement is required for various "useful" quantum information processing tasks such as teleportation, dense coding, and quantum key distribution.
Just how crucial entanglement is for actual quantum computation turns out to be a subject of debate. For mixed states, the presence of entanglement is a necessary but not a sufficient condition for violating Bell inequalities, as shown in a classic paper by Werner.
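
To make that distinction concrete, here is a minimal numerical sketch (an illustration only, not taken from Werner's paper) using the Werner state rho = p |psi-><psi-| + (1-p) I/4: it is entangled for p > 1/3, but it can only violate the CHSH inequality for p > 1/sqrt(2), roughly 0.71.

    import numpy as np

    # Pauli matrices
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.diag([1.0 + 0j, -1.0])
    paulis = [sx, sy, sz]

    # Singlet state |psi-> = (|01> - |10>)/sqrt(2)
    singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
    rho_singlet = np.outer(singlet, singlet.conj())

    def is_entangled(rho):
        # Peres-Horodecki (PPT) test: necessary and sufficient for two qubits
        rho_tb = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
        return np.linalg.eigvalsh(rho_tb).min() < -1e-12

    def max_chsh(rho):
        # Horodecki criterion for the maximal CHSH value of a two-qubit state
        T = np.array([[np.real(np.trace(rho @ np.kron(si, sj)))
                       for sj in paulis] for si in paulis])
        s = np.linalg.svd(T, compute_uv=False)
        return 2 * np.sqrt(s[0] ** 2 + s[1] ** 2)

    for p in [0.2, 0.4, 0.6, 0.75]:
        rho = p * rho_singlet + (1 - p) * np.eye(4) / 4   # Werner state
        print(f"p = {p:4.2f}  entangled: {is_entangled(rho)}  max CHSH = {max_chsh(rho):.3f}")

Running this shows a window of p (between 1/3 and 1/sqrt(2)) where the state is entangled but cannot violate CHSH.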

In practice, if one builds a quantum information processing device in the laboratory, one will never create maximal entanglement between qubits. For example, in a quantum dot computer the spin singlet-triplet splitting is switched on and off in order to swap electronic spins. However, the possibility of electronic double occupancy can reduce this entanglement, although not fatally, as discussed here.
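
To give a feel for the numbers, here is a minimal sketch (an idealised illustration that ignores double occupancy and decoherence) of how much entanglement an exchange pulse of a given duration generates between two spins initially in the state |up, down>. A square-root-of-SWAP pulse is maximally entangling, while a full SWAP generates none.

    import numpy as np

    # Exchange evolution between two spins (up to a global phase):
    #   U(theta) = cos(theta) I - i sin(theta) SWAP
    # theta = pi/4 is sqrt(SWAP), theta = pi/2 is a full SWAP.
    SWAP = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]], dtype=complex)
    I4 = np.eye(4, dtype=complex)

    def entanglement_entropy(psi):
        # Von Neumann entropy (in bits) of the reduced state of spin 1
        m = psi.reshape(2, 2)             # indices: (spin 1, spin 2)
        rho1 = m @ m.conj().T             # reduced density matrix of spin 1
        evals = np.linalg.eigvalsh(rho1)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log2(evals)))

    up_down = np.array([0, 1, 0, 0], dtype=complex)    # |up, down>

    for theta in np.linspace(0, np.pi / 2, 5):
        U = np.cos(theta) * I4 - 1j * np.sin(theta) * SWAP
        print(f"theta = {theta:5.3f}  entanglement = {entanglement_entropy(U @ up_down):5.3f} bits")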

So can we quantify how much entanglement is enough to be useful? Is there a lower bound on the amount of entanglement a gate must create? Is there some rough figure of merit?
I cannot find any discussion of this basic question in the literature. I scanned through the nice Reviews of Modern Physics article by Horodecki^4 but could not find anything.

Any ideas?

The above question was raised by one of the referees of a recent paper (with Laura McKemmish, Noel Hush, and Jeff Reimers) in which we calculate the entanglement between the electronic and nuclear degrees of freedom in the low-lying eigenstates of a model Hamiltonian for several simple molecules.
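
For readers curious about what such a calculation involves, here is a schematic sketch (the Hamiltonian and parameters below are illustrative, not those of the actual paper): diagonalise a two-state vibronic model, i.e. two diabatic electronic states coupled to a single harmonic mode, and compute the entanglement entropy between the electronic and nuclear degrees of freedom in the ground state.

    import numpy as np

    # Schematic two-state vibronic model (not the model Hamiltonian of the paper):
    #   H = (eps/2) sigma_z + delta sigma_x + omega a^dag a + g sigma_z (a + a^dag)
    # with the harmonic mode truncated to n_max levels.
    eps, delta, omega, g, n_max = 0.0, 0.5, 1.0, 0.7, 30

    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)     # boson annihilation operator
    I2, In = np.eye(2), np.eye(n_max)

    H = (np.kron(eps / 2 * sz + delta * sx, In)
         + np.kron(I2, omega * (a.T @ a))
         + g * np.kron(sz, a + a.T))

    evals, evecs = np.linalg.eigh(H)
    ground = evecs[:, 0].reshape(2, n_max)             # indices: (electronic, nuclear)

    # Schmidt (singular) values give the electronic-nuclear entanglement entropy
    p = np.linalg.svd(ground, compute_uv=False) ** 2
    p = p[p > 1e-12]
    print(f"ground-state electronic-nuclear entanglement = {-np.sum(p * np.log2(p)):.3f} bits")

Since the electronic subsystem has only two states, there are at most two Schmidt coefficients and the entanglement is bounded by one bit.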

3 comments:

  1. Hi Ross,

    If your 2-qubit gate is unitary then it turns out that any non-vanishing amount of entanglement is enough; see for example http://arxiv.org/abs/quant-ph/0207072 . Obviously, if your gates are only weakly entangling you will typically need more applications of them in your computation.

    If the gate is non-unitary (as will always be the case in practice, due to experimental noise or interactions with the environment), it turns out that entanglement measures are not the most useful way of expressing the "imperfectness" of the gate. Instead, the literature usually talks about an "error channel" (or a "CP map", to be more technical): e.g. we could model the gate as a perfect unitary followed by an error channel that, say, flips both qubits with probability p_2 (although more complicated errors are usually expected). In that case, there has been a lot of theoretical work to establish how small the error probability needs to be before a large-scale quantum computer can be built using your gates. The current state of the art is that the error probability (per gate) needs to be less than around 1 percent. (See http://iopscience.iop.org/1367-2630/9/6/199 and http://www.nature.com/nature/journal/v434/n7029/full/nature03350.html ).

    I guess one could convert these error-channel models into some entanglement measure, but the story is somewhat complicated by the fact that the amount of entanglement a gate yields actually depends on the input state used.

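As a toy illustration of the "perfect gate + error channel" picture in the comment above, the sketch below applies a perfect CNOT and then a two-qubit depolarising channel with error probability p2, and tracks the entanglement of the output using the Wootters concurrence. (A depolarising channel is used instead of the double bit-flip mentioned above only because the Bell state produced in this particular example happens to be unaffected by flipping both qubits.)

    import numpy as np

    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)

    def concurrence(rho):
        # Wootters concurrence of a two-qubit density matrix
        rho_tilde = YY @ rho.conj() @ YY
        lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
        return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

    # Ideal gate: a perfect CNOT acting on |+>|0>, which produces a Bell state
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    plus0 = np.kron([1, 1], [1, 0]) / np.sqrt(2)
    bell = CNOT @ plus0
    rho_ideal = np.outer(bell, bell.conj())

    # Noisy gate: the perfect CNOT followed by a two-qubit depolarising channel
    for p2 in [0.0, 0.01, 0.1, 0.5, 0.7]:
        rho = (1 - p2) * rho_ideal + p2 * np.eye(4) / 4
        print(f"p2 = {p2:4.2f}   concurrence = {concurrence(rho):.3f}")

For this particular channel and input, the concurrence falls roughly linearly with p2 and vanishes entirely for p2 above 2/3.
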
  2. If the arguments put forward recently by del Rio et al.* are correct, then shouldn't the entanglement be related to the optimal amount of work extractable from, or required by, a subsystem in order to reset its state (i.e. to erase it)?

    *del Rio, L., Åberg, J., Renner, R., Dahlsten, O., & Vedral, V. (2011). The thermodynamic meaning of negative entropy. Nature, 474(7349), 61–63. doi:10.1038/nature10123

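The quantity at the heart of the del Rio et al. result mentioned in the comment above is the conditional von Neumann entropy H(S|M) = S(rho_SM) - S(rho_M): roughly speaking, the work cost of erasing a system S given access to a quantum memory M scales as kT ln2 times H(S|M), and this is negative when S and M are entangled, so work can then be extracted. Here is a minimal sketch of that quantity for two arbitrarily chosen illustrative states (not examples from the paper):

    import numpy as np

    def vn_entropy(rho):
        # Von Neumann entropy in bits
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log2(evals)))

    def conditional_entropy(rho_sm):
        # H(S|M) = S(rho_SM) - S(rho_M), for two qubits S and M
        rho_m = np.trace(rho_sm.reshape(2, 2, 2, 2), axis1=0, axis2=2)   # trace out S
        return vn_entropy(rho_sm) - vn_entropy(rho_m)

    # (a) S maximally mixed and uncorrelated with the memory M: H(S|M) = +1 bit
    rho_uncorrelated = np.kron(np.eye(2) / 2, np.diag([1.0, 0.0]))

    # (b) S maximally entangled with the memory M: H(S|M) = -1 bit
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    rho_entangled = np.outer(bell, bell)

    for label, rho in [("uncorrelated", rho_uncorrelated), ("entangled", rho_entangled)]:
        print(f"{label:13s}  H(S|M) = {conditional_entropy(rho):+.2f} bits")
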
  3. Thanks for these extremely helpful comments. They have saved me a lot of time.
