Researchers have recently uncovered a new approach to correcting errors in quantum computer calculations, potentially removing a substantial barrier to a powerful new field of computing.
A SciTechDaily report describes “error correction” as a “well-developed subject in traditional computers.” Every cellphone needs checks and adjustments to send and receive data over messy airwaves.
Essentially, quantum computers have immense potential to tackle complicated problems conventional computers cannot, but this capacity relies on harnessing the tremendously fleeting behavior of subatomic particles.
Such computing behaviors are so ephemeral that even inspecting them for problems might collapse the entire system.
(Photo: THOMAS KIENZLE/AFP via Getty Images)
A photonic chip for quantum computing lies under a microscope.
The Challenge to Quantum Computers
An interdisciplinary team led by Jeff Thompson, an associate professor of electrical and computer engineering at Princeton University, with collaborators Yue Wu and Shruti Puri at Yale University and Shimon Kolkowitz at the University of Wisconsin-Madison, showed in a theoretical study published in Nature Communications that they could significantly enhance a quantum computer’s tolerance for faults and reduce the amount of redundant information needed to isolate and fix errors.
The new method quadruples the acceptable error rate, from one percent to four percent, bringing it within reach of quantum computers presently being developed.
Thompson explained that the fundamental challenge to quantum computers is that the operations one would want to do are noisy. This means that calculations are prone to myriad modes of failure.
Working with a Traditional Computer
In a traditional computer, an error can be as simple as a bit of memory accidentally flipping from a 1 to a 0, as when one wireless router interferes with another.
A common method for handling such defaults is to develop redundancy so that every piece of data is compared with duplicate copies.
Nevertheless, that approach increases the amount of data needed and creates more probabilities for errors.
Consequently, it only works when the vast majority of the information is already correct. Otherwise, checking wrong data against wrong data leads ever deeper into error.
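The redundancy scheme described above can be illustrated with the simplest classical error-correcting code, a repetition code with majority voting. This is a generic textbook sketch, not the researchers’ method; the noise model (independent random bit flips) is an assumption for illustration:

```python
import random

def encode(bit, copies=3):
    """Repetition code: store several duplicate copies of each bit."""
    return [bit] * copies

def transmit(codeword, error_rate):
    """Flip each copy independently with the given probability (toy noise model)."""
    return [b ^ (random.random() < error_rate) for b in codeword]

def decode(codeword):
    """Majority vote across the copies recovers the original bit --
    provided most copies are still correct."""
    return int(sum(codeword) > len(codeword) / 2)

random.seed(0)
trials = 10_000
ok_low = sum(decode(transmit(encode(1), 0.01)) == 1 for _ in range(trials))
ok_high = sum(decode(transmit(encode(1), 0.40)) == 1 for _ in range(trials))
print(ok_low / trials)   # near 1.0: redundancy helps when errors are rare
print(ok_high / trials)  # much worse: checking wrong data against wrong data
```

The contrast between the two error rates shows why redundancy only pays off below a threshold: once too many copies are corrupted, the majority vote itself becomes unreliable.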
Erasure errors, as described in the Princeton University report, are well understood in conventional computing, although researchers had not previously considered engineering quantum computers to convert errors into erasures, according to Thompson.
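Erasure errors are easier to handle than ordinary errors because the location of the lost data is known. A minimal classical sketch of this idea, assuming a single parity bit and one flagged erasure (a standard textbook construction, not the team’s quantum scheme):

```python
def add_parity(bits):
    """Append one parity bit so the XOR of the whole codeword is 0."""
    parity = 0
    for b in bits:
        parity ^= b
    return bits + [parity]

def recover_erasure(codeword):
    """If exactly one position is erased (None) -- i.e. we KNOW where the
    loss happened -- the parity constraint pins down its value."""
    missing = [i for i, b in enumerate(codeword) if b is None]
    assert len(missing) == 1, "one parity bit handles one known erasure"
    parity = 0
    for b in codeword:
        if b is not None:
            parity ^= b
    fixed = list(codeword)
    fixed[missing[0]] = parity
    return fixed

data = [1, 0, 1, 1]
sent = add_parity(data)       # [1, 0, 1, 1, 1]
sent[2] = None                # the channel erases a bit but flags WHERE
print(recover_erasure(sent))  # [1, 0, 1, 1, 1]
```

By contrast, correcting a flipped bit at an unknown position requires far more redundancy, since the decoder must first locate the error. This is the intuition behind converting quantum errors into erasures: flagging where an error occurred makes it much cheaper to fix.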
As a practical matter, the proposed system could withstand an error rate of 4.1 percent, which Thompson said is well within the realm of probability for existing quantum computers.
In past systems, state-of-the-art error correction could handle error rates below one percent, which, according to Thompson, is at the edge of the capability of any existing quantum system with a large number of qubits.
The team’s ability to generate erasure errors turned out to be an unforeseen advantage of a choice Thompson made years earlier.
His study explores “neutral atom qubits,” in which quantum information, a qubit, is stored in one atom.
They pioneered the use of the element ytterbium for this specific purpose. Thompson explained the group chose ytterbium partly because it has two electrons in its outermost electron shell, whereas most other neutral-atom qubits have only one.