COHORT 1 (2014 entry)
Quantum computers are perhaps the most transformative potential quantum technology, but what separates their potential power from that of classical computers remains poorly understood.
One of the big questions in the field of quantum computation is where the quantum computational improvement comes from. Recent works have suggested that a type of non-classical behaviour, called contextuality, might play an important role in quantum speed-up; however, many open questions remain, particularly for computation based on quantum bits (qubits), the main architecture for quantum computing.
My project aims to clarify these questions by studying whether contextuality is indeed a resource for quantum computation. The initial focus is on particular schemes of quantum computation, important in fault-tolerant architectures, called state-injection schemes. In such schemes, we can consider certain states as “resources” and study and quantify their properties. The project requires a combination of interdisciplinary techniques from physics and computer science; to develop this work I am therefore collaborating with Dr Nadish de Silva from the Department of Computer Science at UCL.
Just as classical thermodynamics allowed for a better understanding of engines, leading to improved efficiency of machines based on classical principles, as quantum technology becomes an increasingly mature field, quantum thermodynamics will be needed to better understand how to optimise quantum devices.
Much work remains to be done in translating ideas from classical thermodynamics into a quantum framework. One property understood classically, but not yet fully developed in the quantum setting, is entropy production. This offers a measure of irreversibility, a foundational concept still not fully understood, although the questions have persisted since the advent of classical thermodynamics.
I am using the stochastic Liouville-von Neumann (SLN) equation as a tool with which to explore the thermodynamic properties of quantum systems coupled to environments. In addition to seeking quantum entropy production, I have also used the SLN to model properties such as heat flow which will be of particular use in developing better quantum systems.
I will be extending this work in collaboration with researchers at the University of Turku, when I go to spend time on a placement there. The work is intended to apply the SLN to model proposals for quantum heat engines and quantum refrigerators, which involve coupling a quantum system to two or more heat baths. While in Finland, I will also have the opportunity to meet Jukka Pekola and his group, who work on experiments in quantum thermodynamics.
My research goal is the development of a new system for biomedical imaging. Our device aims to exploit the advantages of quantum sensors to produce conductivity maps of living tissue via electromagnetic induction imaging; such conductivity maps are not currently available.
Direct imaging of conductivity is applicable to many medical conditions. In particular, atrial fibrillation (a common irregular heartbeat) is thought to be caused by permanent conductivity changes on the surface of the heart. A conductivity map of the region would aid diagnosis and treatment, along with investigation of the fundamental causes. Other applications include the diagnosis of tumours, where the conductivity of the tumour has been shown to differ significantly from that of the surrounding healthy tissue.
The system we are developing is based on optical atomic magnetometers. These are extremely sensitive magnetic field sensors which measure the response of a quantum state to magnetic fields. Advantages over other sensors include room-temperature operation in unshielded environments and the fact that no calibration is required. We use them to measure the magnetic field response of a target object to an oscillating magnetic field. Compared to other medical imaging techniques, this system is completely non-invasive and non-contact. Thus far, we have developed a prototype device capable of accurately imaging target objects with a wide range of conductivities.
We have an ongoing collaboration with the UCL Heart Hospital in order to understand the requirements of a future device. We also collaborate with industrial partners at NPL and Thales UK for their advice and experience in delivering new technologies. In addition, we have received funding for closely related projects from the Wellcome Trust, DSTL, and the Home Office.
Quantum key distribution (QKD) is one of the most mature quantum technologies and one of the main candidates for replacing current cryptographic schemes, which are vulnerable to attacks by adversaries using quantum computers. QKD will be needed once large quantum computers become available.
QKD needs to be secure not just in the current technological environment but also in 20+ years' time, when it becomes necessary in order to communicate securely in the presence of quantum computers. In this project we determine the security of QKD in an environment where adversaries can make more accurate measurements than allowed for by current theory (for instance, better distinguishing conjugate polarisations). More specifically, we consider the possibility that greater measurement capabilities are possible, corresponding to greater levels of experimental control which may arise from the development of large-scale quantum computers. We consider these better measurement capabilities to be described by theories which are close to quantum theory (same dynamics) but with different measurement update rules.
The output of this project is several QKD schemes (which can be run on current hardware) which are secure against post-quantum attacks (both incoherent and collective), together with an analysis of their efficiency. This will allow us to determine how future-proof QKD is, in the sense of establishing how resistant it is to advances in technology and physics. Emphasis is placed on gauging the efficiency of these schemes. Moreover, as part of the project we will develop the theoretical tools needed to describe attacks on QKD in arbitrary post-quantum theories. This toolbox will then be usable for arbitrary post-quantum attacks, not just the family considered in this project. Possible extensions to this project consist in exploring key distribution when Alice and Bob also have access to post-quantum capabilities (that is to say, not restricting our analysis to current hardware but admitting the possibility of using these better measurements to strengthen the scheme).
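To illustrate the kind of protocol under analysis, the sifting step of BB84 (the canonical prepare-and-measure QKD scheme using conjugate polarisations) can be sketched in a few lines. This is a generic textbook sketch with no eavesdropper, not the project's security analysis:

```python
import random

def bb84_sift(n, seed=0):
    """Simulate the sifting step of BB84: Alice and Bob keep only those
    rounds in which their randomly chosen measurement bases agree."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    # With no eavesdropper and matching bases, Bob recovers Alice's bit exactly,
    # so the sifted key is Alice's bits at the matching positions.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]

key = bb84_sift(1000)
print(len(key))  # roughly half the rounds survive sifting
```

A post-quantum adversary with sharper measurements would attack the quantum transmission preceding this classical step; the sifting logic itself is unchanged.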
Quantum computation is the holy grail of quantum technologies, and rightly so – it would allow the solution of a whole new class of otherwise intractable problems. While there are no known theoretical reasons why this shouldn’t happen, implementing a ‘useful’ quantum computer is still widely agreed to be a long way off. However, one of the most useful tasks such a quantum computer could perform is in fact to simulate another complex quantum system, and it turns out that this greatly relaxes the technical requirements. Such a quantum simulator could eventually allow the study and development of key chemicals and materials such as drugs and high-temperature superconductors.
The greater ease of achieving this comes from moving to an analogue regime – here, the physical architecture of the simulator directly mimics the target model, and small errors have no impact on the large-scale features that one is interested in. This allows the ‘computation’ to take place with both fewer and lower-quality components. Many great results have already been achieved in this field, notably using trapped ions and optical lattices of ultra-cold atoms. These are, however, somewhat limited in the type of systems they can simulate, a problem that a solid-state simulator might alleviate. Here the components are embedded in an electrical solid, a more established technology that might be more compact and scalable.
This project takes the context of solid state simulators and looks at what is possible now, for example, verifying a quantum state with limited measurements. It also looks more widely at what aspects of many body quantum physics a simulator might shed light on, for example, the breakdown of statistical mechanics due to many-body localization.
In recent years, new numerical methods have been developed which have transformed our understanding of many-body quantum systems. Using insights from quantum information theory, methods have been developed which take advantage of weakly entangled quantum states called matrix product states. These methods have been used to determine ground-state energies and time evolution in one dimension. They have the advantage of working well even when systems are strongly interacting. These numerical methods form a powerful theoretical analogue to experimental work on quantum simulators such as cold atomic gases, since both can easily capture unusual physics or the behaviour of a system that is suddenly put out of equilibrium.
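To give a flavour of the idea (a standard textbook example, not taken from the project itself): an N-qubit GHZ state has an exact matrix product state representation with bond dimension 2, so all 2^N of its amplitudes are generated by products of 2×2 matrices, one per site:

```python
import numpy as np
from itertools import product

def ghz_amplitudes(n):
    """Amplitudes of the n-qubit GHZ state (|00...0> + |11...1>)/sqrt(2),
    built as a matrix product state with bond dimension 2."""
    # Site tensor: one 2x2 matrix per physical spin value.
    A = {0: np.diag([1.0, 0.0]),   # spin 0 keeps the "all zeros" bond channel alive
         1: np.diag([0.0, 1.0])}   # spin 1 keeps the "all ones" bond channel alive
    bl = np.array([1.0, 1.0])      # boundary vectors closing the bond indices
    br = np.array([1.0, 1.0])
    amps = {bits: bl @ np.linalg.multi_dot([np.eye(2)] + [A[s] for s in bits]) @ br
            for bits in product([0, 1], repeat=n)}
    norm = np.sqrt(sum(a ** 2 for a in amps.values()))
    return {bits: a / norm for bits, a in amps.items()}

amps = ghz_amplitudes(4)
print(amps[(0, 0, 0, 0)], amps[(1, 1, 1, 1)])  # both equal 1/sqrt(2); all others vanish
```

Storing the two 2×2 matrices costs a constant number of parameters per site, whereas the full state vector grows as 2^N; this compression is what makes the methods described above tractable for weakly entangled states.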
In this project we will try to expand this work on matrix product states in two directions. Firstly, while these numerical methods work well in one dimension, they currently do not work as well when extended to two or higher dimensions. In this project we will use a new approach to look at time evolution in out-of-equilibrium systems in two dimensions. This method uses a special cylindrical geometry, but it still allows us to calculate universal thermodynamic quantities in two dimensions and develop new insights into the way in which quantum entanglement propagates in quantum systems. This may offer a new understanding of how quantum entanglement behaves in quantum computers.
The second strand of this project is to take matrix product states and apply them to more precise analytic work in one dimension. By building a quantum field theory over these matrix product states we are able to take a more physically intuitive approach than is normally possible. By doing this we will not only be able to reproduce previous results in a simpler manner but also more easily achieve new insights in models where quantum entanglement is significant. These analytic results can also be used to enhance the power of existing numerical methods by simplifying the numerical calculations. This approach seems especially likely to offer insight into adiabatic quantum computation.
Quantum computation aims to employ the laws of quantum physics to solve hard computational problems. Although the computational capacity and the speed of classical computers have increased significantly over the last 50 years, many real-world problems such as modelling DNA and protein folding remain hard for classical computers. Building a quantum computer could enable problems of this type to be tackled more efficiently.
Building a universal quantum computer which is able to run any task is still a distant goal. In the medium term, special quantum devices that are able to perform specific tasks could be built. The currently available adiabatic quantum annealer from D-Wave Systems is an example of such a device. The processor is designed and built to exploit quantum fluctuations to solve a class of problems known as optimization problems. For this type of problem, there are many potential solutions to the same problem and the aim is to find the best solution, the one that minimizes the problem's cost function. The typical optimization problems solved by this processor so far are relatively easy, as they can be solved with a standard classical computer.
The annealer is constructed out of superconducting flux qubits. Each qubit is a superconducting loop interrupted by a SQUID (superconducting quantum interference device). The coupling on the current annealer is quadratic – i.e., each coupling involves two qubits – which makes it hard to solve cubic or higher-order problems. Many real-world problems involve many-body (higher-order) coupling terms; accordingly, building an annealer based on multi-body coupling could provide an efficient tool to solve this set of problems.
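To make the quadratic cost function concrete (a toy textbook Ising instance, not one of D-Wave's actual problem encodings): with two-body couplings the energy of a spin configuration is E(s) = Σᵢ hᵢsᵢ + Σᵢⱼ Jᵢⱼsᵢsⱼ with sᵢ = ±1, and for a small instance the ground state the annealer seeks can be checked by brute force:

```python
import itertools

def ising_energy(spins, h, J):
    """Quadratic Ising cost function: local fields h[i] plus
    two-body couplings J[(i, j)], the form a quadratic annealer minimises."""
    E = sum(h[i] * s for i, s in enumerate(spins))
    E += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return E

def brute_force_ground_state(n, h, J):
    """Exhaustively search all 2^n configurations for the minimum energy."""
    return min(itertools.product([-1, 1], repeat=n),
               key=lambda s: ising_energy(s, h, J))

# Toy instance: a 3-spin antiferromagnetic chain (made up for illustration).
h = [0.0, 0.0, 0.0]
J = {(0, 1): 1.0, (1, 2): 1.0}
print(brute_force_ground_state(3, h, J))  # alternating spins, energy -2
```

A cubic term such as K·s₀s₁s₂ cannot be written in this quadratic form without auxiliary spins, which is exactly the limitation the three-body coupler aims to remove.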
In this project we aim to design, fabricate and test a prototype three-body-coupler-based quantum annealer with a small number of qubits. The proposed new design of this prototype helps reduce the complexity of the annealer circuit compared to the currently available commercial one. In parallel, we have been in contact with D-Wave Systems to establish an academic–industrial collaboration so that we can build more skills in this emerging field of quantum technology.
The universe is a continuous object, so physicists tend to use continuous mathematics to study it. However, over the past two decades the study of how quantum mechanical systems can be used to store, communicate and process information has brought together theoretical physicists, information theorists, and computer scientists, and caused considerable mathematical “cross-pollination” between the fields. Indeed, in recent years, the application of discrete mathematical tools more commonly used in theoretical computer science to problems from quantum information has proved fruitful, with breakthrough results related to the information capacity of quantum communication channels and in more foundational questions regarding “contextual” correlations between quantum systems.
In this project we are attempting to further explore this link between discrete mathematical structures and the physical world. So far we have developed a graph theoretic framework for studying the properties of quantum systems that have become entangled with one another. The framework reveals some interesting structural properties present in entangled states which we would not have noticed if we had used the usual methods of mathematically representing the state of a quantum system. This structural perspective is likely to be useful in developing a greater understanding of entanglement itself, which is vital for a number of quantum cryptographic and computational tasks.
In short, the project aims to explore how discrete mathematics can be used to gain new insights on theoretical problems related to quantum information processing and quantum technologies.
My research project is devoted to the study of thermodynamics for microscopic systems in contact with imperfect and finite-sized thermal reservoirs. While in many physical situations the assumption of an infinite thermal reservoir is well justified, there are cases, especially when we deal with quantum technologies, in which the environment has to be considered finite-sized and cannot be thought of as being in equilibrium at a specific temperature. For instance, there are situations in which the evolution of the quantum system under examination (for example, a qubit in a quantum computer) is so fast that it has to be considered either isolated from the environment or interacting with just a small portion of its surroundings.
In other situations, instead, the energy and entropy exchanged with the environment are enough to modify the surroundings, which again have to be assumed to be finite-sized. Furthermore, the environment is not always in perfect equilibrium at a given temperature, and we have to account for these imperfections while considering the evolution of the system.
My research is focused on the analysis of these (more realistic) situations, which can be approached with the tools of quantum information theory and resource theory. For example, I have developed (together with Tobias Fritz, Max Planck Institute, Leipzig) a resource theory for quantum thermodynamics in which the thermal bath is finite. This theory provides information on the amount of work and heat necessary to perform thermodynamic state transformations when the environment has a finite size. At the moment, instead, I am working (together with David Jennings, Imperial College, London) on the analysis of microscopic heat engines which exploit partially out-of-equilibrium reservoirs. These engines allow us to extract work from a single non-thermal reservoir in a cyclic way (without violating the second law of thermodynamics).
COHORT 2 (2015 entry)
The goal of my PhD project is to carry out simulations of coupled transmon qubits in superconducting systems in order to find topologically protected states which may prove useful for quantum computation. Superconducting quantum devices are now routinely being built in groups around the world and my theoretical studies are intended to help guide these experimental efforts such as in the development of qubits which are more robust against decoherence.
I will be carrying out this work under the supervision of Marzena Szymanska as part of the Quantum Collective Dynamics in Light Matter Systems group at UCL with additional guidance from Eran Ginossar who is a member of the Advanced Technology Institute at the University of Surrey. In addition we intend to work as closely as possible with experimental collaborators such as Peter Leek at the University of Oxford with whom we are already in the process of studying bistability in circuit-QED for transmon qubit readout.
In order to be as productive as possible it is our intention to explore systems which have either already been built or are feasible to construct in the near term. Therefore input from experimentalists working in this field will be vital. In addition we have theory collaborators upon whose expertise we may draw such as Eytan Grosfeld at Ben Gurion University in Israel.
The field of quantum information processing is highly exciting at present, and superconducting qubits are certainly one of the leading technologies in this field. Drawing together the experience of all our collaborators, we intend to be as productive as possible in advancing their development.
Cavity optomechanics is a large field studying the interaction between an optical field and mechanical motion. By using lasers appropriately, the effective temperature of a mechanical oscillator (its motion) can be increased or decreased. Quantum mechanics predicts a minimum final temperature that can be reached. While the theory of quantum mechanics is very successful at the microscopic scale, many of its predicted behaviours have not yet been observed at the nanoscale. This is mainly assumed to be caused by thermal decoherence (heating) from the environment. In levitated optomechanics, thermal decoherence is greatly reduced by increasing the isolation of the particle in a vacuum chamber. In the UCL experiment, charged particles are levitated in a Paul trap, which enables them to be kept trapped at lower pressures than those achieved with standard optical tweezers.
The goal of my work will first be to cool the particle as much as possible. Different directions include changing the cavity design, changing the Paul trap field, or applying feedback cooling. If the ground state is reached, different experiments could be designed to show evidence of quantum behaviours. These include superposition of the centre-of-mass motion and mechanical Fock states (direct evidence of the quantization of the mechanical motion). Studying these physical properties at the nanoscale is a key step for quantum technologies, since it enables us to better understand these phenomena at intermediate scales. If the ground state is not reached, experiments on collapse models (physical models which try to solve the measurement problem by adding a nonlinear term and a stochastic term to the Schrödinger equation) could be realised. Moreover, in the current set-up one of the particle's motional frequencies depends on its fundamental parameters. Using this trap as a spectrometer could be further explored.
These devices are not only interesting from a fundamental perspective, offering a better understanding of quantum behaviour at macroscopic scales; they are also potential candidates for many quantum technologies. Applications include frequency conversion, conversion between mechanical motion and light, and spectrometry (mass-to-charge ratio and polarisation-to-shape), among others.
Quantum computers are perhaps the most challenging but also the most exciting development in quantum technologies. These devices hold the promise of exponential speedups over classical computers, including problems like factoring, the ‘hardness’ of which underlies most modern cryptographic protocols.
A computing device is called fault-tolerant if it is designed in a way that makes it robust to random errors caused by faults in the components or interference from the environment. These ideas have successfully been extended to the domain of quantum computing, both in theory and most recently in successful experimental demonstrations, including experiments at Google and IBM. This continued progress in experimental control raises the exciting prospect of first generation quantum computing systems. These early devices will have limited physical resources, and so an important challenge is to minimise the resources used in a given computation to make these early systems as powerful as possible.
This process of optimising quantum computation is commonly called quantum gate synthesis, but can also be conceptualised as a quantum analogue of classical compilation, where plain-text code is first optimised and then translated to machine code.
An important caveat of these fault-tolerant constructions is that they limit the kind of operations we can perform, at the cost of making those operations ‘safe’ to realise. This restriction is acceptable as long as we can realise a ‘universal’ set, such that we can build up any computation from these fixed elementary operations. In current fault-tolerant designs, we are restricted to performing a set of operations called the Clifford group; unfortunately, it is known that these operations alone are not universal, and so we need ways to implement additional operations that can ‘boost’ us to universality. This is done using extra ancillary or ‘resource’ qubits to carry out these operations.
The problem in quantum compilation for fault-tolerant designs is thus: how can we implement a given computation with as few non-Clifford operations as possible, and with the smallest number of overall operations? The current state of the art has led to good techniques for building single-qubit operations out of Clifford gates and ‘T’-gate ancillae. In this project, we examine the potential computing power of alternative resource states, with the aim of finding more efficient implementations and developing novel techniques for building multi-qubit operations.
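The relationship between the Clifford group and the T gate can be made concrete with the standard single-qubit gate matrices (a textbook check, independent of the project's specific constructions):

```python
import numpy as np

# Standard single-qubit gates (well-known exact matrices).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard (Clifford)
S = np.array([[1, 0], [0, 1j]])                       # phase gate (Clifford)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])   # non-Clifford

# T is a "square root" of S: two T gates compose to one Clifford S gate,
# which is why a T gate (or T-state ancilla) is the standard resource
# that boosts the Clifford group to universality.
assert np.allclose(T @ T, S)

# Clifford gates map Pauli operators to Pauli operators under conjugation.
# T does not: it sends X to (X + Y)/sqrt(2), which is not a Pauli operator,
# confirming that T lies outside the Clifford group.
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
TXTdag = T @ X @ T.conj().T
assert np.allclose(TXTdag, (X + Y) / np.sqrt(2))
```

Counting how many T gates (or alternative resource states) a circuit consumes is then the natural cost metric for the compilation problem described above.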
Machine learning algorithms already play a major role in our everyday lives, with applications as varied as spam mail filters, bioinformatics, financial risk evaluation and many more. These algorithms learn a relation between input and output data from examples, which is then applied to classify new input data.
With the ever-increasing proliferation of available data globally, machine learning algorithms need to remain computationally tractable in this so-called era of big data. Both in academia and industry, high-performance computing (HPC) resources are now required for the most cutting-edge learning applications, such as DeepMind’s 2016 result in training a machine to play Go at the international level.
Given the importance of learning problems and how resource-hungry they are, a natural question to ask is whether an advantage can be gained by using quantum computation. Algorithms using quantum computers have been shown to display dramatic speedup over their classical counterparts. For instance, Shor’s factoring algorithm is exponentially faster than the best classical method.
Currently, there are only a few algorithms for which quantum superiority is conclusively established, primarily based on Grover’s quantum search algorithm. What this PhD project aims to achieve is to provide practically feasible quantum algorithms for machine learning which are demonstrably better than their classical equivalents.
More specifically, the aim is to extend the classical idea of kernel methods into the quantum realm. Kernel methods allow linear relations between vectors to be derived from non-linear relations between arbitrary objects. Machine learning algorithms work best at establishing linear patterns between vectors, thus the kernel trick allows learning to be carried out on a much wider class of data, to a better degree of accuracy. The goal here is to devise and show superiority of quantum kernel algorithms.
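The kernel trick can be illustrated with a minimal classical sketch (a standard example, not the quantum algorithm this project seeks): a degree-2 polynomial kernel computes inner products in an implicit quadratic feature space without ever constructing that space explicitly:

```python
import numpy as np

def poly2_kernel(x, y):
    """Degree-2 polynomial kernel: an inner product in an implicit
    feature space of all pairwise products, computed in O(d) time."""
    return np.dot(x, y) ** 2

def phi(x):
    """The explicit feature map the kernel implicitly uses:
    all pairwise products x_i * x_j (O(d^2) components)."""
    return np.outer(x, x).ravel()

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])

# The kernel trick: <phi(x), phi(y)> == k(x, y), so a linear algorithm
# run on the kernel values behaves as if it operated on the nonlinear
# features, without paying the cost of building them.
assert np.isclose(np.dot(phi(x), phi(y)), poly2_kernel(x, y))
```

A quantum kernel method would aim for an analogous shortcut, with the implicit feature space realised by quantum states rather than classical vectors.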
This project aims to augment the already established technology of machine learning with the expanding field of quantum technologies.
Superconductor–spin hybrid quantum systems form a vibrant and modern field, combining the long coherence times offered by natural spin systems with the fast control and convenient electrical interfacing that comes with superconducting devices. By taking advantage of the best characteristics of each component, such a hybrid device forms a new route towards quantum information processing, offering compatibility with existing technologies and additional QIP capabilities for spin-wave qubit storage.
Rare-earth spins have been of particular interest due to impressively long coherence times exceeding 6 hours in some cases, and due to convenient transitions allowing simultaneous microwave and optical addressing of the spin state.
My contribution to this field focuses on coupling to rare-earth spins doped in a host of yttrium orthosilicate (YSO), a crystal with very low nuclear spin density offering great potential for long coherence times. Almost no work has been done on construction of superconducting devices on YSO as a substrate, so the first goal of this project is to construct a superconducting resonator on samples of doped YSO and subsequently observe a strong coupling to the spin ensemble.
Such a device has applications towards various quantum technologies, including hybrid quantum computers with storage of multiple qubits, and microwave–optical transducers forming a quantum network of quantum processors.
This project forms a collaboration with NPL, with Tobias Lindström as a secondary supervisor, providing a breadth of knowledge behind SC resonators and sources of loss in such systems, as well as access to specialised equipment for probing their behaviour at ultra-low temperatures and powers.
Rydberg atoms are atoms whose outer electron has been excited to a high lying energy state. This gives the atoms a number of useful and unusual properties, both for fundamental research (e.g. the 2012 Nobel Prize in Physics) and, increasingly, for applications in quantum technologies.
The research project consists of coupling atoms in circular Rydberg states to chip-based coplanar microwave resonators. Initial experiments will include coupling the atoms to a coplanar waveguide in order to drive transitions between the two qubit states, studying the effect of atom-surface interactions formed by patch potentials and coupling the atoms to a superconducting microwave resonator.
The aim of the PhD is to work towards implementing hybrid systems, such as coupling atoms in circular Rydberg states to superconducting qubits, in order to realise, amongst other things, an optical-to-microwave frequency converter, a component which will be required for future networked quantum computers.
The aim of the project is to develop improved tools for the numerical simulation of how quantum systems made up of multiple smaller systems change with time under various conditions. This will involve both analytical and computational work. In particular, the tools will model the effect of interactions with the systems' environment. This is typically a computationally slow task with current numerical tools; however, using insights from recent theoretical results, it should be possible to model the environmental effects with far fewer computational resources.
The numerical tools I will produce will be capable of simulating quantum computers (since these consist of quantum systems made up of multiple smaller systems). One paradigm of quantum computing, adiabatic quantum computing, has been shown to be characterized in terms of ‘entanglement’. Entanglement is a fundamental aspect of what makes quantum mechanics different from classical descriptions of nature, and is responsible for the non-intuitive effects that Einstein described as ‘spooky action at a distance’. The tools developed in this project will be capable of tracking how entanglement changes during the operation of an adiabatic quantum computer. At present there is no method of tracking the evolution of entanglement in adiabatic quantum computing protocols, so it is likely that we will be able to make a significant contribution in this regard.
These tools will enable us to draw conclusions about environmental degradation in real-life quantum computing hardware, for example the devices sold by D-Wave Systems Inc. Therefore the results will have direct relevance to existing quantum technologies. The tools developed in this project will also be applicable to modelling any future quantum technologies that involve many-body quantum system dynamics.
As well as this, it is hoped that during the course of the project these techniques can be applied to help better understand the evolution of entanglement in continuous-time quantum technology protocols, specifically protocols that rely on adiabatic quantum computing.
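The quantity to be tracked can be illustrated with a minimal sketch (a standard textbook measure, not the project's numerical method): the entanglement entropy of one qubit of a two-qubit pure state, obtained from the partial trace of its density matrix:

```python
import numpy as np

def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of qubit A for a two-qubit pure
    state |psi>, given as a length-4 amplitude vector."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (a, b, a', b')
    rho_A = np.trace(rho, axis1=1, axis2=3)              # partial trace over qubit B
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]                         # drop zero eigenvalues
    return float(np.abs(np.sum(evals * np.log2(evals))))  # entropy is non-negative

product_state = np.array([1, 0, 0, 0], dtype=complex)            # |00>
bell_state = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
print(round(entanglement_entropy(product_state), 6))  # 0.0
print(round(entanglement_entropy(bell_state), 6))     # 1.0 (maximally entangled)
```

Tracking how this entropy grows across many bipartitions during an adiabatic sweep is the many-body version of the same calculation, which is where efficient numerical tools become essential.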
Shielding quantum technological devices from noise and interference, such as vibrations and electromagnetic waves, becomes increasingly important as we look to building large-scale quantum computers and quantum communication networks. Quantum systems are extremely sensitive to environmental noise, but in most cases we can protect them by adding vibration-damping materials and surrounding them with magnetic shields. However, one source of noise that we cannot shield our devices from is gravity. We have some evidence that gravity causes quantum systems to lose coherence, the property that allows for quantum computational speed-up and secure quantum communication. Many quantum systems, such as photons travelling large distances across the Earth through fibre-optic cables, are subject to gravity over relatively large timescales. Gravitational influence will cause the photons to lose coherence, meaning that the rate at which we can engage in secure quantum communication goes down. Because we do not know of any mechanism or material that can shield our systems from gravity, it becomes important to understand these effects in order to try and compensate for them.
For my PhD, I will be looking at describing low-energy gravitational effects on quantum systems by using mathematical techniques that model gravitational influence as a bath of interacting particles. By studying these phenomena, we can work out how much coherence is lost to gravity and how it will affect the quantum devices. Potentially, any results can also give rise to new methods that use quantum systems to make very precise measurements of gravity. Applications of gravity measurements include improved methods for geological studies and the oil and gas industry, enabling increased precision for acceleration-guided navigation, and finally a better understanding of gravitational waves and astrophysical phenomena.
In short, even the smallest quantum systems will never escape the influence of gravity, and so we must take it into account when building the next generation of quantum technological devices.
A quantum computer offers to solve problems which are intractable for any conventional computer. However, reaching large-scale quantum computation requires an extremely high level of control in fabrication and reproducibility, since the computation relies on fragile quantum states. In conventional microelectronics, namely the CMOS platform, there has been a twofold increase in transistors per processor every two years – known as Moore’s law – enabled by continuous advances in semiconductor fabrication techniques. Nowadays, transistors have reached dimensions at which quantum effects are observed.
The goal of this project is to realise high-fidelity spin qubits – the building blocks of a spin-based quantum computer – in CMOS-compatible devices. Such devices will benefit from the highly developed microelectronics fabrication infrastructure and allow integration into current technology. The CMOS platform should facilitate moving from proof-of-concept implementations to more practical large-scale devices.
The basis for this project is state-of-the-art silicon nanowire field-effect transistors fabricated by collaborators at CEA-LETI, which have shown great potential for realising spin qubits using quantum dots and donors. In this project, multiple routes for encoding a qubit in these devices will be investigated and benchmarked, including routes for scaling up to a large number of qubits.
An industrial partnership with the Hitachi Cambridge Laboratory has been set up, adding strong expertise in radio-frequency reflectometry, which is required for device characterisation and read-out.
Finally, this project builds upon strong low-temperature characterisation and spin-manipulation expertise within the Quantum Spin Dynamics Group at UCL.
Building a quantum computer is the holy grail of quantum technology. Quantum computers would allow us to tackle problems across science which are impossible to solve with our most powerful supercomputers. For example, with a quantum computer we could efficiently simulate chemical systems and predict chemical reaction rates. This would allow us to test and design new pharmaceuticals much faster than is presently possible.
Researchers have recently built small-scale quantum processors containing a handful of quantum bits (qubits). However, to unlock the full power of quantum computing we need to construct devices which contain many more qubits. In such a device the qubits will fail occasionally due to unavoidable control errors and noise from the environment. The field of fault-tolerant quantum computation aims to circumvent this problem by designing quantum computing protocols and architectures which will function correctly even in the presence of qubit errors.
The first step in designing a fault-tolerant protocol is to use a quantum error-correcting code. Such a code allows us to protect our quantum information by encoding it in the state of many qubits. Currently, the leading quantum error-correcting code is the surface code. The surface code is popular because it has a relatively simple design and a high error threshold. The error threshold is the maximum acceptable failure rate of the components of our quantum computing protocol: as long as the error rates are below the threshold, our quantum computer will function correctly.
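The core idea — that encoding one unit of information across many physical carriers can suppress errors as long as the component error rate is low enough — can be illustrated with the simplest classical analogue, a three-bit repetition code. This is only an illustrative sketch (the surface code itself is far more sophisticated, and quantum errors cannot be copied this way):

```python
import random

def encode(bit):
    """Encode one logical bit into three physical bits (repetition code)."""
    return [bit, bit, bit]

def apply_noise(bits, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    """Majority vote: recovers the logical bit if at most one flip occurred."""
    return int(sum(bits) >= 2)

random.seed(0)
p = 0.05          # component error rate, chosen below the "threshold"
trials = 100_000

# Unencoded: a single bit fails with probability p.
raw_failures = sum(apply_noise([0], p)[0] for _ in range(trials))

# Encoded: decoding fails only if two or more bits flip (~3*p**2),
# which is much rarer than p when p is small.
encoded_failures = sum(decode(apply_noise(encode(0), p)) for _ in range(trials))

print(raw_failures / trials)      # close to p
print(encoded_failures / trials)  # close to 3*p**2, i.e. much smaller
```

This is the threshold intuition in miniature: below a critical error rate, adding redundancy makes the logical failure rate lower than the physical one, and more redundancy makes it lower still.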
The surface code also has disadvantages, chiefly that when using this code we can only implement a limited number of quantum gates efficiently. A quantum computation consists of a series of quantum gates applied to a set of qubits. To realise the full power of quantum computation we must be able to perform a certain set of quantum gates, called a universal set. Unfortunately, the surface code does not support an efficient implementation of a universal set of quantum gates.
Recently, researchers have proposed new codes which support a universal set of quantum gates. Unfortunately, these codes have more complex architectures than the surface code and are therefore more difficult to build using current technology. They also appear to have lower error thresholds than the surface code.
I am investigating whether any of these newer codes can be truly competitive with the surface code or even surpass it. I would also like to find out whether some of the advantages of the newer codes can be adapted to the surface code, improving its performance. Finally, a long-term goal of my research is to discover new codes with better properties than the surface code for building a scalable quantum computer.
For the last 40 years, computing power has increased exponentially in accordance with Moore’s law. Moore’s law predicts that the number of transistors (the switches that represent the bits of computational logic) that can fit in a given space doubles every 18 months. This has required individual transistors to become smaller and smaller, with current transistors being under 10 nm across. The minimum size of a transistor is limited: when transistors are so small that they contain only a few atoms, they begin to behave differently and are no longer suitable for computational tasks.
To continue the increase in computing power, several paths are being examined. One of these is known as quantum computing. Quantum computing research attempts to replace the transistors of a traditional computer with quantum particles. These quantum particles would no longer behave like simple switches: they would be able to exist in superpositions of ‘on’ and ‘off’ simultaneously. If this could be achieved, quantum computers would be exponentially faster at performing certain tasks than traditional computers.
An important concern for any computer, including a quantum one, is how errors are corrected when they occur. Although there are well-understood theoretical schemes for performing quantum error correction, the difficulty lies in implementing them in the real world. My project aims to make a small-scale implementation of one of these error-correcting schemes, known as the surface code. To do this I will be using electrons as the quantum analogue of the classical transistor. Electrons have a property known as spin, which describes their direction of rotation. This quantity replaces the ‘on’ or ‘off’ states of the classical transistor. I will use a measurement (or probe) spin on a movable needle to determine the spins of groups of four electrons at a time. This process allows errors to be detected without interfering with the individual electrons. If successful, this would identify one possible route towards an error-corrected quantum computer.
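The trick of detecting an error without disturbing the individual electrons can be sketched with a classical toy model: a joint parity check on four spins reveals whether an even or odd number of them have flipped, but nothing about any single spin. This is only an illustrative analogue of the four-spin stabiliser measurement described above, not the quantum protocol itself:

```python
def parity(spins):
    """Joint parity of a group of spins (0 = up, 1 = down).

    Returns 0 for an even number of down spins, 1 for odd.
    Crucially, the result says nothing about which individual
    spin points which way -- the key to non-destructive detection.
    """
    return sum(spins) % 2

# Hypothetical group of four electron spins prepared with even parity.
group = [0, 0, 1, 1]
assert parity(group) == 0   # even parity: no error signalled

group[2] ^= 1               # a single spin-flip error occurs
assert parity(group) == 1   # odd parity flags that some error occurred,
                            # without identifying the flipped spin
```

In the real surface code, repeated parity-type (stabiliser) measurements of this kind pinpoint where errors have occurred across the whole array so they can be corrected.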