We explore basic protocols necessary to achieve fault tolerance for quantum computation in the cluster state and circuit
models. For the cluster state model we simulate the implementation of an arbitrary rotation using only measurements on
a decohered cluster state. Fidelity is used to quantify the accuracy of the initial cluster state, and a gate fidelity for the
arbitrary rotation is determined. In the circuit model we compare the accuracy of two methods that can be used to construct
a logical zero state appropriate for the [7, 1, 3] Steane quantum error correction code in a non-equiprobable Pauli operator
error environment: a fault-tolerant method, in which error correction is applied to seven qubits all in the state zero, and a
non-fault-tolerant method, in which the encoding gate sequence is implemented directly. We find that the second construction method,
in spite of its lack of "fault tolerance," outputs a seven-qubit state with a higher fidelity than the first (fault-tolerant)
method. However, the fidelity of the single qubit of stored information is nearly the same for the two
construction methods.
In this work we compare the accuracy of two methods used to construct a logical zero state appropriate for the [7, 1, 3] CSS
quantum error correction code in a non-equiprobable Pauli operator error environment. The first method is to apply error
correction, via syndrome measurement, to seven physical qubits all in the state zero. To perform the syndrome measurements
in a fault-tolerant fashion requires the construction of four-qubit Shor states. These Shor states are also assumed to be
constructed in the non-equiprobable Pauli operator error environment, and it is these that are used to implement the syndrome
measurements. The second construction method is to implement the [7, 1, 3] encoding gate sequence, also in the non-equiprobable
Pauli operator error environment. The fidelity of the output states is calculated for each of these methods.
With respect to the Shor state construction we find that the implementation of (noisy) parity-based verifications does not
necessarily raise the fidelity of the resulting Shor state. We also find that the second logical zero construction method
outputs a seven-qubit state with a respectably higher fidelity than the first (fault-tolerant) method. However, the fidelity of
the single qubit of stored information is almost equivalent for the two construction methods.
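Both construction-method comparisons above use state fidelity as the figure of merit. For orientation only, the following minimal sketch computes the standard Uhlmann fidelity (which may differ in detail from the exact measure used in these papers); the single-qubit depolarizing example and all names are illustrative assumptions, not taken from the papers:

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))**2."""
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2)

# Toy comparison: an ideal |0><0| versus the same state after depolarizing noise.
rho_ideal = np.array([[1.0, 0.0], [0.0, 0.0]])
p = 0.05  # illustrative error probability, not a value from the papers
rho_noisy = (1.0 - p) * rho_ideal + p * np.eye(2) / 2.0
print(fidelity(rho_ideal, rho_noisy))  # -> 0.975 for this example
```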
KEYWORDS: Photons, Diffraction, Signal detection, Sensors, Single photon, Image resolution, Signal attenuation, Atmospheric propagation, Point spread functions, Radio propagation
We present a detailed theoretical analysis of quantum imaging intended to reveal under what conditions it is superior to
imaging with non-entangled photons in order to determine practical bounds on quantum imaging systems. Our analysis
includes a description of the propagation and detection of entangled light signals taking into account attenuation, diffraction,
and event statistics. Each of these three is a significant roadblock on the path towards practical quantum imaging, and
we rate how severe each is in three imaging regimes (exo-atmospheric, short-distance endo-atmospheric, and long-distance
endo-atmospheric) and three detection regimes (high signal-to-background, low signal-to-background, and saturated).
In an attempt to overcome these roadblocks we briefly speculate about the possible role of nonlinear propagation
phenomena, which may enable entangled light to propagate without diffraction, and of X-waves, which may make it possible
to overcome all of the above-mentioned roadblocks.
The highly entangled four qubit cluster state can be used to perform an arbitrary single logical qubit rotation via the techniques
of measurement-based quantum computation. In this paper, we explore the relationship between the entanglement
in the cluster state and the ability of the state to accurately perform the desired logical rotation. This has important experimental
ramifications, as realistic cluster state experiments will always be subject to decoherence. We also note the
occurrence of entanglement sudden death (ESD) and ask how severely its onset affects the use of the cluster state as
a means of implementing the arbitrary single logical qubit rotation.
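As a toy point of reference only (this is not the paper's four-qubit cluster calculation), the sketch below shows entanglement sudden death numerically for a two-qubit state whose qubits each pass through an amplitude-damping channel; the initial state and damping probabilities are illustrative assumptions:

```python
import numpy as np

def amplitude_damp(rho, p, qubit, n_qubits=2):
    """Apply a single-qubit amplitude-damping channel (decay probability p)."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - p)]])
    K1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])
    out = np.zeros_like(rho)
    for K in (K0, K1):
        ops = [np.eye(2)] * n_qubits
        ops[qubit] = K
        full = ops[0]
        for op in ops[1:]:
            full = np.kron(full, op)
        out = out + full @ rho @ full.conj().T
    return out

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    yy = np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Initial state sqrt(1/3)|00> + sqrt(2/3)|11>: its entanglement vanishes at a
# damping probability p < 1, i.e. before the state is fully decohered (ESD).
psi = np.zeros(4, dtype=complex)
psi[0], psi[3] = np.sqrt(1.0 / 3.0), np.sqrt(2.0 / 3.0)
rho0 = np.outer(psi, psi.conj())
for p in (0.0, 0.25, 0.5, 0.75):
    rho = amplitude_damp(amplitude_damp(rho0, p, qubit=0), p, qubit=1)
    print(f"p = {p:.2f}   concurrence = {concurrence(rho):.3f}")
```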
The quantum mechanical phenomenon of entanglement can be utilized to beat the Rayleigh limit, the classical
bound on image resolution. This is done by entangling the photons that are used as the signal states. Using
entanglement, the best possible image resolution is instead given by the Heisenberg limit, an improvement proportional
to the number of entangled photons in the signal. Here, we present a novel application of entanglement
by inverting the above procedure. We show that the resolution obtained by an imaging system utilizing separable
photons can be achieved by an imaging system making use of entangled photons, but with the advantage of a
smaller aperture. This results in a smaller, lighter imaging system. Smaller imaging systems can be especially
valuable in satellite imaging where weight and size play a vital role.
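As a rough illustration of the scaling claimed above (our notation, assuming the Heisenberg-limited resolution simply improves the Rayleigh bound by the factor N):

\[
\theta_{\min}^{\mathrm{Rayleigh}} \;\approx\; 1.22\,\frac{\lambda}{D},
\qquad
\theta_{\min}^{\mathrm{entangled}} \;\approx\; 1.22\,\frac{\lambda}{N\,D},
\]

so an entangled-photon imager can match the resolution of a classical system of aperture D while using an aperture of diameter roughly D/N.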
The standard approach to quantum fault tolerance is to calculate error thresholds on basic gates in the limit of arbitrarily
many concatenation levels. In contrast, this paper takes the number of qubits and the target implementation accuracy as
given, and provides a framework for engineering the constrained quantum system to the required tolerance. The approach
requires solving the full dynamics of the quantum system for an arbitrary admixture (biased or unbiased) of Pauli errors.
The inaccuracy between ideal and implemented quantum systems is captured by the supremum of the Schatten k-norm of
the difference between the ideal and implemented density matrices, taken over all input density matrices. This is a more complete
analysis than the standard approach, where an intricate combination of worst case assumptions and combinatorial analysis
is used to analyze the special case of equiprobable errors. Conditions for fault tolerance are now expressed in terms of
error regions rather than a single number (the standard error threshold). In the important special case of a stochastic noise
model and a single logical qubit, an optimization over all 2×2 density matrices is required to obtain the full dynamics. The
complexity of this calculation is greatly simplified through reduction to an optimization over only three projectors. Error
regions are calculated for the standard 5- and 7-qubit codes. Knowledge of the full dynamics makes it possible to design
sophisticated concatenation strategies that go beyond repeatedly using the same code, and these strategies can achieve
target fault tolerance thresholds with fewer qubits.
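In symbols, the inaccuracy measure described above can be written (our notation) as

\[
\Delta_k \;=\; \sup_{\rho}\,\bigl\lVert\, \mathcal{E}_{\mathrm{ideal}}(\rho) \;-\; \mathcal{E}_{\mathrm{implemented}}(\rho) \,\bigr\rVert_k ,
\]

where the supremum is taken over all input density matrices \rho, the maps \mathcal{E} denote the ideal and implemented evolutions, and \lVert\cdot\rVert_k is the Schatten k-norm. Fault tolerance conditions are then the regions of (possibly biased) Pauli error probabilities for which \Delta_k remains below the target accuracy.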
We analyze the phase estimation ability of photonic N00N states propagating in a lossy medium. In such a
medium a N00N state of N entangled photons cannot achieve the maximum 1/N phase estimation resolution.
In fact, unless the transmittance of the medium is extremely high, a signal comprised of an attenuated separable
state of N photons will produce a better phase estimate than a comparable signal of an equally attenuated N00N
state. Thus, for most practical applications the resolution provided by photonic N00N states is actually worse than
the 1/√N Standard Quantum Limit. This performance deficit becomes more pronounced as the number of
photons in the signal increases.
The phase estimation ability of photonic N00N states propagating in an attenuating medium is
analyzed. It is shown that N00N states of a given number of entangled photons, N, never achieve the 1/N
Heisenberg limit if the propagation occurs through a lossy medium. It is also shown that a signal comprised of an
attenuated separable state of N photons will actually produce a better phase estimate than a signal comprised of
an equally attenuated N00N state unless the transmittance of the medium is very high. Thus, for most practical
applications in realistic scenarios with attenuation, the resolution of N00N-state-based phase estimation not
only fails to achieve the Heisenberg Limit, but is actually worse than the 1/√N Standard Quantum Limit.
This performance deficit becomes more pronounced as the number, N, of photons in the signal increases.
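For orientation, a commonly used scaling argument (not necessarily the exact expressions of the analyses above) captures both conclusions. If each photon survives the medium with transmittance \eta, the N00N state retains its phase information only when all N photons survive, whereas a separable N-photon signal degrades linearly:

\[
\Delta\phi_{\mathrm{N00N}} \;\sim\; \frac{1}{N\,\eta^{N/2}},
\qquad
\Delta\phi_{\mathrm{separable}} \;\sim\; \frac{1}{\sqrt{N\,\eta}} ,
\]

so for any \eta < 1 the N00N state is eventually outperformed by the separable signal as N grows.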
The model of quantum computation developed by Kitaev (1, §4.1) shows that a perfect, error-free quantum
computer can lead to reliable classical outputs, despite the need to apply a necessarily probabilistic measurement.
In this paper we extend the analysis to account for necessarily imperfect quantum computation. The analysis
presented here is required to establish the utility of practical quantum computation even given the assumption
that fault-tolerance techniques are successfully applied. This is due to the fact that the application of currently
known fault-tolerance techniques does not permanently and completely remove errors. To this end we have
introduced a mathematical relation that compares the accuracy of a necessarily imperfect quantum computation
to a prescribed performance bound. Finally, we discuss several mathematical aspects of this bound and its
usefulness in analyzing quantum computing protocols.
We demonstrate a method of constructing L-shape cluster states by exploiting equivalence class properties of graph states. The L-shape cluster state is a primitive which can be used to construct cluster states capable of supporting universal quantum computation. The method is device independent but is shown to be considerably more efficient than previously proposed approaches for photonic cluster construction. Two-dimensional photonic cluster states can be efficiently built via local unitaries and type-I fusion only. To allow for a complete calculation of the cost of two-dimensional photonic cluster-state construction, we also provide a recursion relation relating the length of a photonic cluster chain to the average resources required.
In this paper we review the content of the Quantum Computer Condition, a rigorously specified criterion providing a set of mathematical constraints that must be satisfied by any physical system intended for use as a quantum computing machine. We then discuss an important consequence of it known as the Quantum Computing No-Go Theorem, which establishes a bound on decoherence and dissipation beyond which quantum computation is not possible. In connection with this theorem, we explicitly calculate a universal critical damping value for fault-tolerant quantum computation. We also discuss a relevant class of time-dependent generalizations of the Lindblad equation.
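For reference, the standard (time-independent) Lindblad equation whose time-dependent generalizations are discussed here is

\[
\dot{\rho} \;=\; -\frac{i}{\hbar}\,[H,\rho] \;+\; \sum_k \Bigl( L_k \rho L_k^{\dagger} \;-\; \tfrac{1}{2}\bigl\{ L_k^{\dagger} L_k ,\, \rho \bigr\} \Bigr),
\]

with system Hamiltonian H and Lindblad operators L_k (our notation).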
Quantum cryptography has attracted much recent attention due to its potential for providing secret communications that cannot be decrypted by any amount of computational effort. This is the first analysis of the secrecy of a practical implementation of the
BB84 protocol that simultaneously takes into account and presents the full set of analytical expressions for effects due to the presence of pulses containing multiple photons in the attenuated output of the laser, the finite length of individual blocks of key material, losses due to error correction, privacy amplification, and
authentication, errors in polarization detection, the efficiency of the detectors, and attenuation processes in the transmission medium.
The analysis addresses eavesdropping attacks on individual photons rather than collective attacks in general. Of particular importance is the first derivation of the necessary and sufficient amount of privacy amplification compression to ensure secrecy against the loss
of key material which occurs when an eavesdropper makes optimized individual attacks on pulses containing multiple photons. It is shown that only a fraction of the information in the multiple photon pulses is actually lost to the eavesdropper.
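To make the accounting concrete, here is a deliberately simplified sketch of how such a block-by-block key length estimate is typically assembled (binary entropy for error-correction leakage, a compression term for privacy amplification, and fixed authentication overhead). Every term and number here is an illustrative placeholder rather than one of the paper's derived expressions; in particular, writing off all multi-photon pulses, as done below, is exactly the pessimistic assumption the paper improves upon:

```python
import math

def binary_entropy(e):
    """Shannon binary entropy h(e) in bits."""
    if e <= 0.0 or e >= 1.0:
        return 0.0
    return -e * math.log2(e) - (1.0 - e) * math.log2(1.0 - e)

def sketch_final_key_length(n_sifted, n_multi_photon, qber,
                            ec_inefficiency=1.2, auth_bits=128):
    """Toy bookkeeping for one finite block of BB84 key material."""
    n_private = n_sifted - n_multi_photon              # pessimistic write-off
    leak_ec = ec_inefficiency * n_sifted * binary_entropy(qber)
    t_pa = n_private * binary_entropy(qber)            # crude compression estimate
    return max(0.0, n_private - leak_ec - t_pa - auth_bits)

print(sketch_final_key_length(n_sifted=100_000, n_multi_photon=8_000, qber=0.03))
```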
We present a fundamental approach to the processing of signals for the detection of targets immersed in clutter in any type of digitized image (synthetic aperture radar, optical, acoustical, etc.), analogous to the standard theory of target detection for pulsed radars, with applications to aided target recognition (ATR). Expanding upon recent results of DiPietro and Fante, we derive a new ATR detection probability function analogous to the Swerling Type-4 detection probability function of pulsed-radar detection theory. We carry out a comparative theoretical and numerical analysis of the single-look and single-pulse probabilities of detection of targets in the general ATR and pulsed-radar cases, and also provide a comparative analysis of the noncoherent integration of multiple samples of image data in the ATR and pulsed-radar cases. We derive expressions for the binary integration of all single-look ATR detection probabilities, and perform a comparative theoretical and numerical analysis of the performance characteristics of binary integration versus noncoherent integration. A detailed numerical analysis of the optimization of parameters for peak ATR binary integration performance in low-resolution and/or low signal-to-clutter ratio images is performed, and a comparative analysis of optimization of parameters for the pulsed-radar theory in low signal-to-noise ratio environments is carried out.
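As one concrete piece of the machinery mentioned above, the M-of-N binary integration rule combines independent single-look detection probabilities as follows (a generic textbook formulation; the parameter values are illustrative only):

```python
from math import comb

def binary_integration_pd(p_single, n_looks, m_required):
    """Probability that at least m_required of n_looks independent single-look
    detections succeed (the M-of-N binary integration rule)."""
    return sum(comb(n_looks, k) * p_single**k * (1.0 - p_single)**(n_looks - k)
               for k in range(m_required, n_looks + 1))

# Example: a 0.6 single-look detection probability, target declared on 3 of 5 looks.
print(binary_integration_pd(0.6, 5, 3))  # ~0.683
```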
We analyze and assess the underlying assumptions and characteristics of the standard Gaussian automatic target recognition algorithm. An analysis of the theoretical formulation of the basic algorithm is carried out in which the important assumption of Gaussian multivariate feature distribution is replaced with the assumption of generalized Rayleigh multivariate feature distribution. Closed-form analytical expressions are worked out for the associated characteristic and detection probability functions. Numerical analysis of the results is performed, which reveals that superior performance characteristics can arise in the generalized Rayleigh distribution-based case. An empirical analysis of a computer programmatic implementation of the basic Gaussian algorithm is also carried out to explore the sensitivity of the generated numerical results to the variation of those parameters which are intrinsic to the code. It is explicitly demonstrated that the statistics of the receiver operating characteristics yielded by the code are extremely sensitive to this set of parameters, and that this sensitivity can lead to potentially ambiguous results in important cases.
Recently, considerable attention has been directed to the use of airborne synthetic-aperture radar (SAR) for detecting both large and small underground objects. The reconstruction of images of buried objects from SAR data, however, often relies on assumptions and algorithms developed in the context of free-space radar propagation. Although in some cases the implicit assumption of free-space propagation can lead to acceptable results, in other important cases it will result in severe degradation to images of underground objects. In this paper we investigate the conditions under which significant improvements to image quality result if proper account is taken of the detailed effects of the soil medium on radar wave propagation. These effects are analyzed both in terms of the refraction of plane waves and via reference to general analytic solutions to Maxwell's equations. Quantitative predictions of the improvement in buried-target image focussing are presented for some realistic data collection scenarios, corresponding to airborne and ground-based SAR or microwave holography.
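The plane-wave refraction at the air-soil interface that underlies this correction follows Snell's law (our notation; for lossy soil the relative permittivity \epsilon_r is complex, making the refracted angle effectively complex as well):

\[
\sin\theta_{\mathrm{air}} \;=\; n_{\mathrm{soil}}\,\sin\theta_{\mathrm{soil}},
\qquad
n_{\mathrm{soil}} \;=\; \sqrt{\epsilon_r\,\mu_r}\,,
\]

taking the refractive index of air as unity.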