# Usage

## Getting Started

Most of the functionality of PyMatching is available through the `pymatching.matching.Matching` class, which can be imported in Python with:

```
[1]:
```

```
from pymatching import Matching
```

The `Matching` class is used to represent the \(X\)-type or \(Z\)-type matching graph of a CSS quantum code for which syndrome defects come in pairs (or in isolation at a boundary). Each edge in the matching graph corresponds to a single error, and each node corresponds to a stabiliser measurement (or a boundary). The simplest way to construct a `Matching` object is from the \(X\) or \(Z\) check matrix of the code, which can be given as a numpy or scipy array. For example, we can construct the \(Z\)-type matching graph for a five-qubit quantum bit-flip repetition code (which has \(Z\) stabilisers \(ZZIII\), \(IZZII\), \(IIZZI\) and \(IIIZZ\)) from the \(Z\) check matrix using:

```
[2]:
```

```
import numpy as np
"""
Each column of Hz corresponds to an X error on a qubit, and each
row corresponds to a Z stabiliser.
Hz[i,j]==1 if Z stabiliser i acts non-trivially
on qubit j, and is 0 otherwise.
"""
Hz = np.array([
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1]
])
m = Matching(Hz)
m
```

```
[2]:
```

```
<pymatching.Matching object with 4 detectors, 1 boundary node, and 5 edges>
```

Note that, since two qubits (0 and 4) are incident to only a single stabiliser, a boundary node has automatically been created in the matching graph, and is connected to the stabilisers acting non-trivially on qubits 0 and 4. The weights of all edges in the matching graph default to 1.0, unless they are specified using the `spacelike_weights` parameter.

We can visualise the matching graph using the `Matching.draw()` method:

```
[3]:
```

```
%matplotlib inline
m.draw()
```

Note that the stabiliser nodes are shown as filled circles, and the boundary node (labelled 4) is shown as a hollow circle. Each edge is labelled with its `fault_ids` attribute, which gives the id (or ids) of any self-inverse faults (such as frame changes) that are flipped when the edge is flipped. When a `pymatching.Matching` object is constructed from a check matrix `H`, as done here, each edge is given a `fault_ids` attribute equal to the index of its column in `H`. Since here we chose to define `H` from the \(Z\) stabilisers of the code, each column corresponds to a single physical Pauli \(X\) error on a physical qubit (so there is a one-to-one correspondence between self-inverse faults and qubits). Note that in earlier versions of PyMatching, `fault_ids` was instead named `qubit_id`; to maintain backward compatibility, `qubit_id` is still accepted in place of `fault_ids` as an argument when constructing `Matching` objects.

If \(X\) errors occur on the third and fourth qubits we have a binary noise vector:

```
[4]:
```

```
noise = np.array([0,0,1,1,0])
```

and the resulting syndrome vector is:

```
[5]:
```

```
z = Hz@noise % 2
print(z)
```

```
[0 1 0 1]
```

This syndrome vector `z` can then be decoded simply using:

```
[6]:
```

```
c = m.decode(z)
print("c: {}, of type {}".format(c, type(c)))
```

```
c: [0 0 1 1 0], of type <class 'numpy.ndarray'>
```

where `c` is the \(X\) correction operator (i.e. \(IIXXI\)).
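
As a sanity check (with the arrays repeated here so the snippet is self-contained), we can verify that applying the correction `c` to the noise returns the state to the codespace, i.e. the residual error has a trivial syndrome:

```python
import numpy as np

# Repeat the check matrix, noise and correction from the cells above
Hz = np.array([
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1]
])
noise = np.array([0, 0, 1, 1, 0])
c = np.array([0, 0, 1, 1, 0])  # output of m.decode(z)

# The residual error after applying the correction has trivial syndrome
residual = (noise + c) % 2
print((Hz @ residual) % 2)  # → [0 0 0 0]
```

Here the residual error is the identity, so the correction succeeds exactly; more generally, a trivial syndrome only guarantees a return to the codespace, possibly up to a logical operator.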

Note that for larger codes you may prefer to represent the check matrix as a scipy sparse matrix:

```
[7]:
```

```
from scipy.sparse import csr_matrix
Hz = csr_matrix(Hz)
m = Matching(Hz)
m
```

```
[7]:
```

```
<pymatching.Matching object with 4 detectors, 1 boundary node, and 5 edges>
```

## Noisy Syndromes

### Spacetime matching graph

If stabiliser measurements are instead noisy, then each stabiliser measurement must be repeated, with each defect in the matching graph corresponding to a change in the syndrome (see Section IV B of this paper). We will repeat each stabiliser measurement 5 times, with each qubit suffering an \(X\) error with probability `p`, and each stabiliser measured incorrectly with probability `q`. Spacelike edges will be weighted with \(\log((1-p)/p)\) and timelike edges with \(\log((1-q)/q)\). The `Matching` object representing this spacetime matching graph can be constructed using:

```
[8]:
```

```
repetitions = 5
p = 0.05
q = 0.05
m2d = Matching(
    Hz,
    spacelike_weights=np.log((1 - p) / p),
    repetitions=repetitions,
    timelike_weights=np.log((1 - q) / q)
)
```
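
To build intuition for these log-likelihood weights: the less probable an error mechanism, the larger its weight, so minimum-weight matching prefers corrections composed of likely faults. A quick numpy illustration (not part of the PyMatching API):

```python
import numpy as np

# weight = log((1-p)/p) grows as the error probability p shrinks
for p in [0.20, 0.05, 0.01]:
    print(f"p = {p:.2f} -> weight = {np.log((1 - p) / p):.3f}")
# p = 0.20 -> weight = 1.386
# p = 0.05 -> weight = 2.944
# p = 0.01 -> weight = 4.595
```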

### Simulate noisy syndromes

Now, if each qubit suffers an \(X\) error with probability `p` in each round of stabiliser measurements, the errors on the data qubits can be given as a 2D numpy array:

```
[9]:
```

```
num_stabilisers, num_qubits = Hz.shape
np.random.seed(1) # Keep RNG deterministic
noise = (np.random.rand(num_qubits, repetitions) < p).astype(np.uint8)
noise # New errors in each time step
```

```
[9]:
```

```
array([[0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]], dtype=uint8)
```

```
[10]:
```

```
noise_cumulative = (np.cumsum(noise, 1) % 2).astype(np.uint8)
noise_total = noise_cumulative[:,-1] # Total cumulative noise at the last round
noise_cumulative # Cumulative errors in each time step
```

```
[10]:
```

```
array([[0, 0, 1, 1, 1],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]], dtype=uint8)
```

The corresponding noiseless syndrome would be:

```
[11]:
```

```
noiseless_syndrome = Hz@noise_cumulative % 2
noiseless_syndrome # Noiseless syndrome
```

```
[11]:
```

```
array([[0, 0, 1, 1, 1],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0]])
```

We assume each syndrome measurement is incorrect with probability `q`, but that the last round of measurements is perfect to ensure an even number of defects (a simple approximation; in practice, the overlapping recovery method could be used):

```
[12]:
```

```
syndrome_error = (np.random.rand(num_stabilisers, repetitions) < q).astype(np.uint8)
syndrome_error[:,-1] = 0
syndrome_error # Syndrome errors
```

```
[12]:
```

```
array([[0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 1, 0],
       [0, 0, 0, 0, 0]], dtype=uint8)
```

```
[13]:
```

```
noisy_syndrome = (noiseless_syndrome + syndrome_error) % 2
noisy_syndrome # Noisy syndromes
```

```
[13]:
```

```
array([[0, 0, 0, 1, 1],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 1, 1],
       [0, 0, 0, 0, 0]])
```

```
[14]:
```

```
noisy_syndrome[:,1:] = (noisy_syndrome[:,1:] - noisy_syndrome[:,0:-1]) % 2
noisy_syndrome # Convert to difference syndrome
```

```
[14]:
```

```
array([[0, 0, 0, 1, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 1, 0],
       [0, 0, 0, 0, 0]])
```
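
Since the difference syndrome is the XOR of consecutive measurement rounds, taking a cumulative sum modulo 2 along the time axis recovers the undifferenced syndrome. A self-contained check, reusing the arrays printed above:

```python
import numpy as np

diff_syndrome = np.array([
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0]
])
noisy_syndrome = np.array([
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0]
])
# Cumulative XOR over time undoes the differencing step
recovered = np.cumsum(diff_syndrome, axis=1) % 2
print(np.array_equal(recovered, noisy_syndrome))  # → True
```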

### Decode

Decoding can now be done simply by passing this 2D syndrome array to the `Matching.decode` method:

```
[15]:
```

```
correction = m2d.decode(noisy_syndrome)
correction
```

```
[15]:
```

```
array([1, 0, 1, 0, 0], dtype=uint8)
```

And we see that this correction operator successfully corrects the cumulative total noise:

```
[16]:
```

```
(noise_total + correction) % 2
```

```
[16]:
```

```
array([0, 0, 0, 0, 0], dtype=uint8)
```
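
More generally, a trivial residual syndrome does not by itself rule out a logical error. For this repetition code, any \(X\)-type residual with trivial syndrome is either the identity or the full-weight logical \(X\) operator, so a logical failure occurs exactly when the residual has odd weight. A small check, mirroring the values computed above:

```python
import numpy as np

noise_total = np.array([1, 0, 1, 0, 0])  # cumulative noise at the final round
correction = np.array([1, 0, 1, 0, 0])   # correction returned by the decoder

residual = (noise_total + correction) % 2
# Trivial-syndrome residuals are either the identity (even weight) or
# the logical X acting on all five qubits (odd weight)
logical_failure = bool(residual.sum() % 2)
print(logical_failure)  # → False
```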

## Loading from NetworkX graphs

While it can be convenient to decode directly from the check matrices, especially when simulating under a standard independent or phenomenological noise model, it is sometimes necessary to construct the matching graph nodes, edges, weights and boundaries explicitly. This is useful for decoding under more complicated (e.g. circuit-level) noise models, for which matching graph edges can connect nodes separated in both space and time (“diagonal edges”). There can also be so-called “hook errors”: single faults (matching graph edges) corresponding to errors on two or more qubits. Furthermore, the stabilisers themselves can change as a function of time when using schedule-induced gauge fixing of a subsystem code (see this paper).

To provide the functionality to handle these use cases, PyMatching allows Matching objects to be constructed explicitly from NetworkX graphs.

Each node in a matching graph with \(n\) nodes, represented by the `pymatching.Matching` object, should be uniquely identified by an integer between \(0\) and \(n-1\) (inclusive). Edges are then added between these integer nodes, with optional attributes `weight`, `fault_ids` and `error_probability`.

We will again use the five-qubit quantum repetition code as an example. This time, nodes 1, 2, 3 and 4 will correspond to stabiliser measurements (detectors), and nodes 0 and 5 will be boundary nodes. We’ll start by creating the following NetworkX graph:

```
[17]:
```

```
import networkx as nx
p = 0.2
g = nx.Graph()
g.add_edge(0, 1, fault_ids=0, weight=np.log((1-p)/p), error_probability=p)
g.add_edge(1, 2, fault_ids=1, weight=np.log((1-p)/p), error_probability=p)
g.add_edge(2, 3, fault_ids=2, weight=np.log((1-p)/p), error_probability=p)
g.add_edge(3, 4, fault_ids=3, weight=np.log((1-p)/p), error_probability=p)
g.add_edge(4, 5, fault_ids=4, weight=np.log((1-p)/p), error_probability=p)
```

Here, each `fault_ids` attribute is used to store the id of the qubit acted on by an \(X\) error (we assume each stabiliser is a \(Z\)-type operator).

Recall that nodes 0 and 5 should be boundary nodes, since they do not correspond to stabilisers/detectors. For example, the boundary node 0 allows us to associate the edge (0, 1) with a fault mechanism that only flips detector 1. We can specify that nodes 0 and 5 are boundary nodes by setting their optional `is_boundary` attribute to `True`:

```
[18]:
```

```
g.nodes[0]['is_boundary'] = True
g.nodes[5]['is_boundary'] = True
```

We now connect these boundary nodes with an edge of `weight` zero, and with `fault_ids` either unspecified or set to `set()` or `-1` (since edges between boundaries do not correspond to Pauli errors):

```
[19]:
```

```
g.add_edge(0, 5, weight=0.0, fault_ids=-1, error_probability=0.0)
```

Here we have used two boundary nodes to demonstrate that multiple boundary nodes can be added. Usually, however, a single boundary node suffices: for example, we could instead have connected one boundary node to both nodes 1 and 4.
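
As a sketch of that single-boundary alternative (the node labels here are illustrative, not taken from the cells above): nodes 1 to 4 remain the detectors, and one boundary node 0 is attached at both ends of the chain, so the five qubits are still covered by five edges:

```python
import numpy as np
import networkx as nx

p = 0.2
w = np.log((1 - p) / p)

# Single-boundary variant: node 0 is the only boundary node, attached
# to the detectors at both ends of the chain (nodes 1 and 4)
g = nx.Graph()
g.add_edge(0, 1, fault_ids=0, weight=w, error_probability=p)
g.add_edge(1, 2, fault_ids=1, weight=w, error_probability=p)
g.add_edge(2, 3, fault_ids=2, weight=w, error_probability=p)
g.add_edge(3, 4, fault_ids=3, weight=w, error_probability=p)
g.add_edge(4, 0, fault_ids=4, weight=w, error_probability=p)
g.nodes[0]['is_boundary'] = True

print(g.number_of_nodes(), g.number_of_edges())  # → 5 5
```

Passing this graph to `Matching` should then report 4 detectors, 1 boundary node and 5 edges, with no zero-weight boundary-to-boundary edge needed.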

Just for the purpose of demonstration, we’ll assume there is also an error process giving a single hook error on qubits \(2\) and \(3\), corresponding to a single edge between node \(2\) and node \(4\). This error occurs with probability `p2`, and can be added using:

```
[20]:
```

```
p2 = 0.12
g.add_edge(2, 4, fault_ids={2, 3}, weight=np.log((1-p2)/p2), error_probability=p2)
```

Finally, we can use this NetworkX graph to construct the `Matching` object:

```
[21]:
```

```
m = Matching(g)
m
```

```
[21]:
```

```
<pymatching.Matching object with 4 detectors, 2 boundary nodes, and 7 edges>
```

We can again use the `Matching.draw()` method to visualise our matching graph:

```
[22]:
```

```
%matplotlib inline
m.draw()
```

While the noise and syndrome can be generated separately without PyMatching, if the optional `error_probability` attribute is given to every edge, then each edge can be flipped independently with its assigned `error_probability` using the `add_noise` method:

```
[23]:
```

```
from pymatching import set_seed
set_seed(1) # Keep RNG deterministic
noise, syndrome = m.add_noise()
print(noise)
print(syndrome)
```

```
[0 1 0 0 0]
[0 1 1 0 0 0]
```

We can now decode as before using the `decode` method:

```
[24]:
```

```
correction = m.decode(syndrome)
print((correction+noise)%2)
```

```
[0 0 0 0 0]
```

## Constructing a matching graph by adding edges directly to the Matching object

The most direct way to construct a matching graph is to add edges explicitly to the `pymatching.Matching` object. This approach is just as flexible as constructing the graph via NetworkX. For example, the graph from the previous section can instead be constructed as follows:

```
[25]:
```

```
p = 0.2
m = Matching()
m.add_edge(0, 1, fault_ids=0, weight=np.log((1-p)/p), error_probability=p)
m.add_edge(1, 2, fault_ids=1, weight=np.log((1-p)/p), error_probability=p)
m.add_edge(2, 3, fault_ids=2, weight=np.log((1-p)/p), error_probability=p)
m.add_edge(3, 4, fault_ids=3, weight=np.log((1-p)/p), error_probability=p)
m.add_edge(4, 5, fault_ids=4, weight=np.log((1-p)/p), error_probability=p)
m.add_edge(0, 5, weight=0.0, fault_ids=set(), error_probability=1.0)
p2 = 0.12
m.add_edge(2, 4, fault_ids={2, 3}, weight=np.log((1-p2)/p2), error_probability=p2)
m.set_boundary_nodes({0, 5})
m
```

```
[25]:
```

```
<pymatching.Matching object with 4 detectors, 2 boundary nodes, and 7 edges>
```

```
[26]:
```

```
%matplotlib inline
m.draw()
```

## Using Stim to construct a PyMatching matching graph

For simulations of quantum error correcting codes using more realistic circuit-level noise, manually constructing the matching graph can be challenging and time-consuming. Fortunately, these matching graphs can be constructed automatically using Stim for Clifford stabiliser measurement circuits and Pauli noise models. Using Stim, you need only define an annotated stabiliser measurement circuit, from which the matching graph is automatically generated (via a Detector Error Model). Stim can also sample directly from the stabiliser measurement circuit. For more information on combining Stim and PyMatching, see the Stim “getting started” notebook.