Code Documentation

Matching

class pymatching.matching.Matching(H: Optional[Union[scipy.sparse.base.spmatrix, numpy.ndarray, retworkx.PyGraph, networkx.classes.graph.Graph, List[List[int]]]] = None, spacelike_weights: Optional[Union[float, numpy.ndarray, List[float]]] = None, error_probabilities: Optional[Union[float, numpy.ndarray, List[float]]] = None, repetitions: Optional[int] = None, timelike_weights: Optional[Union[float, numpy.ndarray, List[float]]] = None, measurement_error_probabilities: Optional[Union[float, numpy.ndarray, List[float]]] = None, precompute_shortest_paths: bool = False, **kwargs)

A class for constructing matching graphs and decoding using the minimum-weight perfect matching decoder

The Matching class provides most of the core functionality of PyMatching. A Matching object can be constructed from a check matrix with one or two non-zero elements in each column (e.g. the \(Z\) or \(X\) check matrix of some classes of CSS quantum code), given as a scipy.sparse matrix or numpy.ndarray, along with additional arguments specifying the edge weights, error probabilities and number of repetitions. Alternatively, a Matching object can be constructed from a NetworkX graph, with node and edge attributes used to specify edge weights, fault ids, boundaries and error probabilities.

__init__(H: Optional[Union[scipy.sparse.base.spmatrix, numpy.ndarray, retworkx.PyGraph, networkx.classes.graph.Graph, List[List[int]]]] = None, spacelike_weights: Optional[Union[float, numpy.ndarray, List[float]]] = None, error_probabilities: Optional[Union[float, numpy.ndarray, List[float]]] = None, repetitions: Optional[int] = None, timelike_weights: Optional[Union[float, numpy.ndarray, List[float]]] = None, measurement_error_probabilities: Optional[Union[float, numpy.ndarray, List[float]]] = None, precompute_shortest_paths: bool = False, **kwargs)

Constructor for the Matching class

Parameters
  • H (scipy.spmatrix or numpy.ndarray or networkx.Graph object, optional) – The quantum code to be decoded with minimum-weight perfect matching, given either as a binary check matrix (scipy sparse matrix or numpy.ndarray), or as a matching graph (NetworkX graph). Each edge in the NetworkX graph can have optional attributes fault_ids, weight and error_probability. fault_ids should be an int or a set of ints. Each fault id corresponds to a self-inverse fault that is flipped when the corresponding edge is flipped. These self-inverse faults could correspond to physical Pauli errors (physical frame changes) or to the logical observables that are flipped by the fault (a logical frame change, equivalent to an observable ID in an error instruction in a Stim detector error model). The fault_ids attribute was previously named qubit_id in an earlier version of PyMatching, and qubit_id is still accepted instead of fault_ids in order to maintain backward compatibility. Each weight attribute should be a non-negative float. If every edge is assigned an error_probability between zero and one, then the add_noise method can be used to simulate noise and flip edges independently in the graph. By default, None

  • spacelike_weights (float or numpy.ndarray, optional) – If H is given as a scipy or numpy array, spacelike_weights gives the weights of edges in the matching graph corresponding to columns of H. If spacelike_weights is a numpy.ndarray, it should be a 1D array with length equal to H.shape[1]. If spacelike_weights is a float, it is used as the weight for all edges corresponding to columns of H. By default None, in which case all weights are set to 1.0

  • error_probabilities (float or numpy.ndarray, optional) – The probabilities with which an error occurs on each edge corresponding to a column of the check matrix. If a single float is given, the same error probability is used for each edge. If a numpy.ndarray of floats is given, it must have a length equal to the number of columns in the check matrix H. This parameter is only needed for the Matching.add_noise method, and not for decoding. By default None

  • repetitions (int, optional) – The number of times the stabiliser measurements are repeated, if the measurements are noisy. This option is only used if H is provided as a check matrix, not a NetworkX graph. By default None

  • timelike_weights (float or numpy.ndarray, optional) – If H is given as a scipy or numpy array and repetitions>1, timelike_weights gives the weight of timelike edges. If a float is given, all timelike edge weights are set to the same value. If a numpy array of size (H.shape[0],) is given, the edge weight for each vertical timelike edge associated with the i-th check (row) of H is set to timelike_weights[i]. By default None, in which case all timelike weights are set to 1.0

  • measurement_error_probabilities (float or numpy.ndarray, optional) – If H is given as a scipy or numpy array and repetitions>1, gives the probability of a measurement error to be used for the add_noise method. If a float is given, all measurement errors are set to the same value. If a numpy array of size (H.shape[0],) is given, the error probability for each vertical timelike edge associated with the i-th check (row) of H is set to measurement_error_probabilities[i]. By default None

  • precompute_shortest_paths (bool, optional) – It is almost always recommended to leave this as False. If the exact matching is used for decoding (setting num_neighbours=None in decode), then setting this option to True will precompute the all-pairs shortest paths. By default False

Examples

>>> import pymatching
>>> import math
>>> m = pymatching.Matching()
>>> m.add_edge(0, 1, fault_ids={0}, weight=0.1)
>>> m.add_edge(1, 2, fault_ids={1}, weight=0.15)
>>> m.add_edge(2, 3, fault_ids={2, 3}, weight=0.2)
>>> m.add_edge(0, 3, fault_ids={4}, weight=0.1)
>>> m.set_boundary_nodes({3})
>>> m
<pymatching.Matching object with 3 detectors, 1 boundary node, and 4 edges>

Matching objects can also be created from a check matrix (provided as a scipy.sparse matrix, dense numpy array, or list of lists):

>>> import pymatching
>>> m = pymatching.Matching([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])
>>> m
<pymatching.Matching object with 3 detectors, 1 boundary node, and 4 edges>
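
Weights and error probabilities can be supplied alongside the check matrix. A minimal sketch (the particular weights and probabilities below are arbitrary values chosen only for illustration):

>>> import numpy as np
>>> import pymatching
>>> H = np.array([[1, 1, 0, 0],
...               [0, 1, 1, 0],
...               [0, 0, 1, 1]])
>>> m = pymatching.Matching(H, spacelike_weights=np.array([1.0, 2.0, 1.5, 1.0]),
...                         error_probabilities=np.array([0.1, 0.2, 0.15, 0.1]))
>>> m
<pymatching.Matching object with 3 detectors, 1 boundary node, and 4 edges>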

add_edge(node1: int, node2: int, fault_ids: Optional[Union[int, Set[int]]] = None, weight: float = 1.0, error_probability: Optional[float] = None, **kwargs) → None

Add an edge to the matching graph

Parameters
  • node1 (int) – The ID of node1 in the new edge (node1, node2)

  • node2 (int) – The ID of node2 in the new edge (node1, node2)

  • fault_ids (set[int] or int, optional) – The IDs of any self-inverse faults which are flipped when the edge is flipped, and which should be tracked. This could correspond to the IDs of physical Pauli errors that occur when this edge flips (physical frame changes). Alternatively, this attribute can be used to store the IDs of any logical observables that are flipped when an error occurs on an edge (logical frame changes). In earlier versions of PyMatching, this attribute was instead named qubit_id (since for CSS codes and physical frame changes, there can be a one-to-one correspondence between each fault ID and physical qubit ID). For backward compatibility, qubit_id can still be used instead of fault_ids as a keyword argument. By default None

  • weight (float, optional) – The weight of the edge, which must be non-negative, by default 1.0

  • error_probability (float, optional) – The probability that the edge is flipped. This is used by the add_noise() method to sample from the distribution defined by the matching graph (in which each edge is flipped independently with the corresponding error_probability). By default None

Examples

>>> import pymatching
>>> m = pymatching.Matching()
>>> m.add_edge(0, 1)
>>> m.add_edge(1, 2)
>>> print(m.num_edges)
2
>>> print(m.num_nodes)
3
>>> import pymatching
>>> import math
>>> m = pymatching.Matching()
>>> m.add_edge(0, 1, fault_ids=2, weight=math.log((1-0.05)/0.05), error_probability=0.05)
>>> m.add_edge(1, 2, fault_ids=0, weight=math.log((1-0.1)/0.1), error_probability=0.1)
>>> m.add_edge(2, 0, fault_ids={1, 2}, weight=math.log((1-0.2)/0.2), error_probability=0.2)
>>> m
<pymatching.Matching object with 3 detectors, 0 boundary nodes, and 3 edges>

add_noise() → Optional[Tuple[numpy.ndarray, numpy.ndarray]]

Add noise by flipping edges in the matching graph with a probability given by the error_probability edge attribute. The error_probability must be set for all edges for this method to run, otherwise it returns None. All boundary nodes are always given a 0 syndrome.

Returns

  • numpy.ndarray of dtype int – Noise vector (binary numpy int array of length self.num_fault_ids)

  • numpy.ndarray of dtype int – Syndrome vector (binary numpy int array of length self.num_detectors if there is no boundary, or self.num_detectors+len(self.boundary) if there are boundary nodes)
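
Examples

A minimal sketch of sampling noise from a graph in which every edge has an error_probability set (the sampled vectors are random, so only their lengths are checked here):

>>> import pymatching
>>> m = pymatching.Matching()
>>> m.add_edge(0, 1, fault_ids=0, error_probability=0.1)
>>> m.add_edge(1, 2, fault_ids=1, error_probability=0.1)
>>> m.set_boundary_nodes({2})
>>> noise, syndrome = m.add_noise()
>>> len(noise), len(syndrome)
(2, 3)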

property boundary: Set[int]

Return the indices of the boundary nodes.

Note that this property is a copy of the set of boundary nodes. In-place modification of the set Matching.boundary will not change the boundary nodes of the matching graph - boundary nodes should instead be set or updated using the Matching.set_boundary_nodes method.

Returns

The indices of the boundary nodes

Return type

set of int

decode(z: Union[numpy.ndarray, List[int]], num_neighbours: int = 30, return_weight: bool = False) → Union[numpy.ndarray, Tuple[numpy.ndarray, int]]

Decode the syndrome z using minimum-weight perfect matching

If the parity of the weight of z is odd and the matching graph has one connected component, then an arbitrarily chosen boundary node in self.boundary is flipped, and all other stabiliser and boundary nodes are left unchanged. If the matching graph has multiple connected components, then the parity of the syndrome weight within each connected component is checked separately, and if a connected component has odd parity then an arbitrarily chosen boundary node in the same connected component is highlighted. If the parity of the syndrome weight in a connected component is odd, and the same connected component does not have a boundary node, then a ValueError is raised.

Parameters
  • z (numpy.ndarray) – A binary syndrome vector to decode. The number of elements in z should equal the number of nodes in the matching graph. If z is a 1D array, then z[i] is the syndrome at node i of the matching graph. If z is 2D then z[i,j] is the difference (modulo 2) between the (noisy) measurement of stabiliser i in time step j+1 and time step j (for the case where the matching graph is constructed from a check matrix with repetitions>1).

  • num_neighbours (int, optional) – Number of closest neighbours (with non-trivial syndrome) of each matching graph node to consider when decoding. If num_neighbours is set (as it is by default), then the local matching decoder in https://arxiv.org/abs/2105.13082 is used, and num_neighbours corresponds to the parameter m in the paper. It is recommended to leave num_neighbours set to at least 20. If num_neighbours is None, then instead full matching is performed, with the all-pairs shortest paths precomputed and cached the first time it is used. Since full matching is more memory intensive, it is not recommended to be used for matching graphs with more than around 10,000 nodes, and is only faster than local matching for matching graphs with less than around 1,000 nodes. By default 30

  • return_weight (bool, optional) – If return_weight==True, the sum of the weights of the edges in the minimum weight perfect matching is also returned. By default False

Returns

  • correction (numpy.ndarray or list[int]) – A 1D numpy array of ints giving the minimum-weight correction operator as a binary vector. The number of elements in correction is one greater than the largest fault ID. The ith element of correction is 1 if the minimum-weight perfect matching (MWPM) found by PyMatching contains an odd number of edges that have i as one of the fault_ids, and is 0 otherwise. If each edge in the matching graph is assigned a unique integer in its fault_ids attribute, then the locations of nonzero entries in correction correspond to the edges in the MWPM. However, fault_ids can instead be used, for example, to store IDs of the physical or logical frame changes that occur when an edge flips (see the documentation for Matching.add_edge for more information).

  • weight (float) – Present only if return_weight==True. The sum of the weights of the edges in the minimum-weight perfect matching.

Examples

>>> import pymatching
>>> import numpy as np
>>> H = np.array([[1, 1, 0, 0],
...               [0, 1, 1, 0],
...               [0, 0, 1, 1]])
>>> m = pymatching.Matching(H)
>>> z = np.array([0, 1, 0])
>>> m.decode(z)
array([1, 1, 0, 0], dtype=uint8)
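
If return_weight=True is passed, the total weight of the matching is returned alongside the correction. A sketch continuing the example above (with the default unit edge weights, the matching uses two edges, so the weight should be 2):

>>> correction, weight = m.decode(z, return_weight=True)
>>> int(weight)
2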

Each bit in the correction returned by Matching.decode corresponds to a fault ID: the index of a bit in the correction is the fault ID whose parity it tracks. For example, here an error on edge (0, 1) flips fault IDs 2 and 3, as inferred from the minimum-weight correction:

>>> import pymatching
>>> m = pymatching.Matching()
>>> m.add_edge(0, 1, fault_ids={2, 3})
>>> m.add_edge(1, 2, fault_ids=1)
>>> m.add_edge(2, 0, fault_ids=0)
>>> m.decode([1, 1, 0])
array([0, 0, 1, 1], dtype=uint8)

To decode with a phenomenological noise model (qubits and measurements both suffering bit-flip errors), you can provide a check matrix and number of syndrome repetitions to construct a matching graph with a time dimension (where nodes in consecutive time steps are connected by an edge), and then decode with a 2D syndrome (dimension 0 is space, dimension 1 is time):

>>> import pymatching
>>> import numpy as np
>>> np.random.seed(0)
>>> H = np.array([[1, 1, 0, 0],
...               [0, 1, 1, 0],
...               [0, 0, 1, 1]])
>>> m = pymatching.Matching(H, repetitions=5)
>>> data_qubit_noise = (np.random.rand(4, 5) < 0.1).astype(np.uint8)
>>> print(data_qubit_noise)
[[0 0 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 1]
 [1 1 0 0 0]]
>>> cumulative_noise = (np.cumsum(data_qubit_noise, 1) % 2).astype(np.uint8)
>>> syndrome = H@cumulative_noise % 2
>>> print(syndrome)
[[0 0 0 0 0]
 [0 0 0 0 1]
 [1 0 0 0 1]]
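>>> # Flip measurement outcomes (in all but the final round) with probability 0.1: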
>>> syndrome[:,:-1] ^= (np.random.rand(3, 4) < 0.1).astype(np.uint8)
>>> # Take the parity of consecutive timesteps to construct a difference syndrome:
>>> syndrome[:,1:] = syndrome[:,:-1] ^ syndrome[:,1:]
>>> m.decode(syndrome)
array([0, 0, 1, 0], dtype=uint8)

draw() → None

Draw the matching graph using matplotlib

Draws the matching graph as a matplotlib graph. Stabiliser nodes are filled grey and boundary nodes are filled white. The line thickness of each edge is determined from its weight (with min and max thicknesses of 0.2 pts and 2 pts respectively). Note that you may need to call plt.figure() before and plt.show() after calling this function.
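
A minimal usage sketch (assuming matplotlib is installed):

>>> import matplotlib.pyplot as plt
>>> import pymatching
>>> m = pymatching.Matching([[1, 1, 0], [0, 1, 1]])
>>> fig = plt.figure()
>>> m.draw()
>>> plt.show()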

edges() → List[Tuple[int, int, Dict]]

Edges of the matching graph

Returns a list of edges of the matching graph. Each edge is a tuple (source, target, attr) where source and target are ints corresponding to the indices of the source and target nodes, and attr is a dictionary containing the attributes of the edge. The dictionary attr has keys fault_ids (a set of ints), weight (the weight of the edge, set to 1.0 if not specified), and error_probability (the error probability of the edge, set to -1 if not specified).

Returns

A list of edges of the matching graph

Return type

List of (int, int, dict) tuples
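
Examples

A small sketch showing the structure of the returned list (only the attribute keys are printed, since the exact formatting of the attribute dictionary may vary):

>>> import pymatching
>>> m = pymatching.Matching()
>>> m.add_edge(0, 1, fault_ids=0)
>>> m.add_edge(1, 2, fault_ids={1, 2}, weight=2.0)
>>> for source, target, attrs in m.edges():
...     print(source, target, sorted(attrs.keys()))
0 1 ['error_probability', 'fault_ids', 'weight']
1 2 ['error_probability', 'fault_ids', 'weight']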

load_from_check_matrix(H: Union[scipy.sparse.base.spmatrix, numpy.ndarray, List[List[int]]], spacelike_weights: Optional[Union[float, numpy.ndarray, List[float]]] = None, error_probabilities: Optional[Union[float, numpy.ndarray, List[float]]] = None, repetitions: Optional[int] = None, timelike_weights: Optional[Union[float, numpy.ndarray, List[float]]] = None, measurement_error_probabilities: Optional[Union[float, numpy.ndarray, List[float]]] = None, **kwargs) → None

Load a matching graph from a check matrix

Parameters
  • H (scipy.spmatrix or numpy.ndarray or List[List[int]]) – The quantum code to be decoded with minimum-weight perfect matching, given as a binary check matrix (scipy sparse matrix or numpy.ndarray)

  • spacelike_weights (float or numpy.ndarray, optional) – If H is given as a scipy or numpy array, spacelike_weights gives the weights of edges in the matching graph corresponding to columns of H. If spacelike_weights is a numpy.ndarray, it should be a 1D array with length equal to H.shape[1]. If spacelike_weights is a float, it is used as the weight for all edges corresponding to columns of H. By default None, in which case all weights are set to 1.0

  • error_probabilities (float or numpy.ndarray, optional) – The probabilities with which an error occurs on each edge associated with a column of H. If a single float is given, the same error probability is used for each column. If a numpy.ndarray of floats is given, it must have a length equal to the number of columns in H. This parameter is only needed for the Matching.add_noise method, and not for decoding. By default None

  • repetitions (int, optional) – The number of times the stabiliser measurements are repeated, if the measurements are noisy. By default None

  • timelike_weights (float or numpy.ndarray, optional) – If repetitions>1, timelike_weights gives the weight of timelike edges. If a float is given, all timelike edge weights are set to the same value. If a numpy array of size (H.shape[0],) is given, the edge weight for each vertical timelike edge associated with the i-th check (row) of H is set to timelike_weights[i]. By default None, in which case all timelike weights are set to 1.0

  • measurement_error_probabilities (float or numpy.ndarray, optional) – If repetitions>1, gives the probability of a measurement error to be used for the add_noise method. If a float is given, all measurement errors are set to the same value. If a numpy array of size (H.shape[0],) is given, the error probability for each vertical timelike edge associated with the i-th check (row) of H is set to measurement_error_probabilities[i]. This argument can also be given using the keyword argument measurement_error_probability to maintain backward compatibility with previous versions of PyMatching. By default None

Examples

>>> import pymatching
>>> m = pymatching.Matching([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])
>>> m
<pymatching.Matching object with 3 detectors, 1 boundary node, and 4 edges>

Matching objects can also be initialised from a sparse scipy matrix:

>>> import pymatching
>>> from scipy.sparse import csc_matrix
>>> H = csc_matrix([[1, 1, 0], [0, 1, 1]])
>>> m = pymatching.Matching(H)
>>> m
<pymatching.Matching object with 2 detectors, 1 boundary node, and 3 edges>

load_from_networkx(graph: networkx.classes.graph.Graph) → None

Load a matching graph from a NetworkX graph

Parameters

graph (networkx.Graph) – Each edge in the NetworkX graph can have optional attributes fault_ids, weight and error_probability. fault_ids should be an int or a set of ints. Each fault id corresponds to a self-inverse fault that is flipped when the corresponding edge is flipped. These self-inverse faults could correspond to physical Pauli errors (physical frame changes) or to the logical observables that are flipped by the fault (a logical frame change, equivalent to an observable ID in an error instruction in a Stim detector error model). The fault_ids attribute was previously named qubit_id in an earlier version of PyMatching, and qubit_id is still accepted instead of fault_ids in order to maintain backward compatibility. Each weight attribute should be a non-negative float. If every edge is assigned an error_probability between zero and one, then the add_noise method can be used to simulate noise and flip edges independently in the graph.

Examples

>>> import pymatching
>>> import networkx as nx
>>> import math
>>> g = nx.Graph()
>>> g.add_edge(0, 1, fault_ids=0, weight=math.log((1-0.1)/0.1), error_probability=0.1)
>>> g.add_edge(1, 2, fault_ids=1, weight=math.log((1-0.15)/0.15), error_probability=0.15)
>>> g.nodes[0]['is_boundary'] = True
>>> g.nodes[2]['is_boundary'] = True
>>> m = pymatching.Matching(g)
>>> m
<pymatching.Matching object with 1 detector, 2 boundary nodes, and 2 edges>

load_from_retworkx(graph: retworkx.PyGraph) → None

Load a matching graph from a retworkX graph

Parameters

graph (retworkx.PyGraph) – Each edge in the retworkx graph can have a dictionary payload with keys fault_ids, weight and error_probability. fault_ids should be an int or a set of ints. Each fault id corresponds to a self-inverse fault that is flipped when the corresponding edge is flipped. These self-inverse faults could correspond to physical Pauli errors (physical frame changes) or to the logical observables that are flipped by the fault (a logical frame change, equivalent to an observable ID in an error instruction in a Stim detector error model). The fault_ids attribute was previously named qubit_id in an earlier version of PyMatching, and qubit_id is still accepted instead of fault_ids in order to maintain backward compatibility. Each weight attribute should be a non-negative float. If every edge is assigned an error_probability between zero and one, then the add_noise method can be used to simulate noise and flip edges independently in the graph.

Examples

>>> import pymatching
>>> import retworkx as rx
>>> import math
>>> g = rx.PyGraph()
>>> matching = g.add_nodes_from([{} for _ in range(3)])
>>> edge_a = g.add_edge(0, 1, dict(fault_ids=0, weight=math.log((1-0.1)/0.1), error_probability=0.1))
>>> edge_b = g.add_edge(1, 2, dict(fault_ids=1, weight=math.log((1-0.15)/0.15), error_probability=0.15))
>>> g[0]['is_boundary'] = True
>>> g[2]['is_boundary'] = True
>>> m = pymatching.Matching(g)
>>> m
<pymatching.Matching object with 1 detector, 2 boundary nodes, and 2 edges>

property num_detectors: int

The number of detectors in the matching graph. A detector is a node that can have a non-trivial syndrome (i.e. it is a node that is not a boundary node).

Returns

The number of detectors

Return type

int

property num_edges: int

The number of edges in the matching graph

Returns

The number of edges

Return type

int

property num_fault_ids: int

The number of fault IDs defined in the matching graph

Returns

Number of fault IDs

Return type

int

property num_nodes: int

The number of nodes in the matching graph

Returns

The number of nodes

Return type

int
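
The following sketch illustrates num_nodes together with the other count properties defined above (the values follow directly from the construction below):

>>> import pymatching
>>> m = pymatching.Matching()
>>> m.add_edge(0, 1, fault_ids=0)
>>> m.add_edge(1, 2, fault_ids={1, 2})
>>> m.set_boundary_nodes({2})
>>> m.num_nodes, m.num_detectors, m.num_edges, m.num_fault_ids
(3, 2, 2, 3)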

set_boundary_nodes(nodes: Set[int]) → None

Set boundary nodes in the matching graph. This defines the nodes in nodes to be boundary nodes.

Parameters

nodes (set[int]) – The IDs of the nodes to be set as boundary nodes

Examples

>>> import pymatching
>>> m = pymatching.Matching()
>>> m.add_edge(0, 1)
>>> m.add_edge(1, 2)
>>> m.set_boundary_nodes({0, 2})
>>> m.boundary
{0, 2}
>>> m
<pymatching.Matching object with 1 detector, 2 boundary nodes, and 2 edges>

to_networkx() → networkx.classes.graph.Graph

Convert to NetworkX graph

Returns a NetworkX graph corresponding to the matching graph. Each edge has attributes fault_ids, weight and error_probability and each node has the attribute is_boundary.

Returns

NetworkX Graph corresponding to the matching graph

Return type

NetworkX.Graph
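
Examples

A short sketch of the conversion (here node 2 is the boundary node created from the check matrix, so its is_boundary attribute is True):

>>> import pymatching
>>> m = pymatching.Matching([[1, 1, 0], [0, 1, 1]])
>>> g = m.to_networkx()
>>> g.number_of_nodes(), g.number_of_edges()
(3, 3)
>>> g.nodes[2]['is_boundary']
True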

to_retworkx() → retworkx.PyGraph

Convert to retworkx graph

Returns a retworkx graph object corresponding to the matching graph. Each edge payload is a dict with keys fault_ids, weight and error_probability and each node has a dict payload with the key is_boundary and the value is a boolean.

Returns

retworkx graph corresponding to the matching graph

Return type

retworkx.PyGraph
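
Examples

A short sketch of the conversion, mirroring the to_networkx example above (node payloads are dicts, so the boundary flag is read with an index lookup):

>>> import pymatching
>>> m = pymatching.Matching([[1, 1, 0], [0, 1, 1]])
>>> g = m.to_retworkx()
>>> g.num_nodes(), g.num_edges()
(3, 3)
>>> g[2]['is_boundary']
True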