Working with AQQA on D-Wave¶
D-Wave was the first company to commercialize a quantum device that implements quantum annealing, as we have just covered. We need to keep in mind that, with these quantum computers, the evolution process will not be adiabatic in general, so there is no guarantee that the exact solution will be found in every case.
Using D-Wave is easier than you might think. You need to install Ocean, which is D-Wave's quantum annealing Python library, and create a free account on D-Wave Leap, a cloud service where you get one minute per month of free computing time on D-Wave's quantum annealers.
Once you have everything set up, you can access quantum annealers to find an approximate solution to any combinatorial optimization problem that you have written either as an instance of finding the ground state of an Ising model or as a QUBO problem. Let's try it with the MaxCut problem first!
Let's do a quick review first. For the Ising formulation of MaxCut, we have
$$ \begin{array}{ll} \text{Minimize} & -\sum_{(j,k)\in E} J_{jk}\langle \psi \rvert Z_{j}Z_{k}\lvert \psi \rangle -\sum_{j} h_{j}\langle \psi \rvert Z_{j}\lvert \psi \rangle \\ \text{where} & \lvert \psi \rangle \ \text{is taken from the set of quantum states on} \ n \ \text{qubits} \end{array} $$
and, for a graph with 3 vertices and edges $(0,1)$ and $(0,2)$, this reduces to the following:
$$ \begin{array}{ll} \text{Minimize} & \langle \psi\rvert(Z_{0}Z_{1}+Z_{0}Z_{2})\lvert \psi\rangle = \langle \psi \rvert Z_{0}Z_{1}\lvert \psi \rangle + \langle \psi \rvert Z_{0}Z_{2}\lvert \psi \rangle,\\ \text{where} & \lvert \psi \rangle \ \text{is taken from the set of quantum states on 3 qubits} \end{array} $$
The Hamiltonian whose ground state we are looking for can be written as
$$ Z_{0}Z_{1} + Z_{0}Z_{2} = Z \otimes Z \otimes I + Z \otimes I \otimes Z $$
which is, of course, an Ising Hamiltonian in which $J_{01} = J_{02} = 1$ and the rest of the coefficients are $0$.
Next, all we need to do is tell the quantum annealer that those are the coefficients we want to use, and then we can perform the annealing multiple times to obtain some results that will, hopefully, solve our problem. To specify the problem, we can use the dimod package included in the Ocean library.
Build a MaxCut model with dimod¶
First, we use the BinaryQuadraticModel(linear, quadratic, offset, vartype) constructor from the dimod package.
import dimod
J = {(0,1):1, (0,2):1}
h = {}
problem = dimod.BinaryQuadraticModel(h,J,0.0,dimod.SPIN)
print("The problem we are going to solve is:")
print(problem)
The problem we are going to solve is:
BinaryQuadraticModel({0: 0.0, 1: 0.0, 2: 0.0}, {(1, 0): 1.0, (2, 0): 1.0}, 0.0, 'SPIN')
- We used $J$ for the coefficients of the degree-2 (quadratic) terms: (0,1):1 sets the $J_{01}$ coefficient to $1$ and (0,2):1 sets $J_{02} = 1$. We used h for the linear coefficients; any coefficient that we don't specify is set to $0$ by BinaryQuadraticModel().
- Notice that the result shows (1,0) and (2,0); these are identical to $Z_{0}Z_{1}$ and $Z_{0}Z_{2}$ since $Z_{0}Z_{1} = Z_{1}Z_{0}$ and, thus, the situation is symmetrical.
- We set the offset, a constant term that can be added to the Hamiltonian, to $0.0$.
- We used dimod.SPIN since we are working with an Ising Hamiltonian and, thus, the values of our variables are $1$ and $-1$. (A small illustrative example of these parameters follows below.)
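To make the role of these parameters concrete, here is a small illustrative model with a non-zero linear term and offset (the coefficients below are hypothetical and not part of the MaxCut example):
# Illustrative Ising model: J_{01} = 2, h_0 = -1, and a constant offset of 0.5
h_demo = {0: -1}
J_demo = {(0, 1): 2}
bqm_demo = dimod.BinaryQuadraticModel(h_demo, J_demo, 0.5, dimod.SPIN)
print(bqm_demo)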
Run the annealing process on one of the quantum annealers¶
Modeling¶
Let's break down what we are going to do:
- We use EmbeddingComposite(), which allows us to map, or embed, our problem into the physical qubits of the annealer (an automatic way of selecting a few qubits in the computer that will be used to represent our variables).
- After mapping, we create a sampler object that we then use to obtain 10 samples, or possible solutions, to our problem. This is where the execution on the actual quantum annealer happens.
- Finally, we print our results, which will vary from execution to execution, since we are using an actual quantum computer!
from dwave.system import DWaveSampler
from dwave.system import EmbeddingComposite
sampler = EmbeddingComposite(DWaveSampler())
result = sampler.sample(problem, num_reads = 10)
print("The solutions that we have obtained are")
print(result)
The solutions that we have obtained are
   0  1  2 energy num_oc. chain_.
0 -1 +1 +1   -2.0       1     0.0
1 +1 -1 -1   -2.0       9     0.0
['SPIN', 2 rows, 10 samples, 3 variables]
Results¶
You should get a result of this form, although the number of occurrences of each solution may vary between executions.
This means that we obtained two different solutions: $z_{0} = -1$, $z_{1} = 1$, $z_{2} = 1$, and $z_{0} = 1$, $z_{1} = -1$, $z_{2} = -1$, both with energy $-2$; the first one was measured in $1$ of the executions and the second one in $9$ of them. The chain_. column shows information about chain breaks in the embedding used by the annealer; a value of $0.0$ means there were no chain breaks. As you can check, these two solutions are maximum cuts of our graph.
For the first result:
$$ \langle Z_{0}Z_{1}\rangle + \langle Z_{0}Z_{2}\rangle = (-1)\cdot 1 + (-1)\cdot 1 = -2 $$
- $z_{0} = -1$, so vertex $0$ is in one partition.
- $z_{1} = 1$ and $z_{2} = 1$, so vertices $1$ and $2$ are in the other partition.
- The edges between vertices $0$ and $1$ and between vertices $0$ and $2$ are cut, since the corresponding spins differ (a small optional check with dimod follows below).
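If you want to double-check an energy by hand, dimod can evaluate the Ising energy of a given assignment for you. A minimal optional check for the first solution:
# Optional check: evaluate the energy of the first solution directly
best = {0: -1, 1: 1, 2: 1}
print(problem.energy(best))   # should print -2.0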
Additional info¶
In addition, we can access the best solution through result.first, and the total time for which we used the quantum annealer with result.info["timing"]["qpu_access_time"], which is the time that counts against your monthly credit. The time is given in microseconds.
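For instance, after the run above you could inspect both pieces of information like this (a minimal sketch):
# Best sample found and the QPU access time consumed by this request
print("Best sample:", result.first)
print("QPU access time (microseconds):", result.info["timing"]["qpu_access_time"])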
Build a QUBO problem¶
For QUBO problems, we have to specify the linear coefficients and the coefficients of the quadratic (degree-2) terms. Don't forget that we are using binary $(0,1)$ variables, so expressions like $x_{3}^{2}$ simplify to $x_3$ (since $0^2 = 0$ and $1^2 = 1$). We also have to specify the independent coefficient. The only change is that we use the dimod.BINARY vartype when creating our problem with the BinaryQuadraticModel class.
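As a quick illustration (with hypothetical coefficients, not tied to any problem above), a QUBO model is built in exactly the same way, just with the BINARY vartype:
# Minimize -3*x0 - x1 + 2*x0*x1 + 1 over binary variables x0, x1
linear = {0: -3, 1: -1}
quadratic = {(0, 1): 2}
qubo_demo = dimod.BinaryQuadraticModel(linear, quadratic, 1.0, dimod.BINARY)
print(qubo_demo)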
Using Ocean to formulate and transform optimization problems¶
You are given a problem like this:
$$ \begin{array}{ll} \text{Minimize} & -5x_{0} + 3x_{1} - 2x_{2} \\ \text{subject to} & x_{0} + x_{2} \leq 1,\\ & 3x_{0} - x_{1} + 3x_{2} \leq 4\\ & x_{j} \in \{ 0,1 \}, \ \ \ j=0,1,2 \end{array} $$
This is a binary linear programming problem, of the kind we introduced before, and it can be transformed into a QUBO or Ising problem by using appropriate slack variables and penalty terms.
For a binary linear programming problem, we don't need to transform our problem into QUBO coefficients and then use them to define a BinaryQuadraticModel manually. All we have to do is use the ConstrainedQuadraticModel class provided by dimod. To construct our binary linear program as a ConstrainedQuadraticModel, we do the following:
x0 = dimod.Binary("x0")
x1 = dimod.Binary("x1")
x2 = dimod.Binary("x2")
These three instructions create three binary variables and label them so that we can use them directly in mathematical expressions!
Note that if you don't give a constraint a label when you add it, dimod will assign a random alphanumeric string to it; you need to use relabel_constraints if you want to change the labels afterwards.
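A small sketch of that behaviour (assuming, as described above, that add_constraint returns the label and that relabel_constraints takes a mapping from old labels to new ones):
# Hypothetical example: rename an auto-generated constraint label
demo = dimod.ConstrainedQuadraticModel()
a = dimod.Binary("a")
b = dimod.Binary("b")
demo.set_objective(a + b)
auto_label = demo.add_constraint(a + b <= 1)   # no label given, dimod generates one
demo.relabel_constraints({auto_label: "My constraint"})
print(list(demo.constraints))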
# Define binary variables
x0 = dimod.Binary("x0")
x1 = dimod.Binary("x1")
x2 = dimod.Binary("x2")
# Use these binary variables directly in the math expressions
blp = dimod.ConstrainedQuadraticModel()
blp.set_objective(-5*x0 + 3*x1 - 2*x2)
blp.add_constraint(x0 + x2 <= 1, "First constraint")
blp.add_constraint(3*x0 - x1 + 3*x2 <= 4, "Second constraint")
# Inspect our model
print("Our variables:", blp.variables)
print("Our objective:", blp.objective)
print("Our constraints:", blp.constraints)
Our variables: Variables(['x0', 'x1', 'x2'])
Our objective: ObjectiveView({'x0': -5.0, 'x1': 3.0, 'x2': -2.0}, {}, 0.0, {'x0': 'BINARY', 'x1': 'BINARY', 'x2': 'BINARY'})
Our constraints: {'First constraint': Le(ConstraintView({'x0': 1.0, 'x2': 1.0}, {}, 0.0, {'x0': 'BINARY', 'x2': 'BINARY'}), np.float64(1.0)), 'Second constraint': Le(ConstraintView({'x0': 3.0, 'x1': -1.0, 'x2': 3.0}, {}, 0.0, {'x0': 'BINARY', 'x1': 'BINARY', 'x2': 'BINARY'}), np.float64(4.0))}
- Le stands for "less than or equal to".
- You can also create equality constraints, which belong to the dimod.sym.Eq class.
- Inequality constraints with $\geq$ belong to the dimod.sym.Ge class (a short sketch of these constraint types follows below).
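A short illustration of the other constraint types (this toy model is only for demonstration purposes):
# Equality and greater-or-equal constraints in a ConstrainedQuadraticModel
demo_cqm = dimod.ConstrainedQuadraticModel()
v0 = dimod.Binary("v0")
v1 = dimod.Binary("v1")
demo_cqm.set_objective(v0 + v1)
demo_cqm.add_constraint(v0 + v1 == 1, "Equality constraint")            # dimod.sym.Eq
demo_cqm.add_constraint(2*v0 + v1 >= 1, "Greater-or-equal constraint")  # dimod.sym.Ge
print(demo_cqm.constraints)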
Solving constructed quadratic models with dimod¶
Here we aim to solve the problem we just formulated by using dimod. To do this, we can define an assignment of values to the variables, check whether it is feasible, and compute its cost for the problem defined in the previous subsection with the following code:
# Check if sample1 is a feasible solution
sample1 = {"x0":1, "x1":1, "x2":1}
print("The assignment is", sample1)
print ("It's cost is:", blp.objective.energy(sample1))
print ("Is it feasible:", blp.check_feasible(sample1))
print("The violations of the constraints are:", blp.violations(sample1))
The assignment is {'x0': 1, 'x1': 1, 'x2': 1}
Its cost is: -4.0
Is it feasible: False
The violations of the constraints are: {'First constraint': np.float64(1.0), 'Second constraint': np.float64(1.0)}
This tells us that the assignment is not feasible. The values of $1$ indicate that the left-hand side of each inequality exceeds the right-hand side by $1$.
Now let's try another assignment, sample2:
# Check if sample2 is a feasible solution
sample2 = {"x0":0, "x1":0, "x2":1}
print("The assignment is", sample2)
print ("It's cost is:", blp.objective.energy(sample2))
print ("Is it feasible:", blp.check_feasible(sample2))
print("The violations of the constraints are:", blp.violations(sample2))
The assignment is {'x0': 0, 'x1': 0, 'x2': 1}
Its cost is: -2.0
Is it feasible: True
The violations of the constraints are: {'First constraint': np.float64(0.0), 'Second constraint': np.float64(-1.0)}
The result shows that sample2 is a feasible solution: there are no positive terms among the violations.
The dimod package also provides a brute-force solver that tries all possible assignments and sorts them according to their cost, from lowest to highest. Run the code below:
solver = dimod.ExactCQMSolver()
solution = solver.sample_cqm(blp)
print("The list of assignments is:")
print(solution)
The list of assignments is:
  x0 x1 x2 energy num_oc. is_sat. is_fea.
6  1  0  1   -7.0       1 arra...  np.F...
2  1  0  0   -5.0       1 arra...  np.T...
7  1  1  1   -4.0       1 arra...  np.F...
3  1  1  0   -2.0       1 arra...  np.T...
4  0  0  1   -2.0       1 arra...  np.T...
0  0  0  0    0.0       1 arra...  np.T...
5  0  1  1    1.0       1 arra...  np.T...
1  0  1  0    3.0       1 arra...  np.T...
['INTEGER', 8 rows, 8 samples, 3 variables]
The first number of each row is just an identifier of the assignment. The result also shows whether each assignment is feasible or not. In fact, if we execute solution.first, we will get:
print(solution.first)
Sample(sample={'x0': np.int64(1), 'x1': np.int64(0), 'x2': np.int64(1)}, energy=np.float64(-7.0), num_occurrences=np.int64(1), is_satisfied=array([False, False]), is_feasible=np.False_)
This shows the lowest-energy assignment, which happens to be infeasible in our case.
You may wonder: what if I want the best feasible solution? We can use the filter method, as shown below:
feasible_sols = solution.filter(lambda s: s.is_feasible)
print(feasible_sols.first)
Sample(sample={'x0': np.int64(1), 'x1': np.int64(0), 'x2': np.int64(0)}, energy=np.float64(-5.0), num_occurrences=np.int64(1), is_satisfied=array([ True, True]), is_feasible=np.True_)
You should get the solution $x_{0} = 1$, $x_{1} = 0$, $x_{2} = 0$ with an energy of $-5.0$, and it is a feasible assignment.
Running constrained problems on quantum annealers¶
To run a problem defined as a ConstrainedQuadraticModel on the quantum annealers, we first need to eliminate the constraints and create a BinaryQuadraticModel object that we can later execute on actual quantum hardware. This can be achieved thanks to the Ocean library. Let's define a simple constrained problem with the following code:
# Construct a simple constrained problem
y0, y1 = dimod.Binaries(["y0","y1"])
cqm = dimod.ConstrainedQuadraticModel()
cqm.set_objective(-2*y0 - 3*y1)
cqm.add_constraint(y0 + 2*y1 <= 2)
# Turn the above problem into an unconstrained one
qubo, invert = dimod.cqm_to_bqm(cqm, lagrange_multiplier=5)
print(qubo)
BinaryQuadraticModel({'y0': -17.0, 'y1': -23.0, 'slack_v8466ba63c65c435890f38d4ac99e843e_0': -15.0, 'slack_v8466ba63c65c435890f38d4ac99e843e_1': -15.0}, {('y1', 'y0'): 20.0, ('slack_v8466ba63c65c435890f38d4ac99e843e_0', 'y0'): 10.0, ('slack_v8466ba63c65c435890f38d4ac99e843e_0', 'y1'): 20.0, ('slack_v8466ba63c65c435890f38d4ac99e843e_1', 'y0'): 10.0, ('slack_v8466ba63c65c435890f38d4ac99e843e_1', 'y1'): 20.0, ('slack_v8466ba63c65c435890f38d4ac99e843e_1', 'slack_v8466ba63c65c435890f38d4ac99e843e_0'): 10.0}, 20.0, 'BINARY')
The result may seem overwhelming at first, but let's break it down!
- First, we have the linear part, which starts with y0: -17.0. It tells us that, in the transformed objective function, $y_0$ has a coefficient of $-17$, $y_1$ has a coefficient of $-23$, and the two slack variables have coefficients of $-15$.
- Then we have the quadratic terms: $y_{0}y_{1}$ has a coefficient of $20$, and the other products of two variables have coefficients of $10$ and $20$.
- Lastly, $20$ is the independent term or offset.
- Finally, we can see that all the variables are binary.
What dimod does is apply the transformation from Section 3.4.1 (binary linear programming): first, two slack variables are introduced to transform the inequality constraint into an equality one; then, the equality constraint is incorporated into the cost function as a penalty term with a penalty coefficient (the lagrange_multiplier parameter).
After all of these transformations, we can finally use a quantum annealer to run the problem defined in the qubo object!
# Run the quantum annealer
sampler = EmbeddingComposite(DWaveSampler())
result = sampler.sample(qubo, num_reads = 10)
print("The solutions that we have obtained are")
print(result)
The solutions that we have obtained are
  slack_v8466ba63c65c435890f38d4ac99e843e_0 ... y1 energy num_oc. chain_.
0                                         0 ...  1   -3.0       5     0.0
1                                         1 ...  0   -2.0       1     0.0
2                                         0 ...  0   -2.0       2     0.0
3                                         0 ...  1    0.0       1     0.0
4                                         1 ...  0    0.0       1     0.0
['BINARY', 5 rows, 10 samples, 4 variables]
To make the result more informative (after all, we don't really care about the slack variables that we used to transform our problem), we can use the invert object to recover the solutions to the original problem.
# Use invert object to retrieve the original solution from `result`
samples = []
occurrences =[]
for s in result.data():
samples.append(invert(s.sample))
occurrences.append(s.num_occurrences)
sampleset = dimod.SampleSet.from_samples_cqm(samples, cqm, num_occurrences = occurrences)
print("The solutions to the original problem are")
print(sampleset)
The solutions to the original problem are
  y0 y1 energy num_oc. is_sat. is_fea.
3  1  1   -5.0       1 arra...  np.F...
0  0  1   -3.0       5 arra...  np.T...
1  1  0   -2.0       1 arra...  np.T...
2  1  0   -2.0       2 arra...  np.T...
4  0  0    0.0       1 arra...  np.T...
['INTEGER', 5 rows, 10 samples, 2 variables]
Here, we created a SampleSet object from the samples obtained with the transformed problem, eliminating the slack variables with the invert function. By passing the cqm problem to the from_samples_cqm method, the energies are computed without the penalty terms, and the feasibility status of each assignment is reported.
Again, let's use filter to keep only the feasible results.
final_sols = sampleset.filter(lambda s: s.is_feasible)
final_sols = final_sols.aggregate()
print("The final solutions are")
print(final_sols)
The final solutions are
  y0 y1 energy num_oc. is_sat. is_fea.
0  0  1   -3.0       5 arra...  np.T...
1  1  0   -2.0       3 arra...  np.T...
2  0  0    0.0       1 arra...  np.T...
['INTEGER', 3 rows, 9 samples, 2 variables]
Solving optimization problems on quantum annealers with Leap¶
In this section, we will learn how to have more control over what the quantum annealer is doing in order to find the ground states of our Hamiltonians. We will also discuss the different types of annealers that we can access via D-Wave Leap, and explain what EmbeddingComposite actually does.
The Leap annealers¶
# List the solvers we can access via D-Wave Leap
from dwave.cloud import Client
for solver in Client.from_config().get_solvers():
print(solver)
BQMSolver(id='hybrid_binary_quadratic_model_version2')
DQMSolver(id='hybrid_discrete_quadratic_model_version1')
StructuredSolver(id='Advantage_system4.1')
CQMSolver(id='hybrid_constrained_quadratic_model_version1')
NLSolver(id='hybrid_nonlinear_program_version1')
StructuredSolver(id='Advantage2_prototype2.6')
StructuredSolver(id='Advantage_system6.4')
Let's grab one of the solvers and look at some of its basic properties.
from dwave.system import DWaveSampler
sampler = DWaveSampler(solver = 'Advantage_system4.1')
print("Name:", sampler.properties["chip_id"])
print("Number of qubits:", sampler.properties["num_qubits"])
print("Category:", sampler.properties["category"])
print("Support problems:", sampler.properties["supported_problem_types"])
print("Topology:", sampler.properties["topology"])
print("Range of reads:", sampler.properties["num_reads_range"])
Name: Advantage_system4.1
Number of qubits: 5760
Category: qpu
Supported problems: ['ising', 'qubo']
Topology: {'type': 'pegasus', 'shape': [16]}
Range of reads: [1, 10000]
It's worth noticing that all of them accept problems in the QUBO or Ising formats, but not constrained problems; that is why we needed to transform those problems before running them in the previous section. The topology refers to the way in which the qubits are connected to each other in the machine, and it determines which couplings - or connections between variables - can be used to define our problem.
Embeddings and annealer topologies¶
In current quantum computers, technological difficulties prevent qubits from being connected in an all-to-all way. In fact, each qubit is usually connected exclusively to some of its neighbours and we can only apply two-qubit gates or use couplings (that is, use non-zero coefficients in the Ising model) between those qubits that are actually linked.
The particular way in which the qubits are connected in a certain quantum chip is called its topology, and sometimes it is important to be aware of it when we design our algorithms or when we anneal our problems. You can find more on the Ocean website $\rightarrow$ QPU Topology.
Here is a high-level overview of the topologies used in D-Wave systems:
- Chimera topology, used in the D-Wave 2000Q systems.
  - Each Chimera unit cell has 8 qubits organized into two groups (columns) of 4 qubits each. All the qubits in one group are connected to all the qubits in the other group, but there are no connections inside each group.
- Pegasus topology, used in the Advantage systems.
  - Every qubit is connected to up to 15 other qubits, compared to a maximum of 6 in Chimera.
  - This topology also contains groups of 4 qubits that are all connected to each other, making it much easier to embed problems into it.
- Zephyr topology, used in next-generation quantum computers currently under development.
You can obtain a list enumerating all the connections by using the properties["couplers"] attribute, as follows:
# Using the properties["couplers"]
sampler = DWaveSampler(solver = 'Advantage_system4.1')
#print("Couplings:", sampler.properties["couplers"])
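Since the full list of couplers is very long, here is a sketch of one way you might use it, for instance to count how many couplers touch a given physical qubit (the index 2684 below is just an arbitrary example):
# Count the couplers that involve a particular physical qubit
qubit = 2684
degree = sum(1 for (a, b) in sampler.properties["couplers"] if qubit in (a, b))
print("Qubit", qubit, "participates in", degree, "couplers")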
One more important thing to note about the Chimera topology is that it does not contain triangles: there are no three qubits that are all connected to each other. Thus, if our Ising Hamiltonian is something like $Z_{0}Z_{1}+Z_{0}Z_{2}+Z_{1}Z_{2}$, we cannot directly map it to the qubits of a Chimera annealer. This is why we need to introduce embeddings into our solvers.
An embedding is a way of mapping the qubits of our problem Hamiltonian to the physical qubits of the annealer. We can use several physical qubits (what we call a chain) to represent a single qubit from our problem. In that case, we want all the qubits in the same chain to have the same value when we measure them.
For instance, if qubits 12 and 20 are part of the same chain, the coefficient for (12,20) could be set to, say, $-15$. Then, the term $-15Z_{12}Z_{20}$ will be part of the spin Hamiltonian that we want to minimize, and it will make it very likely for $Z_{12}$ and $Z_{20}$ to be equal to each other, because that makes the total energy significantly lower.
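A quick numeric check of that claim (plain Python, no annealer involved):
# A strong negative coupling between chained qubits favours aligned spins
chain_coupling = -15
for z12, z20 in [(1, 1), (1, -1)]:
    print("z12 =", z12, ", z20 =", z20, "-> chain term =", chain_coupling * z12 * z20)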
You may think it is complicated to find a correct embedding and to make sure that the physical qubits represent our problem properly. Fortunately, Ocean can compute embeddings automatically for us.
# Define problem
J = {(0,1):1, (0,2):1, (1,2):1 }
h = {}
triangle = dimod.BinaryQuadraticModel(h, J, 0.0, dimod.SPIN)
# Embed the problem and solve it on the annealer
sampler = EmbeddingComposite(DWaveSampler(solver = 'Advantage_system4.1'))
result = sampler.sample(triangle, num_reads= 10, return_embedding=True)
print("The sampels obtained are:")
print(result)
print("The embedding used was:")
print(result.info["embedding_context"])
The samples obtained are:
   0  1  2 energy num_oc. chain_.
0 +1 +1 -1   -1.0       1     0.0
1 +1 -1 -1   -1.0       3     0.0
2 -1 +1 -1   -1.0       2     0.0
3 -1 +1 +1   -1.0       1     0.0
4 +1 -1 +1   -1.0       3     0.0
['SPIN', 5 rows, 10 samples, 3 variables]
The embedding used was:
{'embedding': {1: (5519,), 0: (2684,), 2: (5504,)}, 'chain_break_method': 'majority_vote', 'embedding_parameters': {}, 'chain_strength': 1.9996979771955565}
As you can see, EmbeddingComposite performs the embedding in a way that is completely transparent to the user and, in fact, the samples returned only refer to the variables in the original problem. Under the hood, however, variable 0 has been mapped to qubit 2684, variable 2 has been mapped to qubit 5504, and variable 1 is represented by qubit 5519.
In addition to EmbeddingComposite, there are other classes in Ocean that allow you to find embeddings for your problems (see the sketch after this list for an example):
- For instance, AutoEmbeddingComposite first tries to run the problem on the annealer directly, without using an embedding, and only computes one if it is needed, which can save some computation time.
- The FixedEmbeddingComposite class doesn't compute an embedding, but uses whichever one is passed as a parameter.
- LazyFixedEmbeddingComposite only computes the embedding for a problem on the first call to the sample method, storing it for future calls; EmbeddingComposite, on the other hand, recomputes the embedding with each call to sample.
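For example, if you already know an embedding (the qubit indices below are just taken from the run printed above and will differ on your system), a sketch of reusing it with FixedEmbeddingComposite could look like this:
from dwave.system import FixedEmbeddingComposite
# Reuse a previously found embedding instead of recomputing it
embedding = {0: (2684,), 1: (5519,), 2: (5504,)}
fixed_sampler = FixedEmbeddingComposite(DWaveSampler(solver="Advantage_system4.1"), embedding)
result = fixed_sampler.sample(triangle, num_reads=10)
print(result)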
Controlling annealing parameters¶
You may remember that the adiabatic theorem requires the evolution to be slow enough for the system to remain in a state of minimal energy. However, we also know that this is hard to achieve in practice, which is why we use quantum annealing instead. By controlling the annealing process of D-Wave's quantum annealers, we can try to improve the results that we obtain for our combinatorial optimization problems.
The first thing that we can control is the duration of the annealing process.
# Choose a sampler
sampler = DWaveSampler(solver = "Advantage_system4.1")
# Show the default and the allowed annealing times
print("The default annealing time is", sampler.properties["default_annealing_time"], "microseconds")
print("The possible values for the annealing time (in microseconds) lie in the range", sampler.properties["annealing_time_range"])
The default annealing time is 20.0 microseconds
The possible values for the annealing time (in microseconds) lie in the range [0.5, 2000.0]
We can modify the annealing time directly in the sample function, through the annealing_time parameter.
J = {(0,1):1, (0,2):1, (1,2):1}
h = {}
triangle = dimod.BinaryQuadraticModel(h,J,0.0, dimod.SPIN)
sampler = EmbeddingComposite(DWaveSampler(solver = "Advantage_system4.1"))
result = sampler.sample(triangle, num_reads = 10, annealing_time = 100)
print("The samples obtained are", result)
The samples obtained are    0  1  2 energy num_oc. chain_.
0 +1 -1 -1   -1.0       3     0.0
1 -1 -1 +1   -1.0       3     0.0
2 +1 -1 +1   -1.0       2     0.0
3 +1 +1 -1   -1.0       2     0.0
['SPIN', 4 rows, 10 samples, 3 variables]
To obtain better and better solutions, you may be tempted to increase the annealing time to its maximum possible value. However, the longer you run the annealing process, the higher the chance that external interactions will affect the state of the system and ruin your computation: you might get worse results instead of better ones. On top of that, by increasing the annealing time, you will obviously also spend more of your monthly computing time!
Not only can you adjust the annealing time, but you can also adjust the annealing schedule. Let's recall the following equation:
$$ H(t) = -A(t)\sum_{j=0}^{n-1} X_{j} - B(t)\sum_{(j,k)}J_{jk}Z_{j}Z_{k} - B(t)\sum_{j}h_{j}Z_{j}, $$
which defines the Hamiltonian that we use in the annealing process.
Also, remember that we only required $A$ and $B$ to satisfy $A(0) = B(T) = 1$ and $A(T) = B(0) = 0$, where $T$ is the total annealing time, but we did not restrict in any way how $A$ and $B$ should behave apart from these boundary conditions.
While D-Wave annealers come with their own default annealing schedules, we can still modify them by specifying the values that we want the functions to take at some intermediate times.
Let's see what we can change:
- Anneal fraction ($s$):
  - The schedule is specified as a list of pairs: the first number of each pair is a time value given in microseconds, and the second number is the anneal fraction, a number between $0$ and $1$.
  - The higher the value of $s$, the higher the value of $B$ and the lower the value of $A$. When $s=1$, we can interpret that $B$ is $1$ and $A$ is $0$; when $s=0$, $A$ is $1$ and $B$ is $0$.
- Forward annealing:
  - Starts at $(0,0)$ and ends at $(T,1)$, where $T$ is the total annealing time.
  - The $s$ values increase monotonically over time.
  - Represents the standard annealing process.
  - Example: forward_schedule = [[0.0, 0.0], [5.0, 0.25], [25, 0.75], [30, 1.0]], which can be used as follows:
forward_schedule = [[0.0, 0.0], [5.0, 0.25], [25, 0.75], [30, 1.0]]
sampler = EmbeddingComposite(DWaveSampler())
result = sampler.sample(triangle, num_reads = 10, anneal_schedule = forward_schedule)
- Reverse annealing:
  - Starts with $s=1$, decreases, and then increases back to $s=1$.
  - Requires an initial state, since the ground state of the Hamiltonian is unknown.
  - Commonly used to refine approximate solutions in search of lower-energy configurations: we have already found a solution and are trying to find a better one.
  - Example: reverse_schedule = [[0.0, 1.0], [10.0, 0.5], [20, 1.0]].
  - Two approaches:
    - Reinitialize state: repeats the annealing process from the same initial state on every read, by setting reinitialize_state = True.
    - Final state progression: uses the final state of one run as the initial state for the next, by setting reinitialize_state = False.
Controlling the annealing schedule can be useful for certain problems. For instance, if you know that at some point the ground state and the first excited state get closer in energy, you may want to slow down the annealing process around that point.
# Reverse schedule
reverse_schedule = [[0.0, 1.0], [10.0, 0.5], [20, 1.0]]
initial_state = {0:1, 1:1, 2:1}
sampler = EmbeddingComposite(DWaveSampler())
result = sampler.sample(triangle, num_reads = 10,
anneal_schedule = reverse_schedule,
reinitialize_state = False, initial_state = initial_state)
print("The samples obtained are")
print(result)
The samples obtained are
   0  1  2 energy num_oc. chain_.
0 -1 +1 +1   -1.0       1     0.0
1 -1 +1 +1   -1.0       1     0.0
2 -1 -1 +1   -1.0       1     0.0
3 -1 +1 -1   -1.0       1     0.0
4 -1 +1 +1   -1.0       1     0.0
5 -1 -1 +1   -1.0       1     0.0
6 -1 +1 -1   -1.0       1     0.0
7 -1 -1 +1   -1.0       1     0.0
8 +1 -1 -1   -1.0       1     0.0
9 +1 -1 +1   -1.0       1     0.0
['SPIN', 10 rows, 10 samples, 3 variables]
Congrats! You now know how to control both the annealing time and the schedule. Let's look into why it's important to set the coupling strengths and the penalty terms wisely.
The importance of coupling strengths¶
You know that there are a couple of situations where we have to select values for some arbitrary constants that are used to set coupling strengths in the annealer:
- Penalty terms in objective functions: constants used as penalty terms in the objective function, like the lagrange_multiplier parameter in the cqm_to_bqm method of the dimod package.
- Coupling strengths in embeddings: selecting coupling strengths for chains in a quantum embedding, often handled automatically by classes like EmbeddingComposite.
Naively, we would want these parameters to be as big as possible, since we are not interested in solutions that do not satisfy the problem constraints and we don't want our chains to be broken.
Before selecting a value, we need to know that D-Wave does not allow us to use arbitrarily large numbers. In fact, the allowed range for the linear coefficients, h_range, is [-4.0, 4.0], as you can verify by running the following code:
sampler = DWaveSampler("Advantage_system4.1")
print("The coupling strength range is", sampler.properties["h_range"])
The coupling strength range is [-4.0, 4.0]
This means that if you set coefficients (for instance, $J$ couplings or penalty terms) whose absolute value is bigger than the allowed limit, the largest one will be scaled down to fit, and the rest of the coefficients in your model will be scaled down accordingly. This can cause some of the values to end up very close together, even closer than the resolution of the device, affecting the results of the annealing process, as we can see in the following example:
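A purely numerical illustration of this rescaling effect (no D-Wave call involved):
# Scaling coefficients so the largest magnitude fits in [-4, 4]
coeffs = [40.0, 3.0, -2.0, 1.0]
scale = 4.0 / max(abs(c) for c in coeffs)
print([c * scale for c in coeffs])   # the small coefficients end up squeezed near 0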
# Choose a sampler
sampler = EmbeddingComposite(DWaveSampler("Advantage_system4.1"))
# Define the problem
x0 = dimod.Binary("x0")
x1 = dimod.Binary("x1")
x2 = dimod.Binary("x2")
blp = dimod.ConstrainedQuadraticModel()
blp.set_objective(-5*x0 + 3*x1 - 2*x2)
blp.add_constraint(x0 + x2 <= 1, "First constraint")
blp.add_constraint(3*x0 - x1 + 3*x2 <= 4, "Second constraint")
# Convert the problem and run it
qubo, invert = dimod.cqm_to_bqm(blp, lagrange_multiplier=10)
result = sampler.sample(qubo, num_reads = 100)
# Aggregate and show the results
samples = []
occurrences = []
for s in result.data():
samples.append(invert(s.sample))
occurrences.append(s.num_occurrences)
sampleset = dimod.SampleSet.from_samples_cqm(samples, blp, num_occurrences = occurrences)
print("The solutions to the original problem are")
print(sampleset.filter(lambda s: s.is_feasible).aggregate())
The solutions to the original problem are
  x0 x1 x2 energy num_oc. is_sat. is_fea.
0  1  0  0   -5.0      16 arra...  np.T...
1  1  1  0   -2.0      26 arra...  np.T...
2  0  0  1   -2.0      17 arra...  np.T...
3  0  0  0    0.0      11 arra...  np.T...
4  0  1  1    1.0      20 arra...  np.T...
5  0  1  0    3.0       6 arra...  np.T...
['INTEGER', 6 rows, 96 samples, 3 variables]
The results may vary, but notice that the optimal solution does not clearly stand out: the lagrange_multiplier parameter is too big compared to the range of energies of the objective function. In fact, if you use ExactSolver on the transformed problem, you can easily check that all the assignments that are infeasible in the original problem get energy $16$ or higher in the transformed one, while the feasible solutions always get energy $3$ or lower. That is a huge gap!
Let's set lagrange_multiplier to $4$ and rerun the problem!
## Case: lagrange_multiplier = 4
# Convert the problem and run it
qubo, invert = dimod.cqm_to_bqm(blp, lagrange_multiplier=4)
result = sampler.sample(qubo, num_reads = 100)
# Aggregate and show the results
samples = []
occurrences = []
for s in result.data():
samples.append(invert(s.sample))
occurrences.append(s.num_occurrences)
sampleset = dimod.SampleSet.from_samples_cqm(samples, blp, num_occurrences = occurrences)
print("The solutions to the original problem are")
print(sampleset.filter(lambda s: s.is_feasible).aggregate())
The solutions to the original problem are
  x0 x1 x2 energy num_oc. is_sat. is_fea.
0  1  0  0   -5.0      42 arra...  np.T...
1  1  1  0   -2.0      21 arra...  np.T...
2  0  0  1   -2.0      13 arra...  np.T...
3  0  0  0    0.0       8 arra...  np.T...
4  0  1  1    1.0      10 arra...  np.T...
5  0  1  0    3.0       5 arra...  np.T...
['INTEGER', 6 rows, 99 samples, 3 variables]
You can see that the lowest-energy solution, of value $-5.0$, is now the most frequent one. By setting lagrange_multiplier = 4, the penalties given to the infeasible solutions are smaller; checking with ExactSolver, the energy gap between feasible and infeasible assignments is much smaller now, and yet we still got 99 feasible samples out of our 100 reads.
What happens if we set lagrange_multiplier to $1$?
## Case: lagrange_multiplier = 1
# Convert the problem and run it
qubo, invert = dimod.cqm_to_bqm(blp, lagrange_multiplier=1)
result = sampler.sample(qubo, num_reads = 100)
# Aggregate and show the results
samples = []
occurrences = []
for s in result.data():
samples.append(invert(s.sample))
occurrences.append(s.num_occurrences)
sampleset = dimod.SampleSet.from_samples_cqm(samples, blp, num_occurrences = occurrences)
print("The solutions to the original problem are")
print(sampleset.filter(lambda s: s.is_feasible).aggregate())
The solutions to the original problem are
  x0 x1 x2 energy num_oc. is_sat. is_fea.
0  1  0  0   -5.0      79 arra...  np.T...
1  1  1  0   -2.0       9 arra...  np.T...
2  0  0  1   -2.0       8 arra...  np.T...
3  0  0  0    0.0       1 arra...  np.T...
['INTEGER', 4 rows, 97 samples, 3 variables]
The frequency of the optimal result has dramatically improved! However, we "lost" 3 samples because they corresponded to infeasible solutions.
Setting a good penalty constant can be difficult, since it involves having some information about the energy distribution of the solutions to the problem. This example simply shows that you should not use just any value for lagrange_multiplier, since it can mess up your results.
Note: something similar may happen when the coupling strength used for the chains in an embedding is too big. Fortunately, EmbeddingComposite and its relatives take this into account and will try to keep the value as low as possible without breaking too many chains. In any case, don't take the choice of the coupling strength lightly.
Classical and hybrid samplers¶
Besides "pure" quantum annealing, D-Wave also provides other ways of solving optimization problems. We will talk about classical solvers and hybrid solvers in this section.
Classical solvers¶
In D-Wave's Ocean, SimulatedAnnealingSampler and SteepestDescentSolver are two solvers, besides ExactSolver, that do not use quantum resources whatsoever. This section shows you how to use these classical solvers on your problems; afterwards, we will also look at the so-called hybrid solvers, which combine them with the power of the quantum annealers.
SteepestDescentSolver¶
The first classical solver we are going to talk about is the SteepestDescentSolver. It is included in the greedy package and it is a discrete analogue of the gradient descent algorithm used in continuous optimization: at each step, it moves in the direction in which the decrease in energy is biggest. Let's see how to use it in D-Wave with the following code:
SteepestDescentSolver accepts the initial_states and seed parameters.
Before that, please run pip install dwave-greedy to install the greedy package.
! pip install dwave-greedy
Requirement already satisfied: dwave-greedy in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (0.3.0) Requirement already satisfied: dwave-samplers>=1.0.0 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-greedy) (1.4.0) Requirement already satisfied: numpy<3.0.0,>=1.19.0 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-samplers>=1.0.0->dwave-greedy) (2.0.2) Requirement already satisfied: dimod<0.13.0,>=0.12.13 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-samplers>=1.0.0->dwave-greedy) (0.12.18) Requirement already satisfied: networkx>=3.0 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-samplers>=1.0.0->dwave-greedy) (3.4.2)
import greedy
import dimod
# Create a BinaryQuadraticModel
J = {(0,1):1, (1,2):1, (2,3):1, (3,0):1}
h = {}
problem = dimod.BinaryQuadraticModel(h, J, 0.0, dimod.SPIN)
# Using SteepestDescentSolver
solver = greedy.SteepestDescentSolver()
solution = solver.sample(problem, num_reads = 10)
print(solution.aggregate())
   0  1  2  3 energy num_oc. num_st.
0 -1 +1 -1 +1   -4.0       5       1
1 +1 -1 +1 -1   -4.0       2       1
2 +1 +1 -1 -1    0.0       3       0
['SPIN', 3 rows, 10 samples, 4 variables]
As you can see, this is exactly the format we already know from using the quantum solvers. Next, we will talk about the TabuSampler.
TabuSampler¶
Tabu search is a type of local search algorithm, which means that it improves its candidate solution by exploring its neighbors. The tabu algorithm avoids falling into local minima by sometimes accepting solutions with higher energy than the current one, and it also "remembers" solutions it has already explored. Let's look into how to use the TabuSampler in D-Wave.
TabuSampler accepts the initial_states and seed parameters.
Before that, please run pip install dwave-tabu to install the tabu package.
! pip install dwave-tabu
Requirement already satisfied: dwave-tabu in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (0.5.0) Requirement already satisfied: dwave-samplers>=1.0.0 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-tabu) (1.4.0) Requirement already satisfied: numpy<3.0.0,>=1.19.0 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-samplers>=1.0.0->dwave-tabu) (2.0.2) Requirement already satisfied: dimod<0.13.0,>=0.12.13 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-samplers>=1.0.0->dwave-tabu) (0.12.18) Requirement already satisfied: networkx>=3.0 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-samplers>=1.0.0->dwave-tabu) (3.4.2)
import tabu
# Build a sample
solver = tabu.TabuSampler()
solution = solver.sample(problem, num_reads = 10)
print(solution.aggregate())
   0  1  2  3 energy num_oc. num_re.
0 +1 -1 +1 -1   -4.0       7       1
1 -1 +1 -1 +1   -4.0       3       0
['SPIN', 2 rows, 10 samples, 4 variables]
SimulatedAnnealingSampler¶
The last classical solver we are going to talk about is the SimulatedAnnealingSampler. As the name suggests, this solver simulates the annealing process: it is another algorithm that explores the neighbourhood of the candidate solution it is considering at a given moment, trying to move to solutions with lower energy.
Just like the TabuSampler, it sometimes accepts higher-energy states to avoid getting stuck in local minima. This is accomplished by controlling a global "temperature" parameter that decreases with time, eventually reaching 0. In fact, the Hamiltonian $H_0$ in our quantum annealing process can be understood as analogous to the temperature in simulated annealing: it allows solutions to move, or "tunnel", to neighboring ones, and its influence decreases over time. The SimulatedAnnealingSampler can be used in D-Wave Ocean as follows:
SimulatedAnnealingSampler accepts the initial_states and seed parameters.
! pip install dwave-neal
Collecting dwave-neal Downloading dwave_neal-0.6.0-py3-none-any.whl.metadata (3.0 kB) Requirement already satisfied: dwave-samplers<2.0.0,>=1.0.0 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-neal) (1.4.0) Requirement already satisfied: numpy<3.0.0,>=1.19.0 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-samplers<2.0.0,>=1.0.0->dwave-neal) (2.0.2) Requirement already satisfied: dimod<0.13.0,>=0.12.13 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-samplers<2.0.0,>=1.0.0->dwave-neal) (0.12.18) Requirement already satisfied: networkx>=3.0 in /opt/anaconda3/envs/pcchouCR97/lib/python3.13/site-packages (from dwave-samplers<2.0.0,>=1.0.0->dwave-neal) (3.4.2) Downloading dwave_neal-0.6.0-py3-none-any.whl (8.7 kB) Installing collected packages: dwave-neal Successfully installed dwave-neal-0.6.0
import neal
solver = neal.SimulatedAnnealingSampler()
solution = solver.sample(problem, num_reads = 10)
print(solution.aggregate())
   0  1  2  3 energy num_oc.
3 +1 -1 +1 -1   -4.0       2
0 +1 +1 +1 -1    0.0       2
1 -1 -1 +1 +1    0.0       3
2 +1 +1 -1 -1    0.0       2
4 -1 -1 -1 +1    0.0       1
['SPIN', 5 rows, 10 samples, 4 variables]
Hybrid solvers¶
LeapHybridSampler¶
Let's start with the LeapHybridSampler(). This sampler accepts QUBO and Ising problems and can scale up to a high number of variables because it divides the problem, assigns the different parts to classical solvers and quantum annealers, and then reconstructs a global solution from the local ones.
import dwave.system
sampler = dwave.system.LeapHybridSampler()
# Note: the hybrid sampler is the one we want to use here; it returns the best
# solution it finds and does not take a num_reads parameter
solution = sampler.sample(problem)
print(solution.aggregate())
   0  1  2  3 energy num_oc.
0 +1 -1 +1 -1   -4.0       5
3 -1 +1 -1 +1   -4.0       1
1 +1 +1 +1 -1    0.0       1
2 -1 -1 +1 +1    0.0       1
4 +1 +1 -1 -1    0.0       1
5 +1 -1 -1 -1    0.0       1
['SPIN', 6 rows, 10 samples, 4 variables]
It's also worth noting that these hybrid samplers have a property called quota_conversion_rate, which you can check by running sampler.properties["quota_conversion_rate"]. The quota_conversion_rate for LeapHybridSampler is 20. This means that for every 20 microseconds that you use this hybrid sampler, you will be charged 1 microsecond of quantum processor access.
The other hybrid solver that D-Wave provides is LeapHybridCQMSampler, which is used similarly to LeapHybridSampler, but with constrained problems. Finally, there is also LeapHybridDQMSampler, which works with discrete quadratic problems defined as objects of the DiscreteQuadraticModel class.
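As a rough sketch (assuming an active Leap account and the blp model defined earlier in this section), the constrained hybrid sampler can be applied directly to a ConstrainedQuadraticModel, with no manual transformation needed:
# Solve the earlier CQM (blp) directly with the hybrid constrained sampler
from dwave.system import LeapHybridCQMSampler
cqm_sampler = LeapHybridCQMSampler()
cqm_solution = cqm_sampler.sample_cqm(blp)
# Keep only feasible assignments and show the best one
print(cqm_solution.filter(lambda s: s.is_feasible).first)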
sampler.properties["quota_conversion_rate"]
20