
Sampling Models

Once you have defined a model and configured a backend, sampling is the core operation. The DynexSampler translates your model into a neuromorphic circuit and runs it on the selected compute backend.

Common pattern

import dynex
from dynex import DynexConfig, ComputeBackend

# GPU — Dynex neuromorphic chips, recommended for all production workloads
config = DynexConfig(compute_backend=ComputeBackend.GPU)

model = dynex.BQM(bqm)   # or CQM, DQM
sampler = dynex.DynexSampler(model, config=config)
sampleset = sampler.sample(num_reads=1000, annealing_time=200)

Core parameters

sampleset = sampler.sample(
    num_reads=1000,        # Number of independent reads (parallel samples)
    annealing_time=200,    # ODE integration depth (higher = more thorough search)
    shots=5,               # Minimum worker-returned solutions (network backends)
    preprocess=False,      # Apply preprocessing for QPU backends
    debugging=False,       # Verbose progress output
)

Parameter guidance

num_reads Controls the number of independent samples. More reads means better coverage of the solution space.
Backend             Recommended range
GPU (production)    1000–10000
CPU                 500–5000
QPU                 1–100
LOCAL               100–1000
annealing_time Controls the ODE integration depth. Longer annealing gives the system more time to find lower-energy states.
Backend             Recommended range
GPU (production)    200–1000
CPU                 100–500
QPU                 10–1000
LOCAL               50–500
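The recommended ranges in the two tables above can be encoded as data, which makes it easy to sanity-check parameters before submitting a job. The `RECOMMENDED` table and `clamp_params` helper below are illustrative, not part of the dynex SDK:

```python
# Hypothetical helper encoding the recommended ranges from the tables above.
# Backend keys and ranges mirror the documented guidance.
RECOMMENDED = {
    #         num_reads        annealing_time
    "GPU":   ((1000, 10000), (200, 1000)),
    "CPU":   ((500, 5000),   (100, 500)),
    "QPU":   ((1, 100),      (10, 1000)),
    "LOCAL": ((100, 1000),   (50, 500)),
}

def clamp_params(backend: str, num_reads: int, annealing_time: int) -> tuple:
    """Clamp requested parameters into the recommended range for a backend."""
    (r_lo, r_hi), (a_lo, a_hi) = RECOMMENDED[backend]
    return (min(max(num_reads, r_lo), r_hi),
            min(max(annealing_time, a_lo), a_hi))

# Example: 1000 reads is above the QPU range, so it is clamped to 100.
print(clamp_params("QPU", 1000, 200))  # (100, 200)
```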
shots For network backends (CPU/GPU/QPU), sets the minimum number of solutions to collect from workers before returning. Useful when you need multiple diverse solutions rather than just the global optimum. Current recommended maximum: 5.
qpu_max_coeff (QPU only, default: 9.0) Maximum allowed absolute value for any BQM coefficient on a QPU backend. If any linear or quadratic coefficient exceeds this threshold, the sampler automatically scales the entire BQM down proportionally before submitting the job; solutions are returned in the original variable space. Useful when your QUBO contains large penalty terms that exceed hardware bounds.
preprocess Enables automatic scaling and normalization of QUBO coefficients. Recommended for QPU backends to stay within hardware bounds.
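The proportional down-scaling behind qpu_max_coeff can be sketched in plain Python. This is an illustration of the idea, not the SDK's actual implementation; `scale_qubo` is a hypothetical helper operating on a dict-of-coefficients QUBO:

```python
# Sketch of proportional QUBO scaling (assumption: the SDK's internal
# implementation may differ in details). If any coefficient exceeds
# qpu_max_coeff in magnitude, every coefficient is multiplied by one
# common factor, preserving the ratios between terms.
def scale_qubo(qubo: dict, qpu_max_coeff: float = 9.0) -> dict:
    """Scale all coefficients so none exceeds qpu_max_coeff in magnitude."""
    largest = max(abs(c) for c in qubo.values())
    if largest <= qpu_max_coeff:
        return dict(qubo)  # already within bounds, no change
    factor = qpu_max_coeff / largest
    return {k: c * factor for k, c in qubo.items()}

qubo = {("x", "x"): -18.0, ("x", "y"): 4.5}
print(scale_qubo(qubo))  # {('x', 'x'): -9.0, ('x', 'y'): 2.25}
```

Because every term is scaled by the same factor, energies shrink proportionally but the ranking of solutions is unchanged, which is why results can be reported in the original variable space.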

Model-specific examples

BQM

model = dynex.BQM(bqm)
sampler = dynex.DynexSampler(model, config=config)
sampleset = sampler.sample(num_reads=1000, annealing_time=200)

CQM

model = dynex.CQM(cqm)
sampler = dynex.DynexSampler(model, config=config)
sampleset = sampler.sample(num_reads=500, annealing_time=100)

DQM

model = dynex.DQM(dqm)
sampler = dynex.DynexSampler(model, config=config)
sampleset = sampler.sample(num_reads=500, annealing_time=100)

GPU (production)

config = DynexConfig(compute_backend=ComputeBackend.GPU)
sampler = dynex.DynexSampler(model, config=config)
sampleset = sampler.sample(
    num_reads=5000,
    annealing_time=500,
    shots=5,
)

QPU with preprocessing

QPU backends require smaller parameter values due to hardware constraints.
from dynex import QPUModel

config = DynexConfig(
    compute_backend=ComputeBackend.QPU,
    qpu_model=QPUModel.APOLLO_RC1
)
sampler = dynex.DynexSampler(model, config=config)
sampleset = sampler.sample(
    num_reads=50,         # QPU: keep in range 1–100
    annealing_time=200,   # QPU: keep in range 10–1000
    shots=1,              # QPU: up to 5
    qpu_max_coeff=9.0,    # Auto-scale BQM if any coefficient exceeds this value
    preprocess=True
)

Working with results

The sampler returns a dimod SampleSet:
# Best solution by energy
best = sampleset.first
print(best.sample)       # dict: {var: value, ...}
print(best.energy)       # float: objective value

# Iterate all samples (sorted by energy)
for sample, energy in sampleset.data(['sample', 'energy']):
    print(f"{sample}  energy={energy:.4f}")

# Check constraint satisfaction (CQM only)
for sample in sampleset.samples():
    violations = cqm.violations(sample)
    # A non-positive violation means the constraint is satisfied
    feasible = all(v <= 0 for v in violations.values())
    print(f"Feasible: {feasible}")

# Convert to pandas
df = sampleset.to_pandas_dataframe()
print(df.sort_values('energy').head(10))

# Get aggregate statistics
energies = [datum.energy for datum in sampleset.data(['energy'])]
print(f"Min energy: {min(energies):.4f}")
print(f"Mean energy: {sum(energies)/len(energies):.4f}")

Advanced ODE parameters

For fine-grained control of the ODE integration, the following parameters can be set. These define upper bounds for automatic parameter tuning:
sampleset = sampler.sample(
    num_reads=1000,
    annealing_time=200,
    alpha=0.1,              # Upper bound for ODE alpha parameter
    beta=0.1,               # Upper bound for ODE beta parameter
    gamma=0.5,              # Upper bound for ODE gamma parameter
    delta=0.5,              # Upper bound for ODE delta parameter
    epsilon=0.5,            # Upper bound for ODE epsilon parameter
    zeta=0.5,               # Upper bound for ODE zeta parameter
    minimum_stepsize=1e-6,  # Minimum adaptive step size
)
See the equations of motion for the mathematical background.

Block fee (spot compute)

For priority compute on the Dynex network, specify a block fee in nanoDNX:
sampleset = sampler.sample(
    num_reads=1000,
    annealing_time=200,
    block_fee=1000000000,  # 1 DNX in nanoDNX
)
Higher block fees prioritize your jobs on the network. If not specified, the current average network fee is used.
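Since block_fee is specified in nanoDNX (1 DNX = 10^9 nanoDNX, per the comment above), a small conversion helper avoids off-by-a-factor mistakes. `dnx_to_nanodnx` is a hypothetical convenience function, not part of the SDK:

```python
NANO_PER_DNX = 1_000_000_000  # 1 DNX = 10^9 nanoDNX

def dnx_to_nanodnx(dnx: float) -> int:
    """Convert a DNX amount to the integer nanoDNX expected by block_fee."""
    return int(round(dnx * NANO_PER_DNX))

print(dnx_to_nanodnx(1))    # 1000000000
print(dnx_to_nanodnx(0.5))  # 500000000
```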