
# Mining

MatMul proof-of-work mining on the BTX network.

BTX uses MatMul proof-of-work, a mining algorithm based on matrix multiplication. Instead of brute-force hash puzzles, miners perform 512×512 matrix multiplications over a Mersenne prime field and use the computation transcript as the hardness source.

## How MatMul mining works

At a high level, each mining attempt proceeds as follows:

  1. The block header (including nonce) seeds a random oracle that generates two n×n matrices (A and B) over the field F_q, where q = 2^31 − 1 (the Mersenne prime M31).
  2. The miner computes the product C = A × B using a block-structured algorithm with transcript block size b = 16. Low-rank random noise of rank r = 8 is injected from the random oracle to make the computation transcript unpredictable.
  3. The intermediate computation transcript is hashed. If the resulting digest meets the current difficulty target, the block is valid.
  4. Validators can verify the proof efficiently using a two-phase check: a cheap header-level filter followed by transcript recomputation. A Freivalds probabilistic verification step provides fast initial confirmation.
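The Freivalds step in (4) is the textbook probabilistic check for matrix products. A minimal sketch over F_q, ignoring the transcript and noise machinery (function and variable names here are illustrative, not the btx-node API):

```python
import random

M31 = (1 << 31) - 1  # q = 2^31 - 1, the field modulus

def freivalds_check(A, B, C, rounds=8):
    """Probabilistically verify C == A @ B over F_q.

    Each round multiplies by a random vector x and compares A(Bx)
    with Cx. A wrong product survives a round with probability at
    most 1/q, and each round costs O(n^2) instead of the O(n^3)
    full recomputation.
    """
    n = len(A)
    for _ in range(rounds):
        x = [random.randrange(M31) for _ in range(n)]
        Bx = [sum(B[i][j] * x[j] for j in range(n)) % M31 for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) % M31 for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) % M31 for i in range(n)]
        if ABx != Cx:
            return False
    return True
```

This is why Freivalds works as a cheap first-pass filter: a correct product always passes, while an incorrect one is rejected except with vanishing probability, so full transcript recomputation is only needed for blocks that clear it.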

## Consensus parameters

| Parameter | Value | Description |
| --- | --- | --- |
| Matrix dimension (n) | 512 | 512×512 matrices, ~2 MiB working set |
| Transcript block size (b) | 16 | Hashing granularity for the computation transcript |
| Noise rank (r) | 8 | Rank of random noise injected per block |
| Field | M31 (2^31 − 1) | GPU-friendly int32 Mersenne prime arithmetic |
| Target block time | 90 seconds | Steady-state (post block 50,000) |
| Difficulty algorithm | ASERT (aserti3-2d) | Per-block adjustment from genesis |
| ASERT half-life | 14,400 s (3,600 s after height 55,000) | Responsiveness parameter |
| Block subsidy | 20 BTX | Halves every 525,000 blocks |
| Supply cap | 21,000,000 BTX | Hard cap |
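Two of these rows can be sanity-checked in a few lines: the subsidy schedule (20 BTX halving every 525,000 blocks) sums to the 21,000,000 BTX cap, and ASERT has a simple closed form. The sketch below assumes Bitcoin-style integer subsidy halving in 10^-8 BTX base units (an assumption about BTX internals) and uses the idealized real-valued ASERT rule rather than aserti3-2d's fixed-point integer arithmetic:

```python
HALVING_INTERVAL = 525_000
COIN = 100_000_000  # assumed base units per BTX (satoshi-style)

def total_emission():
    """Sum the full subsidy schedule: 20 BTX, halving each epoch."""
    subsidy, total = 20 * COIN, 0
    while subsidy > 0:
        total += HALVING_INTERVAL * subsidy
        subsidy //= 2  # integer halving truncates, as in Bitcoin
    return total  # lands just under the 21,000,000 BTX hard cap

def asert_next_target(anchor_target, time_delta_s, height_delta,
                      half_life_s=14_400, spacing_s=90):
    """Idealized ASERT: the target doubles for every half-life the
    chain falls behind schedule and halves for every half-life it
    runs ahead; aserti3-2d is the fixed-point integer form of this.
    """
    exponent = (time_delta_s - spacing_s * height_delta) / half_life_s
    return anchor_target * 2.0 ** exponent
```

With 525,000 × 20 = 10,500,000 BTX in the first epoch and the halving series summing to ×2, the limit is exactly 21,000,000 BTX; integer truncation leaves the realized total a fraction of a coin below the cap.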

## Hardware requirements

MatMul PoW is designed to leverage the same GPU hardware used for AI/ML training workloads. The core operation (dense matrix multiply over M31) maps directly to GPU tensor cores and matrix multiply units.
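Part of what makes M31 hardware-friendly is that reduction modulo 2^31 − 1 needs only shifts, masks, and adds, never a division: since 2^31 ≡ 1 (mod 2^31 − 1), the high bits of a product simply fold back into the low bits. A minimal sketch of this standard Mersenne reduction (illustrative, not the btx-node kernel):

```python
M31 = (1 << 31) - 1  # the Mersenne prime 2^31 - 1

def m31_mul(a, b):
    """Multiply two field elements and reduce mod 2^31 - 1.

    Writing x = hi * 2^31 + lo, we have x ≡ hi + lo (mod 2^31 - 1),
    so reduction is just "fold the high bits down" until the value
    fits in 31 bits, then map the non-canonical M31 to 0.
    """
    x = a * b
    while x >= (1 << 31):
        x = (x & M31) + (x >> 31)
    return 0 if x == M31 else x
```

On a GPU the fold is a couple of integer instructions per lane, which is why dense M31 matrix multiply keeps the wide integer pipelines busy instead of stalling on modular division.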

| Component | Minimum | Recommended |
| --- | --- | --- |
| GPU | Any with compute capability | AI-training class (NVIDIA A100/H100, Apple M-series with Metal) |
| RAM | 8 GB | 16+ GB |
| Storage | 50 GB SSD | 200+ GB NVMe |
| CPU | 4 cores | 8+ cores (for validation alongside mining) |

CPU-only mining is functional but significantly slower. The Metal backend on Apple Silicon provides competitive throughput for solo and small-scale mining.

## getblocktemplate workflow

External miners use the getblocktemplate RPC to obtain a candidate block, solve the MatMul puzzle, and submit the result with submitblock.

```shell
# Fetch a block template
btx-cli getblocktemplate '{"rules": ["segwit"]}'

# Submit a solved block
btx-cli submitblock "hexdata"
```

The template exposes the BTX MatMul nonce range: `noncerange = "0000000000000000ffffffffffffffff"`.
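These 32 hex digits appear to follow the start‖end convention Bitcoin's getblocktemplate uses for its (32-bit) noncerange, widened to 64-bit halves; that split is an assumption about the BTX field layout:

```python
noncerange = "0000000000000000ffffffffffffffff"

# Assumed layout: first 16 hex digits = start nonce, last 16 = end
# nonce (inclusive), each a 64-bit unsigned value.
start = int(noncerange[:16], 16)
end = int(noncerange[16:], 16)

assert start == 0
assert end == 2**64 - 1  # the full 64-bit nonce space
```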

For testing, generateblock with submit=false can produce candidate blocks without broadcasting them, which is useful for verifying the miner pipeline.
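Putting the workflow together, an external miner's outer loop looks roughly like the sketch below. The RPC transport and the MatMul solver are stubbed out; apart from getblocktemplate, submitblock, and noncerange from this page, names and field layouts are assumptions:

```python
import json
import subprocess

def cli_rpc(method, *params):
    """Call btx-cli and parse its JSON reply (assumes btx-cli on PATH)."""
    args = [p if isinstance(p, str) else json.dumps(p) for p in params]
    out = subprocess.check_output(["btx-cli", method, *args])
    return json.loads(out) if out.strip() else None

def mine_one(rpc, solve_matmul):
    """Fetch a template, hand it to a MatMul solver, submit the result.

    `rpc` is a callable like cli_rpc. `solve_matmul` is the external
    miner's solver: for each nonce it seeds A and B from the header,
    hashes the transcript, and returns the serialized block hex once a
    digest meets the target, or None if the range is exhausted.
    """
    template = rpc("getblocktemplate", {"rules": ["segwit"]})
    # Assumed split: first 16 hex digits = start nonce, last 16 = end.
    start = int(template["noncerange"][:16], 16)
    end = int(template["noncerange"][16:], 16)
    block_hex = solve_matmul(template, start, end)
    if block_hex is not None:
        return rpc("submitblock", block_hex)
```

Injecting the RPC callable keeps the loop testable against a stub node and lets pool software swap the CLI transport for a persistent JSON-RPC connection.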

## Mining to a specific address

The simplest way to mine is with generatetoaddress, which handles template creation, solving, and submission internally:

```shell
# Generate 1 block paying to a specific address
btx-cli generatetoaddress 1 "btx1z..."
```

For production mining, the payout address is typically a post-quantum multisig descriptor address (btx1z...). Operators should derive and verify this address offline before starting the miner.

## Pool mining

Stratum integration for pool mining is under development. The getblocktemplate interface provides the foundation for external pool software. A validation script (scripts/m7_miner_pool_e2e.py) exercises the full generateblock submit=false + submitblock pipeline and captures a stratum job artifact.

Current status: solo mining via generatetoaddress or the miner daemon loop is the primary production path. Pool operators should follow the btx-node repository for stratum protocol updates.

## Apple Silicon Metal backend

BTX Node includes a native Apple Metal compute backend for MatMul mining on Apple Silicon (M1, M2, M3, M4 series). Enable it at build time:

```shell
cmake -B build-btx -DBTX_ENABLE_METAL=ON
cmake --build build-btx -j$(sysctl -n hw.ncpu)
```

Select the Metal backend at runtime:

```shell
export BTX_MATMUL_BACKEND=metal
```

Available backend tokens:

| Token | Description |
| --- | --- |
| cpu | Reference CPU implementation (always available) |
| metal / mlx | Apple Metal compute (Apple Silicon only) |
| cuda | NVIDIA CUDA (scaffolded, disabled by default) |

Non-Apple hosts or systems without Metal automatically fall back to CPU. Use btx-matmul-backend-info --backend metal to check backend availability and capabilities.

### Tuning parameters

| Environment variable | Description | Recommended |
| --- | --- | --- |
| BTX_MATMUL_SOLVE_BATCH_SIZE | Nonces per GPU dispatch | 4 |
| BTX_MATMUL_PIPELINE_ASYNC | Async GPU pipeline | 0 |
| BTX_MINE_BATCH_SIZE | Blocks per mining batch | 20 |

## Mining in regtest

Regtest mode skips MatMul validation by default (fSkipMatMulValidation=true), enabling instant block generation for development and testing:

```shell
btxd -regtest -daemon
btx-cli -regtest createwallet "test"
btx-cli -regtest -generate 100
```

To test with full MatMul validation in regtest, use the strict mode flag:

```shell
btxd -regtest -test=matmulstrict -daemon
```

## Monitoring with getdifficultyhealth

The getdifficultyhealth RPC provides diagnostics for difficulty adjustment behavior, chain cadence, and mining readiness:

```shell
btx-cli getdifficultyhealth
```

Key fields in the response:

  - `target_spacing_s`: target block interval (90 s)
  - `tip_age_s`: seconds since the last block timestamp
  - cadence metrics: mean/median/stddev of recent block intervals across configurable time windows
  - `reorg_protection`: deep-reorg protection configuration and rejection counters

The companion monitoring tool monitor/btx_difficulty_health.py aggregates data from multiple nodes (local + archival fleet) and produces JSON and Markdown reports across 1h/6h/24h time windows. Reports automatically exclude the fast-mine bootstrap phase (blocks 0–49,999) on mainnet.

If tip_age_s exceeds 3× the target spacing while miners are active, the chain should be considered stalled. Inspect getchaintips and debug.log for status=invalid tips.
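The stall heuristic can be automated against the RPC output. A minimal sketch, assuming only the target_spacing_s and tip_age_s fields documented above (the helper name is illustrative):

```python
def is_stalled(health, factor=3):
    """Flag a stalled chain from a getdifficultyhealth response dict.

    Applies the rule of thumb above: the tip is older than `factor`
    times the target spacing even though miners should be active.
    """
    return health["tip_age_s"] > factor * health["target_spacing_s"]

# Against the documented 90 s target spacing:
is_stalled({"target_spacing_s": 90, "tip_age_s": 400})  # stalled: 400 > 270
is_stalled({"target_spacing_s": 90, "tip_age_s": 95})   # healthy: 95 <= 270
```

A monitoring cron that feeds this from btx-cli getdifficultyhealth and pages on a True result covers the manual check described above.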