HyperSync Consensus

HyperSync is the consensus protocol powering Vexidus. It combines weighted leader rotation with Byzantine fault tolerance: validators take turns proposing blocks, with leaders selected by stake weight.

How It Works

  1. Linear leader slots: Each leader produces 5 consecutive blocks before rotation. Selection uses blake3(block_height / 5 || "leader_selection_v1") with stake-weighted random pick. This reduces handoff overhead across geographically distributed validators and stabilizes direct-to-leader transaction routing.
  2. Leader weight is purely stake-based (deterministic, no performance modifiers)
  3. Leader proposes block containing up to 50,000 transactions (100,000 in benchmark mode)
  4. Block is gossiped to all peers via GossipSub "blocks" topic (QUIC transport, TLS 1.3) using Borsh binary encoding with automatic JSON fallback
  5. Validators verify proposer signature (Ed25519) and execute transactions via optimistic parallel execution
  6. Block finalized after gossip consensus
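
The stake-weighted slot selection in step 1 can be sketched as follows. This is a minimal illustration, not the node's implementation: blake3 is not in the Python stdlib, so `hashlib.blake2b` stands in for it, and the validator-set representation is assumed.

```python
import hashlib

def select_leader(block_height: int, validators: list[tuple[str, int]]) -> str:
    """Stake-weighted deterministic leader pick for a 5-block slot.

    `validators` is a list of (address, stake) pairs, identical on every node.
    blake2b stands in for blake3 here (blake3 is not in the stdlib).
    """
    slot = block_height // 5
    seed = hashlib.blake2b(
        slot.to_bytes(8, "big") + b"leader_selection_v1", digest_size=8
    ).digest()
    total_stake = sum(stake for _, stake in validators)
    # Map the hash onto the cumulative stake distribution.
    target = int.from_bytes(seed, "big") % total_stake
    cumulative = 0
    for address, stake in sorted(validators):  # sorted => same order on all nodes
        cumulative += stake
        if target < cumulative:
            return address
    return validators[-1][0]  # unreachable when total_stake > 0
```

Because the seed depends only on `block_height // 5`, all five blocks in a slot resolve to the same leader on every node.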

Block Production

  • Adaptive block intervals (~2.2s nominal, 100ms-12s range, configurable via --block-time)
  • Pressure-aware timing: 2s normal, 250ms high load, 100ms extreme load
  • Up to 50,000 transactions per block (100,000 with --benchmark-mode)
  • Vexcel Attestation DAG: When the leader is slow, non-leader validators produce lightweight attestation blocks (zero transactions) that prove liveness without entering the canonical chain. When the leader is fast, the chain is purely linear with zero overhead. This adaptive mechanism ensures global latency fairness -- high-latency validators are never disadvantaged by geography.
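
The pressure-aware timing tiers can be expressed as a simple threshold function. The doc specifies only the three intervals (2s / 250ms / 100ms); the mempool-size thresholds below are illustrative assumptions.

```python
def block_interval_ms(pending_txs: int, high: int = 10_000, extreme: int = 40_000) -> int:
    """Pick the next block interval from mempool pressure.

    The `high` and `extreme` thresholds are illustrative; the doc only
    specifies the three timing tiers (2s / 250ms / 100ms).
    """
    if pending_txs >= extreme:
        return 100      # extreme load
    if pending_txs >= high:
        return 250      # high load
    return 2_000        # normal
```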

Leader Skip & Failover

When a leader misses its block slot (network latency, temporary downtime), the protocol recovers automatically:

  • Staggered failover: Backup validators take over with position-based timeouts (5s base + 3s per position). Only one backup fails over at a time, structurally preventing multi-way forks. Failover works within leader slots -- if a leader goes offline mid-slot, the backup produces the remaining blocks in the slot.
  • Leader skip rotation: Validators that miss 3 consecutive blocks are temporarily excluded from the leader rotation (in-memory only). They must produce 2 consecutive blocks to rejoin. This prevents a high-latency validator from repeatedly stalling the chain.
  • Non-punitive: Skip rotation does not affect rewards, staking, or on-chain state. It is purely a block timing optimization. Only double-signing (provable malicious behavior) triggers jailing.
  • Zero-stall guarantee: Maximum delay from any single validator failure is 5 seconds (one failover timeout). The chain self-recovers without human intervention.
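
One reading of the position-based timeout rule (5s base + 3s per position) is the following, where position 1 is the first backup. The exact off-by-one convention is an assumption; what matters is that each backup waits strictly longer than the one before it, so only one fails over at a time.

```python
def failover_timeout_s(backup_position: int) -> int:
    """Seconds a backup validator waits before taking over the slot.

    Position 1 is the first backup; each further backup waits 3s longer,
    so failovers are staggered and cannot race each other.
    """
    return 5 + 3 * (backup_position - 1)
```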

Wire Protocol

Block gossip and P2P sync use a dual-format encoding system for safe rolling upgrades:

  • Borsh binary encoding (default when --borsh-wire is enabled): 5-10x smaller blocks on the wire, ~10x faster serialization than JSON. Messages prefixed with VX (0x56, 0x58) magic bytes.
  • Automatic format detection: Nodes auto-detect Borsh vs JSON on every received message. New and old binary versions can coexist during upgrade transitions.
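
The per-message format detection reduces to a magic-byte check. A sketch, assuming the `VX` prefix described above; real Borsh decoding needs the message schema, so the Borsh branch here just returns the stripped payload.

```python
import json

VX_MAGIC = b"\x56\x58"  # "VX"

def decode_wire_message(raw: bytes):
    """Auto-detect Borsh vs JSON on a received gossip message.

    Borsh messages carry the VX magic prefix; anything else falls back
    to the legacy JSON path.
    """
    if raw[:2] == VX_MAGIC:
        return ("borsh", raw[2:])      # binary path: strip magic, decode Borsh payload
    return ("json", json.loads(raw))   # legacy path: plain JSON
```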

Validator Economics

| Parameter | Value |
| --- | --- |
| Minimum stake | 1,000 VXS |
| Unbonding period | 21 days |
| Max active validators | 100 (configurable) |
| Epoch duration | 300 seconds |
| Commission range | 0-50% |

Block Rewards

~65M VXS emitted over 10 years (~4% of total supply):

| Period | Annual Total | Foundation (15%) | Validator Pool (85%) | Per-Block @2s (1.0x density) |
| --- | --- | --- | --- | --- |
| Year 1-2 | 13M VXS/yr | 1.95M | 11.05M | ~0.700 VXS |
| Year 3-5 | 6.5M VXS/yr | 0.975M | 5.525M | ~0.350 VXS |
| Year 6-10 | 3.9M VXS/yr | 0.585M | 3.315M | ~0.210 VXS |
| Year 11+ | 0 | 0 | 0 | 0 |

Per-block reward is time-weighted: reward = annual x block_interval / seconds_per_year. Transaction density multiplier scales the validator portion: 0.5x (empty) to 2.0x (1000+ txs).
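
The reward formula can be checked numerically. A sketch under one assumption: the density multiplier interpolates linearly from 0.5x (empty block) to 2.0x at 1000+ transactions, since the doc gives only the endpoints. With the Year 1-2 pool (11.05M VXS/yr) and a 2s interval, the base works out to ~0.700 VXS, matching the table.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def per_block_reward(annual_pool_vxs: float, block_interval_s: float,
                     tx_count: int) -> float:
    """Time-weighted validator reward for one block.

    Density assumption: linear from 0.5x (empty) to 2.0x at 1000+ txs;
    the doc specifies only those endpoints.
    """
    base = annual_pool_vxs * block_interval_s / SECONDS_PER_YEAR
    density = min(0.5 + 1.5 * min(tx_count, 1000) / 1000, 2.0)
    return base * density
```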

Transaction Fees

  • 80% of fees go to the block proposer, 20% to Foundation Treasury (no burn -- fixed supply)
  • Gas price: 10 nanoVXS per gas unit (configurable via --gas-price)
  • Typical transfer cost: 0.00021 VXS ($0.0002 at $1/VXS)
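
The quoted transfer cost follows directly from the gas price. A worked sketch, assuming 1 VXS = 10^9 nanoVXS and a 21,000-gas transfer (the gas figure is inferred from 0.00021 VXS at 10 nanoVXS/gas, not stated in the doc):

```python
NANO_PER_VXS = 1_000_000_000  # assumption: 1 VXS = 1e9 nanoVXS

def transfer_fee_vxs(gas_used: int = 21_000, gas_price_nano: int = 10) -> float:
    """Total fee in VXS (21,000 gas per transfer is an inferred assumption)."""
    return gas_used * gas_price_nano / NANO_PER_VXS

def split_fee(total_fee: float) -> tuple[float, float]:
    """80% to the block proposer, 20% to the Foundation Treasury (no burn)."""
    return total_fee * 0.8, total_fee * 0.2
```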

Leader Skip Rotation

When a validator misses 3 consecutive leader slots, it is automatically skipped in leader rotation until it produces a block again. This prevents offline validators from degrading block time -- instead of waiting 10+ seconds for failover on every missed slot, the remaining validators continue producing at full speed.

  • Trigger: 3 consecutive missed blocks (consecutive_leader_misses >= 3)
  • Recovery: Produce 2 consecutive blocks to rejoin rotation
  • Deterministic: All nodes compute skip status identically from the same block data
  • Failsafe: If all non-jailed validators are skipped, the full non-jailed set is used (chain never stops)
  • Upgrade grace: Skip is not applied during the 100-block upgrade grace period

With 5 validators and 1 offline, block time stays at ~2.5s instead of degrading to ~4-5s.
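
The skip rules above form a small state machine per validator; a sketch of that bookkeeping, with field names chosen for illustration (only `consecutive_leader_misses` appears in the doc):

```python
class SkipTracker:
    """Per-validator skip status (in-memory only, per the doc).

    Skip after 3 consecutive misses; rejoin after 2 consecutive
    produced blocks.
    """
    def __init__(self):
        self.misses = 0      # consecutive_leader_misses
        self.produced = 0    # consecutive blocks while skipped
        self.skipped = False

    def record_miss(self):
        self.produced = 0
        self.misses += 1
        if self.misses >= 3:
            self.skipped = True

    def record_block(self):
        self.misses = 0
        if self.skipped:
            self.produced += 1
            if self.produced >= 2:
                self.skipped = False
                self.produced = 0
```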

Linear Leader Slots

As of block 1,899,000, Vexidus uses linear leader slots where each leader produces 5 consecutive blocks before rotation. This is the same approach used by Solana and other high-performance blockchains.

Benefits over per-block rotation:

  • Reduced handoff overhead: Leader changes once every ~13 seconds instead of every ~2.5 seconds
  • Stable routing: Dragonfly Stream's direct-to-leader routing target is stable for the entire slot
  • Geographic efficiency: When EU validators lead consecutive slots, block propagation is sub-50ms. Intercontinental latency is paid once per slot transition, not once per block.
  • Better bundle batching: Transactions have a 13-second window to arrive at the leader instead of 2.5 seconds

Failover within slots: If the leader goes offline mid-slot, the existing position-based failover kicks in after 5 seconds. The backup validator produces the remaining blocks in the slot. The next slot starts automatically with a different leader.

Scaling: With 10 validators at 5 blocks/slot, each validator leads for ~13 seconds every ~130 seconds. With 50 validators, each leads for ~13 seconds every ~650 seconds. Geographic clustering (multiple EU validators in consecutive slots) provides sustained low-latency windows.

No Slashing -- Jailing Instead

Vexidus does not slash validator stake. Misbehaving validators are jailed (removed from rotation after consecutive missed blocks). This protects delegators from losing funds while still incentivizing uptime.

  • Double-sign detection: Validators producing conflicting blocks at the same height are jailed with escalating penalties (1h → 24h → 7d → 30d)
  • Unjail: Submit vex_unjail after jail duration expires
  • Upgrade grace period: 100 blocks (~200s) after upgrade halt -- prevents jailing validators for upgrading
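
The escalating double-sign penalty is a simple lookup that caps at the last tier:

```python
JAIL_DURATIONS_S = [3600, 86_400, 7 * 86_400, 30 * 86_400]  # 1h, 24h, 7d, 30d

def jail_duration_s(prior_offenses: int) -> int:
    """Escalating jail time for double-signing; repeats cap at 30 days.

    `prior_offenses` counts earlier double-sign convictions (0 for the first).
    """
    return JAIL_DURATIONS_S[min(prior_offenses, len(JAIL_DURATIONS_S) - 1)]
```

Whether offenses beyond the fourth stay at 30 days is an assumption; the doc lists only the four tiers.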

Bad actors lose opportunity cost (missed rewards) and reputation, not principal.

Reputation Scoring

Validator reputation uses a 7-factor, 100-point scoring system that affects leader selection weight and future reward multipliers.

| Factor | Weight | Measures |
| --- | --- | --- |
| Uptime | 25 pts | Block production consistency |
| Commission fairness | 15 pts | Reasonable rates, stability |
| Self-stake | 15 pts | Skin in the game |
| Governance participation | 15 pts | Upgrade voting activity |
| Tenure | 10 pts | Time active on network |
| Delegator trust | 10 pts | Stake attracted from others |
| Reliability | 10 pts | Historical jail count, missed blocks |

Reputation Grades

| Grade | Score Range |
| --- | --- |
| A+ | 95-100 |
| A | 90-94 |
| B | 75-89 |
| C | 55-74 |
| D | 35-54 |
| F | 0-34 |

New validators start at score 50 (neutral, "unproven"). A validator doing everything right reaches 80+ in ~7 days and 90+ in ~30 days.
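
The grade bands map to a score with a straightforward threshold walk:

```python
def reputation_grade(score: int) -> str:
    """Map a 0-100 reputation score to its letter grade (per the table above)."""
    for threshold, grade in [(95, "A+"), (90, "A"), (75, "B"),
                             (55, "C"), (35, "D")]:
        if score >= threshold:
            return grade
    return "F"
```

Note that a new validator's neutral score of 50 lands in the D band (35-54) until it builds history.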

Vexcel -- Adaptive Attestation DAG

Vexcel (Vexidus Accelerated Consensus Layer) is an adaptive consensus extension that makes HyperSync latency-tolerant across global validator sets.

How it works:

  • When the leader produces blocks on time, the chain is purely linear -- zero overhead, same as a traditional BFT chain.
  • When the leader is slow (network latency, geographic distance), non-leader validators produce lightweight attestation blocks -- signed, zero-transaction blocks that prove liveness and state agreement.
  • Attestation blocks are stored separately and never enter the canonical chain. The next leader references attestation hashes in its block header, forming a DAG of liveness proofs.
  • If the leader fails entirely, the first attestation is promoted to canonical after a grace period -- deterministic, same result on every node.
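
The deterministic promotion in the last step could look like the sketch below. "First" is resolved here by (timestamp, validator) ordering, which is an assumption; the doc only guarantees that every node reaches the same result.

```python
def promote_attestation(attestations: list[dict]):
    """Pick the attestation block to promote when the leader fails entirely.

    Ordering by (timestamp, validator) is an illustrative tie-break; any
    total order over fields every node sees identically would work.
    """
    if not attestations:
        return None
    return min(attestations, key=lambda a: (a["timestamp"], a["validator"]))
```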

What this solves:

  • Geographic fairness: High-latency validators (Singapore, Mumbai) produce blocks at the same rate as low-latency validators (Europe). Before Vexcel, the closest validator to the seed produced 68% of blocks. After Vexcel, all 5 validators produce within 16-24% (ideal: 20%).
  • No rollbacks: Traditional failover systems produce temporary blocks that must be rolled back when the leader's block arrives. Vexcel eliminates rollbacks entirely -- attestation blocks are never finalized, so there is nothing to undo.
  • Unified mechanism: Attestation blocks replace 4 separate mechanisms (failover, voting, heartbeat, liveness proof) with one simple concept.

RPC endpoints:

  • vex_getAttestations(height) -- Returns attestation blocks at a given height
  • vex_getBlockDAG(height) -- Returns the canonical block plus all attestation blocks, showing the full DAG structure

Consensus Security

  • Byzantine Fault Tolerance: Tolerates up to 1/3 malicious validators
  • Block Proposer Signatures: Every block is Ed25519-signed by its proposer. Invalid signatures result in instant peer banning.
  • PeerGuard: Per-peer scoring system with replay detection (200-hash ring buffer), rate limiting, and validator allowlisting. Score drops to 0 = permanent ban.
  • Validator Set from State: No self-adding. Validators loaded from on-chain staking state only.
  • Attestation Validation: Attestation blocks are validated for correct signatures, uniqueness (one per validator per height), and zero transaction content. Duplicate attestations are treated as double-sign violations.

Dragonfly Stream Integration

Dragonfly Stream is a complementary transaction delivery system that eliminates the gossip mempool. While Vexcel handles block-level consensus (attestation DAG, leader promotion), Dragonfly Stream handles transaction-level delivery (sealed forwarding, PQ proofs, MEV elimination).

Together they form a complete pipeline: transactions flow directly to the leader via Dragonfly, and if the leader fails, Vexcel promotes the next validator who already holds those transactions via the pipeline backup. Zero gap, zero loss.

See Dragonfly Stream for full details.

P2P Transport

  • QUIC: TLS 1.3 + multiplexing over UDP. Replaced TCP+Noise+Yamux.
  • GossipSub: "blocks" and "votes" topics only (no transaction gossip). Explicit peers for small networks, flood_publish enabled.
  • Authenticated Validator Identity: On connection, validators prove Ed25519 signing key ownership via domain-separated BLAKE3 signature over PeerId. Mutual authentication in one round-trip. Enables allowlisting and direct-to-leader routing.
  • Dragonfly Stream: Direct-to-leader sealed forwarding via DragonflyForward. Leader PeerId resolved from authenticated identity map at submit time. 1 bootnode backup for resilience. Falls back to all-bootnode routing when leader unknown.
  • Bulk Sync: Custom SyncCodec with 64MB response limit, 60s timeout, batch size 10.
  • Bundle TTL: 120s default. Expired bundles rejected on receive, skipped on drain.
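
The TTL check applied on receive and on drain amounts to a single comparison against the bundle's creation time:

```python
import time

BUNDLE_TTL_S = 120  # default per the doc

def bundle_expired(created_at, now=None, ttl_s=BUNDLE_TTL_S):
    """True when a bundle is past its TTL.

    Expired bundles are rejected on receive and skipped on drain;
    timestamps are Unix seconds (an assumption).
    """
    if now is None:
        now = time.time()
    return now - created_at > ttl_s
```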

Block Commit Pipeline

begin_block()
|
execute bundles + rewards
|
commit_block() (WriteBatch -- atomic)
|
recompute_state_root()
|
flush()

All block writes (accounts, validators, blocks, explorer indexes, intent status) are buffered during block scope and committed atomically via RocksDB WriteBatch. Crash-safe -- no partial state.
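
The buffer-then-commit pattern can be modeled with a minimal stand-in. The real node uses RocksDB's `WriteBatch`; this dict-backed sketch only illustrates the all-or-nothing visibility that makes the pipeline crash-safe.

```python
class WriteBatch:
    """Minimal stand-in for an atomic batch: buffer writes, apply all at once.

    Nothing written via put() is visible in the backing store until
    commit(), mirroring the block-scope buffering described above.
    """
    def __init__(self, store: dict):
        self.store = store
        self.pending: dict = {}

    def put(self, key: str, value):
        self.pending[key] = value        # buffered during block scope

    def commit(self):
        self.store.update(self.pending)  # single atomic apply
        self.pending.clear()
```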

CLI Flags

| Flag | Default | Description |
| --- | --- | --- |
| --validator-key <path> | none | Ed25519 signing key for block/vote signing |
| --block-time <secs> | 12 | Block production interval |
| --max-txs-per-block <N> | 10000 | Maximum transactions per block |
| --min-validators <N> | 1 | Minimum validators required |
| --gas-price <N> | 10 | Base gas price in nanoVXS/gas |
| --reject-unsigned-bundles | false | Reject unsigned transaction bundles |
| --enforce-validator-allowlist | false | Only allow authenticated validator peers to gossip blocks (bootnodes exempt) |
| --rpc-rate-limit <N> | 100 | Per-IP RPC rate limit in requests/second (0 = unlimited) |
| --no-leader-check | false | Disable leader rotation (solo testnet only) |
| --dragonfly-stream | deprecated | Dragonfly Stream is always active (Phase 3). Flag retained as hidden no-op. |