Dragonfly Stream -- Mempoolless Transaction Pipeline

Dragonfly Stream replaces the traditional gossip-broadcast mempool with a sealed, direct-delivery pipeline. Transactions flow from client to block leader without pooling -- eliminating the primary attack surface for MEV (Maximal Extractable Value).

Why No Mempool?

Every major MEV attack requires observing pending transactions before they are included in a block. On traditional chains, the mempool is a public broadcast channel where bots watch for opportunities to front-run, sandwich, or reorder transactions.

Dragonfly Stream removes the pool entirely. Transactions are forwarded directly to the current block leader via authenticated P2P channels, sealed with post-quantum cryptographic signatures that prove temporal ordering.

Traditional chain:
User -> Broadcast to ALL peers (mempool) -> Bots observe -> Leader includes
Risk: Front-running, sandwich attacks, censorship

Dragonfly Stream:
User -> RPC node -> Sealed forward directly to Leader + 1 backup -> Block
Risk: None of the above -- no public pool to observe

How It Works

Transaction Flow

  1. User submits bundle to any RPC node (signed Ed25519)
  2. RPC validates signature and format
  3. Seal creation -- the receiving validator creates a post-quantum seal over the transaction:
    • Commitment: blake3(tx_hash + block_height + validator_pubkey)
    • Signature: Dilithium3 (NIST Level 3 post-quantum) or Ed25519 (classical fallback)
  4. Leader resolution -- the RPC node resolves the current leader's PeerId from the authenticated validator identity map
  5. Direct forwarding -- sealed transaction sent directly to the elected leader + 1 bootnode backup via dedicated P2P command (not gossip). Falls back to all-bootnode routing if leader PeerId is unknown.
  6. Seal verification -- the leader verifies the seal on each received transaction
  7. Block inclusion -- the leader includes verified transactions in the next block
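The seal commitment in step 3 can be sketched as follows. This is a hypothetical illustration: it uses `hashlib.blake2b` as a stand-in for blake3 (which is not in the Python standard library), and the real seal would additionally carry a Dilithium3 or Ed25519 signature over this commitment.

```python
import hashlib

def make_commitment(tx_hash: bytes, block_height: int, validator_pubkey: bytes) -> bytes:
    # Commitment binds the transaction to a height and a validator identity:
    # blake3(tx_hash + block_height + validator_pubkey).
    # blake2b is used here only as a stand-in for blake3.
    h = hashlib.blake2b(digest_size=32)
    h.update(tx_hash)
    h.update(block_height.to_bytes(8, "little"))  # assumed encoding
    h.update(validator_pubkey)
    return h.digest()

commitment = make_commitment(b"\x01" * 32, 1_899_000, b"\x02" * 32)
```

Because the commitment includes the block height, the same transaction sealed at a different height produces a different commitment, which is what gives the seal its temporal-ordering property.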

PQ Seals (Post-Quantum Cryptographic Seals)

Every forwarded transaction carries a cryptographic seal that proves:

  • Who saw the transaction (validator identity)
  • When they saw it (block height binding)
  • Authenticity (quantum-resistant Dilithium3 signature)

Validators auto-generate Dilithium3 keypairs on startup. No manual key management required.

Seal types:

  • PQ (Dilithium3) -- Primary. NIST-approved lattice-based post-quantum signatures. ~3.3 KB per seal.
  • Classical (Ed25519) -- Fallback for validators without PQ keys. 64-byte signature.
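The PQ-primary / classical-fallback rule can be modeled with a small sketch (names and structure are illustrative, not taken from the Vexidus codebase):

```python
from dataclasses import dataclass
from typing import Optional

DILITHIUM3_SEAL_BYTES = 3300  # approximate, per the ~3.3 KB figure above
ED25519_SIG_BYTES = 64

@dataclass
class SealKeys:
    dilithium3: Optional[bytes]  # PQ signing key, auto-generated on startup
    ed25519: bytes               # classical validator key, always present

def seal_type(keys: SealKeys) -> str:
    # PQ seals are primary; classical Ed25519 is used only when a validator
    # has no Dilithium3 key (permitted until Phase 4 enforcement).
    return "pq" if keys.dilithium3 is not None else "classical"
```

The size gap (roughly 3.3 KB vs 64 bytes) is the main wire-level cost of the quantum-resistant default, which is one reason the Borsh binary encoding described later matters for forwarded bundles.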

Authenticated Validator Identity

On peer connection, validators cryptographically prove their identity by signing a domain-separated message that binds their Ed25519 signing key to their libp2p PeerId:

message = blake3("vexidus-validator-announce-v1" + peer_id)
signature = ed25519_sign(validator_key, message)

Both sides authenticate in one round-trip. This creates a live mapping from validator pubkey to PeerId, enabling direct-to-leader transaction routing. The mapping is self-healing -- if a validator's PeerId regenerates (e.g., data wipe), the next connection re-announces automatically.
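A minimal sketch of the announce message and the self-healing identity map, under the same blake2b-for-blake3 stand-in as before (signature creation and verification are elided; class and method names are assumptions):

```python
import hashlib

DOMAIN = b"vexidus-validator-announce-v1"

def announce_message(peer_id: bytes) -> bytes:
    # Domain-separated message binding the Ed25519 signing key to the PeerId;
    # blake2b stands in for blake3 here.
    return hashlib.blake2b(DOMAIN + peer_id, digest_size=32).digest()

class IdentityMap:
    """Live mapping: validator pubkey -> libp2p PeerId."""

    def __init__(self):
        self._map = {}

    def on_announce(self, validator_pubkey: bytes, peer_id: bytes) -> None:
        # Self-healing: a re-announce after a PeerId regeneration simply
        # overwrites the stale entry on the next connection.
        self._map[validator_pubkey] = peer_id

    def resolve(self, validator_pubkey: bytes):
        return self._map.get(validator_pubkey)

ids = IdentityMap()
ids.on_announce(b"val-1", b"peer-old")
ids.on_announce(b"val-1", b"peer-new")  # PeerId regenerated, e.g. after a data wipe
```

Keying the map by the long-lived validator pubkey, rather than the PeerId, is what makes the overwrite-on-reconnect behavior sufficient for self-healing.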

Direct-to-Leader Routing

When a transaction is submitted, the RPC layer resolves the elected leader's PeerId from the authenticated identity map and sends the sealed transaction directly:

Phase 3 (before):
Client -> Validator_A (RPC) -> Seed_1, Seed_2, Seed_3 -> All Validators
Latency: ~3 hops, 5x bandwidth amplification

Phase 3.1 (current, with linear leader slots):
Client -> Validator_A (RPC) -> Leader (direct, stable for 5 blocks) + 1 Bootnode (backup)
Latency: ~1 hop, 2x bandwidth, routing target stable for ~13 seconds

Why 1 bootnode backup? The leader may rotate at slot boundaries. The backup bootnode relays to all connected validators, ensuring the transaction reaches the new leader if rotation occurs.

Linear leader slots: With 5-block leader slots (active since block 1,899,000), the routing target is stable for ~13 seconds instead of changing every ~2.5 seconds. This dramatically reduces stale routing and bundle loss on intercontinental links.
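The slot arithmetic behind this stability can be sketched as follows, assuming round-robin leader selection over the validator set (the actual election logic is not shown in this document):

```python
SLOT_BLOCKS = 5  # linear leader slots: 5 blocks per leader

def leader_index(height: int, num_validators: int) -> int:
    # The leader changes only at slot boundaries, so the routing target is
    # constant for SLOT_BLOCKS consecutive blocks: at ~2.5 s per block,
    # that is roughly 13 seconds of stability.
    return (height // SLOT_BLOCKS) % num_validators
```

Within one slot every height maps to the same leader, so an RPC node's cached routing target stays valid across several consecutive blocks instead of going stale almost immediately.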

Graceful fallback: If the leader's PeerId is unknown (not yet authenticated, disconnected, or validator just started), the system falls back to Phase 3 all-bootnode routing automatically.
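The routing decision reduces to a small function; this is an illustrative model only (function and variable names are assumed, not taken from the codebase):

```python
def routing_targets(leader_peer_id, bootnodes):
    """Pick the P2P targets for a sealed transaction bundle."""
    if leader_peer_id is not None:
        # Phase 3.1: direct to the elected leader + 1 bootnode backup
        # (2 sends total, O(1) bandwidth).
        return [leader_peer_id, bootnodes[0]]
    # Graceful fallback to Phase 3 all-bootnode routing when the leader's
    # PeerId is unknown.
    return list(bootnodes)

boots = ["boot-1", "boot-2", "boot-3"]
```

The fallback branch trades bandwidth for delivery certainty: every bootnode relays, so the transaction still reaches the leader even before the identity map has been populated.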

LeaderBuffer Re-Forward

All validators buffer incoming transactions in their local LeaderBuffer, but only the elected leader drains it for block inclusion. To prevent silent bundle drops on non-leader nodes:

  • Non-leader detection: Every 2 seconds, non-leaders check if their buffer has bundles sitting for 8+ seconds
  • Re-forward: Stale bundles are drained and forwarded to the current leader via DragonflyForward
  • Deduplication: The leader's LeaderBuffer deduplicates via hash tracking -- re-forwarded bundles that already exist are silently ignored
  • Anti-loop: After draining, the staleness timer resets. No re-trigger until new bundles arrive and age

This ensures transactions submitted to any validator eventually reach the current leader, even across leader slot transitions.
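The buffer rules above can be modeled in a few lines. This is a deterministic sketch (time is passed in explicitly rather than read from a clock), and the class shape is an assumption rather than the real LeaderBuffer:

```python
STALE_AFTER_S = 8.0  # bundles sitting 8+ seconds are considered stale

class LeaderBuffer:
    def __init__(self):
        self._bundles = {}  # bundle hash -> arrival time

    def insert(self, bundle_hash: bytes, now: float) -> bool:
        # Dedup via hash tracking: a re-forwarded bundle that already
        # exists is silently ignored.
        if bundle_hash in self._bundles:
            return False
        self._bundles[bundle_hash] = now
        return True

    def drain_stale(self, now: float):
        # Non-leader path: drain stale bundles for re-forwarding to the
        # current leader. Draining removes them, so the staleness check
        # cannot re-trigger until new bundles arrive and age.
        stale = [h for h, t in self._bundles.items() if now - t >= STALE_AFTER_S]
        for h in stale:
            del self._bundles[h]
        return stale
```

The anti-loop property falls out of the data structure: once drained, a bundle is gone from the non-leader's buffer, and the leader's dedup check absorbs any duplicate arrivals.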

Pipeline Failover

If the leader fails entirely:

  1. Bootnode backup already relayed the transaction to all validators
  2. Vexcel attestation promotion seamlessly transitions to the new leader
  3. New leader already holds the transaction in its local LeaderBuffer
  4. Non-leader re-forward ensures stale bundles reach the new leader within 10 seconds
  5. No re-submission needed

This builds on the existing Vexcel adaptive attestation DAG -- Dragonfly Stream and Vexcel form a self-reinforcing system.

MEV Protection

Dragonfly Stream is one component of a multi-layer MEV elimination strategy:

Layer -- Mechanism -- What it prevents:

  • No mempool -- Dragonfly direct delivery -- front-running (no pool to observe)
  • Deterministic ordering -- blake3(tx_hash + block_height) sort -- proposer reordering (content-determined order)
  • Atomic execution -- IntentVM composite intents -- sandwich attacks (no inter-step window)
  • PQ seals -- Dilithium3 temporal proofs -- quantum tx forgery
  • Attestation commitments -- validators commit received tx hashes -- censorship (provably detectable if leader drops attested txs)
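The deterministic-ordering layer can be sketched concretely: the block's transaction order is fixed by content and height alone, so the proposer has no ordering freedom. As elsewhere, `hashlib.blake2b` stands in for blake3, and the key derivation is an illustrative assumption:

```python
import hashlib

def order_key(tx_hash: bytes, block_height: int) -> bytes:
    # blake3(tx_hash + block_height) determines each transaction's position.
    return hashlib.blake2b(
        tx_hash + block_height.to_bytes(8, "little"), digest_size=32
    ).digest()

def order_txs(tx_hashes, block_height: int):
    # Sorting by the derived key yields the same permutation regardless of
    # the order in which the proposer received the transactions.
    return sorted(tx_hashes, key=lambda h: order_key(h, block_height))
```

Any two honest nodes holding the same transaction set for the same height compute an identical ordering, which is what removes proposer reordering as an MEV vector.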

At full deployment, Vexidus has zero known MEV vectors: no mempool to observe, the proposer cannot control ordering, intents execute atomically, and censorship is cryptographically detectable. This is achieved without BLS/DKG threshold encryption -- a simpler, faster, and stronger approach.

Wire Protocol -- Borsh Binary Encoding

All P2P communication (transaction forwarding, block sync, attestation gossip) uses a dual-format encoding system:

  • Borsh binary (active when --borsh-wire is enabled): 5-10x smaller messages on the wire and ~10x faster serialization/deserialization compared to JSON.
  • Automatic format detection: Messages prefixed with VX magic bytes (0x56, 0x58) are decoded as Borsh; all others as JSON. This enables safe rolling upgrades -- old and new binary versions coexist seamlessly during transitions.
  • ForwardBundle impact: Under Borsh encoding, sealed transaction bundles forwarded to the leader are significantly smaller, reducing latency on intercontinental P2P links.
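The magic-byte dispatch can be sketched as follows (the Borsh and JSON decoding branches are stubbed; only the format-detection rule from the text is modeled):

```python
VX_MAGIC = b"\x56\x58"  # ASCII "VX"

def detect_format(frame: bytes) -> str:
    # Messages prefixed with the VX magic bytes are decoded as Borsh;
    # everything else is treated as JSON. Old and new nodes can therefore
    # coexist during a rolling upgrade: each side decodes what it receives.
    return "borsh" if frame.startswith(VX_MAGIC) else "json"
```

The scheme works because a JSON message can never begin with the bytes 0x56 0x58 followed by a valid Borsh payload in this protocol, so the two-byte prefix is an unambiguous discriminator.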

Phased Rollout

Dragonfly Stream was deployed incrementally via VexVisor on-chain governance -- no genesis restart required at any phase.

Phase -- Status -- Description:

  • Phase 1 -- Live -- Direct forwarding via DragonflyForward P2P command. Deployed via VexVisor #8 (block 91,323).
  • Phase 2 -- Live -- PQ sealing: auto-generated Dilithium3 keys; seal creation and verification on every forwarded transaction. Deployed via VexVisor #8.
  • Phase 3 -- Live -- Mempool eliminated: "transactions" GossipSub topic removed; mempool renamed to LeaderBuffer; PQ sealing always-on; seed relay via DragonflyRelay. Deployed via VexVisor #14 (block 200,551).
  • Phase 3.1 -- Live -- Authenticated validator P2P identity + direct-to-leader routing: transactions sent to the elected leader (1 hop) + 1 bootnode backup. Deployed via VexVisor #15 (block 217,748).
  • Phase 3.2 -- Live -- Linear leader slots (5 blocks/leader) + LeaderBuffer re-forward: routing target stable for ~13 s; non-leaders drain stale bundles after 8 s. Deployed at block ~1,898,060 (Apr 3, 2026).
  • Phase 4 -- Planned -- PQ enforcement: classical-only seals rejected; validator registration requires a Dilithium3 pubkey.

Key Properties

  • O(1) bandwidth -- each transaction forwarded to 2 targets (leader + 1 bootnode backup), not O(N) gossip flood
  • Pipeline failover -- bootnode backup relays to all validators; new leader already holds transactions
  • Censorship accountability -- attestation blocks commit to received transaction hashes; leader censorship is provably detectable
  • Adaptive timing synergy -- Dragonfly Stream + Vexcel + pressure-aware micro-blocks form a self-regulating throughput engine
  • No hard restart -- all parameters (buffer sizes, PQ requirements, max transactions per block) adjustable via governance or CLI flags
  • Quantum-safe from day one -- Dilithium3 seals protect transaction ordering against future quantum computers

Dragonfly Stream + Vexcel

Dragonfly Stream and Vexcel are complementary systems:

  • Vexcel handles block-level consensus (attestation DAG, leader promotion, geographic fairness)
  • Dragonfly Stream handles transaction-level delivery (sealed forwarding, PQ proofs, MEV elimination)

Together they form a complete pipeline: transactions flow directly to the leader via Dragonfly, and if the leader fails, Vexcel promotes the next validator who already holds those transactions via the pipeline backup. Zero gap, zero loss.

Configuration

Dragonfly Stream is always active as of Phase 3. PQ sealing activates automatically when a validator signing key is present (ephemeral Dilithium3 keys are auto-generated on startup). No flags required.

The --dragonfly-stream flag is deprecated (hidden no-op, retained for backward compatibility).

RPC Endpoints

Method -- Description:

  • vex_getDragonflyStats -- Returns Dragonfly Stream statistics (forwarded count, received count, seal type, enabled status)