Performance and Parameters
When you run a Ligetron program, you'll see various metrics and parameters in the output. This guide explains what they mean, how they affect performance, and how to optimize your applications.
Understanding Constraints
As explained in The ZK-VM, your program execution is converted into an arithmetic circuit: a collection of constraints. The prover proves that all these constraints hold true, and the verifier checks a sample of them. Understanding the different types of constraints helps you interpret performance metrics and optimize your programs.
Three Types of Constraints
Ligetron generates three types of constraints from your program's execution. These constraints are produced by the underlying WebAssembly operations that your compiled code generates:
- Code constraints: Verify that a witness value equals a specific constant (0 or 1). Generated by `assert_one()` and `assert_zero()` API calls.

```cpp
assert_one(condition); // Code constraint: condition = 1
assert_zero(value);    // Code constraint: value = 0
```

- Linear constraints: Prove addition/subtraction relationships. Generated automatically by WASM `i64.add` and `i64.sub` instructions during program execution.

```cpp
int sum = a + b;  // Generates linear constraint: sum = a + b
int diff = x - y; // Generates linear constraint: diff = x - y
```

- Quadratic constraints: Prove multiplication relationships. Generated automatically by WASM `i64.mul` instructions during program execution.

```cpp
int product = a * b; // Generates quadratic constraint: product = a × b
```
Important: Constraints are generated by the compiled WASM operations, not directly by your C++ source. For example, assert_one(distance < 5) generates multiple constraints: the comparison operation generates constraints for the subtraction and comparison logic, then assert_one() adds a code constraint verifying the result equals 1.
To learn how to use assert_one(), assert_zero(), and other API functions in your programs, see Generating Constraints with the API.
Why Constraints Matter
The number and type of constraints directly affect:
- Proof generation time: More constraints = longer proof generation
- Proof size: Scales with the witness size (number of intermediate values)
- Verification time: Grows with proof size but remains fast (checking ~40 sample points)
Example: A simple age verification check might generate ~50-100 constraints, while a complex financial transaction with multiple validations might generate thousands.
Interpreting Prover Output
When you run the prover, you'll see output like this:
```
ligero-prover v1.1.0+main.c6817f2
packing: 16192, padding: 16384, encoding: 65536
args: {"str":"Ligetron is awesome"}
args: {"str":"Ligero is awesome"}
args: {"i64":19}
args: {"i64":17}
=== Start ===
Start Stage 1
Num Linear constraints: 22253
Num quadratic constraints: 224508
Root of Merkle Tree: d8413070f4765e70...
----------------------------------------
Start Stage 2
Num Linear constraints: 22253
Num quadratic constraints: 224508
----------------------------------------
Start Stage 3
Prover root: d8413070f4765e70...
Validation of encoding: true
Validation of linear constraints: true
Validation of quadratic constraints: true
------------------------------------------
Final prove result: true
========== Timing Info ==========
Instantiate: 81ms
stage1: 262ms
stage2: 448ms
stage3: 502ms
```
Of these, packing is the only parameter you can tune for performance. The padding and encoding values are debug information computed internally from packing and may be removed from the output in future versions.
How Packing Affects Performance
The packing parameter controls the matrix width (number of witness values per row) and must be a power of 2. This is the primary performance tuning knob in Ligetron.
For a given program (fixed circuit size), different packing values create different matrix shapes: larger packing produces wider, shorter matrices (fewer rows), while smaller packing produces narrower, taller matrices (more rows). The total witness size (packing × rows) remains constant, but the matrix shape significantly impacts performance across the three prover stages.
Different operations favor different shapes: encoding benefits from fewer rows, while some operations have overhead that scales with row width. This creates a trade-off in which some packing values perform substantially better than others. Below is timing data from the edit distance example showing this effect:
- Hardware: MacBook Pro (M3 Pro, 12 CPU cores, 18 GPU cores, 18 GB RAM)
- OS: macOS 15.0 Sequoia
- Build: Native ARM64 release build
- Version: ligero-prover v1.1.0
- Methodology: 5 runs per packing size, averaged
| Packing | Stage 1 | Stage 2 | Stage 3 | Total | Proof Size |
|---|---|---|---|---|---|
| 512 | 418ms | 952ms | 803ms | 2173ms | 12.5 MiB |
| 1024 | 186ms | 415ms | 332ms | 933ms | 4.83 MiB |
| 2048 | 100ms | 210ms | 183ms | 493ms | 2.76 MiB |
| 4096 | 72ms | 135ms | 144ms | 351ms | 2.55 MiB |
| 8192 | 70ms | 112ms | 171ms | 353ms | 3.60 MiB |
| 16384 | 80ms | 103ms | 269ms | 452ms | 6.49 MiB |
| 32768 | 108ms | 105ms | 493ms | 706ms | 12.5 MiB |
Key insights:
- Optimal range: 4096-8192 offers the best balance for this program in both speed and proof size
- Stage 1 (Commit): Time decreases dramatically from 512 to 4096 as fewer rows need Reed-Solomon encoding. However, practical factors like memory access patterns and GPU efficiency create additional overhead at very large row widths, causing times to rise again beyond 4096.
- Stage 2 (Constrain): Sharp initial drop from 512 to 1024 as GPU parallelization becomes efficient, then stable across larger sizes since constraint generation depends on program complexity, not matrix shape.
- Stage 3 (Sample): Initially decreases like Stage 1, but shows the sharpest rise beyond 8192 (2.9× from minimum to maximum). Wider matrices require substantially more expensive Merkle proof generation at sampled indices, as each proof must handle larger row structures. This makes Stage 3 the dominant performance bottleneck at very large packing sizes.
- Proof size: Varies significantly from 2.55 MiB (4096) to 12.5 MiB (512 and 32768) - a 5× difference. Like timing, proof size follows a U-shaped curve, with packing 4096 producing the smallest proofs.
The witness size (before encoding) equals packing × number of rows. After Reed-Solomon encoding (4× expansion), the encoded matrix is larger. Note that padding and encoding values shown in debug output are computed internally from packing and are not user-configurable parameters.
For details on what each stage does, see The ZK-VM: Prover and Verifier.
Choosing Packing Size
When starting with a new circuit, you'll need to choose an initial packing size. While the optimal value depends on your specific circuit and requires experimentation, here's a practical rule of thumb for estimating proof size:
Rule of Thumb: Choose packing ≈ √(circuit size), where circuit size is approximately the number of witness values after circuit computation. Round to a power of 2, since packing must be a power of 2.
The intuition is that proof size is minimized when the final encoded matrix (after padding and Reed-Solomon encoding) is approximately square, balancing the size of row-based and column-based messages in the proof. However, this is just a starting point—the actual optimal value will depend on:
- The specific structure of your circuit (ratio of multiplication gates to total constraints)
- Whether you're optimizing for proof size, prover time, or verifier time
- The number of sample points in the protocol (which affects column-based message costs)
Recommendation: Use the rule of thumb to identify a range of candidate values (e.g., the power of 2 above and below the calculated value), then benchmark your specific circuit at these packing sizes to find the actual optimum.
For a deeper explanation of why approximately square matrices minimize proof size, including the relationship between row-sized and column-sized messages in the protocol, see The Ligero Protocol.
Hardware Acceleration
Ligetron uses WebGPU to accelerate proof generation on GPUs, making billion-gate proofs practical on commodity hardware.
GPU Operations:
- Number Theoretic Transform (NTT): Fast polynomial operations on BN254 scalar field
- Reed-Solomon encoding: 4× error-correcting code expansion
- Constraint validation: Parallel checking of constraints
Platform Support:
- Native builds: Direct GPU access via Dawn (Metal/Vulkan/DirectX 12)
- Web builds: GPU acceleration in browsers via WebGPU API
Any modern device with GPU support can run Ligetron, including laptops with integrated GPUs.
Next Steps
- Try the platform: Experiment on platform.ligetron.com to see real-time metrics
- Build your first app: Follow the Installation Guide
- Learn the architecture: Read The ZK-VM for execution pipeline details