Summary
Recursive proofs have been deployed on mainnet, serving StarkEx applications and StarkNet. They improve scalability, reduce costs, and lower latency, and they create the conditions for L3 and other innovative applications. Read on for how recursive proofs work and why they matter.
Multiplicative scalability!
Recursive proofs over Cairo's universal computation are now in production. This marks a significant improvement in STARK L2 scalability: the number of transactions written to Ethereum with a single proof can now be multiplied rapidly.
So far, STARK proofs have achieved scalability by "aggregating" tens of thousands, or even hundreds of thousands, of transactions into a single proof written to Ethereum. With recursion, many such proofs can themselves be "aggregated" into a single proof.
This approach has been applied to numerous Cairo-based applications, including StarkWare's SaaS scaling engine StarkEx and the permissionless rollup StarkNet.
Development Journey So Far
Since the first proof was generated on the mainnet in March 2020, STARK proofs have gone through different stages of development.
STARK Scalability
In June 2020, the first STARK scalability solution, StarkEx, was deployed on the Ethereum mainnet. StarkEx includes an off-chain prover that performs large-scale computation and generates a STARK proof of the transactions' validity, and an on-chain verifier that checks that proof. The initial deployment was limited in functionality, since it was built from scratch by StarkWare's engineers. It became clear that a programming language supporting proofs of universal computation was needed, and thus Cairo was born.
Cairo Programming Language
Cairo made its debut on the Ethereum mainnet in the summer of 2020. Cairo stands for CPU Algebraic Intermediate Representation (AIR): a single AIR that verifies the instruction set of a Cairo "CPU". Cairo opened the door to proving more complex business logic and arbitrary computational statements, faster and more securely. A Cairo program can prove the execution logic of a single application, but it can also aggregate several such applications, which is exactly what SHARP does.
SHARP
SHARP, the Shared Prover, aggregates transactions from several independent applications and proves them in a single STARK proof. By combining their batches, applications fill the capacity of a STARK proof faster, improving both throughput and latency. Recursive proofs are the next step: a cutting-edge technology that applies not just to some hard-coded logic but to general computation.
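As a rough illustration of the pooling idea, here is a minimal Python sketch. The fixed per-proof capacity, the application names, and the queue interface are all hypothetical, invented for the example; this is not SHARP's actual interface.

```python
# Hypothetical sketch: a shared prover pools batches from independent
# applications until a single proof's capacity is filled.
from dataclasses import dataclass, field

PROOF_CAPACITY = 100_000  # assumed max transactions covered by one proof (made up)

@dataclass
class Batch:
    app: str
    tx_count: int

@dataclass
class SharedProverQueue:
    pending: list = field(default_factory=list)

    def submit(self, batch: Batch) -> None:
        self.pending.append(batch)

    def total_txs(self) -> int:
        return sum(b.tx_count for b in self.pending)

    def ready_to_prove(self) -> bool:
        # Batches from different apps share one proof, so capacity fills faster.
        return self.total_txs() >= PROOF_CAPACITY

queue = SharedProverQueue()
queue.submit(Batch("dex", 60_000))
queue.submit(Batch("nft_mint", 45_000))
print(queue.ready_to_prove())  # True: the combined batches fill the proof
```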
To fully appreciate the advantages of recursive proofs, it helps to understand how SHARP has produced (non-recursive) proofs until now. Figure 1 illustrates a typical non-recursive flow:
[Figure 1: A typical non-recursive proof flow]
In this process, propositions accumulate gradually. When a capacity (or time) threshold is reached, they are combined into one large proposition (the Train). This combined proposition can only be proven after all of the individual propositions have been received, and proving it takes a long time, roughly the sum of the times required to prove each individual proposition.
Extremely large propositions are ultimately limited by available computing resources, such as memory. Before recursion, this was a real obstacle to the scalability of STARK proofs.
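A toy model of this non-recursive flow, with made-up per-proposition proving times, shows why the Train's proving time is roughly additive and only starts once every proposition has arrived:

```python
# Toy model (illustrative numbers only) of the non-recursive flow:
# propositions accumulate into one large "Train" that is proven as a whole,
# so proving time is roughly the sum of the individual proving times.

def prove_train_non_recursive(proposition_times: list[float]) -> float:
    """Return the approximate time to prove the combined proposition."""
    # The combined proposition can only be proven once every proposition
    # has arrived, and its cost is roughly additive.
    return sum(proposition_times)

times = [3.0, 5.0, 4.0, 6.0]             # minutes per proposition (hypothetical)
print(prove_train_non_recursive(times))  # 18.0 minutes, all spent after the last arrival
```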
What is a Recursive Proof?
With STARK proofs, the time spent proving a proposition is roughly linear in the time spent executing it. Moreover, if proving a proposition takes time T, verifying the resulting proof takes roughly log(T), which is often referred to as "logarithmic compression." In other words, STARKs let users spend far less time verifying a proposition than computing it.
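A small numeric illustration of this asymmetry follows; the constants are arbitrary placeholders, not measured prover or verifier timings.

```python
import math

# Illustrative only: proving time grows roughly linearly with execution
# time T, while verifying the resulting proof takes roughly log(T).
C_PROVE, C_VERIFY = 1.0, 1.0  # made-up constants

for T in (1_000, 1_000_000, 1_000_000_000):
    prove_time = C_PROVE * T
    verify_time = C_VERIFY * math.log2(T)
    print(T, round(prove_time), round(verify_time, 1))

# As T grows by a factor of 1000, verification time grows only by a few
# log-steps: this gap is the "logarithmic compression" recursion exploits.
```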
Cairo can express general computational propositions, which can be proven with STARK proofs and then checked by the corresponding STARK verifier.
This is where the opportunity for recursion lies: just as one can write a Cairo program that proves the correctness of thousands of transactions, one can write a Cairo program that verifies multiple STARK proofs. A single proof can then attest to the validity of multiple "upstream" proofs. This is what we call a recursive proof.
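The following Python sketch models this idea purely at the level of bookkeeping: a "proof" is reduced to the tuple of propositions it covers, and the recursive verifier step stands in for proving the Cairo verifier program over its input proofs. None of these names correspond to real prover APIs.

```python
# Conceptual sketch, not real prover code: a recursive verifier proposition
# takes several upstream proofs as input and yields a single proof that
# attests to having verified all of them.
from dataclasses import dataclass

@dataclass(frozen=True)
class Proof:
    covers: tuple  # identifiers of the propositions this proof attests to

def prove_proposition(proposition_id: str) -> Proof:
    # Stand-in for running the STARK prover on one Cairo program.
    return Proof(covers=(proposition_id,))

def recursive_verify(proofs: list) -> Proof:
    # Stand-in for proving the Cairo verifier program run on these proofs:
    # the output proof covers everything the input proofs covered.
    covered = tuple(p_id for proof in proofs for p_id in proof.covers)
    return Proof(covers=covered)

p1 = prove_proposition("app_a_batch")
p2 = prove_proposition("app_b_batch")
merged = recursive_verify([p1, p2])
print(merged.covers)  # ('app_a_batch', 'app_b_batch')
```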
Thanks to logarithmic compression and the roughly linear relationship between proving time and execution time, proving the verification of a STARK proof (the recursive verifier proposition) takes relatively little time (expected to be just a few minutes in the near future).
With recursion, SHARP can prove propositions as soon as they are received. Proofs can then be merged repeatedly into recursive proofs, in various patterns, until at some point the resulting proof is submitted to the on-chain verifier contract. Figure 2 shows a typical recursive proof pattern:
[Figure 2: A typical recursive proof pattern]
In this example, four propositions are sent to SHARP (possibly from four different applications). They are first proven individually, in parallel. Each pair of proofs is then verified by a recursive verifier proposition (a Cairo program that verifies STARK proofs), producing a single proof that attests that both input proofs were verified. These two proofs are then merged once more by the recursive verifier proposition, yielding one proof that attests to all four original propositions. That proof can be submitted on-chain and checked by the Solidity verifier smart contract.
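A self-contained sketch of that pairwise merge tree, with proofs again modeled simply as the tuple of proposition identifiers they cover (and assuming an even number of proofs at each layer, for brevity):

```python
# Minimal stand-alone sketch of the pairwise merge in Figure 2: merging two
# proofs is modeled as proving the verifier proposition over both of them.
def merge(a: tuple, b: tuple) -> tuple:
    # Stand-in for the recursive verifier proposition over proofs a and b.
    return a + b

layer = [("s1",), ("s2",), ("s3",), ("s4",)]  # four proofs, produced in parallel
while len(layer) > 1:
    # Merge adjacent pairs; assumes an even number of proofs per layer.
    layer = [merge(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]

print(layer[0])  # ('s1', 's2', 's3', 's4') -> submitted to the Solidity verifier
```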
Direct Advantages of Recursive Proofs
Reduced on-chain costs
By "compressing" multiple proofs into one, the on-chain verification cost of each transaction drops significantly (and each proposition may itself contain many transactions).
Recursive proofs also remove the computational resource barriers (such as memory) that have limited proof size until now, because each proposition has bounded capacity and can be proven on its own. With recursion, the capacity of the recursively aggregated proposition (the Train) is therefore practically unlimited, and the cost per transaction can drop by several orders of magnitude.
In practice, the cost reduction depends on the acceptable latency (and on the rate at which transactions arrive). In addition, since each proof is usually accompanied by corresponding on-chain data output, the amount of data written on-chain alongside a single proof is also limited. Even so, a cost reduction of an order of magnitude, or better, is readily achievable.
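A back-of-the-envelope amortization with entirely made-up gas figures illustrates the effect of aggregating more transactions under one proof (on-chain data costs are ignored here):

```python
# Amortizing a fixed on-chain verification cost over more transactions.
# All numbers are invented for illustration, not measured gas costs.
VERIFY_GAS = 1_000_000  # assumed fixed cost of verifying one proof on-chain

for txs_per_proof in (10_000, 100_000, 1_000_000):
    print(txs_per_proof, VERIFY_GAS / txs_per_proof, "gas per tx")

# 10_000     -> 100.0 gas per tx
# 100_000    -> 10.0  gas per tx
# 1_000_000  -> 1.0   gas per tx (data-availability costs not included)
```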
Reduced latency
The recursive proof pattern also reduces the latency of proving a large combined proposition. Two factors are at work (a rough numeric sketch follows below):
Received propositions can be proven in parallel (instead of proving one huge combined proposition).
Proofs do not need to wait for the last proposition of the Train to arrive before being proven; new propositions can be merged with the existing proof at any time. In other words, the latency added by the last proposition to join the Train is roughly the time required to prove that proposition plus the time required to prove the recursive verifier proposition (which attests to all the propositions already "added" to this particular Train).
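To make these two factors concrete, here is an illustrative comparison with hypothetical timings; the per-proposition proving times and the merge time are invented for the example, and all propositions are assumed to arrive at the same moment.

```python
# Illustrative latency comparison (made-up timings): without recursion the
# Train waits for every proposition and then proves them all together;
# with recursion, propositions are proven in parallel and the result is
# merged through a small number of recursive verifier steps.
import math

times = [3.0, 5.0, 4.0, 6.0]   # minutes to prove each proposition (hypothetical)
MERGE_TIME = 2.0               # minutes to prove one recursive verifier step (hypothetical)

non_recursive_latency = sum(times)                          # 18.0 minutes
merge_layers = math.ceil(math.log2(len(times)))             # 2 layers for 4 proofs
recursive_latency = max(times) + merge_layers * MERGE_TIME  # 6.0 + 4.0 = 10.0 minutes

print(non_recursive_latency, recursive_latency)
```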