Vitalik Buterin recently published:
"Hyper-scaling Ethereum state by creating new forms of state"
Execution has credible 1000x paths.
Data availability has credible 1000x paths.
State does not — unless we change what "state" actually means.
Why State Is Fundamentally Different
State is the live database that every validator must store in its entirety, and be able to read and update, in order to execute transactions:
- Accounts
- Contract code
- Storage slots
- ERC-20 balances
- NFTs
- DeFi positions
As usage grows, this database grows. By default, it never shrinks.
Execution can be parallelized and proven. Data can be turned into "blobs" that are sampled and spread across the network, so each validator stores only a portion.
State, under today’s model, requires validators to keep an ever-growing working set online.
That means larger disks, more RAM, higher I/O, longer sync times.
That is structural centralization pressure.
If execution and data scale but state keeps inflating validator requirements, decentralization erodes over time.
The Core Shift: Two Forms of State
Instead of allowing a single monolithic state tree to grow indefinitely, the proposal introduces a second form of state with a different permanence and cost model.
1. Permanent Live State
This is the current structure of state, which would continue to exist under Vitalik's proposal.
Its design makes it suitable for high-value shared objects that need to remain fully live:
- User accounts
- Core contract code
- Critical shared logic
The goal is to keep this working set (the existing state type) intentionally bounded, since every addition to it permanently adds to centralization pressure.
2. Cheap, Restrictive State (UTXO-like)
Most per-user objects — balances, NFTs, individual positions — move into a different structure.
These objects do not need to remain permanently live in validator storage.
They are not deleted. They become provable and resurrectable.
The Architectural Unlock: Separating Data From Status
The proposal exploits a powerful idea:
Validators do not need to store an object's full data; they only need to track its validity status.
The system works as follows:
- Objects are committed into cryptographic trees (Merkle structures).
- The tree roots are embedded in consensus.
- Validators do not retain the full object data indefinitely.
What validators do keep is minimal:
A single bit per object indicating whether it is "spent" or "unspent."
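To make this concrete, here is a minimal sketch of the validator-side view under these assumptions: a binary Merkle tree whose root is embedded in consensus, plus a flat bitfield with one validity bit per committed object. The class and function names are illustrative, not part of the proposal.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Illustrative hash; the real protocol would fix its own hash and domain separation."""
    return hashlib.sha256(data).digest()

class ValidatorStateView:
    """What a validator retains per commitment tree: the root plus one bit per object."""

    def __init__(self, root: bytes, num_objects: int):
        self.root = root                                  # Merkle root embedded in consensus
        self.spent = bytearray((num_objects + 7) // 8)    # spentness bitfield, 1 bit per leaf

    def is_spent(self, index: int) -> bool:
        return bool((self.spent[index // 8] >> (index % 8)) & 1)

    def mark_spent(self, index: int) -> None:
        self.spent[index // 8] |= 1 << (index % 8)
```

Note what is absent: no account table, no storage slots, no per-object data. Only a 32-byte root and a bit per object.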
When a user wants to interact with an object:
- Their wallet supplies the object data.
- The wallet supplies a Merkle proof that the object belongs to a known commitment root.
- Validators check:
  - The proof matches the root.
  - The object's bit is still "unspent."
If both checks pass:
- The transaction executes.
- The bit flips to "spent."
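Continuing the sketch above (reusing the illustrative h and ValidatorStateView), the whole interaction reduces to verifying a Merkle branch against the committed root and then flipping one bit. The function names and the index-directed branch encoding are assumptions for illustration, not the proposal's actual wire format.

```python
def verify_merkle_branch(leaf: bytes, index: int, branch: list[bytes], root: bytes) -> bool:
    """Recompute the root from a leaf, its position, and its sibling path (binary tree)."""
    node = h(leaf)
    for sibling in branch:
        if index & 1:                 # current node is a right child
            node = h(sibling + node)
        else:                         # current node is a left child
            node = h(node + sibling)
        index >>= 1
    return node == root

def process_spend(view: ValidatorStateView, obj_data: bytes, index: int, branch: list[bytes]) -> bool:
    """Validator-side check: the proof matches the root and the bit is still unspent."""
    if not verify_merkle_branch(obj_data, index, branch, view.root):
        return False                  # object is not in the committed set
    if view.is_spent(index):
        return False                  # object was already consumed
    # ... execute the transaction's effects here ...
    view.mark_spent(index)            # flip the bit; the full data can stay pruned locally
    return True
```

The validator touches only what the wallet supplied plus its own root and bitfield; the global object database never has to be resident.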
That is sufficient.
The validator never needs the full global database of all user balances. It only needs:
- The commitment root.
- The bitfield tracking spentness.
Verification cost becomes logarithmic in historical data size, not linear in total live state.
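As a rough illustration (the exact tree arity and hash size are design choices): with about 2^30, roughly a billion, committed objects, a Merkle branch is about 30 sibling hashes; at 32 bytes each that is on the order of 1 KB of proof per object touched, and doubling the historical set to 2^31 objects adds only one more hash to the branch.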
Why This Is Radically Different
Under today’s account model:
Validators must store and update entire account states to process changes.
Under this model:
Validators check cryptographic proofs and flip one bit.
The heavy data can be pruned locally.
Users bring it back only when needed.
This decouples:
Total historical state growth
from
Validator working-set size
Total user data can grow orders of magnitude.
Validator hardware requirements do not need to grow proportionally.
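For a sense of the decoupling (illustrative figures, not from the proposal): ten billion historical objects cost a validator about 10^10 bits, roughly 1.25 GB of spentness bitfield plus the commitment roots, while the full object data, at even a few hundred bytes per object, runs to terabytes and can live with users and archival providers instead.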
Why This Matters for Decentralization
Most state growth today comes from per-user objects that do not need to remain permanently live.
If hardware requirements rise steadily:
- Fewer people run nodes.
- Participation narrows.
- Decentralization weakens.
Execution scaling is an engineering problem, addressed through more efficient zk-proving algorithms.
Data scaling is an engineering problem, addressed through more efficient data-sampling and sharding protocols.
State scaling is architecturally constrained: under today's model, state is unprunable, unshardable, and uncompressible.
This proposal redesigns the architecture so that state growth no longer forces validator centralization: state becomes prunable, and in a sense shardable across users, each of whom stores the data for their own unspent UTXO-like objects.
The Road Ahead
Significant development is required to implement this architectural change:
- New transaction formats.
- Wallet-level proof generation.
- Reliable access to historical data.
- Increased client complexity.
Proof availability must be robust.
Resurrection must be seamless.
Tooling must abstract this away from users.
The complexity must move into software layers — not validator hardware requirements.
Bottom Line
This is not incremental optimization.
It is a redefinition of what counts as "live state".
By separating full object data from a one-bit validity flag, Ethereum gains a plausible path to scaling activity by orders of magnitude without steadily raising the cost of verification.
Execution and data availability have clear scaling plans today; state does not, so without a change like this, Ethereum's scaling will eventually hit a roadblock.
This proposal directly targets the hardest constraint.