Ethereum completed its Fusaka upgrade on December 3, marking one of the network’s most important steps toward long-term scalability.
This upgrade builds on a series of changes since the 2022 Merge and follows the earlier Dencun and Pectra releases, which lowered Layer 2 fees and increased blob capacity.
Fusaka also restructures how Ethereum verifies that data is available, expanding the channels through which Layer 2 networks such as Arbitrum, Optimism, and Base post compressed batches of transactions.
It does so through a new system called PeerDAS (peer data availability sampling), which lets Ethereum verify that large amounts of transaction data were published without requiring every node to download all of it.
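To make the idea concrete, here is a minimal sketch of data availability sampling, assuming illustrative parameters rather than Fusaka's actual ones. Because erasure coding lets the full payload be rebuilt from roughly half of its chunks, an attacker must withhold more than half to make data unrecoverable, and even a handful of random samples then exposes the gap almost surely:

```python
import random

# Illustrative data-availability-sampling sketch. The chunk counts below are
# hypothetical, not Ethereum's real parameters. Erasure coding means the blob
# can be rebuilt from any half of its chunks, so a withholder must hide more
# than half -- and random samples then catch the missing chunks quickly.

TOTAL_CHUNKS = 128        # chunks after erasure coding (hypothetical)
WITHHELD = 65             # attacker hides just over half
SAMPLES_PER_NODE = 16     # random chunks each sampling node requests

def node_detects_withholding() -> bool:
    """One node samples random chunks; any unavailable chunk exposes the attack."""
    sampled = random.sample(range(TOTAL_CHUNKS), SAMPLES_PER_NODE)
    return any(chunk >= TOTAL_CHUNKS - WITHHELD for chunk in sampled)

trials = 100_000
caught = sum(node_detects_withholding() for _ in range(trials))
print(f"per-node detection rate: {caught / trials:.4%}")
# A single node already detects withholding with overwhelming probability;
# across thousands of sampling nodes, hiding data is practically impossible.
```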
Buterin says Fusaka is ‘incomplete’
However, Ethereum co-founder Vitalik Buterin cautioned that Fusaka should not be seen as the final version of sharding, the network’s long-term expansion plan.
Buterin described PeerDAS as the first practical implementation of data sharding, but noted that several key components remain unfinished.
He said that while Ethereum now has access to more data at a lower cost, the complete system envisioned over the past decade still requires work across multiple layers of the protocol.
With this in mind, Buterin highlighted three gaps in Fusaka’s sharding design.
First, Ethereum’s base layer still processes transactions sequentially, meaning execution throughput does not scale with the new data capacity (see the toy sketch after this list).
Second, block builders, the specialized actors who assemble transactions into blocks, still download complete data payloads even though validators no longer need to, creating centralization risks as data volumes grow.
Finally, Ethereum still uses a single global memory pool, which forces all nodes to process the same pending transactions, limiting network scalability.
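A toy model of the first gap, using hypothetical balances rather than real EVM semantics: when one transaction's validity depends on state written by the transaction before it, the base layer must apply them strictly in order, no matter how much blob bandwidth Fusaka adds.

```python
# Toy model: the second transfer is only valid after the first one executes,
# so the two cannot naively run in parallel. Hypothetical balances, not EVM
# semantics -- the point is the ordering constraint, not the arithmetic.

state = {"alice": 10, "bob": 0, "carol": 0}

txs = [
    ("alice", "bob", 10),   # bob only has funds AFTER this executes...
    ("bob", "carol", 10),   # ...so this transfer depends on the one above
]

def apply_tx(state: dict, tx: tuple) -> None:
    sender, recipient, amount = tx
    if state[sender] < amount:
        raise ValueError("insufficient balance")
    state[sender] -= amount
    state[recipient] += amount

for tx in txs:              # strictly ordered: no speculation, no parallelism
    apply_tx(state, tx)

print(state)                # {'alice': 0, 'bob': 0, 'carol': 10}
```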
His message essentially frames Fusaka as the foundation for the next development cycle. He said:
“The next two years will give us time to refine the PeerDAS mechanism, carefully scale it while continuing to ensure its stability, use it to expand L2, and once ZK-EVM matures, turn inward to expand Ethereum L1 gas as well.”
Glamsterdam becomes the next focus
Fusaka’s nearest successor is the Glamsterdam upgrade, targeted for 2026.
Where Fusaka expands Ethereum’s data bandwidth, Glamsterdam will work to ensure the network can handle the associated operational load.
A key feature is its emphasis on enshrined proposer-builder separation, known as ePBS. This change will move block construction into the protocol itself, reducing Ethereum’s dependence on the small number of external block builders that currently dominate the market.
As data volumes rise under Fusaka, the influence of these builders would otherwise grow even further. ePBS aims to prevent that outcome by formalizing how builders bid on blocks and how validators participate in the process.
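As a rough sketch of the flow ePBS formalizes, the proposer commits to the most valuable bid by its block hash without seeing the block's contents. The types and logic below are hypothetical simplifications; actual designs add commitments, reveal phases, and slashing conditions.

```python
from dataclasses import dataclass

# Simplified ePBS-style auction sketch. Types and flow are hypothetical;
# real proposals involve commitments, reveals, and slashing conditions.

@dataclass
class Bid:
    builder: str
    value_to_proposer: int   # payment offered for the right to build the block
    block_hash: str          # commitment to the block's contents

def select_winning_bid(bids: list[Bid]) -> Bid:
    """The proposer picks the highest bid knowing only the block hash,
    not the contents -- the separation at the heart of PBS."""
    return max(bids, key=lambda b: b.value_to_proposer)

bids = [
    Bid("builder_a", 120, "0xaaa..."),
    Bid("builder_b", 150, "0xbbb..."),
    Bid("builder_c", 95, "0xccc..."),
]

winner = select_winning_bid(bids)
print(f"proposer commits to {winner.builder}'s block ({winner.block_hash})")
```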
Running in parallel with ePBS is a complementary feature called block-level access lists. These lists require builders to specify which parts of Ethereum’s state a block touches before execution begins.
Client teams say this allows their software to schedule tasks more efficiently and lays the foundation for future parallelism, an essential step in preparing the network for greater computational load.
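A minimal sketch of that scheduling idea, assuming hypothetical access lists and ignoring the ordering guarantees a real client must preserve: transactions whose declared state footprints do not overlap can be grouped into waves that could execute side by side.

```python
# Sketch: with state-access sets declared up front, a scheduler can pack
# transactions into "waves" of mutually disjoint accesses; each wave could
# run in parallel. Illustrative only -- it ignores the dependency and
# ordering rules a real client must still honor.

access_lists = {
    "tx1": {"0xA.balance", "0xB.balance"},
    "tx2": {"0xC.balance", "0xD.balance"},   # disjoint from tx1 -> same wave
    "tx3": {"0xB.balance", "0xE.balance"},   # overlaps tx1 -> later wave
}

def schedule(access_lists: dict[str, set[str]]) -> list[list[str]]:
    waves: list[tuple[list[str], set[str]]] = []
    for tx, touched in access_lists.items():
        for wave_txs, wave_keys in waves:
            if not (touched & wave_keys):    # no overlap: join this wave
                wave_txs.append(tx)
                wave_keys |= touched
                break
        else:                                # conflicts with every wave
            waves.append(([tx], set(touched)))
    return [wave_txs for wave_txs, _ in waves]

print(schedule(access_lists))  # [['tx1', 'tx2'], ['tx3']]
```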
Together, ePBS and access lists form the core of Glamsterdam’s market and performance transformation. These are considered structural prerequisites for operating large data systems without sacrificing decentralization.
Other planned Ethereum upgrades
Beyond Glamsterdam lies The Verge, another roadmap milestone, centered on Verkle trees.
This system restructures the way Ethereum stores and verifies the state of its network.
Rather than requiring a full node to store the entire state locally, Verkle trees allow blocks to be verified with compact proofs, significantly reducing storage requirements. Notably, Fusaka already addresses part of this.
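The verification principle can be shown with a simplified Merkle-proof analogy. Verkle trees use vector commitments and produce far smaller proofs, but the core idea is the same: check one value against a root using a compact proof, without holding the rest of the state.

```python
import hashlib

# Simplified Merkle analogy for stateless verification. Verkle trees replace
# hashes with vector commitments for much smaller proofs, but the verifier's
# job is the same: recompute a path to the root from a compact proof.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    """Recompute the path from one leaf to the root using sibling hashes.
    The verifier needs only the proof, never the other leaves."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"acct0", b"acct1", b"acct2", b"acct3"]   # four toy accounts
root = merkle_root(leaves)
# proof for leaf 2: its sibling's hash, then the hash of the (0,1) pair
proof = [h(b"acct3"), h(h(b"acct0") + h(b"acct1"))]
print(verify(b"acct2", 2, proof, root))             # True
```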
For node operators and validators, this aligns with one of Ethereum’s core priorities: making running nodes accessible without enterprise-grade hardware.
This work matters because Fusaka’s success will increase the amount of data Ethereum can ingest; without matching changes to state management, the cost of maintaining the chain could steadily rise.
The Verge aims for the opposite: making Ethereum easier to run even as it processes more data.
After that, Ethereum’s focus shifts to The Purge, a long-term effort to remove accumulated historical data, eliminate technical debt, and make the protocol lighter and simpler to maintain.
In addition to these changes, there is The Splurge, a collection of upgrades designed to improve the user and developer experience through improved account abstraction, new approaches to MEV mitigation, and continued cryptographic enhancements.
Global payment layer
Taken together, these updates form successive stages of the same goal: positioning Ethereum as a global payments layer capable of supporting millions of transactions per second through its Layer 2 ecosystem while maintaining the security guarantees of the base chain.
Ecosystem figures have increasingly echoed that framing over the years. Joseph Lubin, co-founder of Ethereum, said:
“The world economy will be built on Ethereum.”
Lubin noted that the network has operated without interruption for nearly a decade and settled more than $25 trillion in value last year.
He also pointed out that Ethereum currently hosts the largest share of stablecoins, tokenized assets, and real-world asset issuance, and that ETH itself is becoming a productive asset through staking, re-staking, and DeFi infrastructure.
His remarks capture the broader theme behind the current roadmap: a payments platform that runs continuously, absorbs global financial activity, and remains open to any participant who wants to verify and transact.
According to CoinGecko, Ethereum’s future depends on three outcomes. The network must keep scaling so that rollups can handle large volumes of activity at predictable costs. It must remain secure, relying on thousands of independent validators whose participation is not capped by hardware requirements. And it must stay decentralized, allowing anyone to run a node or validator without special equipment.