Data Availability support for Cartesi Rollups

We’ve been discussing Data Availability and its importance to Cartesi’s use cases for some time now. The rationale behind this can be found in the ‘Cone of Innovation’:

Source + explanation: Grokking Cartesi Rollups, Part 1

Cartesi Rollups were conceptualized with a modularity mindset. However, for their initial versions, we focused on the most straightforward and widely accepted stack: Data Availability and settlement on Ethereum. An explanation of this ‘vanilla’ stack, exemplified by the honeypot example, is available here: A deep dive into Honeypot, an optimistic rollup application | Cartesi.

The cool stuff that came out of Cartesi Experiment Week really showed off how solid this basic setup is. It’s clear we’re on the right track with some really neat use cases, and things are working out pretty well. But now, I think it’s time to push the boundaries a bit more.

Integrating with a different Data Availability layer, such as Celestia, Espresso DA, or Syscoin, offers some instant advantages:

  • It makes data more abundant and opens up the cone of innovation
  • It makes data cheaper, so even the current use cases are more efficient
  • It draws in a wider community and shares our values with them (say Celestia people get excited and get in on the fun)

The main drawback is the deviation from the generally accepted stack (L2beat can get upset hehe). Nevertheless, this concern diminishes depending on the chosen DA layer, especially if it provides robust light nodes. Furthermore, the continued availability of the vanilla stack, and the flexibility appchains offer developers in choosing their own trust assumptions, mitigate this concern. While this path adds complexity and new considerations, the benefits make it a great one imo.

This proposal is related to the following ones:

  • Dehashing Device
  • Cartesi Lambada
  • Minimalist Espresso Integration


The ecosystem group has been hammering on this for a while: from an adoption point of view, it is important that we completely unlock our cone of innovation and provide the right experimentation terrain for developers to exercise their creativity and do more in web3.

I support this initiative; it’s the right path to making our rollups protocol useful in tackling the massive challenge the blockchain ecosystem faces in finding use cases that go beyond DeFi.

I’m really excited for this proposal!

Random question (from a non-technical person), is the idea to create a sort of pathway or template for DA integration that can be applied to multiple potential DA partners? Or is each integration highly bespoke and a “start from scratch” type scenario?

Not sure if that makes any sense!

A little bit of both! The idea would be to create a generic pathway that could work with different DA solutions (the Dehashing Device is a piece of it). However, each integration likely has its own specific needs, which would generate specific code.
In the future, it might be the case that the DAs will agree on a general interface, which would make all integrations look the same…but I don’t think we’re at that point yet.

Also, I believe that the best way to figure out the generic pathway is to choose a specific DA and integrate it. The next step would then be to generalize the parts that can be generalized, after learning more from the process.


That’s how I see it too! I read this proposal as “modular Data Availability support for Rollups”, in the sense that eventually alternative DAs could be added as “plugins”. As @felipeargento said, I believe the Dehashing Device is the key ingredient to make this available - but yes, for the foreseeable future there would need to be a specific dehashing plugin to fetch the data for each DA technology. Also, when fraud proofs are used, there will probably also be a need for some DA-specific code to resolve disputes about the data content.
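To make the “plugin” idea above a bit more concrete, here is a minimal sketch of what a pluggable dehashing interface could look like. Everything here is a hypothetical illustration, not an actual Cartesi API: the `DehashingPlugin` trait, the `MockCelestiaPlugin` type, and the `dehash` function are all invented names, and the mock plugin serves blobs from an in-memory map instead of querying a real light node.

```rust
use std::collections::HashMap;

/// Hypothetical DA plugin interface: given a content hash seen on-chain,
/// return the full data blob from that DA layer (or None if unavailable).
trait DehashingPlugin {
    fn da_name(&self) -> &str;
    fn fetch(&self, hash: &[u8]) -> Option<Vec<u8>>;
}

/// Toy stand-in for a Celestia-backed plugin. A real one would query a
/// light node; this one just looks blobs up in an in-memory map.
struct MockCelestiaPlugin {
    blobs: HashMap<Vec<u8>, Vec<u8>>,
}

impl DehashingPlugin for MockCelestiaPlugin {
    fn da_name(&self) -> &str {
        "celestia-mock"
    }
    fn fetch(&self, hash: &[u8]) -> Option<Vec<u8>> {
        self.blobs.get(hash).cloned()
    }
}

/// The generic pathway: the node asks whichever plugin is configured,
/// without caring which DA layer backs it.
fn dehash(plugin: &dyn DehashingPlugin, hash: &[u8]) -> Option<Vec<u8>> {
    plugin.fetch(hash)
}

fn main() {
    let mut blobs = HashMap::new();
    blobs.insert(b"hash-1".to_vec(), b"rollup input data".to_vec());
    let plugin = MockCelestiaPlugin { blobs };

    // Fetch a blob through the generic interface.
    let data = dehash(&plugin, b"hash-1");
    println!("{}: found = {}", plugin.da_name(), data.is_some());
}
```

The point of the trait boundary is exactly what’s described above: the generic part of the stack only ever talks to `DehashingPlugin`, while each DA integration supplies its own implementation behind it.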