We’ve been discussing Data Availability and its importance to Cartesi’s use cases for some time now. The rationale behind this can be found in the ‘Cone of Innovation’:
Source + explanation:
Grokking Cartesi Rollups, Part 1
Cartesi Rollups were conceptualized with a modularity mindset. However, for its initial versions, we focused on the most straightforward and widely accepted stack: Data Availability and settlement on Ethereum. An explanation of this ‘vanilla’ stack, exemplified by the Honeypot application, is available here: A deep dive into Honeypot, an optimistic rollup application. | Cartesi.
The cool stuff that came out of Cartesi Experiment Week really showed off how solid this basic setup is. It’s clear we’re on the right track with some really neat use cases, and things are working out pretty well. But now, I think it’s time to push the boundaries a bit more.
Integrating with a different Data Availability layer, such as Celestia, Espresso DA, or Syscoin, offers some instant advantages:
- It makes data more abundant and opens up the cone of innovation
- It makes data cheaper, so even the current use cases are more efficient
- It draws in a wider community and shares our values with them (say Celestia people get excited and get in on the fun)
The main drawback is the deviation from the generally accepted stack (L2beat can get upset hehe). Nevertheless, this concern diminishes depending on the chosen DA layer, especially if it provides robust light nodes. Furthermore, the continued availability of the vanilla stack, and the flexibility appchains give developers in choosing their trust assumptions, mitigate this concern. While this adds complexity and new considerations, the benefits make it a great path imo.
This proposal is related to the following ones:
Dehashing Device
Cartesi Lambada
Minimalist Espresso Integration
5 Likes
The ecosystem group has been hammering on this for a while: from an adoption point of view, it is important that we completely unlock our cone of innovation and provide the right experimentation terrain for developers to exercise their creativity and do more in web3.
I support this initiative; it’s the right path to making our rollups protocol useful in tackling the massive challenge the blockchain ecosystem faces in finding use cases that go beyond DeFi.
1 Like
I’m really excited for this proposal!
Random question (from a non-technical person), is the idea to create a sort of pathway or template for DA integration that can be applied to multiple potential DA partners? Or is each integration highly bespoke and a “start from scratch” type scenario?
Not sure if that makes any sense!
1 Like
A little bit of both! The idea would be to create a generic pathway that could work with different DA solutions (the Dehashing Device is a piece of it). However, each integration likely has its own specific needs, which would generate specific code.
In the future, it might be the case that the DAs will agree on a general interface, which would make all integrations look the same…but I don’t think we’re at that point yet.
Also, I believe that the best way to figure out the generic pathway is to choose a specific DA and integrate it. Then the next step would be to generalize the parts that can be generalized, after learning more about the process.
4 Likes
That’s how I see it too! I read this proposal as “modular Data Availability support for Rollups”, in the sense that eventually alternative DAs could be added as “plugins”. As @felipeargento said, I believe the Dehashing Device is the key ingredient to make this available - but yes, for the foreseeable future there would need to be a specific dehashing plugin to fetch the data for each DA technology. Also, when fraud proofs are used, there will probably also be a need for some DA-specific code to resolve disputes about the data content.
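As a purely illustrative sketch of the “plugin” idea (none of these names exist in any released Cartesi interface; they are assumptions to convey the shape of the thing), each DA-specific dehashing component could implement a common interface and be swapped behind it:

```python
# Illustrative only: a hypothetical per-DA "dehashing plugin" interface.
# The class and method names are assumptions, not part of Cartesi's codebase.

from abc import ABC, abstractmethod


class DehashingPlugin(ABC):
    """Resolves a trusted data identifier into the full data blob."""

    @abstractmethod
    def fetch(self, identifier: bytes) -> bytes:
        ...


class CelestiaDehashingPlugin(DehashingPlugin):
    def fetch(self, identifier: bytes) -> bytes:
        # Query a Celestia light node for the blob behind this identifier
        # (the actual query mechanism would be DA-specific).
        raise NotImplementedError


class SyscoinDehashingPlugin(DehashingPlugin):
    def fetch(self, identifier: bytes) -> bytes:
        # Fetch the corresponding blob from a Syscoin node (also DA-specific).
        raise NotImplementedError
```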
4 Likes
I wonder if there is anything we can do to advance integration work with multiple DAs and sequencer projects. Is it the case that specific DA support can only happen once we have a working version of the Generic I/O? Or are there tasks that can be distributed in the form of RFPs? @milton-cartesi @felipeargento
1 Like
We are making progress in this direction already. Generic Machine I/O has been released in Emulator SDK v0.18. For any real implementation, we will need the new Node (v2), which uses it, to be released. In parallel, we will start prototyping the other parts. We are currently prioritizing Espresso support (sequencer + DA), but feedback is welcome on prioritizing something else. In any case, at this stage I believe we need to work on one or two integrations until we get all the parts working, and then we can open it up for people to work on multiple integrations.
1 Like
Btw, details about our progress can be followed in the #machine-io channel on Discord. Moreover, our current architectural vision can be found in this doc.
1 Like
Tbh, about DA integrations, we are indeed approaching the time to start talking about how to support specific DA projects. In general, we need to implement the following:
- A mechanism to supply the ID of a piece of data that is guaranteed to exist in the DA solution. The simplest idea would be to provide a Relay contract that checks that the data identifier (e.g., a hash or CID) is valid before relaying the information to the machine. As such, the machine could interpret any input from this contract as corresponding to data that is trusted to be valid.
- Software running inside the machine that recognizes trusted data identifiers, performs a GIO request to retrieve the corresponding data, and decodes the result if necessary (a sketch of this flow follows the list).
- GIO service that runs alongside the Node (on the outside) to handle the GIO request and fetch the specified data.
- (future) On-chain and off-chain components to support validation. These must allow the fault proof system (Dave) to trustlessly retrieve the merkle hash of the data returned by a GIO request, and may include a ZK proof of a given data’s corresponding merkle root hash.
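To make the first three items a bit more concrete, here is a minimal sketch of the in-machine side. Everything interface-related here is an assumption, not a settled design: the /gio endpoint and its request/response shape, the GIO domain number, and the DA_RELAY_ADDRESS are hypothetical placeholders.

```python
# Minimal sketch under hypothetical assumptions: a /gio endpoint on the rollup
# HTTP server, a GIO "domain" number reserved for this DA integration, and a
# known address for the DA Relay contract.

import os
import requests

ROLLUP_SERVER = os.environ.get("ROLLUP_HTTP_SERVER_URL", "http://127.0.0.1:5004")

# Hypothetical address of the Relay contract that validates data identifiers
# on-chain before forwarding them to the machine (first item above).
DA_RELAY_ADDRESS = "0x0000000000000000000000000000000000000000"

# Hypothetical GIO domain reserved for this DA integration.
DA_DOMAIN = 0x2A


def fetch_da_payload(data_id_hex: str) -> bytes:
    """Ask the outside GIO service for the data behind `data_id_hex`.

    Assumes the rollup HTTP server exposes a /gio endpoint that forwards the
    request to the DA-specific service running next to the Node (third item).
    """
    response = requests.post(
        f"{ROLLUP_SERVER}/gio",
        json={"domain": DA_DOMAIN, "id": data_id_hex},
    )
    response.raise_for_status()
    return bytes.fromhex(response.json()["response"].removeprefix("0x"))


def handle_advance(metadata: dict, payload_hex: str) -> str:
    # Only inputs coming from the Relay contract are trusted to carry a valid
    # data identifier (e.g., a hash or CID) -- first item above.
    if metadata["msg_sender"].lower() != DA_RELAY_ADDRESS:
        return "reject"

    data = fetch_da_payload(payload_hex)
    # ... decode and process `data` as the application sees fit (second item) ...
    return "accept"
```

On the outside, the matching GIO service would map (domain, id) to a DA-specific fetch (e.g., asking a Celestia light node for the blob) and return the data to the machine; that is exactly where each integration’s specific code would live.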
I think that with a little more maturity of our architecture we could indeed write some RFPs about this o/
2 Likes