Minimalist integration of Cartesi Rollups & Espresso Sequencer

New PoC Proposal
Improving Cartesi's design space with a minimalistic Espresso Sequencer integration

Core concept and purpose statement

Cartesi's current protocol stack partially unlocks the cone of innovation by bringing a big increase in computation and an extended design space with the Cartesi VM and its Linux OS.

Although this by itself represents a huge leap compared with current blockchain infrastructure, we are still handicapped by data capacity and blockchain latency, which hinders further exploration of potential use cases, including the gaming vertical. Moreover, the lack of data scalability is the limiting factor preventing us from fully unlocking our cone of innovation.

This proposal is to be considered in two steps:

1 - Short term: build on the recent work done by Carsten on using the Espresso Sequencer and implement a minimalistic integration where Cartesi Rollups can use the Espresso Sequencer. That will provide an open experimentation garden for both communities to start expressing new ideas and PoCs.

2 - Mid term: make a similar implementation based on Carsten's future work on EigenLayer and Celestia.

1 Like

I am 100% on the same page on this one. I was actually intending to start playing with this right away (if I find time and live up to the challenge!).

The actual goal for me here is to empower the Espresso team and community members to easily experiment with Cartesi + Espresso DApps, to the point that we can use Sunodo and HLFs with this combo. This would allow their community to participate in our next Experiment Week, for instance.

1 Like

I would just add that we are talking about Espresso DA here too, not only the Sequencer - AFAIK it's easier to use their DA than any other DA when using their sequencer.

1 Like

For reference, here is the cone of innovation:

I think exploring these integrations can teach us a lot about how to integrate Espresso and other protocols for real. There is also a lot of value in exploring the expanded use cases and understanding what kinds of cool things can be built by the end game of Cartesi. Which, in turn, helps us prioritize everything else.

In sum, I like this idea for its exploratory benefits but also for the long-term lessons about protocol interactions (including the community aspect of integration).

It would be nice to involve Espresso in these conversations too; they could help a lot, and their community is pretty friendly.

1 Like

Yeah!

I think Espresso is especially well positioned here because it has both sequencing and DA, so it helps us explore both designs and integrations.

(plus applications get cheap data and soft finalization, which is pretty cool too)

1 Like

Well said, Felipe and Milton! Moreover, the industry is changing at a very fast pace toward modular architectures. Our protocol has modularity in its DNA, but it seems we haven't polished the “modular pieces” in a direction that allows friendlier infrastructure composability. Embracing modularity in that sense means that, along with this work, we will understand which internal interfaces should be improved to allow an easier experimentation ground with other protocols.

Why am I recommending the Espresso integration first? Exactly because of Milton's words: it gives us not only DA but also soft confirmations, improving dApp UX. That can specifically play an important role in verticals we are interested in at this moment, such as gaming.

1 Like

Hello everyone! I’d like to give an update about my experimentations so far.

First of all, I've got some things running, and the code and details can be checked in my rollups-espresso GitHub repo.

I’d like to summarize the current state of affairs here, in terms of value and strategy:

  • As a minimalistic integration, I'd like to remind you all that the approach I am following is to only use Espresso as a DA. This is much simpler to implement IMHO, and I'd say it's not that far from actually working (disregarding arbitration, of course). With this approach, Cartesi DApps work as they do now, with DApps having the option of also using data sent to Espresso (for which just the Espresso block hash is sent to the L1). This is useful to allow larger sets of data, but does not help applications that need a lot of transactions (for that, we will still depend on deploying to Optimism or Arbitrum to take advantage of their sequencers). A rough sketch of this client-side flow is shown right after this list
  • AFAIU, this approach can be directly used for some other DAs, in particular Syscoin and IPFS (although I personally do not consider IPFS to be a DA solution)
  • This experiment is already exercising the use of the Dehashing Device (for now, just a quick hack illustrating how it would be used). My idea is that with other examples (Syscoin, IPFS, hopefully EigenDA), we can narrow down what we think is reasonable and start to actually implement it for real
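
To make the first point more concrete, here is a rough Python sketch of what a client could do under this DA-only approach. Everything here is illustrative and not the actual interfaces used in the rollups-espresso repo: the Espresso submit endpoint and its response shape, the addresses, and the minimal InputBox ABI are all placeholders.

```python
import requests
from web3 import Web3

# All endpoints, addresses, and the ABI below are illustrative placeholders.
ESPRESSO_URL = "http://localhost:50000"  # hypothetical Espresso submit API
INPUT_BOX_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
DAPP_ADDRESS = "0x0000000000000000000000000000000000000000"       # placeholder
INPUT_BOX_ABI = [{  # sketch of an InputBox.addInput(address,bytes) entry point
    "type": "function", "name": "addInput", "stateMutability": "nonpayable",
    "inputs": [{"name": "dapp", "type": "address"},
               {"name": "input", "type": "bytes"}],
    "outputs": [{"name": "", "type": "bytes32"}],
}]

def submit_via_espresso(w3: Web3, sender: str, payload: bytes) -> bytes:
    """Send the full payload to Espresso; post only its block hash to the L1."""
    # 1) the data itself goes to Espresso (hypothetical endpoint and shape)
    resp = requests.post(f"{ESPRESSO_URL}/submit", data=payload)
    resp.raise_for_status()
    block_hash = bytes.fromhex(resp.json()["block_hash"].removeprefix("0x"))

    # 2) the L1 input carries just the 32-byte Espresso block hash
    # (assumes a node-managed/dev account for simplicity)
    input_box = w3.eth.contract(address=INPUT_BOX_ADDRESS, abi=INPUT_BOX_ABI)
    input_box.functions.addInput(DAPP_ADDRESS, block_hash).transact({"from": sender})
    return block_hash
```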

Aside from that, using Espresso as a sequencer currently sounds like a completely different ballgame to me. I feel that it will require much more substantial changes to how things work, and I confess I don't quite understand it yet (such as how to use L1 assets, or how to integrate with smart contracts in general). Nevertheless, this is clearly the end game, and what Espresso was built for, so we should keep it in mind.

1 Like

Milton, in your current implementation we wouldn't be using the “sequencer” feature of Espresso, only its DA? So we would continue sending txs directly to the underlying layer? I would love to discuss more details with you about this implementation. Is it ready for us to try to port some dApps onto this infrastructure?

Yes, this first approach is just sending L1 inputs that refer to Espresso blocks, and having the corresponding data be used in the DApp back-end.

I believe we can already think about porting DApps to use this architecture and see if it makes sense.

Two things to keep in mind though:

  1. I am still working on a minimally working front-end example that sends data appropriately (that’s kind of cumbersome).
  2. A real implementation requires a working dehashing device for Espresso (a rough back-end sketch of how this could look follows below)
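
For illustration, here is a minimal sketch of how a DApp back-end could consume such inputs. The `/finish` loop follows the standard Cartesi rollup HTTP API; the dehashing endpoint, its request/response shape, and the "32 bytes means Espresso block hash" convention are all made up for this example.

```python
import os
import requests

# ROLLUP_HTTP_SERVER_URL is the standard env var for Cartesi back-ends;
# the dehashing endpoint below is purely hypothetical.
ROLLUP_URL = os.environ["ROLLUP_HTTP_SERVER_URL"]
DEHASH_URL = "http://localhost:5005/dehash"

def handle_advance(data: dict) -> str:
    payload = bytes.fromhex(data["payload"].removeprefix("0x"))
    if len(payload) == 32:
        # Illustrative convention: a 32-byte input is an Espresso block
        # hash, so ask the (hypothetical) dehashing device for its data.
        resp = requests.post(DEHASH_URL, json={"hash": "0x" + payload.hex()})
        resp.raise_for_status()
        payload = bytes.fromhex(resp.json()["data"].removeprefix("0x"))
    # ... process `payload` as a regular application input ...
    return "accept"

# Standard back-end main loop: report the last status and block until
# the next rollup request arrives.
status = "accept"
while True:
    finish = requests.post(f"{ROLLUP_URL}/finish", json={"status": status})
    if finish.status_code == 202:  # no pending rollup request yet
        continue
    req = finish.json()
    if req["request_type"] == "advance_state":
        status = handle_advance(req["data"])
```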
1 Like

For me, the dehashing device will provide true utility for dApps in our execution layer to serve seamless dApps on the base layer.
I will speak with some builders; maybe we can port one of the dApps to this new architecture and learn a lot from it.

This topic was voted on by the Technical Vision Council at the December 14, 2023 meeting.

1 Like

I’m curious, what exactly do you see as the difference between using Espresso for DA and DA + sequencing? I guess with just DA the idea is that there is still a permissioned party that gets to order transactions and forward them to Espresso? Would love to explore more what you view as the main challenges to also using Espresso as a sequencer, and see if we can come up with good solutions together.

Hello @gets !

So, using sequencing + DA with our current architecture requires more changes to how things are done right now. At the moment, Cartesi applications do not use any external sequencers: client inputs are always sent directly to the L1. So the sequencing itself happens there: each application has an input box, and each app-specific Cartesi Node reads its inputs from there.
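
For reference, this is roughly what that L1-only reading flow looks like from the node's perspective, sketched in Python with plain log polling against the InputBox (the address is a placeholder and event decoding is omitted):

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
INPUT_BOX_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

def process_input(log):
    ...  # ABI decoding of the input-added event omitted in this sketch

last_block = 0
def poll_inputs():
    """Fetch raw logs emitted by the InputBox since the last poll."""
    global last_block
    current = w3.eth.block_number
    logs = w3.eth.get_logs({
        "address": INPUT_BOX_ADDRESS,
        "fromBlock": last_block + 1,
        "toBlock": current,
    })
    for log in logs:
        process_input(log)  # one log per input added for some application
    last_block = current
```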

With this scenario, I think that using the Espresso sequencer (which is what we want!) will require us to do some extra work, such as the following:

  1. Instead of the standard procedure of submitting data to the L1 (e.g. via MetaMask), clients will need to sign data themselves and send it to Espresso
  2. The Cartesi Node (or application code) would need to perform some extra work to parse the signed data and extract metadata such as msg_sender, which is currently given “for free” when reading from an Ethereum-compatible blockchain (see the signature-recovery sketch after this list)
  3. The Cartesi Node would need to be changed to read data both from Espresso and from the L1 InputBox. If I'm not mistaken, for each Ethereum block it should first read the Espresso inputs related to the commitment on HotShot, and then any L1 inputs it may have in its input box. I believe L1 inputs are necessary so that things like asset deposits are still possible.
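
To illustrate item 2, here is a small sketch of how msg_sender could be recovered from a client-signed payload using eth_account. This assumes plain EIP-191 personal-sign; the real scheme for Espresso-sequenced inputs might well differ.

```python
from eth_account import Account
from eth_account.messages import encode_defunct

def extract_sender(payload: bytes, signature: bytes) -> str:
    """Recover the signer address of an EIP-191 personal-sign payload."""
    # The recovered address plays the role of msg_sender for data that
    # never went through an L1 transaction.
    return Account.recover_message(encode_defunct(payload), signature=signature)

# Usage: the client signs the payload itself before sending it to Espresso.
acct = Account.create()
payload = b"input data destined for Espresso"
signed = acct.sign_message(encode_defunct(payload))
assert extract_sender(payload, signed.signature) == acct.address
```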

Given the above, I wanted to start experimenting with simply having clients send data to Espresso, and then adding the appropriate Espresso block hash as an input to the application. The Cartesi Node then reads the inputs normally, as it already does today. The only difference is that, for inputs that correspond to Espresso block hashes, the application code uses a special procedure to request a “dehashing” of that info, retrieving the corresponding block data. If done right, this process will still allow on-chain dispute resolution for the application. More details can be found here: rollups-espresso/README.md at main · miltonjonat/rollups-espresso · GitHub

But rest assured: this is not where we want to stop! We will strive to implement the changes above and make it all happen :slight_smile:

1 Like