(This is ongoing work already underway by the Zippie team; demo: https://www.youtube.com/watch?v=PSA-ccF96MM )
Let’s start with a quick Twitter thread on what a co-processor is: https://twitter.com/sreeramkannan/status/1730310412904599714
Cartesi is in an interesting position as we have a fully deterministic Cartesi Machine capable of running Linux and it’s flexible enough to be used in many different areas, not just rollups.
In rollups we are challenged by proving times (Dave, ZK), bridging costs, and the effort of building these into an EVM L1. Many of our users actually just want to use Cartesi as outsourced, trustable computation: a co-processor to their existing EVM or other processes.
To explain EigenLayer:
Tl;dr: put existing Ethereum stake additionally at risk and participate as an operator in an “AVS” (Actively Validated Service) that allows your stake to be frozen and potentially slashed.
How does it work practically?
Operators will download a Cartesi AVS, and it will be possible to request stateless compute from the network (possibly paying a token for the compute effort).
Each operator signs the tuple (input, max RISC-V cycles, program, output/program result).
When a matching majority (>51% or >2/3 of the total stake weight) has performed the computation, the signatures can be aggregated and sent to Ethereum L1s/L2s in a single transaction, proving a computation result with strong economic security in very little time.
Minority voters can then be slashed; they could possibly avoid slashing by proving the correctness of their result with Dave.
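The aggregation step above can be sketched as follows. This is a minimal illustration, not the actual AVS implementation: the `Attestation` type, the digest field, and the stake values are all assumptions made up for the example. It only shows the core check, whether results that match reach a 2/3 stake-weight quorum, with the minority digest left over as the slashing candidate.

```python
# Hypothetical sketch: each operator attests to a digest of
# (input, max RISC-V cycles, program, output); we check whether one
# digest reaches a >2/3 stake-weight quorum before aggregating.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    operator: str
    stake: int            # operator's restaked weight
    result_digest: str    # hash of (input, max_cycles, program, output)

def quorum_result(attestations, threshold_num=2, threshold_den=3):
    """Return the digest that reached the stake-weight quorum, else None."""
    total = sum(a.stake for a in attestations)
    weight = defaultdict(int)
    for a in attestations:
        weight[a.result_digest] += a.stake
    digest, w = max(weight.items(), key=lambda kv: kv[1])
    return digest if w * threshold_den > total * threshold_num else None

atts = [
    Attestation("op1", 40, "0xabc"),
    Attestation("op2", 35, "0xabc"),
    Attestation("op3", 25, "0xdef"),  # minority result, slashing candidate
]
print(quorum_result(atts))  # 75/100 > 2/3 -> "0xabc"
```

In the real system the matching signatures would be aggregated (e.g. via BLS) and posted on-chain in one transaction; here the quorum check alone conveys the mechanism.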
Our approach is to build this on our work on the Cartesi Lambada architecture, as it makes the system much simpler to reason about and quicker to get working (to be described separately).
Great post, and very interesting perspective! So, “compute” is now being baptized by the industry as “co-processors”, cool.
I am trying to reason about how sustainable it is to have a large number of operators all executing a complex computation inside Cartesi. Reading the Twitter thread, I guess it boils down to how many nodes are required to reach the 51% or 2/3 stake majority, versus the proof overhead of doing ZK, right? So as long as you’re satisfied with fewer than 10^5 nodes checking your computation, this co-processor approach would make sense. Is that right?
Another interesting entry in the twitter thread is about “micro-rollups” or “flash-rollups”. I agree with that view, I’ve long thought this was a promising use case!
From the thread:
In between coprocessor and rollups is the class of transitory rollups: micro-rollups (@0xStackr) or flash-rollups (@alt_layer), which hold state for a small amount of time, and then the state is dissolved. This pattern obviates the state growth and archival DA problems.
As an example, imagine a game tournament distributing game rewards with game state and execution offchain, but each game settles onchain. Again the key determinant of whether to use zk, optimistic or cryptoeconomic rollup depends on the same evaluation as before.
I think many different models could make sense here. For example, random sampling leveraging drand beacons to pick a subset of nodes to do the computation, and taking that as “good enough”, since the probability of all of the picked nodes being dishonest could be very low.
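To make the sampling idea concrete, here is a small sketch. Assumptions are labeled in the comments: the node list and beacon value are made up, committee selection by hashing the beacon with each node ID is just one plausible scheme, and the dishonesty estimate uses independent sampling with replacement as a simplification.

```python
# Sketch: a public randomness beacon (e.g. a drand round signature,
# represented here by a placeholder byte string) deterministically
# selects a committee, so every observer computes the same subset.
import hashlib

def sample_committee(nodes, beacon_value: bytes, k: int):
    """Pick the k nodes with the lowest hash of (beacon || node id)."""
    scored = sorted(
        nodes,
        key=lambda n: hashlib.sha256(beacon_value + n.encode()).hexdigest(),
    )
    return scored[:k]

def all_dishonest_probability(dishonest_fraction: float, k: int) -> float:
    # Simplifying assumption: independent draws with replacement.
    return dishonest_fraction ** k

nodes = [f"node{i}" for i in range(100)]
committee = sample_committee(nodes, b"drand-round-12345", k=10)
# With 1/3 of nodes dishonest and k = 10, P(all dishonest) = (1/3)^10,
# roughly 1.7e-5 -- i.e. already quite low for a small committee.
print(len(committee), all_dishonest_probability(1/3, 10))
```

The point of the arithmetic is that even a 10-node committee drops the “everyone sampled is dishonest” probability to about 1.7×10⁻⁵, which is what makes “good enough” plausible for lower-security use cases.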
I think this is super nice!
An old concept that I tried getting the Cartesi ecosystem to build is somewhat related to this. I called it Financial Finalization:
The idea was quite similar: one would request a computation, paying a fee plus an economic security budget for finalization. Say I want to sort an array in the middle of my Solidity code, and I’m comfortable assuming that no one would burn 10k USD just to mess with my code.
It would be something like:
CartesiService.sortArray(fee: 5usd, economic_security: 10k usd, callback: function foo2(), array);
The Cartesi Service call would then advertise a sort-array request through an Ethereum event; anyone could sort that array on a Cartesi Machine and post the sorted array back with the callback function. They’d get paid 5 USD and lock 10k USD for a challenge period. But the sorted array would be available “instantly”. If a request requires a lot of staked funds (high economic security), validators could pool their money together and share the fee. Sharks and watchers could then check the result and slash the validators/steal part of the money after the fact. One is basically saying: I’m OK with someone messing up this service at the cost of x dollars.
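The lifecycle above can be sketched in a few lines. Everything here is illustrative under stated assumptions: `Request`, `respond`, and `challenge` are hypothetical names, not a real Cartesi API, and the “computation” is the array sort from the example. It shows the key property: the result is usable immediately, while the bond stays at risk until the challenge period ends.

```python
# Sketch of the "Financial Finalization" flow: a responder posts a
# result instantly and locks an economic-security bond; during the
# challenge period a watcher can re-run the computation and slash.
from dataclasses import dataclass

@dataclass
class Request:
    fee: float                 # e.g. 5 USD, paid to the responder
    economic_security: float   # e.g. 10k USD bond the responder locks
    payload: list
    result: list = None
    bond_locked: float = 0.0

def respond(req: Request, claimed_result: list) -> None:
    """Responder posts a result 'instantly' and locks the required bond."""
    req.result = claimed_result
    req.bond_locked = req.economic_security

def challenge(req: Request) -> float:
    """Watcher re-runs the computation (here, a sort) during the
    challenge period; a wrong result forfeits the locked bond."""
    if req.result != sorted(req.payload):
        slashed, req.bond_locked = req.bond_locked, 0.0
        return slashed   # e.g. shared between protocol and watcher
    return 0.0

req = Request(fee=5, economic_security=10_000, payload=[3, 1, 2])
respond(req, [3, 1, 2])   # dishonest: array posted back unsorted
print(challenge(req))     # the 10k bond is slashed
```

An honest responder keeps the bond after the challenge window and only earns the fee; the caller is pricing exactly the statement “someone can cheat me only by burning 10k USD”.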
I think this would serve a large number of different use cases and would even integrate well with existing smart contracts, just for offloading computations. Your proposal is even more generic than this, allowing it not only to extend smart contracts but to be a service in itself (even separated from the blockchain altogether, other than the assets).
At the time, my proposal was not well accepted because it was believed it wouldn’t scale well, for the reason @milton-cartesi mentioned (a large number of operators executing the services). But I still like it.
And therefore I support this proposal as well
I like the idea of picking a subset of nodes to do computations that are more demanding (lower security guarantee, but that may be fine depending on the use cases).
I really like this initiative for the following reasons:
- It’s an elegant way to implement the initial version of the Cartesi protocol, once called the computational oracle or Compute.
- It expands the potential use cases to be addressed and fits very well with the economic security guarantees available, for example, in EigenLayer.
- It can eventually expand the possibilities for Cartesi token utility, which is gold.
From Sunodo’s point of view, I’ll be paying close attention to how developers use the co-processor and which kinds of convenience will eventually be required on top of the “core processing unit”.