As it stands, a Cartesi DApp only processes inputs that are explicitly relayed to it. In the current production implementation, Rollups DApps read inputs from the InputBox smart contract on the base layer. Alternatively, in a Sovereign Rollup implementation using something like Celestia, the node running the DApp would read inputs sent specifically to its VM ID or namespace and feed them into the Cartesi Machine.
While this approach allows arbitrary pieces of data to be sent to the application, it has the serious drawback that inputs must be explicitly sent to the application, and that the node must know where to look for them. Among other limitations, this prevents DApps from processing arbitrary base layer data or pulling data from other sources.
In this context, the Dehashing Device is a concept that has long been entertained within the Cartesi community to allow Cartesi Machines to access external pieces of data based on their hashes (or IDs). It is similar in concept to initiatives from other projects, such as Optimism’s pre-image oracle. Essentially, a database of trusted chunks of data and their corresponding hashes would need to be produced and made available in such a way that it is still possible to build secure fraud proofs of the computations that use them. For the use case of accessing base layer state, this database would simply consist of trusted information pulled from the underlying blockchain itself.
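To make the idea concrete, here is a minimal toy sketch of such a pre-image store. This is illustrative only (the real device lives inside the Cartesi Machine, and the hash function would be whatever the protocol settles on; SHA-256 is just a stand-in):

```python
import hashlib

class PreimageStore:
    """Toy database of trusted data chunks, keyed by their hash."""

    def __init__(self):
        self._db = {}

    def insert(self, data: bytes) -> bytes:
        # The node (or a trusted process) populates the store.
        h = hashlib.sha256(data).digest()
        self._db[h] = data
        return h

    def dehash(self, h: bytes) -> bytes:
        # The machine asks for the data behind a hash it already trusts.
        data = self._db[h]  # KeyError if the pre-image is unknown
        # Retrieval is self-verifying: re-hashing the data must reproduce h,
        # which is what makes fraud proofs over these reads feasible.
        assert hashlib.sha256(data).digest() == h
        return data

store = PreimageStore()
h = store.insert(b"some trusted base layer data")
assert store.dehash(h) == b"some trusted base layer data"
```

The key property is that every read is checkable: given the hash, anyone can verify that the returned data is the correct pre-image.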
It should be noted that, besides accessing base layer state, there are a lot of important applications that would become possible with this feature, including: pulling data from multiple alternative DAs, properly using external sequencers with censorship resistance (which requires the DApp to read inputs from both L1 and the sequencer in a deterministic order), and even accessing state from other Cartesi DApps.
I’m surprised by the lack of comments in this proposal. I believe it comes from the fact that the dehashing device is a feature that has been discussed many times over the years and is generally agreed to be needed.
Maybe the lack of comments comes from the lack of controversy around it hehe
Anyway, I wanted to come here to FULLY support this proposal as one of the main things to be tackled as soon as possible. Not only does it clearly add value on its own merits (reading the base layer), it’s also a fundamental component for other proposals.
And a call to action to those reading this: manifest your support here if you believe this proposal should be at the front of the line! Silence is not a good way to support this!
I wanted to display my full support for this feature.
Not only that, but I make myself available to anyone who wants to discuss dehashing in more detail. I believe there are many specific aspects that have to be decided on and could influence the features of the final specification.
In addition to what has been said: while many are interested in using the Dehashing Device to pull data from new data availability projects, I wish to use it to pull input data from past Ethereum transactions themselves, given an Ethereum transaction hash; this could be an Ethereum namespace. I would also like to pull inputs (and maybe notices) from other Cartesi DApps, given an input number; this could be another namespace. Maybe Ethereum could be one of the first namespaces supported by the dehashing device, since we have strong guarantees on its data availability.
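The namespace idea above could be sketched as a single device that dispatches requests by domain, each with its own identifier format. Everything here is hypothetical (the names, the API shape, and the toy resolvers are made up for illustration):

```python
from typing import Callable, Dict

class DehashingDevice:
    """Toy dispatcher: one device, many namespaces, each with its own IDs."""

    def __init__(self):
        self._resolvers: Dict[str, Callable[..., bytes]] = {}

    def register(self, namespace: str, resolver: Callable[..., bytes]) -> None:
        self._resolvers[namespace] = resolver

    def request(self, namespace: str, *ident) -> bytes:
        # Identifier format is namespace-specific: a tx hash for Ethereum,
        # a (dapp, input_number) pair for another Cartesi DApp, etc.
        return self._resolvers[namespace](*ident)

device = DehashingDevice()

# Toy resolvers standing in for real data sources:
eth_txs = {"0xabc": b"tx input data"}
device.register("ethereum", lambda tx_hash: eth_txs[tx_hash])

dapp_inputs = {("dapp1", 7): b"input #7"}
device.register("cartesi-dapp", lambda dapp, n: dapp_inputs[(dapp, n)])

assert device.request("ethereum", "0xabc") == b"tx input data"
assert device.request("cartesi-dapp", "dapp1", 7) == b"input #7"
```

Each namespace would also carry its own arbitration story, which is where the real complexity lives.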
Hey @edubart, I resonate with you! Access to Ethereum (base layer) namespace is proposed here. Access to Cartesi DApp state is proposed here, but that proposal did not include the idea of directly accessing outputs like notices.
I would like to make a distinction between two designs that are often called “dehashing device” but actually differ in complexity. Note that the two proposals involve the same thing at the machine level: a new device that turns hashes into their originating data. The difference appears in how this device is used in practice.
Input device - With this change, the device would be used to generalize the input box, allowing several improvements in which data can be fed to the Cartesi Machine. Instead of having a dedicated InputBox, the DApp can choose all its sources of inputs, and each of these would need to implement the so-called turnstile requirements in order to be disputable in arbitrations. Features unlocked by the Input Device include: alternative DAs, sequencers, and one DApp reading inputs from another. Notably, however, it is not possible to retrieve arbitrary data from the base layer.
Full Dehashing Device - In this solution, the device is used to expand the latest Ethereum block in order to obtain its contents as well as those of past blocks, as Optimism does in Bedrock. This has the advantage of unlocking all the features we want: alternative DAs, sequencers, shared inputs, and base layer retrieval. However, it comes at the cost of more complexity. We would need to create two services: one at the node level that feeds a database with inverse hashes, and another inside the Cartesi Machine that uses the device to expand Ethereum blocks and read events/storage from them. It is very likely that these two processes could be reused from Optimism’s minigeth, which is known to have these capabilities.
Anyway, it would be good to have some clarity on which direction is being considered and further discuss the pros and cons of each.
Hey, after thinking more about this, I agree with you in the sense that we don’t need full base layer access to enable the use of sequencers without losing assets and L1 inputs (at least for Rollups; I don’t think that’s true for Lambada).
A small comment about your distinction between “input device” vs. “full dehashing device”: I think this is confusing to readers. As you said, it’s just one single device or mechanism to retrieve data. In my view, each namespace/domain will involve a different format for the id (data identifier), and also a different implementation to make things work with arbitration. InputBox inputs are simple; Espresso is complex in the arbitration part (it will probably require ZK for large payloads); and full base layer access will need a good level of complexity either in the way data is requested or in the arbitration part (many discussions are happening about this).
That being said, it is true: if we implement an InputBox namespace and something like an Espresso namespace, it will be possible for a DApp to fetch data from both the sequencer and L1 inputs in an appropriate order. I thought about a full workflow of how things could work, and I’m quite optimistic about it! I just think we are not fully on the same page about how to best define the “turnstile requirements”, but I guess this can be better discussed in another channel.
I fully support this feature! More specifically, a “de-Keccak-256” device seems very useful for navigating Ethereum blocks. But how do we get Ethereum block hashes into the machine in the first place?
If the machine is still fed with inputs sent to the InputBox contract, then we could add the latest Ethereum block hash as input metadata.
Otherwise, we can instead feed the machine with block hashes. On-chain, we’d have to validate the hash of any given block at height H. This validation could be done interactively with O(log(H'-H)) transactions, where H' is the height of the latest block.
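The O(log(H'-H)) claim above comes from a bisection-style protocol: the disputed interval of heights is halved each round until only two adjacent blocks remain, at which point a single parent-hash link can be checked on-chain. This toy function only counts rounds and is not the actual protocol (which would involve a claimer posting midpoint hashes and a challenger picking the disputed half):

```python
def bisection_rounds(H: int, H_prime: int) -> int:
    """Count rounds needed to narrow heights (H, H') to two adjacent blocks."""
    rounds = 0
    lo, hi = H, H_prime
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # In the real protocol, the claimer would post the block hash at `mid`
        # and the challenger would pick the half it disputes. Here we always
        # narrow the upper bound, just to count the rounds.
        hi = mid
        rounds += 1
    return rounds

assert bisection_rounds(0, 1024) == 10   # ~log2(1024) rounds
assert bisection_rounds(100, 101) == 0   # already adjacent: one link check
```

Once the interval is down to two adjacent heights, validity reduces to checking that the header at height H+1 indeed references the hash at height H as its parent.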
The idea behind Optimism’s Bedrock is to only feed the machine with the hash of the latest Ethereum block at the end of an epoch.
This in turn would make it possible for the machine to expand the block header and find out the hash of the previous block and so on and so forth until it reaches the hash of the previous epoch.
This way, we don’t need to make any frequent updates on-chain, only insert one hash at the end of the epoch and everything else is done at the machine level.
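The walk-back described above can be sketched with a toy chain. This is illustrative only: real Ethereum headers are RLP-encoded and Keccak-256-hashed, and the pre-image database would be fed by a node-level service; SHA-256 over a toy header dict stands in for all of that here:

```python
import hashlib

def header_hash(header: dict) -> bytes:
    # Stand-in for hashing an RLP-encoded Ethereum block header.
    return hashlib.sha256(repr(sorted(header.items())).encode()).digest()

# Build a toy chain and the pre-image database the node would provide.
preimages = {}
parent = b"\x00" * 32
headers = []
for height in range(10):
    h = {"height": height, "parent_hash": parent}
    headers.append(h)
    parent = header_hash(h)
    preimages[parent] = h

def walk_back(tip_hash: bytes, target_height: int) -> dict:
    """Expand headers parent-by-parent until reaching target_height."""
    header = preimages[tip_hash]            # dehash: hash -> header contents
    while header["height"] > target_height:
        header = preimages[header["parent_hash"]]
    return header

tip = header_hash(headers[-1])              # the one hash fed in at epoch end
assert walk_back(tip, 4)["height"] == 4
```

Only the single tip hash needs to go on-chain; every earlier block is reachable from it through the device.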
This makes a lot of sense, even for Cartesi Rollups applications, given that the machine is equipped with:
a de-Keccak-256 device, which would allow the machine to navigate the Ethereum blockchain through block hashes. This is easily disputable.
a device that tells the machine the address of the application contract so that it can filter out inputs meant for it from the InputBox. This is also easily disputable.
You confused me there quite a bit, especially when I think about reader nodes. The way you described it seems to imply that nodes would only process anything once the epoch ends! Surely this is not what we want for reader nodes. I’m not sure how Bedrock behaves in that respect.
You can always start the Cartesi Machine from the same template. Then, on startup, it navigates the blockchain to find the Merkle hash of the state that got settled in the last epoch. (We call this the Lambda state, but think of it as a drive with some database or some other way to store the state of the machine.)
Now, the machine can use the device to turn the hash of its previous state into the actual contents. This way, the machine becomes fully stateless.
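A toy version of this stateless restart, under the same illustrative assumptions as before (SHA-256 standing in for the real hash, a plain dict standing in for the node's pre-image database):

```python
import hashlib

preimages = {}  # the trusted hash -> data store the node maintains

def settle(state: bytes) -> bytes:
    """At epoch end, only this hash needs to go on-chain."""
    h = hashlib.sha256(state).digest()
    preimages[h] = state
    return h

def boot_from(settled_hash: bytes) -> bytes:
    """A fresh machine turns the settled hash back into the actual state."""
    state = preimages[settled_hash]
    assert hashlib.sha256(state).digest() == settled_hash  # self-verifying
    return state

h = settle(b'{"balances": {"alice": 10}}')
assert boot_from(h) == b'{"balances": {"alice": 10}}'
```

The machine itself carries no persistent state between epochs; everything it needs is recoverable from one hash.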
Another thing this device can provide to the application back-end is blockchain metadata:
Application address: Used to generate some vouchers (such as NFT withdrawals) and to read base layer state related to the application contract (such as InputBox inputs). I don’t know if it is common practice, but I’ve seen people using this as the VM ID for Espresso as well. It is currently provided by the application address relay contract. However, this contract would not help the back-end know the application address if it were woken up by Ethereum blocks instead of inputs sent through the InputBox. In rollups-contracts@v2, we have deprecated the application address relay contract, and made the InputBox contract add this address as input metadata. This still wouldn’t help a machine being woken up by Ethereum blocks. The machine would then ask this device for application_address and get 20 bytes of data back. This is easily disputable, as the application address is the de facto way to address applications in the base layer. It may only change for back-end upgrades, but that’s far into the future of Cartesi Lambda.
Chain ID: Used by the back-end to decide which set of known contract addresses to use. For the major networks, the addresses shouldn’t change, since we’re using the CREATE2 factories deployed by Safe. But for networks on which such a factory was not deployed, the addresses may change. That is why some developers might want to create back-ends that are agnostic to the network to which their application contracts are deployed. The back-end would discover at runtime the network it is running on by asking the device for chain_id and getting 32 bytes of data back. This is easily disputable on-chain, since the chain ID can be retrieved by Ethereum smart contracts (CHAINID EVM opcode / block.chainid Solidity keyword).
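The two metadata reads above could look like the following. Only the names application_address and chain_id and the byte lengths (20 and 32) come from the discussion; the API shape and the example values are made up for illustration:

```python
# Hypothetical metadata table the device would expose to the back-end.
METADATA = {
    # 20 bytes: an example Ethereum address (not a real deployment).
    "application_address": bytes.fromhex(
        "f39fd6e51aad88f6f4ce6ab8827279cfffb92266"
    ),
    # 32 bytes: chain ID, big-endian (1 = Ethereum mainnet).
    "chain_id": (1).to_bytes(32, "big"),
}

EXPECTED_SIZES = {"application_address": 20, "chain_id": 32}

def device_read(key: str) -> bytes:
    """Fixed-size metadata read, as the back-end might perform it."""
    value = METADATA[key]
    assert len(value) == EXPECTED_SIZES[key]
    return value

addr = device_read("application_address")
assert len(addr) == 20
chain = int.from_bytes(device_read("chain_id"), "big")
assert chain == 1
```

Both values are cheap to dispute on-chain, which is what makes them good candidates for this device.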
FYI I have changed the title of this feature since we are dropping the use of the term “Dehashing Device” in favor of better describing the actual mechanism being implemented, which is a generic Cartesi Machine device for I/O operations.