RFP Title:
Immutable SQLite Database Integration for Smart Contracts with Cartesi Coprocessor
Wave 2 Intent:
Intent 5: Grow the Cartesi ecosystem
Overview:
The Cartesi ecosystem opens new avenues for leveraging off-chain computation within blockchain smart contracts. This RFP seeks to demonstrate how to integrate a read-only SQLite database with Ethereum smart contracts using the Cartesi coprocessor. The goal is to give smart contracts the ability to run named SQL queries against a SQLite database, parameterized with inputs the smart contract provides, and to receive the results securely and cost-efficiently.
This functionality is essential for developers looking to expand smart contract use cases in which a structured, immutable dataset is useful during contract processing, but the cost of looking up or storing that data on Ethereum L1 would be prohibitively high.
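To make the named-query idea concrete, here is a minimal sketch of what such a query registry might look like. The query names, schema, and placeholder parameters are illustrative assumptions, not part of this RFP.

```python
# Hypothetical named-query registry: each entry maps a query name to a
# parameterized SQL statement. The smart contract references only the
# name and supplies the parameters; it never carries raw SQL strings.
NAMED_QUERIES = {
    "population_of": "SELECT population FROM cities WHERE city_id = ?",
    "cities_in_region": (
        "SELECT name, lat, lon FROM cities "
        "WHERE region = ? ORDER BY population DESC LIMIT ?"
    ),
}
```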
Solution:
The desired solution involves:
Creating a template that demonstrates how to bundle a read-only SQLite database with the source of a function executed by the Cartesi coprocessor.
Implementing named-query functionality, allowing smart contracts to invoke specific queries by name and receive their results, so that custom SQL strings do not have to be provided within the smart contract each time (a machine-side sketch follows this list).
Delivering documentation and a working example showcasing end-to-end integration, including:
Setup of the SQLite database with a defined schema.
Definition of named SQL queries.
Deployment as part of a Cartesi machine.
Use and interaction with Ethereum smart contracts to execute queries and retrieve results.
Mainnet deployment of an example SQLite database of at least 100 MB, available for querying.
Mutability or statefulness of the database is not within scope.
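As a rough illustration of the machine side, the sketch below shows how a function running inside the Cartesi machine might dispatch a named query against the read-only database. The payload encoding, the database path, and all function names are assumptions made for illustration, not a specification of the coprocessor API.

```python
import json
import sqlite3

# Assumed location of the database image bundled into the Cartesi machine;
# the SQLite URI opens it strictly read-only.
DB_PATH = "file:/data/example.db?mode=ro"

# A one-entry version of the hypothetical registry sketched earlier.
NAMED_QUERIES = {
    "population_of": "SELECT population FROM cities WHERE city_id = ?",
}

def handle_input(payload: bytes) -> bytes:
    """Dispatch one named query. Assumes the contract's input reaches the
    machine as JSON of the form {"query": <name>, "params": [...]}."""
    request = json.loads(payload)
    sql = NAMED_QUERIES.get(request["query"])
    if sql is None:
        return json.dumps({"error": "unknown query"}).encode()
    with sqlite3.connect(DB_PATH, uri=True) as conn:
        rows = conn.execute(sql, request["params"]).fetchall()
    # The encoded rows would be emitted as the computation's output,
    # which the coprocessor relays back to the calling contract.
    return json.dumps({"rows": rows}).encode()
```

Because the database is opened read-only, each result is a pure function of the machine image and the input, which is what keeps query results reproducible for verification.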
Team Qualifications:
To successfully execute this RFP, the team should have:
Understanding of SQLite and its implementation.
Proficiency with Cartesi’s infrastructure, the coprocessor and its integration with Ethereum smart contracts.
Experience in smart contract development, focusing on data integrity and interoperability.
Familiarity with decentralized application development and blockchain protocols.
Skills in technical documentation and creating developer-friendly templates and examples.
Can you expand a bit on the demand you see for this feature? I think an important part of evaluating RFPs is not just assessing whether the proposal is technically sound or useful in general terms, but also considering whether it’s the right time for it. Is this addressing an immediate need within the ecosystem, or are we aiming to anticipate a future demand?
I’m sharing this same feedback on other proposals as well, as I think these questions are key to making well-informed decisions. Thanks for your input!
In general I see this as an interesting perspective on making more familiar technologies, like SQL and SQLite, accessible to smart contract developers. Currently, achieving similar functionality with smart contracts requires cumbersome approaches, such as maintaining a massive Merkle tree and repeatedly generating and verifying Merkle proofs, without the flexibility and ease of SQL for querying and database construction.
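For contrast, here is a purely illustrative sketch of that Merkle-proof pattern: every single record lookup requires generating a proof off-chain and verifying it on-chain, whereas the same lookup against SQLite is one parameterized SELECT.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves: list[bytes]) -> list[list[bytes]]:
    """Build every level of a Merkle tree bottom-up (illustrative only)."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        level = levels[-1]
        if len(level) % 2:                      # duplicate last node if odd
            level = level + [level[-1]]
        levels.append([h(level[i] + level[i + 1])
                       for i in range(0, len(level), 2)])
    return levels

def prove(levels: list[list[bytes]], index: int):
    """Collect the sibling hashes needed to prove one leaf's inclusion."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((index % 2, level[index ^ 1]))
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, proof) -> bool:
    node = leaf
    for leaf_was_right, sibling in proof:
        node = h(sibling + node) if leaf_was_right else h(node + sibling)
    return node == root

# One lookup = one proof, generated off-chain and verified on-chain.
records = [f"city:{i},population:{i * 1000}".encode() for i in range(8)]
leaves = [h(r) for r in records]
levels = build_levels(leaves)
root = levels[-1][0]
assert verify(root, leaves[3], prove(levels, 3))
```

And any change to how the records are keyed or grouped means rebuilding the whole tree, a rigidity that SQL querying avoids.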
In terms of actual demand, while it would be ideal to have specific clients lined up, the appeal of this feature can perhaps be inferred from its adoption as a key selling point in other coprocessors, such as the zk-coprocessor highlighted here: Lagrange: ZK Coprocessor
I think it would be cool to see large SQLite databases accessible within Ethereum smart contracts via the Cartesi Coprocessor. Given SQLite’s position as the world’s most deployed database engine, this integration could serve as a compelling demonstration of how to bring large real-world databases to the Ethereum ecosystem (for example, a database of real-world map data).
I’d like to understand the technical constraints that make mutable state problematic and out of scope, specifically what challenges and technical limitations it presents. That said, I don’t see it as a major blocker, since the Ethereum smart contract could always update which coprocessor machine it uses through DAO updates or some automated system, at least when the goal is upgrading static real-world data (e.g., maps).
The reason for focusing on immutability is to constrain myself to the current capabilities of the coprocessor and its solvers/operators, rather than speculating about future possibilities. As of today, the coprocessor is designed to execute uploaded machines against an input and to collect their outputs. You can upload an updated machine, and it will work as expected.
However, we do not yet have support for preimage uploads, GIO (General Input/Output) to retrieve them, or facilities for externalizing state; these features are still in progress.
I anticipate that this will evolve into something like an operator API. Such an API would allow you to attach preimages, making them retrievable by a computation during execution, and to collect reports or GIO outputs as well. You could then simply refer to the resulting ‘state’ in an output.
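Purely as a thought experiment, such an operator API might take a shape like the sketch below. Every name here is hypothetical and mirrors only the capabilities described above; nothing like this exists in the coprocessor today.

```python
class HypotheticalOperatorAPI:
    """Speculative shape of a future operator API; nothing here exists yet."""

    def attach_preimage(self, data: bytes) -> bytes:
        """Upload a preimage and return its hash, which a computation
        could later retrieve via GIO during execution."""
        raise NotImplementedError("speculative sketch only")

    def collect_outputs(self, computation_id: str) -> list[bytes]:
        """Fetch the reports / GIO outputs a computation produced, so the
        resulting 'state' can be referenced in an on-chain output."""
        raise NotImplementedError("speculative sketch only")
```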