It has already been demonstrated that Cartesi Machines can be executed inside the browser (kudos to @edubart, who even got Doom running there!). With this capability, it becomes possible for a web app to feed inputs into a machine and then read back its outputs. These inputs could even be a DApp’s actual inputs (e.g., the ones added to its on-chain
InputBox), making it possible for a web front-end to keep in sync with the DApp’s state without needing to fetch data from a Reader Node.
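The browser-side sync loop could look roughly like the sketch below. The `BrowserMachine` interface and its `advanceState` method are assumptions for illustration (the real WASM emulator API may differ); it is stubbed with a trivial echo implementation here just so the control flow is concrete and runnable.

```typescript
// Hypothetical interface for a Cartesi Machine compiled to WebAssembly.
// The real emulator API may differ; this is an illustrative stand-in.
interface BrowserMachine {
  // Feed one input into the machine and collect the outputs it generates.
  advanceState(input: Uint8Array): Uint8Array[];
}

// Trivial stub: a "machine" that echoes each input back as a single output.
const stubMachine: BrowserMachine = {
  advanceState: (input) => [input],
};

// Keep the front-end in sync: replay each on-chain input through the machine
// and hand the resulting outputs to the app (here, we just collect them).
function syncFromInputs(
  machine: BrowserMachine,
  inputs: Uint8Array[]
): Uint8Array[] {
  const outputs: Uint8Array[] = [];
  for (const input of inputs) {
    outputs.push(...machine.advanceState(input));
  }
  return outputs;
}

const outs = syncFromInputs(stubMachine, [
  new Uint8Array([1]),
  new Uint8Array([2]),
]);
console.log(outs.length); // 2
```

In a real setup, `inputs` would be the DApp's actual InputBox entries, and the collected outputs would drive the front-end's view of the DApp state.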
This approach, which @tuler once baptized “Rollups in Browser” or “RiB”, could bring great benefits in terms of convenience and resource consumption: it would become possible to run applications without having to instantiate Reader Nodes at all!
One important caveat: in principle, this would only be viable for applications with relatively small Cartesi Machines, because the browser WebAssembly environment is limited to 4MB. Even so, it could be a great solution for small applications, and particularly welcome for newcomers being onboarded, who would have an easier time not having to connect their web app to some Reader Node. Another thing to note is that this proposal does not suggest running a full Reader Node in the browser, with a GraphQL-indexed database and everything! The web app would simply collect the outputs as the machine generates them, and process or store them as it wishes.
RiB could also be an interesting way to scale inspects (state reads). Currently, a DApp's Reader Nodes must serve inspect requests from all of its users; with this approach, each user could compute the DApp's state directly in the browser, in near real-time, turning inspects into local reads.
A final pragmatic note: it probably won't be practical for the browser to sync an application starting from genesis. Instead, the best approach would probably be to load the last finalized application state from somewhere like IPFS (preferably with Lambda), start the machine in the browser from that state, and then sync only the inputs sent after that point. From then on, the client could keep itself up to date by feeding in inputs observed directly from L1 or the sequencer, with potentially very low latency.
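That bootstrap flow might be sketched as follows. The `loadSnapshot` and `fetchInputsSince` functions are hypothetical placeholders, stubbed here so the flow is self-contained; a real implementation would pull the snapshot from IPFS and read the inputs from L1 or the sequencer.

```typescript
// Illustrative snapshot shape; a real finalized-state format would differ.
interface Snapshot {
  blockNumber: number;
  state: Uint8Array;
}

// Stub: in practice, fetch the last finalized machine state from IPFS.
async function loadSnapshot(): Promise<Snapshot> {
  return { blockNumber: 1000, state: new Uint8Array([0]) };
}

// Stub: in practice, read InputBox inputs from L1 (or the sequencer)
// starting after the snapshot's block.
async function fetchInputsSince(block: number): Promise<Uint8Array[]> {
  return [new Uint8Array([1]), new Uint8Array([2])];
}

async function bootstrap(): Promise<number> {
  const snapshot = await loadSnapshot(); // 1. load last finalized state
  const inputs = await fetchInputsSince(snapshot.blockNumber); // 2. catch up
  // 3. feed each input through the (hypothetical) browser machine;
  //    here we only count them to keep the sketch self-contained.
  let applied = 0;
  for (const _input of inputs) applied++;
  return applied;
}

bootstrap().then((n) => console.log(`applied ${n} inputs after snapshot`));
```

After catching up, the same loop would simply continue with newly observed inputs, keeping the browser machine current.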
More details and previous discussion can be found in this Discord thread.