We often get asked how we're different from Dune Analytics. While there are significant similarities in the services we provide, Transpose is first and foremost a data platform designed for builders. Let's explore what that means and how we've tailored Transpose to real-time, transactional workloads.
The most obvious similarity between our offerings, and probably the reason we get this question so often, is that we both allow open-ended SQL queries against a collection of high- and low-level blockchain data. However, we each expose SQL for different reasons: Dune offers SQL to facilitate building dashboards and performing slow, large-scale analytics; we expose SQL to give builders the tools to rapidly retrieve the information they need to power production applications.
Database workloads can be roughly grouped into two distinct categories: transactional (OLTP) and analytical (OLAP). Transactional queries look like "give me all the activity associated with wallet X between dates Y and Z," while analytical workloads typically touch much larger quantities of data in a single query and are better represented by queries like "count the total number of transactions on this blockchain, grouped by day." Transpose is built around PostgreSQL to support the most demanding transactional workloads out there: rapidly querying data by wallet, asset, or timeframe, and joining across tables on any desired key.
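To make the contrast concrete, here is a toy sketch of the two query shapes using an in-memory SQLite database (the `transactions` table and its columns are hypothetical, not Transpose's actual schema):

```python
import sqlite3

# In-memory toy ledger to contrast the two query shapes.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE transactions (wallet TEXT, amount REAL, occurred_at TEXT)"
)
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [
        ("0xabc", 1.5, "2023-01-01"),
        ("0xabc", 0.2, "2023-01-02"),
        ("0xdef", 3.0, "2023-01-02"),
    ],
)

# Transactional (OLTP): a narrow, index-friendly lookup for one wallet
# over a date range -- the shape Transpose is optimized for.
oltp = conn.execute(
    "SELECT * FROM transactions "
    "WHERE wallet = ? AND occurred_at BETWEEN ? AND ?",
    ("0xabc", "2023-01-01", "2023-01-31"),
).fetchall()

# Analytical (OLAP): a full-table scan aggregated by day.
olap = conn.execute(
    "SELECT occurred_at, COUNT(*) FROM transactions "
    "GROUP BY occurred_at ORDER BY occurred_at"
).fetchall()

print(oltp)  # both of 0xabc's transactions
print(olap)  # [('2023-01-01', 1), ('2023-01-02', 2)]
```

The transactional query can be served from an index on `(wallet, occurred_at)` and touches only the rows it returns; the analytical query has to visit every row, which is why the two workloads favor very different database designs.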
For a real-time data platform to be effective, block time (indexing latency) and query time (query latency) must both be minimized. We've focused on this at Transpose from day one, and it shows in how performant our indexing and API stack is today.
Block time: our average block time is about *10 seconds*, compared to Dune's 30+ minutes. Our indexing stack is so fast that we had to build an intelligent system for dealing with block reorgs and unfinalized data. Our solution is to index all activity as soon as we see it on-chain, but to only update balances once finality occurs (you can still get real-time balances using a couple of pre-made SQL queries provided by the Transpose team).
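The index-immediately, finalize-later idea can be sketched in a few lines. This is an illustrative toy, not Transpose's actual implementation; the finality depth and all names are hypothetical:

```python
FINALITY_DEPTH = 64  # hypothetical: blocks behind the tip before data is "final"

events = []    # indexed immediately; could still be rolled back by a reorg
balances = {}  # updated only once the source block is final

def ingest_block(block_number, transfers):
    """Record activity the moment it is seen on-chain."""
    for sender, receiver, amount in transfers:
        events.append((block_number, sender, receiver, amount))

def finalize_up_to(chain_tip):
    """Apply balance changes only for blocks deeper than the finality depth."""
    final_height = chain_tip - FINALITY_DEPTH
    remaining = []
    for block_number, sender, receiver, amount in events:
        if block_number <= final_height:
            balances[sender] = balances.get(sender, 0) - amount
            balances[receiver] = balances.get(receiver, 0) + amount
        else:
            remaining.append((block_number, sender, receiver, amount))
    events[:] = remaining  # unfinalized events wait for a deeper tip

ingest_block(100, [("0xabc", "0xdef", 5)])
finalize_up_to(110)  # block 100 not yet final (110 - 64 < 100): balances unchanged
finalize_up_to(200)  # block 100 now final: balances updated
```

The event log is queryable immediately, while balances only ever reflect finalized blocks; a reorg before finality would simply drop the affected entries from `events` without ever having touched `balances`.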
Query time: while latency obviously depends on the query itself, Transpose is built for rapidly retrieving small chunks of very specific data. Latencies of 50–150 ms are typical for most reasonable queries.
One of the other big differences between Transpose and Dune is our approach to indexing high-level data. We believe one of the biggest challenges in working with blockchain data is the inconsistency of protocol-level data across protocols serving the same purpose. Think DEXes, NFT exchanges, mixers, etc. Our indexing team works with customers to design an optimal schema that can be used for all protocols in a particular vertical, ensuring data is compatible, consistent, and predictable to work with. For example, on Ethereum we've indexed all swaps and liquidity for the 24 highest-volume DEXes, covering the vast majority of DeFi activity on the chain. Most importantly, all of this data is available in the same consistent format defined in our docs: https://docs.transpose.io/sql/tables/protocol-layer/aggregate-dex-swaps/dex_swaps/
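A shared schema means one query covers every protocol in the vertical. The sketch below uses an in-memory SQLite stand-in with made-up column names (see the `dex_swaps` docs linked above for the real schema):

```python
import sqlite3

# Toy stand-in for a unified swaps table. Column names are hypothetical,
# not Transpose's actual dex_swaps schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dex_swaps "
    "(protocol TEXT, from_token TEXT, to_token TEXT, quantity REAL)"
)
conn.executemany(
    "INSERT INTO dex_swaps VALUES (?, ?, ?, ?)",
    [
        ("uniswap_v3", "WETH", "USDC", 1.0),
        ("sushiswap", "WETH", "USDC", 0.5),
        ("curve", "USDC", "DAI", 1000.0),
    ],
)

# Because every protocol shares one schema, a single query spans them all --
# no per-protocol tables, no per-protocol column mapping.
rows = conn.execute(
    "SELECT protocol, SUM(quantity) FROM dex_swaps "
    "WHERE from_token = 'WETH' GROUP BY protocol ORDER BY protocol"
).fetchall()
print(rows)  # [('sushiswap', 0.5), ('uniswap_v3', 1.0)]
```

Without a unified schema, the same question would require a separate query (and a separate column mapping) for every DEX you wanted to include.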
All of this adds up to an experience catered to builders and developers. Hundreds of companies rely on Transpose to provide mission-critical data in production environments. Common use cases include tracking portfolio balances across time, quickly retrieving historical token prices (even as OHLC data!), conducting wallet segmentation and analytics, and tracking on-chain engagement with projects.
Both Transpose and Dune have online environments designed for exploring the available data and testing queries. Transpose is built around the Atlas and the Playground. The Atlas is a comprehensive library of queries, created by the Transpose team and community, that showcases what Transpose is capable of. One click lets you run any of them right from the browser to see how quickly they run and what data comes back. You can then tweak queries in the Playground: adding parameters, joining additional tables, or even writing your own queries after getting inspiration from the Atlas. With a couple of clicks, you can then export these queries to make API requests in your code without any additional overhead.
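Running an exported query from code amounts to a single authenticated HTTP request. The sketch below only builds the request; the endpoint URL, header name, and request body are assumptions based on Transpose's docs, so verify them there before use:

```python
import json
import urllib.request

# Illustrative query text; substitute whatever you exported from the Playground.
QUERY = "SELECT * FROM ethereum.dex_swaps LIMIT 10;"

def build_request(api_key: str) -> urllib.request.Request:
    """Build (but do not send) a request against Transpose's SQL endpoint.

    The URL, the X-API-KEY header, and the {"sql": ...} payload shape are
    assumptions drawn from Transpose's docs; check the current docs."""
    return urllib.request.Request(
        "https://api.transpose.io/sql",
        data=json.dumps({"sql": QUERY}).encode(),
        headers={"Content-Type": "application/json", "X-API-KEY": api_key},
        method="POST",
    )

req = build_request("YOUR_API_KEY")
# urllib.request.urlopen(req) would execute the query (requires a real key).
```

The same query text works unchanged in the browser Playground and in code, which is what makes the export step essentially zero-overhead.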