Current State, Competitive Landscape, and Future Opportunities of AI and Web3 Data Industry Integration

Footprint Analytics x Future3 Campus x HashKey Capital Comprehensive Report (Part 2)

Authors: Sherry, Humphrey, Researchers at Future3 Campus


  • The integration of AI with Web3 data is advancing the efficiency of data processing and enhancing user experience.
  • Currently, the exploration of Large Language Models (LLMs) in the blockchain data industry primarily focuses on utilizing AI technology to improve data processing efficiency. This includes the construction of AI Agents leveraging the interactive advantages of LLMs, and the use of AI for pricing and trading strategy analysis.
  • Presently, the application of AI in the Web3 data domain faces several challenges, such as accuracy, interpretability, and commercialization; fully replacing human intervention remains a long way off.
  • The core competitiveness of Web3 data companies lies not only in the AI technology itself but also in their ability to accumulate data and apply in-depth analysis to it.
  • AI might not be the short-term solution for commercializing data products. Commercialization requires more efforts towards productization.

1. The Current State and Development Path of the Integration of the Web3 Data Industry with AI

1.1 Dune

Dune is a leading open data analytics community within the Web3 industry, providing tools for blockchain querying, extraction, and visualization of extensive data. It allows users and data analytics experts to query on-chain data from Dune’s pre-populated database using simple SQL queries, forming corresponding charts and insights.

In March 2023, Dune announced plans regarding AI and future integration with Large Language Models (LLMs), followed by the release of their Dune AI product in October. The core focus of Dune AI-related products is to leverage the powerful language and analytical capabilities of LLMs to enhance the Wizard UX, better serving users with data queries and SQL writing on Dune.

(1) Query Explanation: Introduced in March, this feature allows users to obtain natural language explanations of SQL queries by clicking a button, aiming to help users better understand complex SQL queries, thereby improving the efficiency and accuracy of data analysis.

(2) Query Translation: Dune plans to transition different SQL query engines on its platform (such as Postgres and Spark SQL) to DuneSQL. Hence, LLMs can provide automated query language translation capabilities, aiding users in the transition and facilitating the promotion of the DuneSQL product.

(3) Natural Language Queries: Launched in October as part of Dune AI. It allows users to ask questions in plain English and retrieve data. The aim of this feature is to enable users without SQL knowledge to easily access and analyze data.

(4) Search Optimization: Dune intends to utilize LLMs to improve its search function, helping users filter information more effectively.

(5) Wizard Knowledge Base: Dune is planning to launch a chatbot that helps users quickly navigate through blockchain and SQL knowledge in the Spellbook and Dune documentation.

(6) Simplifying SQL Writing (Dune Wand): In August, Dune introduced the Wand series of SQL tools. Create Wand allows users to generate complete queries from natural language prompts. Edit Wand enables users to modify existing queries, and a Debug feature automatically troubleshoots syntax errors in queries. The core of these tools is LLM technology, which simplifies the query writing process, allowing analysts to focus on the core logic of data analysis without worrying about code and syntax issues.
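
As a rough illustration of how such a natural-language-to-SQL feature can be wired together, the sketch below composes a schema-aware prompt, passes it to any LLM callable, and applies a simple read-only guardrail. The prompt wording, schema hint, and guardrail are illustrative assumptions, not Dune's actual implementation.

```python
# Hypothetical sketch of an LLM-backed natural-language-to-SQL flow.
SCHEMA_HINT = "Table ethereum.transactions(hash, block_time, from_address, to_address, value)"

def build_sql_prompt(question: str, schema: str = SCHEMA_HINT) -> str:
    """Compose a prompt asking the model to return only a SQL query."""
    return (
        "You are a SQL assistant for blockchain data.\n"
        f"Schema:\n{schema}\n"
        f"Question: {question}\n"
        "Respond with a single SQL query and nothing else."
    )

def natural_language_query(question: str, llm_call) -> str:
    """llm_call: any callable mapping a prompt string to model output text."""
    sql = llm_call(build_sql_prompt(question)).strip()
    # Guardrail: accept only read-only statements before execution.
    if not sql.lower().startswith("select"):
        raise ValueError("model did not return a SELECT statement")
    return sql
```

Decoupling the model call behind a plain callable makes it easy to swap providers or stub the model in tests.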

1.2 Footprint Analytics

Footprint Analytics is a blockchain data solution provider that offers a no-code data analytics platform, unified data API products, and the Web3 project BI platform Footprint Growth Analytics, all empowered by artificial intelligence technology.

Footprint’s strength lies in its blockchain data production line and ecosystem tool creation. By establishing a unified data lake that connects on-chain and off-chain data with a meta-database of quasi-on-chain business registrations, Footprint ensures the accessibility, usability, and quality of data for analysis and use. Its long-term strategy focuses on technological depth and platform development to build a “machine factory” capable of producing on-chain data and applications.

The integration of Footprint’s products with AI includes:

Since the introduction of LLM models, Footprint has been exploring the integration of its existing data products with AI to enhance data processing and analysis efficiency, creating more user-friendly products. In May 2023, Footprint began offering natural language interaction for data analysis, upgrading from a no-code base to advanced product features. Users can quickly obtain data and generate charts through dialogue, without needing to be familiar with the platform’s tables or design.

In the market, LLM + Web3 data products mainly focus on reducing user barriers and changing interaction paradigms. Footprint’s focus in AI product development is not only on improving the user experience in data analysis but also on accumulating specific data and business understanding in the crypto domain. This includes training language models specific to the crypto field to enhance efficiency and accuracy in vertical scenarios. Footprint’s advantages in this area will be reflected in:

  • Data Knowledge Volume: Quality and quantity of the knowledge base, efficiency of data accumulation, sources, volume, and types. Particularly with its sub-product Footprint MetaMosaic, which exhibits relationship graphs and the accumulation of static data specific to business logic.
  • Knowledge Architecture: Footprint has already accumulated structured data tables abstracted by business segments from over 30 public chains. Knowledge of the production process from raw data to structured data can, in turn, strengthen the understanding of raw data and better train models.
  • Data Types: Training on unstandardized, unstructured on-chain raw data versus training on structured, business-meaningful data tables and indicators shows significant differences in training efficiency and machine costs. Providing LLMs with abundant data, including professional data specific to the crypto field and more readable, structured data, along with broader user feedback, is critical.
  • Crypto Capital Flow Data: Footprint has abstracted investment-related capital flow data, which includes transaction time, subjects (including direction), token types, amounts (linked to token prices at the time), business types, and labels for tokens and subjects. This data serves as a knowledge base and source for LLM, useful for analyzing main fund movements in tokens, locating chip distributions, monitoring capital flows, identifying on-chain anomalies, and tracking smart money.
  • Injection of Private Data: Footprint divides the model into three layers: a base model with world knowledge (OpenAI and other open-source models), a subdivided domain vertical model, and a personalized expert knowledge model. This allows users to manage different sources of knowledge bases unified on Footprint and use private data to train private LLMs for more personalized application scenarios.
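
A record along the lines of the capital-flow data described above could be modeled as follows. The field names and the `usd_value` helper are hypothetical illustrations, not Footprint's actual schema.

```python
# Illustrative capital-flow record: time, subjects (with direction via
# from/to), token type, amount linked to price at transaction time,
# business type, and labels. All names here are assumptions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CapitalFlow:
    timestamp: datetime
    from_address: str          # sending subject
    to_address: str            # receiving subject
    token: str                 # token type
    amount: float              # token units transferred
    token_price_usd: float     # price prevailing at transaction time
    business_type: str = "transfer"
    labels: list = field(default_factory=list)  # e.g. ["smart_money"]

    def usd_value(self) -> float:
        """Amount valued at the transaction-time price."""
        return self.amount * self.token_price_usd
```

Structuring flows this way makes downstream tasks like tracking smart money or summing inflows per token a matter of simple aggregation.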

In exploring the integration of LLM models, Footprint has also encountered several challenges, such as token limits, time-consuming prompt iteration, and unstable responses. The greater challenge in the vertical field of on-chain data is the vast and rapidly changing variety and volume of on-chain data entities. How to feed these to LLMs requires more research and exploration across the industry. The current toolchain is still relatively basic and needs more tools to solve specific problems.

In the future, Footprint’s integration with AI in technology and products will include the following aspects:


Footprint plans to combine LLM models for exploration and optimization in three areas:

  • Reasoning on Structured Data: Enable LLMs to utilize the vast accumulated structured data and knowledge in the crypto domain for data consumption and production.
  • Personalized Knowledge Base Development: Assist users in building personalized knowledge bases (including knowledge, data, and experience), and use private data to enhance the capabilities of optimized crypto LLMs, allowing everyone to build their own models.
  • AI-Assisted Analysis and Content Production: Users will be able to create their own GPT for producing and sharing crypto investment content through dialogue, combining capital flow data and private knowledge bases.


Exploration of AI Product Applications and Business Model Innovations. According to Footprint’s recent promotional plans, they will launch a platform providing AI crypto content generation and sharing for users. Additionally, regarding the expansion of future partnerships, Footprint will explore in two areas:

  • Strengthening Collaborations with Key Opinion Leaders (KOLs): Aiming to assist in the production of valuable content, community operation, and monetization of knowledge.
  • Expanding Cooperation with More Project Sides and Data Providers: Creating an open, win-win user incentive and data cooperation system, establishing a mutually beneficial one-stop data service platform.

1.3 GoPlus Security

GoPlus Security, a front-runner in the Web3 industry, is a security data service provider employing a 2B2C model to serve front-end products such as digital wallets, market software, DEXs, and various other Web3 applications. Through API integration, users gain access to a range of security protection features such as asset security checks, transfer authorization checks, and anti-phishing. GoPlus's technical architecture focuses on backdoor detection in contract samples to protect user assets from attacks via smart contracts.

GoPlus’s development and planning with AI are primarily reflected in two products: AI Automated Detection and AI Security Assistant.

(1) AI Automated Detection

Since 2022, GoPlus has been integrating AI technology to enhance the efficiency of its security engine and reduce dependence on manual analysis. The logic behind GoPlus's security engine involves collecting data analysis samples from front-end clients and analyzing them through a funnel-like process of static detection, dynamic detection, and feature or behavior detection. This identifies the characteristics of risky samples and forms patterns of attack types or behaviors, which are then used to detect whether new samples match these attack characteristics.
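
The funnel-like flow described above can be sketched as a sequence of increasingly expensive checks, where each stage narrows the candidate set before the next runs. The stage logic here is a toy assumption for illustration, not GoPlus's actual engine.

```python
# Toy funnel: cheap static screening first, then dynamic simulation,
# then behavioral pattern matching. Each stage only sees survivors of
# the previous one. Sample fields are hypothetical.

def static_check(sample: dict) -> bool:
    # Cheap source-level screening, e.g. a suspicious opcode in bytecode.
    return "selfdestruct" in sample.get("bytecode", "")

def dynamic_check(sample: dict) -> bool:
    # Simulated-execution flag, e.g. a token that blocks sells for buyers.
    return sample.get("sell_blocked", False)

def behavior_check(sample: dict) -> bool:
    # Match against patterns of known attack behavior.
    return sample.get("matches_known_pattern", False)

def funnel(samples: list) -> list:
    """Run stages in order of cost; return samples flagged by all stages."""
    suspects = [s for s in samples if static_check(s)]
    suspects = [s for s in suspects if dynamic_check(s)]
    return [s for s in suspects if behavior_check(s)]
```

Ordering the stages by cost is what makes the funnel economical: the expensive behavioral stage runs only on the small set that survives cheap screening.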

AI technology automates the collection and analysis process of risk samples. GoPlus has developed a mature AI framework to automatically generate summaries of phishing attack characteristics, speeding up the reduction of security engine’s risk exposure, enhancing efficiency, and lowering costs. Additionally, AI reduces the complexity and time cost of manual involvement, increasing accuracy in risk sample judgment, especially in new scenarios that are difficult for manual analysis or engine recognition.

In 2023, with the advancement of large models, GoPlus rapidly adapted and adopted LLMs. Compared to traditional AI, LLMs significantly improved efficiency and effectiveness in data recognition, processing, and analysis. The advent of LLMs accelerated GoPlus’s exploration in AI automated detection, further increasing efficiency. However, despite the advantages of AI in automated analysis of backdoor attack samples and phishing website analysis, its accuracy hasn’t yet met the engine’s standards. Thus, AI automated detection currently plays a supporting role, such as using AI to verify phishing website databases, rather than being the main detection method in GoPlus’s security engine.

(2) AI Security Assistant

GoPlus is also leveraging the natural language processing capabilities of LLMs to develop an AI Security Assistant that provides instant security consultation and improves user experience. The assistant, based on a GPT large model and orchestrated via LangChain, continually improves its accuracy in understanding specific business scenarios. It simplifies communication between users and security issues, lowering the barrier to user understanding. Compared to the AI automated detection in the engine, the AI Security Assistant has shown more promising results and smoother progress.

In terms of product functions and business model, given AI's importance in the security domain, it has the potential to completely transform existing security engine or antivirus engine architectures, leading to a new engine architecture centered on AI. GoPlus will continue to train and optimize AI models, aiming to transition AI from a supplementary tool to a core function of its security detection engine.

While GoPlus’s services currently target developers and project sides, the company is exploring more products and services directly for C-end users, along with new revenue models related to AI. Providing efficient, accurate, and low-cost services to C-end users will be GoPlus’s core competitiveness in the future. This will require continuous research and more training and output on the AI large model interacting with users. GoPlus also plans to collaborate with other teams, share its security data, and jointly promote AI applications in the security field, preparing for the potential industry transformations ahead.

1.4 Trusta Labs

Trusta Labs, established in 2022, is an AI-driven Web3 data startup. Trusta Labs focuses on utilizing advanced artificial intelligence technology for efficient processing and accurate analysis of blockchain data, aiming to build a reputation and security infrastructure for the blockchain. Currently, Trusta Labs offers two main products: TrustScan and TrustGo.

(1) TrustScan: Designed for B-side customers, TrustScan helps Web3 projects with user acquisition, engagement, and retention through on-chain user behavior analysis and refined segmentation to identify high-value, genuine users.

(2) TrustGo: A B2C product providing MEDIA analysis tools to analyze and assess on-chain addresses from five dimensions: fund amount, activity, diversity, identity rights, and loyalty. This product emphasizes deep analysis of on-chain data to improve the quality and safety of transaction decisions.
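
One simple way to aggregate a five-dimension address assessment like the one described is a weighted average of per-dimension sub-scores. The dimension keys, equal weights, and 0-100 scale below are illustrative assumptions, not Trusta Labs' actual model.

```python
# Hypothetical aggregation of a five-dimension address score.
DIMENSIONS = ("fund_amount", "activity", "diversity", "identity", "loyalty")

def media_score(subscores: dict, weights=None) -> float:
    """Weighted average of per-dimension scores on a 0-100 scale.

    subscores: maps each dimension name to a 0-100 sub-score.
    weights: optional per-dimension weights summing to 1 (equal by default).
    """
    weights = weights or {d: 1 / len(DIMENSIONS) for d in DIMENSIONS}
    return sum(subscores[d] * weights[d] for d in DIMENSIONS)
```

Separating sub-scores from their weights lets a product tune the emphasis per use case, for example weighting activity higher when screening for genuine users.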

Trusta Labs’ development and planning with AI are as follows:

Currently, both products of Trusta Labs utilize AI models to process and analyze interaction data of on-chain addresses. The behavior data of on-chain address interactions, being sequential data, is highly suitable for AI model training. In the process of cleaning, organizing, and labeling on-chain data, Trusta Labs entrusts a large amount of work to AI, significantly improving the quality and efficiency of data processing and reducing substantial labor costs.
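
Because interaction histories are time-ordered sequences, they can be condensed into simple features before model training. The feature choices below are hypothetical examples for illustration, not Trusta Labs' actual pipeline.

```python
# Illustrative featurization of an address's time-ordered interaction
# history. Event fields ('ts', 'contract') are assumed names.
from datetime import datetime

def sequence_features(events: list) -> dict:
    """events: time-ordered [{'ts': datetime, 'contract': str}, ...]."""
    if not events:
        return {"n_events": 0, "n_contracts": 0, "active_days": 0}
    return {
        "n_events": len(events),                             # raw activity
        "n_contracts": len({e["contract"] for e in events}), # diversity
        "active_days": len({e["ts"].date() for e in events}),# persistence
    }
```

Features like contract diversity and active days are the kind of signal that can help separate organic users from batch-operated Sybil addresses, which tend to show near-identical sequences.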

Trusta Labs uses AI technology for in-depth analysis and mining of on-chain address interaction data. For B-side customers, it can effectively identify potential Sybil addresses; in several projects that have used Trusta Labs' products, potential Sybil attacks were successfully prevented. For C-side customers, through the TrustGo product, the existing AI models effectively help users deeply understand their own on-chain behavior data.

Trusta Labs closely follows the technological progress and practical applications of LLMs. With model training and inference costs continuously decreasing, and with a large corpus and user behavior data accumulating in the Web3 domain, Trusta Labs will introduce LLM technology at the right opportunity. Leveraging AI's productivity, Trusta Labs aims to offer deeper data mining and analysis functions for its products and users. Building on the rich data already provided, it hopes to use AI's intelligent analysis models to offer more rational and objective data interpretation features, such as qualitative and quantitative interpretations of identified Sybil accounts for B-side users, helping them understand the reasons behind the data and providing substantial evidence for handling complaints and explanations to their clients.

On the other hand, Trusta Labs plans to use open-source or more mature LLM models, combined with an intent-centered design philosophy, to build AI Agents. These agents will help users solve on-chain interaction problems more quickly and efficiently. In specific application scenarios, users will be able to interact directly with AI assistants trained on LLMs provided by Trusta Labs through natural language. These assistants can intelligently feedback information related to on-chain data, and offer suggestions and plans for subsequent actions based on provided information, truly achieving a user-intent-centered one-stop intelligent operation. This will greatly lower the threshold for users to use data and simplify the execution of on-chain operations.

Additionally, Trusta believes that as more AI-based data products emerge in the future, the core competitive element of each product may not lie in which LLM model is used. The key competitive factor will be a deeper understanding and interpretation of the data already in hand. Only by interpreting the data and combining it with LLM models can a more “intelligent” AI model be trained.

1.5 0xScope

0xScope, established in 2022, is an innovative data-centric platform focused on the integration of blockchain technology and artificial intelligence. It aims to change the way people handle, use, and view data. 0xScope has launched two main products targeting B2B and B2C customers: the 0xScope SaaS product and 0xScopescan.

(1) 0xScope SaaS Products: A SaaS solution for enterprises, enabling corporate customers to manage post-investment, make better investment decisions, understand user behavior, and closely monitor competitive dynamics.

(2) 0xScopescan: A B2C product that allows cryptocurrency traders to investigate fund flows and activities on selected blockchains.

0xScope’s business focus is on abstracting a general data model from on-chain data, simplifying on-chain data analysis, and transforming on-chain data into understandable operational data, thereby aiding users in in-depth analysis. The data tools platform provided by 0xScope not only enhances the quality of on-chain data and uncovers hidden information for users, but also greatly lowers the barrier to data mining.

The development and planning of 0xScope with AI are as follows:

0xScope’s products are being upgraded in conjunction with large models, including two directions:

  • Further lowering the barrier for users through natural language interaction;
  • Using AI models to improve efficiency in data cleaning, parsing, modeling, and analysis.

Additionally, an AI-powered Chat module is about to launch in 0xScope's products, significantly lowering the barrier for users to query and analyze data by allowing interaction with the underlying data through natural language.

However, in the training and use of AI, 0xScope faces the following challenges:

First, the cost and time required for AI training are high. After a question is posed, the AI may take a long time to respond. This forces the team to streamline and focus business processes, concentrating on vertical-domain Q&A rather than building a comprehensive super AI assistant.

Second, the output of LLM models is uncontrollable. Data-centric products aim for precise results, but current LLM models may produce results that deviate from reality, which can be detrimental to the experience of data products. Additionally, the output of large models could involve users’ private data. Therefore, when using LLM models in products, the team needs to impose significant restrictions to ensure the AI model’s output is controllable and accurate.
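
One common restriction pattern for keeping LLM output controllable in a data product is to accept an answer only if every figure it cites appears in the verified query result. The matching rule below is an illustrative sketch of that idea, not 0xScope's actual implementation.

```python
# Guardrail sketch: reject model answers citing numbers that are absent
# from the ground-truth data returned by the real query engine.
import re

def grounded_answer(model_text: str, ground_truth: dict) -> str:
    """Return model_text only if all cited numbers match verified values."""
    allowed = {f"{v:g}" for v in ground_truth.values()}
    cited = set(re.findall(r"\d+(?:\.\d+)?", model_text))
    if not cited <= allowed:
        return "Unable to verify the generated answer against source data."
    return model_text
```

A check like this trades some fluency for precision, which matters more for a data product than for a general chatbot.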

In the future, 0xScope plans to use AI to focus on and deeply cultivate specific vertical tracks. Based on the vast accumulation of on-chain data, 0xScope can define the identities of on-chain users. It will continue to use AI tools to abstract on-chain user behavior, thereby creating a unique data modeling system. This system of data mining and analysis will reveal the information implicit in on-chain data.

In terms of collaboration, 0xScope will focus on two groups: first, those who can be directly served by the product, such as developers, project sides, VCs, and exchanges, who need the data currently provided by the product; second, partners who need AI Chat, such as DeBank and Chainbase, who can directly use AI Chat with relevant knowledge and data.

2. VC Insight — The Commercialization and Future Development Path of AI+Web3 Data Companies

This section presents insights from interviews with four senior venture capital (VC) investors. It explores the current state and development of the AI+Web3 data industry from the perspectives of investment and market, the core competitiveness of Web3 data companies, and their future path to commercialization.

2.1 AI+Web3 Data Industry: Current Status and Development

Currently, the integration of AI and Web3 data is in an active exploratory stage. Looking at the development directions of various leading Web3 data companies, the combination of AI technology and LLMs is an indispensable trend. However, LLMs have their technical limitations and cannot yet solve many problems in the data industry.

Therefore, it is crucial to recognize that blindly bolting AI onto a project does not automatically enhance its advantages, and AI should not be used merely as a hype tool. Instead, it is necessary to explore genuinely practical and promising application fields. From the VC perspective, the integration of AI and Web3 data has been explored in the following aspects:

(1) Enhancing Web3 Data Product Capabilities with AI Technology: This includes using AI to improve internal data processing and analysis efficiency for enterprises and to enhance automated analysis and retrieval capabilities in user-facing data products. For example, Yuxing from SevenX Ventures mentioned that the primary benefit of AI in Web3 data is efficiency: Dune uses LLMs for code anomaly detection and for converting natural language into SQL for information indexing. AI is also used for security alerts, where AI algorithms outperform pure mathematical statistics in anomaly detection, thus monitoring security more effectively. In addition, Zixi from Matrix Partners mentioned that enterprises can save substantial labor costs by pre-annotating data with AI models. Nevertheless, VCs believe that AI plays a supporting role in enhancing the capability and efficiency of Web3 data products; outputs such as data pre-annotation may still require manual review for accuracy.

(2) Building AI Agents/Bots Utilizing LLMs' Adaptability and Interactivity: This involves using large language models to search across Web3 data, including on-chain data and off-chain news data, for information aggregation and sentiment analysis. Harper from Hashkey Capital believes that such AI Agents lean towards information integration, generation, and user interaction, with relatively weaker accuracy and efficiency.

Although there are already many cases in these two application areas, the technology and products are still in the early stages of exploration, requiring continuous technological optimization and product improvement.

(3) Using AI for Pricing and Trading Strategy Analysis: Projects in the market are currently using AI technology to estimate prices for NFTs, such as NFTGo, invested in by Qiming Venture Partners, and some professional trading teams use AI for data analysis and trade execution. Additionally, Ocean Protocol has recently released an AI product for price prediction. These products capture the imagination, but their user acceptance and accuracy remain to be verified.

On the other hand, many VCs, especially those with investments in Web2, are more focused on the advantages and application scenarios that Web3 and blockchain technology can bring to AI. Blockchain’s characteristics of being publicly verifiable and decentralized, along with cryptographic technology providing privacy protection, combined with Web3’s reshaping of production relations, may bring new opportunities to AI:

(1) AI Data Rights and Verification: The emergence of AI has made data content generation abundant and cheap. Tang Yi from Qiming Venture Partners mentioned that it is challenging to determine the quality and creators of digital works and other content. Data content rights therefore need a new system, where blockchain might help. Zixi from Matrix Partners mentioned that data exchanges are wrapping data in NFTs for transactions, addressing the data rights issue.

Additionally, Yuxing from SevenX Ventures mentioned that Web3 data could mitigate AI's fraud and black-box problems. Current AI has black-box problems in both model algorithms and data, leading to biases in output results. Web3 data is transparent and publicly verifiable, making the training sources and results of AI models clearer and thus making AI fairer, with fewer biases and errors. However, the current volume of Web3 data is not yet enough to empower AI training itself, so this won't happen in the short term. This property can nevertheless be used to put Web2 data on-chain to deter AI deepfakes.
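
At its core, the idea of anchoring Web2 content on-chain to deter deepfakes reduces to committing a content hash at publication time and verifying any later copy against that commitment. The sketch below shows only the hashing and verification logic; the on-chain storage step is elided.

```python
# Content-commitment sketch: the digest is what would be recorded
# on-chain; verification recomputes it over a candidate copy.
import hashlib

def commit(content: bytes) -> str:
    """Digest to be recorded on-chain at publication time."""
    return hashlib.sha256(content).hexdigest()

def verify(content: bytes, committed_digest: str) -> bool:
    """True if the content matches the original commitment."""
    return hashlib.sha256(content).hexdigest() == committed_digest
```

Any single-byte alteration to the content changes the digest, so a tampered or generated copy fails verification against the original commitment.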

(2) AI Data Annotation Crowdsourcing and UGC Communities: Traditional AI annotation suffers from low efficiency and quality, especially in areas requiring specialized, often interdisciplinary knowledge that general data annotation companies cannot cover and that professional teams usually handle internally. Introducing crowdsourced data annotation through blockchain and Web3 concepts can significantly improve this. For example, Matrix Partners invested in Questlab, which uses blockchain technology to offer crowdsourced data annotation services. Additionally, in some open-source model communities, blockchain concepts can be used to solve economic incentive problems for model creators.

(3) Data Privacy Deployment: Blockchain technology combined with cryptographic techniques can ensure data privacy and decentralization. Zixi from Matrix Partners mentioned their investment in a synthetic data company that uses large models to generate synthetic data for software testing, data analysis, and AI model training. The company handles many privacy issues in data processing by using the Oasis blockchain, effectively avoiding privacy and regulatory problems.

2.2 How AI+Web3 Data Companies Can Build Core Competitiveness

For Web3 technology companies, the introduction of AI can to some extent increase the attractiveness or attention of a project. However, at present, most Web3 technology companies’ products that integrate AI are not sufficient to become a core competitive strength of the company. They mostly provide a more friendly user experience and efficiency improvements. For example, the threshold for AI Agents is not high; companies that first develop them may have a first-mover advantage in the market but do not create barriers to entry.

The true core competitiveness and barriers in the Web3 data industry should come from the team’s data capabilities and how AI technology is applied to solve specific analytical scenario problems.

Firstly, a team’s data capabilities include data sources and the team’s ability to analyze data and adjust models, which is the basis for subsequent work. In interviews, SevenX Ventures, Matrix Partners, and Hashkey Capital all consistently mentioned that the core competitiveness of AI+Web3 data companies depends on the quality of data sources. On this basis, engineers also need to be proficient in model fine-tuning, data processing, and parsing based on data sources.

On the other hand, the specific scenarios in which a team's AI technology is applied also matter greatly, and these scenarios should be valuable. Harper believes that although Web3 data companies' integration with AI mostly starts from AI Agents, their positioning differs. For example, Space and Time, invested in by Hashkey Capital, collaborated with chainML to launch infrastructure for creating AI agents; a DeFi agent built on it is used by Space and Time itself.

2.3 The Future Commercialization Path for Web3 Data Companies

Another important topic for Web3 data companies is commercialization. Historically, the profit model of data analysis companies has been somewhat monolithic: typically free for consumers (ToC) and monetized through businesses (ToB), heavily dependent on the willingness of B-side customers to pay. In the Web3 domain, enterprises' willingness to pay is not high, and the industry is dominated by startups, making it difficult for project sides to sustain long-term payments. The commercialization of Web3 data companies is therefore currently in a difficult situation.

On this issue, VCs generally believe that the current integration of AI technology is applied only internally to solve production process issues and has not fundamentally changed the difficulty of monetization. New product forms like AI Bots, which lack strong barriers to entry, may somewhat enhance willingness to pay in the ToC domain, but the effect is still weak. AI may not be a short-term solution to the commercialization problem of data products; commercialization requires more productization effort, such as finding more appropriate scenarios and innovative business models.

In the future path of Web3 and AI integration, combining Web3's economic model with AI data may produce new business models, mainly in the ToC domain. Zixi from Matrix Partners mentioned that AI products could incorporate token mechanics, enhancing the stickiness, daily activity, and emotional engagement of the community, which is feasible and easier to monetize. Tang Yi from Qiming Venture Partners mentioned that, conceptually, the value system of Web3 can be combined with AI, serving well as an account or value conversion system for bots: for example, a robot with its own account could earn money through its intelligence and pay for the computing power that sustains it. However, this concept remains speculative, and actual application may still be a long way off.

Under the original business model, i.e., direct user payment, strong product power is needed to increase users' willingness to pay, for example through higher-quality data sources or data benefits that exceed the cost of payment. This depends not only on the application of AI technology but also on the capabilities of the data team itself.

This article is jointly published by Footprint Analytics, Future3 Campus, and HashKey Capital.

Footprint Analytics is a blockchain data solutions provider. We leverage cutting-edge AI technology to help analysts, builders, and investors turn blockchain data, combined with Web2 data, into insights through accessible visualization tools and a powerful multi-chain API across 30+ chains for NFT, GameFi, wallet profile, and money flow data.

Future3 Campus is a Web3.0 innovation incubation platform jointly initiated by Wanxiang Blockchain Labs and HashKey Capital. It focuses on three major tracks: Web3.0 Massive Adoption, DePIN, and AI. The main incubation bases are in Shanghai, the Guangdong-Hong Kong-Macao Greater Bay Area, and Singapore, radiating across the global Web3.0 ecosystem. Additionally, Future3 Campus will launch its first $50 million seed fund for incubating Web3.0 projects, truly serving the innovation and entrepreneurship in the Web3.0 domain.

HashKey Capital is an asset management institution focusing on investments in blockchain technology and digital assets, currently managing over $1 billion in assets. As one of Asia’s largest and most influential blockchain investment institutions and also an early institutional investor in Ethereum, HashKey Capital plays a leading role, bridging Web2 and Web3. Collaborating with entrepreneurs, investors, communities, and regulatory bodies, HashKey Capital is committed to building a sustainable blockchain ecosystem. The company is based in Hong Kong, Singapore, Japan, the United States, and other locations. It has taken the lead in deploying investments across more than 500 global enterprises in tracks spanning Layer 1, protocols, Crypto Finance, Web3 infrastructure, applications, NFTs, the Metaverse, and more. Representative investment projects include Cosmos, Coinlist, Aztec, Blockdaemon, dYdX, imToken, Animoca Brands, Falcon X, Space and Time, Mask Network, Polkadot, Moonbeam, and Galxe (formerly Project Galaxy).
