You are reading an excerpt from our free, abridged report! While still packed with incredible research and data, for just $20/month you can upgrade to our FULL library of 50+ reports (including this one) and complete, industry-leading analysis on the top crypto assets.
Consensus Mechanisms vs. Sybil-Resistant Algorithms
For a decentralized network of nodes/computers to function properly, the independent network participants need to reach an agreement over a shared state (e.g., who owns what on a blockchain). While doing this, the network should remain fault tolerant with valid consensus despite imperfect information or malicious actors (Byzantine Fault Tolerance). Different blockchains implement different methods of doing so, but all are attempting to create a “consensus algorithm” that best fits their chain.
Consensus algorithms are used in public blockchain/distributed computing design to get the nodes of a decentralized system to agree on the next valid state. Within the context of public blockchains like Bitcoin and Ethereum, this means that at least 51% of network nodes agree on the network's global state. In addition, a consensus algorithm typically provides a guarantee (probabilistic or deterministic) that nodes can reach consensus on the next valid state even if some fraction of nodes in the system, up to a threshold, are adversarial.
Nakamoto Consensus requires waiting for the creation of several additional blocks to ensure transactions can't be reverted. As a result, Nakamoto chains have high uptime (they don't go down or stall) but slow finality due to their probabilistic finalization guarantee. This is because Nakamoto Consensus requires waiting for “enough” blocks to be mined on top of the block that includes the user's transaction, so that reorganizing or reverting the blockchain becomes economically impractical. This ensures a degree of “economic certainty,” but never theoretical/deterministic certainty.
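The probabilistic nature of this guarantee can be made concrete with the formula from section 11 of the Bitcoin whitepaper, which estimates the chance that an attacker controlling a fraction of the hash power ever catches up from a given number of confirmations behind. The sketch below is an illustration of that published formula, not a model of any specific chain:

```python
from math import exp, factorial

def attacker_success_probability(q: float, z: int) -> float:
    """Probability that an attacker with hash-power share q ever
    catches up from z blocks behind (Bitcoin whitepaper, section 11)."""
    p = 1.0 - q                  # honest hash-power share
    lam = z * (q / p)            # expected attacker progress while z blocks are mined
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

# With 10% of hash power, each extra confirmation sharply reduces
# the attacker's chance of reversing a transaction:
for z in (0, 2, 6):
    print(z, attacker_success_probability(0.1, z))
```

This is why exchanges wait for a fixed number of confirmations: the reversal probability never reaches exactly zero, but it quickly becomes economically negligible.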
One issue with allowing anyone to participate in the consensus of an open network is that a single malicious actor can spin up endless nodes, thereby creating multiple identities as seen by the blockchain. If one person could create enough nodes, they could theoretically control the network, a scenario known as a Sybil attack. For this reason, blockchains also need a Sybil-resistance mechanism in addition to their consensus algorithm.
A Sybil-resistance mechanism, then, is the process through which a decentralized system deters Sybil attacks. A Sybil attack occurs when a single actor floods the network with multiple identities and uses them to obtain an outsized share of power.
Ideally, each node in a decentralized system would represent one vote. If a node can impersonate multiple other nodes and cast 100, 1,000, or 10,000 votes instead of one, then the system is vulnerable to attack. Sybil attacks are typically deterred by requiring nodes to show proof of a difficult-to-fake resource (unlike online identities, which are easy to forge).
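The intuition above can be shown with a toy comparison (an illustration only, not any chain's actual vote-counting code): when influence is weighted by a scarce resource such as stake, splitting one balance across many identities gains nothing, whereas naive one-node-one-vote counting is trivially gamed.

```python
def stake_weighted_power(balances):
    """Influence proportional to a scarce, hard-to-fake resource:
    splitting stake across many identities changes nothing."""
    return sum(balances)

def one_node_one_vote(identities):
    """Naive scheme: every identity counts once -- Sybil-vulnerable."""
    return len(identities)

# One actor with 1,000 tokens, as one identity vs. 1,000 fake identities:
assert stake_weighted_power([1000]) == stake_weighted_power([1] * 1000)

# Under naive counting, the same actor multiplies their influence 1,000x:
assert one_node_one_vote(["sybil"] * 1000) == 1000
```

This is the core reason Proof-of-Work and Proof-of-Stake double as Sybil-resistance mechanisms: hash power and stake are costly to fake, while identities are free.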
Fantom doesn't leverage Proof-of-Work or have the associated security guarantees of a major, well-established network of hash power. The estimated cost of launching a 51% attack on the Bitcoin network is at least in the tens of billions of dollars, whereas buying into half the validator nodes of Fantom would theoretically cost 500,000 FTM multiplied by 38 validators (51% of the set), which is 19 million FTM, or approximately $4 million at the time of writing. When considering FTM market depth and slippage, this price would unquestionably increase. Precisely how much is uncertain, but the larger point is that it would likely remain well under $100 million, which is still orders of magnitude less economically secure than a large-cap network such as Bitcoin or Ethereum.
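The back-of-envelope arithmetic above can be laid out explicitly. Note that the FTM price used here is an assumed placeholder for illustration, and the figure ignores market depth and slippage, as the text notes:

```python
# Report figures: minimum stake per validator and the number of
# validators needed for a 51% share of the set.
MIN_STAKE_FTM = 500_000       # minimum self-stake per validator
VALIDATORS_NEEDED = 38        # ~51% of the validator set
FTM_PRICE_USD = 0.21          # assumption for illustration only

cost_ftm = MIN_STAKE_FTM * VALIDATORS_NEEDED   # 19,000,000 FTM
cost_usd = cost_ftm * FTM_PRICE_USD            # ~$4M before slippage

print(f"{cost_ftm:,} FTM ≈ ${cost_usd:,.0f}")
```

Even multiplying this by 10-20x for slippage leaves the figure far below Bitcoin's attack cost, which is the report's larger point.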
Historically, DAGs have been used in many permutations of blockchain and asynchronous consensus design. Some networks don't use blockchains in the way we associate with today's prominent L1s. For instance, DAGs have been used to connect a network of nodes that each keep a local, append-only copy of the ledger, with every node holding a copy of each chain. Fantom's innovation is along these lines rather than something like, say, Proof of Space-Time. That is to say, Fantom's consensus mechanism is an asynchronous Proof-of-Stake network built from established techniques rather than the kind of fundamental innovation underpinning some of the most competitive blockchains and crypto projects on the market.
Fantom's validator network doesn't operate on a Proof-of-Authority-style consensus mechanism: validator nodes don't publicly disclose their identities or stake their reputations in addition to FTM tokens. Instead, Fantom validators remain pseudonymous and stand to lose up to 100% of their staked FTM in the event of malicious or fraudulent behavior.
Scalability Trilemma
The scalability trilemma is a well-known issue across blockchains. The idea is that it's impossible to optimize for any two of the three core attributes of DLTs (scalability, security, and decentralization) without compromising the third. Fantom emphasizes scalability and decentralization, while the network's security is categorically distinct from that of blockchains such as Bitcoin.
The scalability trilemma. Credits: Vitalik Buterin
The goal is to increase the number of transactions processed (scalability) while retaining sufficient decentralization. Most L1 chains chasing scalability sacrifice decentralization: they design their networks to be run and secured by high-powered, expensive nodes, which prices out would-be participants and reduces the number of people who can take part in network consensus. A network that can only be verified with a large hardware budget is hardly an ideal, permissionless system.
Another trade-off often considered is for the network to use fewer nodes to achieve consensus in less time. However, this makes the chain more centralized and more vulnerable: it's easier to corrupt or destroy ten nodes than 10,000 spread across the globe.
Although often discussed in these terms, blockchain scalability doesn't just mean transactions per second (TPS). Many L1s, such as Binance Smart Chain (BSC), currently boast high TPS numbers but suffer from “chain bloat” and ever-increasing hardware requirements just to keep the chain running. L1s must be able to process more transactions without creating more problems down the road. A node in a technically sustainable blockchain has to do three things:
- Keep up with the tip of the chain (most recent block) while syncing with other nodes
- Be able to sync from genesis in a reasonable time (days as opposed to weeks)
- Avoid state bloat
Lachesis, Asynchronous Byzantine Fault Tolerance (aBFT) and Directed Acyclic Graphs (DAGs)