Blockchain technology has tremendous potential to improve data integrity as well as the way data is handled and stored. However, it comes with several less-than-attractive features.
Blockchain technology and its limitations
“By design, a blockchain is resistant to modification of the data. It is ‘an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way’. For use as a distributed ledger, a blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for inter-node communication and validating new blocks. Once recorded, the data in any given block cannot be altered retroactively without alteration of all subsequent blocks, which requires consensus of the network majority. Although blockchain records are not unalterable, blockchains may be considered secure by design and exemplify a distributed computing system with high Byzantine fault tolerance. Decentralized consensus has therefore been claimed with a blockchain.” (https://en.wikipedia.org/wiki/Blockchain)
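The retroactive-alteration property described in the quote above can be illustrated with a minimal hash-chain sketch. This is illustrative only; real blockchains add timestamps, Merkle roots, proof-of-work, and peer-to-peer validation on top of this basic linking:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (which include the previous block's hash)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(payloads):
    """Link blocks by embedding each predecessor's hash in its successor."""
    chain, prev = [], "0" * 64  # genesis block points to an all-zero hash
    for data in payloads:
        block = {"data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain) -> bool:
    """Recompute every link; tampering with any block breaks all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["alice->bob: 5", "bob->carol: 2", "carol->dave: 1"])
assert verify(chain)

chain[0]["data"] = "alice->bob: 500"   # a retroactive edit...
assert not verify(chain)               # ...invalidates every subsequent link
```

Altering one block changes its hash, which no longer matches the `prev_hash` stored in the next block, which is exactly why an attacker must rewrite every subsequent block.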
In a classic blockchain network, everything is stored on the blockchain. This not only includes the actual data but the data required to protect its integrity as well. As a result, the size of a blockchain grows continuously, and ever faster as it gains popularity. For example, the Bitcoin blockchain has grown to over 300 GB (source), while only allowing 6 to 7 transactions per second. The same holds true for the Ethereum blockchain, which has grown to over 200 GB in roughly half the time (source).
Unfortunately, allowing more transactions per second would result in an even faster-growing blockchain, one that would be impossible to keep synchronized across an entire network. Indeed, one of the major selling points of blockchain, its trustless nature, is derived from the fact that every user has a local copy of the blockchain. It's also one of its major flaws, as it impedes greater scalability and higher transaction speeds.
Data security & privacy
When it comes to data security and protection against unlawful access, blockchain technology often falls short. Since a classic blockchain protects data integrity and validates user data, it requires data to be stored on the chain itself. This user data thus becomes visible (accessible) to anyone who can access the blockchain network, regardless of whether that visibility is needed. As a result, blockchain users will not typically store classified data on the blockchain.
The same argument can be made with regards to data privacy. Storing all data on the blockchain defeats any case for true anonymity, despite the fact that blockchain addresses are not directly linked to individuals. Once a blockchain address is linked to a personal identity, the entire transaction history of that identity is accessible via the blockchain. While this transparency may serve to benefit logistics and supply chain solutions, it's highly problematic for privacy-focused industries like healthcare and corporate finance.
Again, enabling everyone to possess a copy of the entire blockchain is invariably disconcerting for users concerned about data security and privacy.
Blockchain advocates frequently cite the technology’s immutable nature – the promise that data added to the blockchain can never be altered – as one of its primary benefits. However, this argument is not entirely true, for either blockchain technology or directed acyclic graphs, also known as DAGs (the more basic, underlying technology).
In traditional blockchain technology, several versions of the same blockchain can exist simultaneously as long as new blocks within these versions comply with agreed-upon consensus rules. However, a blockchain can only be used when one blockchain version is accepted as the true version.
Currently, several solutions exist to determine which blockchain version is the true version. Among these solutions are:
- the longest chain principle, in which the longest version of the blockchain that complies with the current consensus rules is considered the true version.
- voting, in which network participants in charge of continuity vote on which version they will use to build upon (in case a conflict should arise).
- time-based, in which the version built first has priority over all other versions.
Unfortunately, implementing any one of these solutions will permanently erase data stored within an unselected version.
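The longest-chain rule, for instance, can be sketched as follows. The helper names and the trivial validity check are our own simplifications; a real implementation verifies signatures, difficulty targets, and the full consensus rule set per block:

```python
def valid(chain, consensus_rules) -> bool:
    """Toy validity check: every block must satisfy the consensus rules."""
    return all(consensus_rules(block) for block in chain)

def select_longest(versions, consensus_rules):
    """Longest-chain rule: keep the longest valid version; others are discarded."""
    valid_versions = [c for c in versions if valid(c, consensus_rules)]
    return max(valid_versions, key=len)

# Two competing versions that share a common prefix ("g", "b1"):
rules = lambda block: True            # accept everything in this toy example
version_a = ["g", "b1", "b2"]
version_b = ["g", "b1", "b2'", "b3'"]  # longer, so it wins

winner = select_longest([version_a, version_b], rules)
assert winner is version_b  # block b2 (and the data inside it) is dropped
```

Note how nothing in the rule preserves `version_a`'s unique block: whatever it contained is simply erased from the accepted history, which is the data-loss problem described above.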
This is the primary reason why most cryptocurrency transactions are not considered final as soon as they are added to the blockchain: they require multiple confirmations first. Even then, another version of the blockchain may be waiting in the wings to replace what is currently considered the true version.
Since blockchain technology is frequently perceived as being synonymous with cryptocurrency, the casual observer can be forgiven for overlooking its primary purpose: keeping track of a particular type of data in a very particular way.
Not incidentally, data management typically requires contending with a vast array of data types, not all of which can benefit from a blockchain-like structure.
Blockchain’s key features and attributes:
When we refer to speed in a blockchain network, we're referring to the number of transactions that can be processed within a given time period, typically measured in transactions per second. However, we should not conclude that 'n' transactions are processed every second. Instead, we must calculate speed from how many transactions a block can hold and how quickly a new block is created. (For example, if a block holds 1200 transactions and it takes 1 minute to create a block, the speed would be 1200 transactions per 60 seconds, or 20 transactions per second.)
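The arithmetic above reduces to a one-line helper. The Bitcoin-like figures in the second example are rough assumptions for illustration, not exact protocol constants:

```python
def transactions_per_second(block_capacity: int, blocktime_seconds: float) -> float:
    """Throughput is transactions per block divided by seconds per block."""
    return block_capacity / blocktime_seconds

# The example from the text: 1200 transactions per block, one block per minute
assert transactions_per_second(1200, 60) == 20.0

# Rough Bitcoin-like figures: ~2500 transactions per ~600-second block
btc_like = transactions_per_second(2500, 600)   # roughly 4 tps
assert 4.0 < btc_like < 4.5
```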
So, for a classic blockchain, speed is largely determined by two factors: blocksize and blocktime.
Whereas blocksize determines how many transactions can be included in a single block, blocktime determines how quickly a new block needs to be produced. Consequently, improving a blockchain’s speed requires that one increase the blocksize and/or reduce the blocktime.
Unfortunately, one cannot simply modify these factors without consequence. Increasing the blocksize not only accelerates the growth of the blockchain, it also undermines new block propagation within the network (risking the ability for everyone to remain synchronized along the way).
In a proof-of-work system, decreasing the blocktime requires computational difficulty to drop so that miners can keep up with the intended blocktime. However, decreasing computational difficulty results in an increase of orphan blocks and chain splits, thereby compromising the blockchain's reliability. Additionally, reducing the blocktime would also impact the ability of the nodes providing continuity to remain synchronized.
Blockchain security protocols aim to guarantee a blockchain’s trustability and reliability. This includes the trustability of the data stored on the blockchain, the guarantee that each user will be treated fairly, and the inability for a single entity or faction to gain control over the blockchain to manipulate it for self-benefit.
In a traditional blockchain, Merkle trees help provide trustability for the data included on the blockchain. And hashing makes it possible to determine whether the assumed data is, in fact, the same data that was used when the block was created.
As discussed in the previous article (“A Quantum resistant strategy from XTRABYTES™”), hash collisions can occur when an identical hash is created from different inputs. A Merkle tree can help preserve data integrity for vast amounts of data using only limited data. However, it becomes far less secure if only the root hash is added to the block (rather than the entire Merkle tree).
This is exactly what happens in a traditional blockchain, since including the complete Merkle tree would take up too much block space. The rationale is that, if required, the Merkle tree can be rebuilt from the provided input data, and the resulting root hash will confirm the integrity of that data. However, this no longer holds if a hash collision occurs, as a collision involves different input data that produces the same root hash.
In a traditional blockchain, any input data resulting in a hash collision would still be accepted – as the root hash from this “alternative” data would be identical to the root hash added to the block when it was created. Without the remaining original Merkle tree, it would be impossible to determine (in a trustless manner) which input data is the real data.
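The root-only verification scheme described here can be sketched as follows. This is a simplified construction (Bitcoin, for instance, double-hashes and duplicates the last node on odd levels in a similar way), meant only to show why a matching root is the sole acceptance criterion:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves) -> bytes:
    """Hash the leaves, then pairwise-hash upward until one root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[0::2], level[1::2])]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)   # only this 32-byte value is stored in the block

def block_accepts(claimed_txs, stored_root) -> bool:
    """Root-only check: ANY input set whose root matches is accepted."""
    return merkle_root(claimed_txs) == stored_root

assert block_accepts(txs, root)
assert not block_accepts([b"tx1", b"tx2", b"tx3", b"txX"], root)
# A true hash collision (different inputs, same root) would also pass this
# check, and without the full tree there is no trustless way to tell the
# colliding input apart from the original data.
```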
A network can scale when it retains the ability to offer its services as the number of users grows. This is not the same thing as speed since speed is merely the ability to process a certain number of transactions in a given period of time (regardless of the transaction’s origin). Scalability entails the ability to offer identical transaction speeds, whether all transactions come from one source or from different sources.
Scalability can also be interpreted as the efficiency of a network to service all of its users at the same time. Accordingly, a network that cannot scale will not achieve global adoption, as its users have no guarantee that they will be serviced within an acceptable time frame (without incurring additional costs).
Lack of scalability leads to (in)voluntary exclusions, long waiting periods, and increasing costs for users. An excellent example of this occurred at the height of the crypto bull market in late 2017 and early 2018. The Bitcoin network became so congested that users had to wait multiple days for their transactions to be confirmed. As a result, investors began missing trading windows and losing money.
At the same time, transaction costs in the Bitcoin network rose to nearly $70 per transaction. This fee hike reflected supply and demand, as miners sought out transactions with high transaction fees to include in the new blocks. Likewise, miners generally ignored smaller transactions with minimal transaction fees. This bottleneck amply demonstrated Bitcoin's scalability issues at the time, reflecting the network's bias toward the financially powerful during peak demand.
Perhaps the second most important feature of a good blockchain is its level of decentralization. Sufficient decentralization ensures that it’s impossible for a single user or faction to take control of the network.
A network hijacking can occur for many reasons, from centralizing block creation for financial purposes to excluding certain users from the network and destabilizing the network's reliability and trustability. A well-known hijacking tactic is the 51% attack, in which an entity or faction gains control over 51% or more of the resources required to run the network. A successful 51% attack allows the attacker to intentionally exclude transactions or modify their ordering. Being in control also enables the attacker to reverse transactions, a situation that inevitably leads to a double-spending problem. And although 51% attacks are usually expensive to carry out (especially on well-established blockchains), they are not inconceivable. They remain a very powerful tool for undermining the trustability of a blockchain network.
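Why majority control is decisive can be seen in the attacker catch-up probability analyzed in the Bitcoin whitepaper (section 11): a majority attacker overtakes the honest chain with certainty, while a minority attacker's odds decay exponentially with the number of confirmations. The sketch below is our own rendering of that formula:

```python
import math

def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of the hashing power ever
    overtakes the honest chain from z blocks behind (Nakamoto's formula)."""
    p = 1.0 - q
    if q >= p:
        return 1.0          # a majority attacker (51%+) always catches up
    lam = z * (q / p)       # expected attacker progress while z blocks confirm
    total = 0.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        total += poisson * (1 - (q / p) ** (z - k))
    return 1.0 - total

# With 51% of the hashing power, success is guaranteed:
assert catch_up_probability(0.51, 6) == 1.0
# With 10%, six confirmations make success vanishingly unlikely:
assert catch_up_probability(0.10, 6) < 0.001
```

This is why exchanges typically wait for several confirmations: each one multiplies the cost of reversing a transaction for any attacker below the 50% threshold.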
In a classic blockchain network, one or more network characteristics (security, scalability & speed, or decentralization) is compromised for the sake of another. Even today, a viable solution that avoids this compromise remains out of reach. Critics assert that the very nature of blockchain technology makes it impossible to resolve this 'sacred' trilemma, leaving any practical solution to come from outside the classic blockchain structure.
Why is XTRABYTES different?
As mentioned in the previous article (“An Introduction to PoSign”), XTRABYTES intends to employ a network structure which is completely different from a classic blockchain structure. In particular, the XTRABYTES network will employ a structure where data storage, data integrity protection, and data validation are three separate processes (which is quite at odds with a classic blockchain network). Unlike most projects, our technology does not focus on merely one application. Instead, we will be offering a multitude of decentralized services, not all of which will require data storage, data integrity protection, or data validation. In turn, these latter features will be offered as separate decentralized services in their own right. As such, other applications will be able to use these services as needed.
Our decentralized data integrity protection service (PoSign) will produce a data chain that does not require the actual data itself to be stored on it. As such, PoSign can provide data integrity protection for a multitude of data structures, all the while remaining far more efficient than a network that employs multiple (block)chains in an effort to achieve the same. Likewise, this special capability decreases the need to store data on the chain simply to facilitate syncing across the network.
Instead, users only need to store data that is relevant to them – as any other data required for validation can be provided by others (and still be considered trustless thanks to our data integrity protection service). Since our data storage and data validation capabilities do not reside within the same location, data does not need to be validated before it is stored. Consequently, each data transaction will involve two separate and distinct actions: a transaction request and a transaction validation (accept or reject).
All transaction requests and validations (both rejections and approvals) will be recorded on the data chain that supports these (data) transactions. By arranging transactions to be recorded this way, we’re creating a much more transparent network. In addition, this capability enables us to identify the trust level of various network entities.
If we apply this arrangement to cryptocurrency transactions, cryptocurrency tokens can have multiple states, for example: available for spending (confirmed), waiting for validation (reserved), and waiting for confirmation (pending). This list of possible states is not exhaustive; depending on the application/coin, other states can exist as well to accommodate additional features. Accordingly, users will (possibly) be able to recall a transaction request, particularly in the case of an error (wrong address, wrong amount, …). And the transaction's recipient will be able to either accept or reject this transaction request.
Obviously, this is very different from the way a classic blockchain handles (data) transactions, where only validated transactions are recorded on the blockchain and transactions cannot be recalled without altering blockchain data.
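One way to picture this multi-state lifecycle is as a small state machine. This sketch is entirely hypothetical: the state names come from the text, but the transition rules are our own assumption about how request, recall, and accept/reject might fit together:

```python
from enum import Enum

class TxState(Enum):
    # State names taken from the text; descriptions are the text's glosses.
    PENDING = "waiting for confirmation"
    RESERVED = "waiting for validation"
    CONFIRMED = "available for spending"
    RECALLED = "recalled by the sender"
    REJECTED = "rejected by the recipient"

# Assumed transition rules (our guess, not a specified protocol):
TRANSITIONS = {
    TxState.PENDING: {TxState.RESERVED, TxState.RECALLED},
    TxState.RESERVED: {TxState.CONFIRMED, TxState.REJECTED, TxState.RECALLED},
    TxState.CONFIRMED: set(),   # final: recorded validations are never removed
    TxState.RECALLED: set(),
    TxState.REJECTED: set(),
}

def advance(state: TxState, new_state: TxState) -> TxState:
    """Move a transaction to a new state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {new_state.name}")
    return new_state

s = advance(TxState.PENDING, TxState.RESERVED)   # recipient accepts
s = advance(s, TxState.CONFIRMED)                # validation recorded
assert s is TxState.CONFIRMED
```

The key contrast with a classic blockchain is that the intermediate states exist at all: a sender can still recall a PENDING request, something impossible once a transaction is baked into a block.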
Any data storage type or structure allowed
Since the actual data does not need to be stored in the same location as the proof for data integrity, the actual data no longer needs to be stored in a blockchain structure. Consequently, the actual data involved can be stored in a variety of different data structures: centralized, decentralized, public and private. For instance, data that would normally be stored on local devices, centralized databases, private accounts, classified environments, etc. can also use PoSign without the need for actually revealing this data to the network. Indeed, the actual data protected by our data integrity protection service (PoSign) can be shared conditionally (depending on the application) using strict access control (as determined by the data owner). This capability opens up our network to industries that cannot reveal their data on a public ledger but still seek the trustless characteristics of a public blockchain.
Ultimately, where transparency is required, any application can employ a public ledger. And if data security or privacy is required, private storage is now an option as well.
One true chain
Our data integrity protection service (PoSign) employs a time-based protocol without a competitive aspect. This feature provides continuity to the data integrity protection chain (currently referred to as the XCHAIN). Thus, for each (data) transaction request there will be a unique entry in the XCHAIN, as well as one for every (data) transaction cancellation and every (data) transaction validation. Since no orphan blocks or chain splits can occur, a (data) transaction is final once the (data) transaction validation is added to the XCHAIN. As a result, the risk for double-spending attacks and disappearing transactions is non-existent as no entry can ever be deleted or replaced once it is added to the XCHAIN.
Furthermore, since time is an integral part of our consensus algorithm, it will be impossible for someone to alter any entry that is already part of the XCHAIN. Doing so would require going back in time to be able to provide the required time-based data that is needed to comply with the rules for consensus.
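How binding entries to time can make them final may be easier to see with a toy append-only log. This is emphatically NOT the actual XCHAIN design (whose consensus details are not specified here); it only illustrates why chaining each entry to its timestamp and predecessor resists retroactive edits:

```python
import hashlib
import time

class AppendOnlyChain:
    """Toy time-anchored, append-only log (illustrative sketch only)."""

    def __init__(self):
        self.entries = []                  # (timestamp_ns, payload, entry_hash)
        self.prev_hash = b"\x00" * 32

    def append(self, payload: bytes) -> None:
        now = time.time_ns()
        if self.entries:                   # enforce strictly increasing time
            now = max(now, self.entries[-1][0] + 1)
        entry_hash = hashlib.sha256(
            self.prev_hash + now.to_bytes(8, "big") + payload
        ).digest()
        self.entries.append((now, payload, entry_hash))
        self.prev_hash = entry_hash

    def verify(self) -> bool:
        """Every entry must be later than its predecessor and hash correctly."""
        prev, last_ts = b"\x00" * 32, -1
        for ts, payload, entry_hash in self.entries:
            expected = hashlib.sha256(prev + ts.to_bytes(8, "big") + payload).digest()
            if ts <= last_ts or entry_hash != expected:
                return False
            prev, last_ts = entry_hash, ts
        return True

chain = AppendOnlyChain()
chain.append(b"tx-request:1")
chain.append(b"tx-validation:1:accepted")
assert chain.verify()

# Rewriting a past entry would require reproducing its original timestamp
# inside every later hash - effectively "going back in time":
ts, _, h0 = chain.entries[0]
chain.entries[0] = (ts, b"tx-request:FORGED", h0)
assert not chain.verify()
```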
Decentralization, scalability, security – and speed
By not requiring data validation, data storage, and data integrity to co-exist in the same location, the number of (data) transactions (requests & validations) that can be entered into XCHAIN is limited only by how quickly network users can add and validate them. In other words, PoSign does not contain a limiting factor for the number of (data) transactions (requests & validations) that can be added in a single interval. Moreover, the PoSign algorithm is designed to optimize the number of transactions included per time interval. If multiple suggestions regarding the number of transactions for a specific time interval should occur, the algorithm will always select the suggestion that has the most valid entries. And since users need only sync the data that is relevant to them, they will not confront any network synchronization issues that are due to chain size.
XTRABYTES’ network security is obtained through the PoSign protocol on the one hand and the STATIC network on the other hand, as previously discussed in our article on quantum resistance. The integrity of our network data will enjoy higher protection than what is currently provided by a classic blockchain (where only the root hash for the Merkle tree is responsible for this protection). In the XTRABYTES network, thanks to the use of our data integrity protection service, each individual entry will enjoy its own protection as well as the overall protection for all entries per specific time frame.
The XTRABYTES network also retains very strict and specific rules for who can do what on the network (as a means to prevent bad actors from taking over the network). Additionally, the XTRABYTES network offers the necessary transparency to identify who is acting in good faith and who isn’t. This feature enables our decentralized network management service to act accordingly.
Traditionally, any network that employs a 'limited' number of nodes restricts its capacity for decentralization. By limiting the number of nodes, a network can process transactions faster, require fewer connections to achieve consensus, and handle a larger blocksize. Unfortunately, this not only impacts the decentralized nature of the network, it also impacts the network's ability to scale (as each node can only maintain so many outside connections before losing efficiency). As a result, networks with a low number of nodes are more centralized and inevitably struggle with growth.
Since PoSign is not a competitive consensus algorithm (like PoW or PoS), node owners have no incentive to "join forces" in order to increase their chances of receiving a reward, the usual motive for pooling. Thus, while PoSign's non-competitive design removes the benefits associated with 'pooling' assets, it also reduces the centralization dangers that come with it.
Unlike PoW and PoS (and their multiple variations), PoSign does not allow a single STATIC node to provide network continuity on its own. Indeed, PoSign assigns various network continuity tasks in such a way as to prevent a single STATIC or group of cooperating STATICs from controlling the network.
According to https://masternodes.online/, of the over 300 masternode projects currently in operation, 5 networks have over 5k nodes, 17 networks have between 2k and 5k nodes, and 31 networks retain between 1k and 2k nodes. The XTRABYTES STATIC network will enroll 3584 nodes in its initial phase, placing the network in 9th place on the list of networks that employ 'masternodes'. Not incidentally, the XTRABYTES network can easily expand to accommodate more STATIC nodes should the need arise. Indeed, simply doubling the network's STATIC nodes would put it in second place on the aforementioned list.
Furthermore, to keep up with growing demand for the services offered by the XTRABYTES network, STATIC owners will be incentivized to upgrade their nodes' hardware, since the rewards they receive for participating in these services will depend on the quality of the service they offer through their STATIC.
Additionally, since every type of node serves a specific purpose within the network, each node will be optimized for the task it is assigned. Consequently, the nodes tasked with connecting users to the network will primarily be optimized to execute this specific task.
Each STATIC node added to the network also enables it to become more decentralized and efficient. Thus, XTRABYTES’ network can ensure greater network decentralization simply by allowing more STATIC nodes to join the network. Likewise, its ability to scale can be guaranteed with a hardware upgrade of the STATIC nodes.
Finally, whereas a classic blockchain relies on a centralized data validation scheme, XTRABYTES has decentralized this process. Since data validation, data integrity protection, and data storage do not reside in the same location on the XTRABYTES network, it is not up to the STATIC providing continuity to provide data validation. Furthermore, since every data transaction will have at least two states (transaction request & transaction validation), data validation can be achieved on either the server side (STATIC nodes) or the client side (individual network users).
Accordingly, the XTRABYTES network provides different options for data validation – ranging from validation executed by the users for the transaction requests they have received to decentralized data validation services offered by the STATIC nodes. As such, application developers will be able to select the type of validation that best suits their needs.
And by decentralizing the data validation process, the network’s ability to process data transactions is no longer limited by the performance of one single entity. This, in turn, guarantees a network capable of unlimited speed and scalability.
The XTRABYTES network is much more than a blockchain network with a novel consensus algorithm. Instead, our network employs a completely different structure than found in a classic blockchain network, as it allows both public and private data to be processed and leaves data transparency to the discretion of the data owner.
This new approach enables our network to provide transaction speeds that would otherwise be impossible using a traditional blockchain network. In addition, we also provide a very high level of security when it comes to data integrity and protection against network attacks.
Ultimately, our unique approach to data validation not only guarantees a network that can scale, it also sets the stage for how distributed ledger technology will handle data in the future.