For my first blog post, I'm going to give a controversial analysis of why IPFS and its ecosystem are in a precarious state, and what this could mean for the future of IPFS.
What Is IPFS?
IPFS stands for InterPlanetary File System and is a peer-to-peer network featuring content-addressable storage. Instead of addressing content by its location, as is done on the modern Internet, content is addressed by the content itself, via a "content identifier" (CID), also sometimes called an IPFS hash. While IPFS as a technology is pretty cool, its current state is concerning, and to me it indicates that the future of IPFS is not as happy as the ecosystem at large thinks. There's a massive array of problems, ranging from technical and implementation issues to the management of the project.
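To make content addressing concrete, here's a minimal sketch in Python. This is only an illustration of the idea, not IPFS's actual behavior: real CIDs are multihash/multibase encodings with codec metadata, not bare SHA-256 hex digests, and `put`/`get` here are toy helpers, not IPFS APIs.

```python
import hashlib

def content_id(data: bytes) -> str:
    # In a content-addressed store, the address is derived from the data
    # itself, so the same bytes always map to the same identifier.
    # A bare SHA-256 hex digest stands in for a real CID here.
    return hashlib.sha256(data).hexdigest()

# A toy content-addressed store: data is looked up by its hash,
# not by a location like a hostname or path.
store = {}

def put(data: bytes) -> str:
    cid = content_id(data)
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    return store[cid]

cid = put(b"hello interplanetary world")
assert get(cid) == b"hello interplanetary world"
# Identical content always yields the identical address.
assert put(b"hello interplanetary world") == cid
```

The key property this demonstrates is that any node holding the same bytes can serve them under the same address, which is what lets IPFS fetch content from whoever happens to have it.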
Who Develops IPFS?
Although IPFS is an open-source project, it was created by a company called Protocol Labs, and as such, Protocol Labs is the primary source of development activity. There is a pretty vibrant community of developers, whether it's people who help develop the protocol or people building applications on top of IPFS, but by and large, the main driver of design decisions is Protocol Labs. Whether this is a good thing or a bad thing is really up to you, the reader, but personally speaking, I think it has been the main source of problems and will continue to be in the future.
What Are These Problems?
Truth be told, I could probably write multiple blog posts on the various problems IPFS has, but people have short attention spans, so I'll keep it short and sweet by focusing on a few fundamental issues:
- IPFS is incredibly slow
- There is a massive amount of "uncoordinated coordinated" developer activity
- Too much desire to create new things as opposed to using existing things
- IPFS services are costly
- IPFS is extremely difficult to use
- There is a lack of incentive for Protocol Labs to build out IPFS as opposed to building out FileCoin
IPFS Is Incredibly Slow
If you use IPFS, this should come as no surprise, but for anyone first experimenting with IPFS or starting to build applications on it, the most common questions asked of the IPFS community have to do with how slow it is. These questions include:
- Why can't other people discover content my node is hosting in my house?
- Why do my DigitalOcean VMs get OOM (Out-Of-Memory) errors?
- Why do gateways take so long to find content?
- Why does IPFS consume so many resources?
There Is A Massive Amount Of "Uncoordinated Coordinated" Developer Activity
This is a mouthful, and might not even make sense to some readers. What I mean by "uncoordinated coordinated" is that although on the surface there are a ton of working groups for IPFS, and a ton of new initiatives put forth by Protocol Labs to build out IPFS and solve problems, these activities stop being coordinated once the idea is realized.
Essentially, Protocol Labs and the IPFS developers are very good at analyzing problems and coming up with solutions to them. However, they are not very good at actually fixing the issues, keeping those fixes on track, and resisting the urge to hop from one issue to the next without ever fully resolving any of them.
The next release of IPFS is set to be 0.5.0 and was originally supposed to ship in Q4 2019, but it's Q2 2020 and it's nowhere to be seen. How come? Well, due to a series of mishaps in the development process, a ton of regressions and new issues were introduced across a series of releases, most notably starting with 0.4.18 and ending with 0.4.22. Because of this, further releases were postponed in favor of creating something called the testground, whose goal was to be a thorough testing environment for future IPFS releases, and new releases were put on hold until the testground was ready.
So what happened? Well, it turns out that after five months of development activity on the testground, it was realized that development had started without a plan or a concrete set of objectives. I repeat: the mistake was only recognized five months after it was made. This is not good. It would be excusable if this were the first such oversight, or if Protocol Labs were a brand-new startup, but it's neither. Protocol Labs has been around for about four years, raised millions in VC funding and incubators for IPFS, and has raised $257 million for the development of FileCoin, an incentivization layer for IPFS.
Because the development of IPFS is so public, it's extremely easy to get insight into what's going on. For example, the testground developers have started a new initiative called ResNet, a "resilient network testing environment," despite the testground itself not being ready. They are also working on a node called "hydra," a seed node for IPFS content. And this is just the testground developers.
Too Much Desire To Create New Things As Opposed To Using/Improving Existing Things
If you head on over to the libp2p docs on NAT (libp2p is the networking stack for IPFS), you'll see that they talk about STUN/TURN for NAT traversal. The wording mostly comes across as though libp2p uses a STUN/TURN solution. That's not quite true: the developers of the NAT functionality are creating a NAT traversal solution from scratch. Is this solution similar to the existing STUN/TURN approach? Yes, it is. Does it work? No. NAT traversal is perhaps one of the most troublesome parts of IPFS, and it barely works on good days.
Technologies like STUN, TURN, ICE, SDP, etc. are fantastic and work very well. They're backed by major standards bodies like the IETF and IEEE, and they're battle-tested. Perhaps the reason NAT traversal doesn't work is this desire to create a new, unique solution, even though there's a viable solution in place that could be adapted to work with libp2p?
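To give a sense of how simple and well-specified these standards are, here's a sketch in Python of constructing a STUN Binding Request header per RFC 5389. This only shows the 20-byte header format; actually sending the packet to a STUN server and parsing the XOR-MAPPED-ADDRESS response are omitted.

```python
import os
import struct

def stun_binding_request() -> bytes:
    # RFC 5389 header layout: 16-bit message type, 16-bit message
    # length, 32-bit magic cookie, then a 96-bit transaction ID.
    msg_type = 0x0001          # Binding Request
    msg_length = 0             # no attributes in this minimal request
    magic_cookie = 0x2112A442  # fixed value defined by RFC 5389
    transaction_id = os.urandom(12)
    return struct.pack("!HHI", msg_type, msg_length, magic_cookie) + transaction_id

pkt = stun_binding_request()
assert len(pkt) == 20
# The first two bytes identify this as a Binding Request.
assert pkt[:2] == b"\x00\x01"
```

A client would send this over UDP to a public STUN server to learn its own external address and port, which is the first step of standard NAT traversal. The entire wire format fits in a page of code, which is part of why "adapt the standard" seems more attractive than "reinvent it."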
Cryptography has a rule that you shouldn't roll your own crypto, and I think that logic is totally valid elsewhere too. Just because you can make a new solution doesn't mean you should, and it doesn't mean it's a good investment of resources.
IPFS Services Are Extremely Expensive
The majority of IPFS service providers are extremely expensive, typically charging 3x-10x the cost of traditional cloud storage solutions for simple data storage services. While this isn't necessarily a bad thing in and of itself, it will harm the adoption of IPFS beyond a tiny circle of users. Can these small circles of users sustain the technology? Sure, but that's not the goal of IPFS. The purpose of IPFS is to replace the traditional web, to become the interplanetary storage solution that lets people on Mars and Earth use the same data sets and transfer them to each other. Look at GNUnet; look at Freenet. They are used, but they're stagnant technologies that never reached the mainstream.
IPFS will not go mainstream if it costs 3x-10x as much as typical cloud storage technologies, because most people couldn't give two shits about the benefits of decentralization. The driving factor for the majority of businesses adopting new technologies is cost.
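To put the multiplier into perspective, here's a back-of-the-envelope comparison. The per-GB price below is a made-up placeholder for illustration, not a quote from any real provider; only the 3x-10x multiplier comes from the observation above.

```python
# Hypothetical monthly price per GB for traditional cloud storage;
# this number is illustrative only.
cloud_price_per_gb = 0.02
ipfs_multiplier_low, ipfs_multiplier_high = 3, 10

data_gb = 5_000  # a modest 5 TB dataset

cloud_cost = data_gb * cloud_price_per_gb
ipfs_low = cloud_cost * ipfs_multiplier_low
ipfs_high = cloud_cost * ipfs_multiplier_high

print(f"cloud: ${cloud_cost:.2f}/mo, IPFS pinning: ${ipfs_low:.2f}-${ipfs_high:.2f}/mo")
```

For a cost-driven business, the difference between $100 a month and $300-$1,000 a month for the same 5 TB is the entire decision, regardless of decentralization benefits.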
IPFS Is Extremely Difficult To Use
IPFS is not exactly easy to use. It requires building your application from the ground up, or modifying your existing application to use IPFS and cope with a variety of niche issues to work properly. Additionally, because IPFS services are extremely expensive, people are pushed to run their own infrastructure. This wouldn't be so bad if IPFS were battle-tested, free of issues, and the clients were stable, but unfortunately, this is not the case.
Spending five minutes on the IPFS forums or GitHub repositories will reveal dozens of issues, ranging from VMs crashing due to out-of-memory errors to deadlocks, extremely high disk I/O, etc. All of this takes a lot of management overhead to deal with. If adopting a new technology means taking on a new set of issues, that translates into a higher TCO (Total Cost of Ownership) for businesses. And guess what? More often than not, that means companies will stay with their current technology stack.
Lack Of Incentive For Protocol Labs To Build Out IPFS As Opposed To Building Out FileCoin
Right now, the majority of Protocol Labs is working on FileCoin, which is behind schedule. Over the last few months, one can easily observe that the number of official developers working on IPFS has decreased. Have these people been laid off? Not really; they've been transitioned to developing FileCoin. While there is nothing wrong with this, it means that the development of IPFS has stalled.
The problem is that this affects the people who are actively using IPFS. It's affecting people who are depending on Protocol Labs to fix IPFS and make it usable, so that they don't have to rely on band-aid fixes, only to find that the next release of IPFS replaces the old band-aid with a new one, sometimes over an even bigger wound.
Why is this? Well, it's pretty easy to determine. Protocol Labs raised $257 million in an ICO for FileCoin, and they are extremely behind schedule. The latest "new date" (after two to four pushbacks) was March 2020, but March is almost over and the FileCoin mainnet is nowhere to be seen. This means Protocol Labs is under massive pressure to get FileCoin out the door, and as a result, the development of IPFS is stalling.
Some people may say that FileCoin is an incentivization layer for IPFS, and that IPFS will still be usable without it. While IPFS is certainly usable without FileCoin, the very development of FileCoin serves to distract Protocol Labs from doing IPFS work. It distracts them from building an implementation of IPFS that can live and work without FileCoin.
What Does This Mean For IPFS?
If you've stayed around until now, thanks for reading. I imagine this post will upset quite a few people, but it's something that needs to be said and acknowledged. The future of IPFS is not as bright and clear as people think it is, or as Protocol Labs tries to convey. We are in extremely troubled waters, where the wrong decisions (of which there are many) could put the final nail in the coffin of IPFS, sending it the way of so many former P2P technologies: disappearing into the shadows, unused by anyone except the nerdiest of nerds.