Excess computing power. Most people have it, be it the spare laptop that's sitting around somewhere, be it some meters of pre-filled server racks, because the first thing the project planner knew was that he'd need massive CPU power, RAM and disk space for his new project. What project? Never mind, we'll put those server racks to use eventually (or any grade of excess computing power between these two extremes). We could leave that computing power where it is; as long as it isn't powered up, it just eats space and valuable resources that don't matter much taken one by one but are huge when considered altogether.
Enter: Golem Network Token
Sometimes people or projects need a little extra computing capacity, but as pointed out above, holding excess computing capacity is a huge waste of resources. Cloud service providers like Microsoft, Google and Amazon took a step in the right direction by sharing computing capacity amongst users, each one living in a sandboxed environment (well, maybe. Think Meltdown and Spectre and you'll get the picture). Many distributed computing projects developed their own client software for their needs, the most prominent being SETI@home. Each form of distributed computing has its own downsides: projects like SETI need very specific client software and users actively seeking to integrate their computing power with the project, while "cloud computing" providers share the market amongst a few big companies which are all too eager to shut your account down if it seems to violate their Terms of Service or if management doesn't like the looks of your nose. While very specific client software is not bad by design, depending on a company to like you is something you should try to avoid.
How does it work? Front.
Seen from the front, Golem Network does a good job of applying the KISS principle: people with excess computing power may add it to the network in the role of a Provider; the interface lets them select the maximum capacity offered to the Golem Network and always leaves one processor core and some RAM unused, allowing emergency access to the system even if the Golem Client should lock up everything else. People in need of computing power join the network as Requestors, describe their task and set a maximum bounty for the entire task to be done (should the Providers ask for less, you may even get your task done cheaper than expected, but you always know what the maximum cost of the submitted task will be). While providing and requesting excess capacity are rather simple tasks, Developers are the people who connect both sides by designing programs to be run on the Golem Network. Simple as that.
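The "always leave one core and some RAM free" idea can be sketched in a few lines. This is a hypothetical illustration, not the real Golem client API; the function name and the RAM reserve default are my own assumptions.

```python
# Hypothetical sketch (not the actual Golem client): compute how much
# capacity a Provider can advertise while always keeping 1 CPU core and
# a reserve of RAM free for emergency access to the host system.
def offerable_capacity(total_cores: int, total_ram_gb: float,
                       ram_reserve_gb: float = 2.0) -> dict:
    """Return the maximum capacity a Provider may offer to the network."""
    return {
        "cores": max(total_cores - 1, 0),              # keep 1 core free
        "ram_gb": max(total_ram_gb - ram_reserve_gb, 0.0),
    }

print(offerable_capacity(total_cores=8, total_ram_gb=16.0))
# {'cores': 7, 'ram_gb': 14.0}
```

The point is simply that the Provider's offer is derived from, and strictly smaller than, the machine's total capacity, so a runaway guest task can never starve the host completely.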
How does it work? Back.
Developers provide a set of detailed computing instructions for the Provider and instructions the Requestor can hook into; a software development framework allows use of common Golem libraries for networking and security features, as well as abstractions that make the computations operating-system- and architecture-independent. Providers install the Golem Client on their machines and decide how much of their capacity they want to offer and what their minimum fee is. The latter is a rather important setting, as every Provider offers capacity within its own economic neighborhood: using Mongolia's cost structure to decide on a fixed fee for US data centers just doesn't work out. Requestors send out packets containing data and instructions that fit into the hooks a Developer built into their software; they decide on a maximum fee they are willing to pay.
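The fee negotiation described above boils down to a simple matching rule: a task can only land on Providers whose minimum fee fits under the Requestor's maximum bounty. A minimal sketch, with illustrative names rather than anything from the real Golem SDK:

```python
from dataclasses import dataclass

# Hypothetical sketch of the fee matching described above; ProviderOffer
# and eligible_providers are illustrative names, not real Golem SDK types.
@dataclass
class ProviderOffer:
    name: str
    min_fee_gnt: float  # minimum fee in GNT, set per local cost structure

def eligible_providers(offers, max_bounty_gnt: float):
    """Cheapest-first list of the Providers a Requestor can afford."""
    affordable = [o for o in offers if o.min_fee_gnt <= max_bounty_gnt]
    return sorted(affordable, key=lambda o: o.min_fee_gnt)

offers = [
    ProviderOffer("us-datacenter", 5.0),  # high local costs
    ProviderOffer("spare-laptop", 0.5),   # low local costs
    ProviderOffer("rack-farm", 2.0),
]
for o in eligible_providers(offers, max_bounty_gnt=3.0):
    print(o.name)  # spare-laptop, then rack-farm
```

This is also why the cheaper-than-expected outcome mentioned earlier can happen: if the cheapest eligible Providers undercut your maximum bounty, you pay their fee, not your ceiling.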
As the Golem Network is decentralized, trusting other parties to do their job right is not advised: part of the Developers' job is to create a testing engine which automatically checks incoming results for plausibility and downright miscalculations. On Provider systems, the Golem Client creates a secure enclave which is supposed to protect the Requestor's tasks from the host system [you read that correctly: while the common approach to security is protecting the host system from outside influences, Golem Network put some effort into protecting the guest task from being influenced by the host system], and a container which is supposed to protect the Provider's system from getting compromised. Once results are returned, the Requestor's system checks them to ensure the processed data works out the way it is supposed to. All the while, a series of smart contracts distributes tasks among Provider systems according to the Requestors' queue and makes sure everyone is getting paid their share.
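One cheap way such a testing engine could check incoming results for plausibility is spot-checking: recompute a random sample of subtasks locally and reject the batch if any Provider result disagrees. A hedged sketch under that assumption; the function is illustrative, not Golem's actual verification code:

```python
import random

# Hypothetical spot-check sketch: the Requestor recomputes a random
# sample of subtasks with a trusted local reference function and
# compares against what the Providers returned.
def plausible(results: dict, reference_fn, sample_size: int = 2,
              seed: int = 42) -> bool:
    """Return True if all sampled Provider results match a local recompute."""
    rng = random.Random(seed)
    sample = rng.sample(list(results), min(sample_size, len(results)))
    return all(results[task] == reference_fn(task) for task in sample)

# Example: the subtask is "square this number".
honest = {n: n * n for n in range(10)}
print(plausible(honest, lambda n: n * n))  # True
```

Sampling keeps verification far cheaper than the original computation while still making systematic cheating risky for a Provider, since any sampled subtask may expose a forged result.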
What's coming up?
The first practical deployment of Golem software splits 3D (CGI) rendering into many small tasks which can be handled by the Golem Network; next on the radar is a framework for machine learning. In the long run, you may rent computing capacity to perform any task you wish to compute. Think "decentralized distributed computing" alongside "decentralized data storage" such as IPFS. Got an idea of what this project may grow into?
By providing total strangers with access to my computer systems, I take a serious risk of having my machine compromised; by processing data other people have computed for me, I need to trust them not to have put malicious code inside, as any secure enclave or container may be buggy enough to allow breaking out of the container or into the enclave. The Golem Network faces this risk with several enrolled IT security specialists who have a good track record of prior achievements. While researching for this blog post, I found the Golem Chat very helpful, kindly providing links to further details. Altogether, the project has a sound mission and a great team that considers in detail the risks your computer, be it Requestor or Provider, may face.
Definitely a project worth keeping an eye on :)