Blockchain Scaling in PoS

Scaling as a function of finality

In Part 1 of this series, we discussed PoW finality and the consequence of a consensus mechanism in which there is always a chance the chain will be overtaken by a competing chain, thereby reverting transactions. Transaction finality is never a guarantee. Because of this, all nodes in the network verify all blocks and all transactions, so the network can only process as many transactions as one node can independently process.

In Part 2, we looked into ways of improving this by increasing block size and decreasing block frequency, but ultimately those changes are limited by how fast transactions and blocks can propagate through the network. So while throughput can be increased slightly by these methods (still less than 100 TPS), they don’t change the constraint of the single-node bottleneck. Scaling globally will require the responsibility for validating network state to be divided between the nodes. This now becomes an issue of finality.

If transactions beyond a certain age have no probability of being reverted, we don’t need to store or verify against the entire history of transactions, only the ones on which the current state depends.

The beginnings of PoS come from a long history of research into distributed agreement in replicated state machines. Going back to the 1970s, many people were working on building reliable communications from unreliable parts in commercial aircraft. A communication fault at 30,000 feet with 150 passengers on board has real consequences, so determining data validity is a life-and-death problem. From this field of work came the classical Byzantine Generals’ problem, which birthed Byzantine Fault Tolerance (BFT). Up until 1999 (when Practical BFT was invented), the Byzantine Generals’ problem was mostly an academic one which no one had yet solved. Major computing systems around the world had no practical use for it, as banking, stocks, communications, and the internet were all centrally hosted and controlled.

It wasn’t until 2008 that Satoshi Nakamoto brought a practical solution to the Byzantine Generals’ problem into a globally scaled distributed network. This quickly proved to be a stable and secure way to achieve fault-tolerant consensus, causing research into applying these previously academic ideas to real-world applications to explode.

Stated simply, Proof-of-Work is really just a way to verify that the history the block creator has compiled of what occurred since the last block is valid and allowable. The crux of this consensus protocol is to do this without having to trust the honesty of the block creator and to keep the process open and available to all (while also dynamically maintaining a 10-minute block frequency despite changes to the overall computing power in the network, details… details). The penalty is that if you act improperly, the block is provably invalid and you are not rewarded for the computational effort you expended. This computational effort has proven to grow in direct relation to the value of the asset, which over the past couple of years has brought some concern. As it currently stands, with bitcoin at $4,013, each transaction uses 511 kWh, enough energy to power 17.27 US households, producing 242.73 kg of CO2.

So if this can be done by some other means, maintaining the same security, trustlessness, and distributed nature of a PoW network without the need to expend physical resources, that would be a more efficient method of achieving BFT consensus and unarguably better for everyone.

Initial implementations of this technology were incomplete, as the multitude of issues solved by Bitcoin’s implementation of PoW were not completely understood. Some would argue that even today, many of the more nuanced issues have yet to be solved.

There are several ways a block producer can be selected in PoS: by votes (Delegated PoS, e.g. EOS), by stake (PoS), randomly (e.g. Nxt), or by coin age (e.g. Peercoin). The core concept remains the same: the network is secured not by the physically scarce resource of computing power, as in PoW, but by the digitally scarce resource of the network’s currency/asset.

In staking PoS, the users who wish to participate in the block creation process must “stake” their coins by locking them in a wallet. Just as in PoW, where a user’s probability of creating a block is proportional to their share of the network’s computing power, a staked user in a PoS network is selected to create a block with probability proportional to their share of the total staked coins in the network.
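
As a rough illustration, stake-weighted selection can be sketched as a weighted random draw. This is only a minimal sketch of the proportional-selection idea: the validator names and stake amounts are hypothetical, and real protocols use verifiable randomness rather than a local RNG.

```python
import random

# Hypothetical staked balances: validator -> coins locked for staking.
stakes = {"alice": 5_000, "bob": 3_000, "carol": 1_500, "dave": 500}

def select_block_producer(stakes, rng=random):
    """Pick a validator with probability proportional to its share of total stake."""
    total = sum(stakes.values())
    draw = rng.uniform(0, total)
    running = 0
    for validator, stake in stakes.items():
        running += stake
        if draw <= running:
            return validator
    return validator  # floating-point edge case: fall back to the last validator

# alice holds 50% of the total stake, so she should win roughly half the draws.
picks = [select_block_producer(stakes) for _ in range(10_000)]
print(picks.count("alice") / len(picks))  # ~0.5
```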

This requires the user’s wallet with the staked funds to remain online and available for this process, a requirement that leads to lower participation than desired. This architecture has proved to be about as efficient as PoW implementations, with the limiting factors in throughput being block propagation and, ultimately, single-node throughput. Once 2/3 of the nodes in the network* approve the proposed block, the block becomes a finalized addition to the blockchain. You can see that finalization in these networks is highly dependent on the number of staked wallets (and therefore nodes) currently part of the active selection process. Pulling the math on finality from Part 1 of this series, assume we have a PoS network with a 5-second block time (fast-ish), so ω = 1/5, and 10,000 validators (highly decentralized). That gives us a finality of 10,000/(1/5), or 10,000 × 5, or 50,000 seconds (~14 hours). That isn’t practical for the transaction of anything that requires verifiable proof of validity.

From Part 1 on finality: in PoS networks, time to finality is f = n/ω, where n is the number of validator nodes in the network needed for consensus and ω is the protocol overhead.
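
As a quick back-of-the-envelope check of those numbers (a sketch only, using the simplified f = n/ω model from Part 1, not a full protocol specification):

```python
def finality_seconds(n_validators, overhead):
    """Time to finality, f = n / ω, from the simplified model in Part 1."""
    return n_validators / overhead

# Single-chain PoS: 5-second blocks, so ω = 1/5, with 10,000 validators.
omega = 1 / 5
print(finality_seconds(10_000, omega))           # 50000.0 seconds
print(finality_seconds(10_000, omega) / 3600)    # ~13.9 hours
```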

The main issue with pure PoS implementations is something called the Nothing-at-Stake problem. In these networks, there is no punishment for voting on competing blocks. Users could therefore selectively vote on blocks that create a competing fork which does not include a particular unwanted history. Because these blockchains operate on the same longest-chain principle, overtaking the main chain is much cheaper than a similar attack on a PoW network. The solution is to create a framework for detecting malicious behavior and punishing it. Very much easier said than done.
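
One common ingredient of such a framework is a slashing condition for equivocation: penalizing a validator that signs two different blocks at the same height. The sketch below is a hypothetical illustration of that idea only; the penalty fraction and data shapes are made up and do not correspond to any specific protocol’s slashing rules.

```python
from collections import defaultdict

votes = defaultdict(dict)   # validator -> {height: block_hash}
SLASH_FRACTION = 0.5        # hypothetical penalty: lose half of the staked coins

def record_vote(validator, height, block_hash, stakes):
    """Accept a vote, slashing the validator if it already voted for a different block at this height."""
    previous = votes[validator].get(height)
    if previous is not None and previous != block_hash:
        stakes[validator] *= (1 - SLASH_FRACTION)  # equivocation detected: punish it
        return False
    votes[validator][height] = block_hash
    return True
```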

This issue goes hand in hand with long-range attacks: even if there is punishment for voting on competing blocks, there is nothing stopping an attacker from building up a fork from long ago without fear of penalty. If this is done, and users are free to immediately unstake, the incentive not to vote on a long-range fork from some block height ago is removed. If 2/3 of the validators do this and vote for a contentious fork which is longer than the main chain, the history would be overwritten. One solution to this problem is to give unstaked coins a cooling-off period before they can vote again. This gives the main chain time to grow past the window in which the validators are present on both chains, thereby nullifying the risk. The only problem is that the cooling-off time needed for this to work effectively could be on the order of weeks to months. Another solution is to implement explicit finality. If you consider a block at a certain height, (current height) − X, as final and unable to be overwritten, then this would invalidate any long-range fork. This is also solved in DPoS by considering a block with 2/3 of the delegates’ signatures as final.
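
A minimal sketch of the explicit-finality rule described above, under the assumption that any block more than X blocks behind the tip is treated as irreversible (the depth of 100 and the function names are illustrative):

```python
FINALITY_DEPTH = 100  # X: blocks behind the current tip that are considered final

def finalized_height(current_height, depth=FINALITY_DEPTH):
    return max(0, current_height - depth)

def accept_fork(current_height, fork_branch_height):
    """Reject any competing fork that branches at or below the finalized height."""
    return fork_branch_height > finalized_height(current_height)

# A long-range fork branching 500 blocks ago is rejected outright, however long it is.
print(accept_fork(current_height=10_000, fork_branch_height=9_500))  # False
```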

Votes and delegates being determined by the number of tokens users own brings an inherent centralization risk through cartel creation or “whale” manipulation. This issue is not exclusive to PoS consensus, as mining power is just as much at risk of centralization; PoS just makes it a social issue and opens the door to a much broader topic for another day: distributed governance. No in-protocol solution has been presented to prevent the rich getting richer or to guarantee censorship resistance at this time. The hope is that the social space will inevitably notice cartels forming, create conversation around it, and, in the interest of protecting the group interest, vote to reorganize the blockchain.

Making the consensus process a probabilistic election rather than a computational competition nullifies† the orphan-block issue caused by speeding up block frequency in PoW consensus. This gives immediate throughput gains by allowing the network to be pushed faster than before. But you are still limited by the time it takes for a block to propagate through the network and for 2/3 of the node pool to sign it.

The issues with finality, low actual participation in securing the network, and the difficulty of solving the Nothing-at-Stake problem can really be addressed by limiting the number of block-producing nodes in the network. Time to finality scales linearly with the number of nodes, and tracking and punishing real-time Byzantine behavior is much easier over 100 or even 10 nodes than over 10,000.

So this is the PoS version of the PoW ceiling, which was defined as: "the network can only process as many transactions as one node is able to". In classical, single-chain PoS, your ceiling becomes: "the network can only process as many transactions as the node pool is able to". But how can this be done without creating a rich-get-richer ecosystem with no accountability to the users in the network, while maintaining "decentralization"? This is where Delegated PoS, or DPoS, has proved to be a much more stable implementation with a tremendous technical advantage over the most accepted PoW networks.

In DPoS, instead of having a staked user be the probabilistic block producer, the pool of candidates is reduced to a static number and a node’s block-producing status is based on votes from stakeholders. While this reduces the decentralized distribution of wealth within the network, block producers are usually required (or socially pressured) to give value back to the voters or the network as a whole in the form of profit sharing or other technological/social increases of value, e.g. third-party software or social action.

Reducing the number of nodes allows the architects to push the bounds of block size and block frequency to the limits of bandwidth and latency within the network. The distinct variables of the network, e.g. block frequency, size, and node quantity, are based on the accepted trade-offs between finality latency, decentralization, and overhead [Fig. 1]. For DPoS mechanisms, since every node must send a message signing the block before the next block can be made, the number of messages per block is one to (block proposal) and one from (block signature) each node per block time, B, or: ω = 2n/B. For example, assuming we have a DPoS network with a 0.5-second block time and 21 validators (EOS), we have ω = 42/0.5, or 84. That gives us a finality of 21/84, or 0.25 seconds. So the time from the moment you send a transaction, to its acceptance into a block, to its being 100% finalized is at most 1.25 seconds (depending on your local latency).
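
The same back-of-the-envelope model again, now with the DPoS overhead term ω = 2n/B plugged in (just a sketch of the arithmetic above):

```python
def dpos_overhead(n_validators, block_time):
    """ω = 2n/B: one proposal message to, and one signature from, each node per block."""
    return 2 * n_validators / block_time

def finality_seconds(n_validators, overhead):
    """f = n / ω from the simplified Part 1 model."""
    return n_validators / overhead

omega = dpos_overhead(21, 0.5)       # 84.0
print(finality_seconds(21, omega))   # 0.25 seconds
```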

As a grand consequence of PoS variants with explicit finality, we can approach the problem of scaling from a different angle. Because there is no possibility of earlier transactions being reverted, there is no need to carry the whole history or blockchain of transactions, only a snapshot taken prior to the last finalized block. We are then free to explore how little each validating node has to do while keeping the network secure and provably valid. Traditional scaling solutions treated it as a problem of getting more transactions through a node or a pool of nodes (vertical scaling), but this allows us to look at scaling as a problem of splitting the network into multiple pools of nodes handling cells of the network, thereby multiplying the number of transactions that can be processed by the number of pools available (horizontal scaling).

Horizontal scaling has been used in databases for many years. In a common table of data, this is done by splitting up rows and storing them in separate tables, e.g. addresses of people with last names starting with A-M in one table on one server and addresses of people with last names starting with N-Z in a second table on a second server. This effectively shares the data load between two servers, doubling capacity, and requires the servers to communicate with each other in order to compile network-wide queries. This is database sharding.
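
A toy version of that last-name split, with two dictionaries standing in for the two servers (the names and addresses are made up):

```python
# Two "servers", each holding the rows for one half of the alphabet.
shard_a_to_m = {}  # last names A-M
shard_n_to_z = {}  # last names N-Z

def shard_for(last_name):
    return shard_a_to_m if last_name[0].upper() <= "M" else shard_n_to_z

def insert(last_name, address):
    shard_for(last_name)[last_name] = address

def lookup(last_name):
    return shard_for(last_name).get(last_name)

def all_addresses():
    """A network-wide query has to gather and merge results from every shard."""
    return {**shard_a_to_m, **shard_n_to_z}

insert("Anderson", "12 Oak St")
insert("Zhang", "99 Pine Ave")
print(lookup("Zhang"))        # 99 Pine Ave (served by the N-Z shard)
print(len(all_addresses()))   # 2
```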

A blockchain, at its very core, is just a database recording events in time based on a set of rules, so we can apply this theory to blockchain networks. This is done by splitting the validator nodes into groups which maintain subsets of addresses, e.g. addresses beginning 0x0 in one group, addresses beginning 0x1 in another group, etc. Those nodes maintain a blockchain for only those addresses and coordinate transactions to other address groups by interacting with the other node groups. In simple terms, that’s exactly what Ethereum’s implementation of sharding proposes to do. This increases the throughput from what one node can process to what one node can process times the number of shards in the network.
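
A minimal sketch of prefix-based shard assignment and cross-shard routing. The 16-way split on the first hex character after “0x” is an assumption for illustration, not Ethereum’s actual sharding specification:

```python
NUM_SHARDS = 16  # assume one shard per leading hex character after "0x"

def shard_of(address):
    """Map an address like '0x3ab4...' to a shard by its first hex character."""
    return int(address[2], 16) % NUM_SHARDS

def route_transaction(sender, receiver):
    src, dst = shard_of(sender), shard_of(receiver)
    if src == dst:
        return f"intra-shard tx on shard {src}"
    # Cross-shard: the sender's node group must coordinate with the receiver's node group.
    return f"cross-shard tx from shard {src} to shard {dst}"

print(route_transaction("0x0a1f", "0x1c9e"))  # cross-shard tx from shard 0 to shard 1
```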

These shards don’t just have to be copies of each other with the same rule set governing them. They can have different assets, block sizes, frequencies, consensus mechanisms, anonymity, contracts, fees, etc. For all intents and purposes, each shard is a standalone blockchain. The nodes governing a shard maintain consensus within it while also rolling their shard state into the global state of the network. That ties the shard states to each other, combining the state, and therefore the security, of all shards. A consequence of this architecture is that everything that happens within a shard is usable by and interactable with what occurs in other shards. In a single-asset sharded network, this means being able to send tokens from a 0x0 address to a 0x1 address with the network operating imperceptibly differently from a single-chain blockchain. But in a sharded network where shards have different rules and functions, this opens the door to endless complex interoperability.

This is where Ethereum, the second-largest cryptocurrency, is headed. And it goes farther than that. Scaling a sharded network is as easy as adding nodes to support an additional shard. Let’s say you have a 10-shard network and every shard can handle 200 TPS, for a total network throughput of 2,000 TPS. If you begin to run out of capacity and transactions begin to pile up, you can redefine shard boundaries, add a shard, and assign nodes to it to increase throughput to 2,200 TPS. In essence, this architecture allows for endless scalability (THEORETICALLY).

*2/3 of the nodes for fault-tolerant consensus is derived from the Byzantine Generals’ problem. Suppose we have a traitorous general, A, and two commanders, B and C. When A tells B to retreat and C to attack, and B and C compare their messages, neither B nor C can figure out who the traitor is, since it is not necessarily A generating the traitorous message. It can be shown in general that if n is the total number of generals and t is the number of traitors, then there can be verifiable consensus only when n > 3t.

†Not really. But for the sake of the argument, let’s just assume all the details that go into making that a correct assumption are baseline.

https://medium.com/@jonchoi/ethereum-casper-101-7a851a4f1eb0
