
What is Blockchain and how it works

Introduction to Blockchain, DLT Technologies and their economic context.

Introduction

A blockchain is an open, distributed ledger in which transactions between two peers can be recorded in a trustworthy and immutable way. Transactions are recorded across all the computers of the network, so no recorded transaction can be altered without altering all subsequent blocks on the chain.
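
A minimal sketch of this chaining property may help (a toy structure, not Bitcoin's actual block format): each block stores the hash of its predecessor, so changing any past transaction invalidates every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64  # placeholder hash for the genesis block
for tx in ["Alice->Bob:1", "Bob->Carol:2"]:
    block = {"tx": tx, "prev": prev}
    prev = block_hash(block)
    chain.append(block)

# Tampering with the first transaction breaks the link to the second block:
chain[0]["tx"] = "Alice->Bob:100"
assert chain[1]["prev"] != block_hash(chain[0])  # the chain no longer verifies
```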

The state of blockchain development varies between countries and continents. Most of the companies working with this technology are located in North America (USA & Canada) or Asia (mainly Singapore). Blockchain is one of the most promising technologies mankind has ever seen, as it gives power back to the people. As Marc Andreessen (co-author of the first widely used web browser) puts it, blockchain will be a digital tsunami:

The practical consequence is that for the first time, there is a way for one Internet user to transfer a unique piece of digital property to another Internet user without creating a copy of it, such that the transfer is guaranteed to be safe and secure, everyone knows that the transfer has taken place, and nobody can challenge the legitimacy of the transfer. The consequences of this breakthrough are hard to overstate.

To get a grasp of the potential of blockchain technology, here are some market predictions: distributed ledger technology (DLT) could save eight of the ten largest banks around 32% in costs, Goldman Sachs estimates it could save global capital markets $6 billion annually, and Santander believes it could save the banking sector between $15 billion and $20 billion a year in infrastructure costs.

General view of DLT Technologies

What is a ledger?

A centralized ledger is a database, or a physical book, used to keep a series of transactions in a specific order. It represents the current state of a system: if a transaction is not written in the ledger, it is not considered at all. It also trivially resolves any conflict between peers, since the ledger is the record of all transactions and they are all in order. It is therefore a tool to store relevant data and to guarantee consensus between peers.
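
A minimal illustration of "the ledger is the state": replaying the ordered entries yields every account balance, and the order of the entries resolves any dispute (all names and amounts here are made up).

```python
# The ledger is just an ordered list of entries; the system state
# (here, account balances) is derived by replaying them in order.
entries = [
    ("Alice", "Bob", 30),
    ("Bob", "Carol", 10),
    ("Carol", "Alice", 5),
]

balances = {"Alice": 100, "Bob": 100, "Carol": 100}
for payer, payee, amount in entries:
    balances[payer] -= amount
    balances[payee] += amount

# Any dispute is settled by the record: if a transfer is not in
# `entries`, it simply did not happen.
print(balances)  # {'Alice': 75, 'Bob': 120, 'Carol': 105}
```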

A centralized ledger, or simply a ledger, is a basic tool for accountancy, bookkeeping and notarization. Practically all financial transactions until the introduction of Bitcoin in January 2009 had been stored on centralized ledgers.

Figure 1: Characteristics of a common economic ledger

Each centralized ledger inherently requires or introduces a centralized authority within a system. The owner of the ledger is the owner of the data and is responsible for how the entries are written. In most cases, the owner charges each user a fee to access the data and to store new entries. Authority is a social and economic construct, and it has very little to do with technology. If several peers agree not to own the data and to pay a fee, one can assume that there is a strong underlying motivation for doing so.

Perhaps the ledger is imposed by a higher authority, such as a central bank, or the peers are usually in conflict and an agreed authority that imposes consensus reduces the amount of time the system spends in a transitional state. Either way, the ledger is just a tool with which a trustworthy authority keeps data stored and enforces consensus.

Economic context

Since their introduction, probably 7,000 years ago in Mesopotamia, ledgers have been a central part of economics and finance. The three keywords used to characterize ledger technology in the previous section (trust, maintenance costs, and consensus) have been present since the beginning.

These three key aspects also help to understand why a ledger is abandoned by its users. When the authority is no longer trustworthy, when the fee is higher than the economic benefit, or when the enforcement of consensus is not efficient, the peers move from one ledger to another.

Until very recently, this meant that users moved from one central authority to a different central authority that offered significant advantages in terms of trust, cost, and consensus quality. The Internet and P2P software have proven that one can efficiently store information and enforce consensus between peers without a central authority. However, the third term in the “distributed ledger equation” was still missing: trust.

The real revolution of the blockchain, first introduced by Nakamoto, was that consensus could be reached with no trust between peers at all. As a consequence, a fully distributed ledger without a central authority became possible for the first time.

This technological achievement was contemporaneous with many other social and economic changes. The global financial meltdown had eroded trust in financial institutions, including central banks. At the same time, many P2P businesses were successfully cutting out human brokers and intermediaries, reducing operating costs for all peers.

Every new technology that is found to be useful comes with a varying amount of hype attached, and few technologies have attracted as much hype as distributed ledgers. This helps the technology become more widely known, but at the same time it reduces the signal-to-noise ratio of the information that non-tech-savvy audiences receive.

In any case, users who need to store transactions and reach consensus now have an alternative to an appointed central authority. However, removing trust from the “distributed ledger equation” comes at a cost, and to understand how much one has to pay, we must dive into the technical aspects of blockchain and distributed ledgers.

Some important theorems and hypotheses

To understand the fundamental aspects of DLT we must dive into some theory about distributed systems. We have already mentioned the three key aspects of any ledger: information storage, consensus and trust. The next step is to link these three properties to more quantifiable ones using the CAP theorem.

The CAP theorem, or Brewer’s theorem

This theorem states that a distributed computer system cannot simultaneously provide more than two of the following three guarantees:

  1. Consistency: Every read receives the most recent data written to the system.
  2. Availability: Every request receives a response, without the guarantee that it contains the most recent write.
  3. Partition tolerance: The system continues to operate despite an arbitrary number of messages being dropped or delayed by the network between peers.
Figure 2: Venn diagram that illustrates the CAP theorem. CA, CP, and AP systems are possible.
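
A toy illustration of the trade-off under a network partition (entirely schematic; real systems are far subtler): a replica that cannot reach its peers must either answer with possibly stale data (AP) or refuse to answer (CP).

```python
class Replica:
    def __init__(self, mode: str):
        self.mode = mode          # "AP" or "CP"
        self.value = "v1"         # last value this replica has seen
        self.partitioned = False  # can it reach the other replicas?

    def read(self):
        if self.partitioned:
            if self.mode == "CP":
                # Consistency over availability: refuse to answer.
                raise TimeoutError("cannot confirm latest write")
            # Availability over consistency: answer, possibly stale.
            return self.value
        return self.value

replica = Replica(mode="AP")
replica.partitioned = True    # a write "v2" happened elsewhere, unseen
print(replica.read())         # returns stale "v1"

replica = Replica(mode="CP")
replica.partitioned = True
try:
    replica.read()
except TimeoutError as e:
    print("unavailable:", e)
```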

This theorem has three important implications if one wants to use a distributed system, like a DLT, to store the information of a smart contract.

The first two guarantees are the expected properties of any storage system: one expects to always obtain a non-error response from the storage, and for that response to contain the most recent information stored. Any relational database offers these two guarantees. But in a distributed system, where the nodes send information to their peers through a network, one may sometimes get an error (the information is not available) or outdated information. This is a theorem; there is nothing we can do about it.

The second implication comes from the fact that these three guarantees cannot be provided simultaneously, meaning that in a distributed system, one may get the most recent state without errors, but not exactly at the time it is required. In other words, if one wants to retrieve data from the DLT and wants that data to be the most recent state of the system, there will be some latency, and there is nothing anyone can do about it.

Note that a centralized ledger is not a distributed system. One could perfectly well use a single instance of a SQL (or NoSQL) database to store the data with perfect consistency and availability.
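
As an illustration, a minimal centralized ledger can be sketched with a single SQLite instance; the table layout and column names here are hypothetical.

```python
import sqlite3

# A single-node ledger: one authoritative database, no CAP trade-off.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ledger (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,  -- total order of entries
        payer   TEXT NOT NULL,
        payee   TEXT NOT NULL,
        amount  REAL NOT NULL,
        ts      TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

# Every committed write is immediately visible to every subsequent read:
# a single instance is trivially consistent and available.
conn.execute("INSERT INTO ledger (payer, payee, amount) VALUES (?, ?, ?)",
             ("Alice", "Bob", 1000.0))
conn.commit()

for row in conn.execute("SELECT id, payer, payee, amount FROM ledger ORDER BY id"):
    print(row)
```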

Latency is key in systems engineering because it is one of the determining aspects of the quality of service. We have seen that distributed systems are subject to latency; the question is how much, since some applications require high-bandwidth, low-latency storage, and DLT may not be able to provide that.

The Byzantine agreement problem and the cost of consistency

Assume that the Byzantine army is encircling a Persian city. The generals, each leading a portion of the army, have to formulate a plan for attacking the city; they must simply decide whether to attack or retreat. Some generals prefer to attack, others prefer to retreat, some may be traitors, some may be deaf, one general is sick and cannot leave his tent… The important point is that the strategy will fail if it is not applied consistently by all generals. The army must reach a consensus among the generals, and the whole army must attack or retreat.

Lack of consistency implies defeat. This problem was first described by Lamport, and it has been proven to be unsolvable in general (with oral messages, for instance, no solution exists if a third or more of the generals are traitors), which gives a measure of how hard it is to achieve consistency with faulty communications or hostile nodes.
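
A minimal sketch of why three generals cannot tolerate one traitor (the names, message values, and tie-breaking rule are all illustrative, not Lamport's full algorithm):

```python
# Three generals, one traitor: the commander sends an order, the two
# lieutenants exchange what they received and take a majority vote.
from collections import Counter

def decide(heard_directly, relayed_by_peer):
    """Majority vote over the value heard directly and the relayed one."""
    votes = Counter([heard_directly, relayed_by_peer])
    return votes.most_common(1)[0][0]

# Case: the commander is the traitor and sends conflicting orders.
order_to_l1, order_to_l2 = "attack", "retreat"

# Loyal lieutenants faithfully relay what they heard.
l1_decision = decide(order_to_l1, order_to_l2)  # sees {attack, retreat}
l2_decision = decide(order_to_l2, order_to_l1)  # sees {retreat, attack}

# With a two-way tie, each lieutenant falls back on what it heard directly
# (Counter keeps insertion order for ties), so they reach different
# decisions: consensus fails with n = 3 generals and m = 1 traitor.
print(l1_decision, l2_decision)  # attack retreat
```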

Figure 3: Original figures from Lamport's article about Byzantine agreement. If the number of generals (or lieutenants) is small enough, a general can find a strategy to reach consensus. However, a consensus protocol becomes harder as the number of lieutenants grows.

Ledgers must always be consistent, because operations with outdated information are errors from a fundamental point of view. Think about a virtual currency: if two peers see a different balance for a user, that user can exploit the difference for an attack, like spending the very same currency twice. This is the double-spending attack, an example of a Byzantine fault, and it is solved once and for all by the Bitcoin network with the Nakamoto consensus.

Since the system has some inherent latency, the attacker’s goal is to make the peers accept that transaction and transfer the goods before they realize they have been paid with the exact same currency. This attack is impossible with actual cash or with a central ledger.
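A minimal sketch of why a consistent ledger stops double spending; the coin identifiers and method names are hypothetical:

```python
# A toy UTXO-style ledger: each coin can be spent exactly once.
class Ledger:
    def __init__(self):
        self.unspent = {"coin-1": "Alice"}  # coin id -> current owner

    def spend(self, coin, sender, receiver):
        # A consistent ledger sees every prior spend, so a second
        # attempt to spend the same coin is rejected.
        if self.unspent.get(coin) != sender:
            raise ValueError(f"{coin} is not spendable by {sender}")
        self.unspent[coin] = receiver

ledger = Ledger()
ledger.spend("coin-1", "Alice", "Bob")        # first spend succeeds
try:
    ledger.spend("coin-1", "Alice", "Carol")  # double spend fails
except ValueError as e:
    print("rejected:", e)
```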

The Nakamoto consensus will be explained later in this document, but the key point here is that consistency comes at a price. The more hostile nodes there are, or the more sophisticated the attack, the harder it is to reach consensus. In consequence, the harder the circumstances, the higher the latency. If the Byzantine fault is hard enough, it is proven that there is no way to reach consensus at all.

Bitcoin as an example

It is time to use Bitcoin as an example to clarify the concepts introduced with the CAP theorem and the Byzantine agreement problem, since Bitcoin's Nakamoto consensus is designed precisely to deal with those issues. The protocol will not be described in detail; the focus will be on the key aspects of the consensus.

Assume Alice wants to send 1 BTC to Bob to pay her rent. Bob, who does not trust Alice, wants the Blockchain to certify that the payment is legitimate. Alice will connect to her Bitcoin node to check that she has enough currency to perform the transaction. She will build the transaction and send it to the miners. Each miner will build a block, with or without the transaction; even if a miner includes Alice's transaction in its block, the position of the transaction within the block may vary between miners. Each miner will then try to compute a proof of work, the solution to a mathematical problem that is very hard to compute and very easy to verify. The first miner to find the proof of work submits the new block to the miners and the nodes, putting Alice's transaction at the end of the Blockchain, which allows Bob to verify that Alice has actually submitted the transaction by looking at the Blockchain on his own node. To keep the miners running, each miner receives some currency once the proof of work is computed, and each user can attach a fee to a transaction to push it into the next block.

Figure 4: Characteristic steps of the proof-of-work process of the Nakamoto consensus. 1) The client sends the transaction to the miners. 2) One of the miners obtains the proof of work. 3) The clients fetch the new block with the proof of work and chain it to the previous blocks of the chain.
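
To make the proof-of-work idea concrete, here is a minimal sketch; the difficulty and block format are toy assumptions, far simpler than Bitcoin's actual protocol.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce such that the block hash starts with `difficulty` zeros.

    Hard to compute (brute-force search), trivial to verify (one hash).
    """
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

block = "prev_hash=...;tx=Alice->Bob:1BTC"   # toy block contents
nonce = proof_of_work(block)

# Verification is a single hash computation:
digest = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
assert digest.startswith("0000")
print(nonce, digest)
```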

This short and incomplete description allows us to analyze this consensus using the CAP theorem. The system is not consistent at all times; consistency happens in snapshots separated by a given time. During the transient between snapshots, the transactions are being sent to the miners, or the miners are computing the proof of work. The proof of work is such an expensive computation that all the miners together consume approximately 343 MW worldwide and, if the present trend continues, they will consume as much electrical power as Denmark by 2020.

The size of each block is fixed at 1 MB, and the time between blocks is approximately 10 minutes; the cost of the proof of work is tuned to keep the time between blocks constant. An average transaction is about 0.5 KiB in size, so a block holds roughly 2,000 transactions and the average total bandwidth of Bitcoin as a database is between 3 and 4 transactions per second. This average value is far from the worst-case scenario.
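
The throughput figure follows directly from these parameters; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope Bitcoin throughput from the quoted parameters.
block_size = 1_000_000   # 1 MB block, in bytes
tx_size = 512            # average transaction: 0.5 KiB, in bytes
block_interval = 600     # seconds between blocks (~10 minutes)

txs_per_block = block_size / tx_size          # ~1950 transactions per block
throughput = txs_per_block / block_interval   # ~3.3 transactions per second
print(f"{txs_per_block:.0f} tx/block, {throughput:.1f} tx/s")
```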

If the winning miner pushes an empty block, the bandwidth has been wasted for 10 minutes. In addition, two miners may find a proof of work at the same time, temporarily breaking the consistency of the network. For this reason, it is advisable to wait for a couple more blocks to appear before considering a transaction verified. This means that any Bitcoin transaction takes between 8 minutes and half an hour, which is probably too much for most financial transactions.

Most of these parameters are tunable, but only up to a point. The mining time probably should not be set to 1 second, the same way one should not configure a block size of 1 GiB. One of the consequences of the CAP theorem is that those parameters have an optimal value depending on the latency of the network, the number of nodes, the most common attack…

A bank as a counterexample

Assume Alice wants to send 1,000€ to Bob to pay her rent. Bob does not trust Alice, but he trusts his bank. Alice will ask her bank to transfer 1,000€ to Bob's account. If Bob's account is in the same bank as Alice's, the whole process takes place in less than a second and requires no fee. If Bob's account is in a different bank than Alice's, the two banks will take care of the transaction for a small fee. As Bob trusts his bank, he will consider the rent paid as soon as he queries his bank's ledger and sees his account balance increased by 1,000€.

Figure 5: Flow diagram of a common financial transaction through the banking system

Since the banking system is not data-distributed, and each bank maintains its own ledger, the threats to the system are due to security faults or authentication issues, and not due to issues with the consensus protocol. We tend to trust our bank because it seldom fails at managing authentication and safety.

Trust, complexity and latency

We have learned that distributed systems have latency, and that reaching Byzantine consensus involves an additional cost due to the distributed nature of the system. At the same time, non-distributed systems based on trust are significantly more efficient and lean. The amount of trust we place in the nodes is inversely proportional to the cost of consensus: the more we trust, the easier the consensus. At the same time, the cost of consensus is proportional to the latency: the harder the consensus, the higher the latency. Therefore latency is inversely proportional to trust, and the quality of service degrades as trust decreases.

Trust is a gradable quantity. In our examples, we have gone from a trust-based network, banking, to a no-trust-at-all network, Bitcoin. There are other protocols aimed at solving different instances of Byzantine consensus that are faster than the Blockchain and less naive than a centralized ledger. The consensus issue is similar to security and permissions: one can be fully promiscuous and grant full permissions to any user, or enforce restrictions at every step. How to manage consensus, like how to manage permissions, is an engineering decision. This is one important contribution of the second generation of DLTs that appeared after Bitcoin, such as Hyperledger and Ethereum: consensus is a pluggable component.
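
A sketch of what "consensus as a pluggable component" can look like in code; the interface and class names are hypothetical, not Hyperledger's or Ethereum's actual APIs.

```python
from abc import ABC, abstractmethod

class Consensus(ABC):
    """Pluggable consensus: the ledger code does not care which
    algorithm orders and validates the transactions."""
    @abstractmethod
    def order(self, transactions: list) -> list: ...

class SingleAuthority(Consensus):
    # Full trust in one node: trivial, low-latency ordering.
    def order(self, transactions):
        return sorted(transactions)

class ProofOfWork(Consensus):
    # No trust at all: expensive ordering (mining details omitted).
    def order(self, transactions):
        ...  # mine a block, broadcast it, wait for confirmations
        return sorted(transactions)

class Ledger:
    def __init__(self, consensus: Consensus):
        self.consensus = consensus  # swap algorithms without touching the ledger
        self.entries = []

    def commit(self, transactions):
        self.entries.extend(self.consensus.order(transactions))

ledger = Ledger(SingleAuthority())  # an engineering decision, per the text
ledger.commit(["tx-b", "tx-a"])
print(ledger.entries)
```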

Each Byzantine consensus algorithm is designed to solve one particular Byzantine failure problem, or a restricted set of them. Plugging in one particular consensus algorithm and implementation is therefore an engineering decision. In any case, the qualitative relation stated previously still holds: the lower the trust, the harder the consensus, and the higher the latency.