Distributed consensus


CHAPTER 1. INTRODUCTION (page 15)

...it holds; namely reliable exactly-once out-of-order message delivery, a bound of at most one participant failing, and agreement over a single binary value. This is known as the FLP result.

Now that it has been established that some assumptions regarding synchrony are necessary to guarantee termination of any distributed consensus algorithm, the question naturally arises of what these assumptions are and what the weakest possible assumptions are. These questions were considered by works such as Dolev, Dwork and Stockmeyer [DDS87] and Dwork, Lynch and Stockmeyer [DLS88].

The difficulty of reaching distributed consensus lies in the inability to reliably detect failures. However, despite the fact that failure detectors are unreliable, they are still useful for achieving distributed consensus [CT96, CHT96]. An atomic broadcast is a broadcast which guarantees that all participants in a system eventually receive the same sequence of messages. It is also a powerful primitive in distributed systems and was shown to be equivalent to distributed consensus [CT96].

Early solutions to consensus can be found in systems such as Viewstamped Replication [OL88], Gbcast [Bir85, BJ87] and in the work of Dwork et al. [DLS88]. At the same time, state machine replication, introduced by Lamport [Lam78b] and popularised by Schneider [Sch90], emerged as a technique to make applications fault-tolerant by replicating the application state and coordinating its operations using consensus. Eight years after its submission in 1990, the infamous Part-Time Parliament paper [Lam98] describing Paxos was published, by which time attempts to explain the algorithm in simpler terms had already begun [PLL97] and continue today [Lam01a, Lam01b, VRA15].
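The state machine replication idea can be illustrated with a minimal sketch (the names here are hypothetical, chosen for illustration, and consensus itself is abstracted away): if every replica starts from the same initial state and applies the same deterministic operations in the same order — the agreed order being exactly what consensus or atomic broadcast provides — then all replicas end in the same state.

```python
# Minimal sketch of state machine replication (illustrative, not taken
# from any real system). We assume consensus has already produced a
# single agreed-upon log of operations delivered to every replica.

class Replica:
    """A deterministic state machine: here, a simple key-value store."""

    def __init__(self):
        self.state = {}

    def apply(self, op):
        # Operations must be deterministic so that replicas applying the
        # same log in the same order cannot diverge.
        kind, key, value = op
        if kind == "set":
            self.state[key] = value
        elif kind == "delete":
            self.state.pop(key, None)

# The agreed log, as an atomic broadcast would deliver it to all replicas.
log = [("set", "x", 1), ("set", "y", 2), ("delete", "x", None)]

replicas = [Replica() for _ in range(3)]
for replica in replicas:
    for op in log:
        replica.apply(op)

# Every replica converges to the identical state: {"y": 2}
assert all(r.state == {"y": 2} for r in replicas)
```

The fault tolerance comes from the replication: as long as enough replicas survive to keep agreeing on the log order, the application state is preserved even when individual machines fail.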
Paxos became the de facto approach to distributed consensus and thus became the subject of extensive follow-up research; examples of particular relevance to this thesis include Disk Paxos [GL03], Cheap Paxos [LM04], Fast Paxos [Lam05a] and Egalitarian Paxos [MAK13]. The common foundation between Paxos and earlier proposed solutions to consensus has been noted elsewhere in the academic literature [Lam96, vRSS15, LC12].

In 2007, Google published a paper documenting their experience of deploying Paxos at scale [CGR07] in the Chubby locking service [Bur06]. Chubby was in turn used for distributed coordination and metadata storage by Google systems such as GFS [GGL03] and Bigtable [CDG+08]. This was shortly followed by the Zookeeper coordination service [JRS11, HKJR10], referred to by some as the open source implementation of Chubby. The project became very popular and is credited with bringing distributed consensus to the masses. Meanwhile, the idea of utilising Paxos for state machine replication was improving community understanding and adoption of distributed consensus [BBH+11, LC12, OO14]. The result has been a recent resurgence of distributed consensus in production today2 to

2 Implementations include Zookeeper (zookeeper.apache.org), Consul (www.consul.io) and Etcd (coreos.com/etcd).


Original File Name Searched:

UCAM-CL-TR-935.pdf
