
Apache Flume: Distributed Log Collection for Hadoop
If your role includes moving datasets into Hadoop, this book will help you do it more efficiently using Apache Flume. From installation to customization, it's a complete step-by-step guide on making the service work for you.
Steve Hoffman, Kevin A. McGrail

Publication language: English

Authors:
Steve Hoffman, Kevin A. McGrail
Rating:
Be the first to rate this book
Pages:
108
Available formats:
     PDF
     ePub
     Mobi

Ebook: 107,10 zł (119,00 zł, -10%); lowest price in the last 30 days: 29,90 zł

Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. Its main goal is to deliver data from applications to Apache Hadoop's HDFS. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with many failover and recovery mechanisms.
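
To make the streaming data flow concrete, here is a minimal sketch of a single-agent setup in Flume's standard properties-file format. The agent and component names, the tailed log path, and the HDFS destination are illustrative assumptions, not taken from the book:

    # One agent: an exec source tails an application log, a memory channel
    # buffers events, and an HDFS sink writes them out in near real time.
    agent1.sources  = tail-src
    agent1.channels = mem-ch
    agent1.sinks    = hdfs-snk

    # Source: stream new lines from a log file (the path is an assumption)
    agent1.sources.tail-src.type = exec
    agent1.sources.tail-src.command = tail -F /var/log/app/app.log
    agent1.sources.tail-src.channels = mem-ch

    # Channel: in-memory buffer between source and sink
    agent1.channels.mem-ch.type = memory
    agent1.channels.mem-ch.capacity = 10000

    # Sink: write events into HDFS, rolling files every 5 minutes
    agent1.sinks.hdfs-snk.type = hdfs
    agent1.sinks.hdfs-snk.channel = mem-ch
    agent1.sinks.hdfs-snk.hdfs.path = hdfs://namenode/flume/events/%Y/%m/%d
    agent1.sinks.hdfs-snk.hdfs.fileType = DataStream
    agent1.sinks.hdfs-snk.hdfs.rollInterval = 300
    agent1.sinks.hdfs-snk.hdfs.useLocalTimeStamp = true

An agent configured this way is normally started with the standard launcher, for example: flume-ng agent --conf conf --conf-file agent1.conf --name agent1.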

Apache Flume: Distributed Log Collection for Hadoop covers the problems encountered when streaming data and logs into HDFS and shows how Flume resolves them. The book explains Flume's generalized architecture, including moving data to and from databases and NoSQL data stores, as well as optimizing performance, and it includes real-world scenarios of Flume implementations.

Apache Flume: Distributed Log Collection for Hadoop starts with an architectural overview of Flume and then discusses each component in detail. It guides you through the complete process of installing and compiling Flume.

It introduces channels and channel selectors, and for each architectural component (sources, channels, sinks, channel processors, sink groups, and so on) it covers the various implementations in detail, along with their configuration options, so you can customize Flume to your specific needs. It also gives pointers on writing custom implementations to help you learn and implement them.
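
As a rough sketch of how a channel selector is wired in (the header name, port, and component names here are assumptions), a multiplexing selector routes events from one source into different channels based on an event header:

    # One source fans out to two channels via a multiplexing channel selector.
    agent1.sources  = web-src
    agent1.channels = ch-east ch-west
    agent1.sinks    = snk-east snk-west

    agent1.sources.web-src.type = avro
    agent1.sources.web-src.bind = 0.0.0.0
    agent1.sources.web-src.port = 4141
    agent1.sources.web-src.channels = ch-east ch-west

    # Route on the "datacenter" header; unmatched events go to the default channel
    agent1.sources.web-src.selector.type = multiplexing
    agent1.sources.web-src.selector.header = datacenter
    agent1.sources.web-src.selector.mapping.us-east = ch-east
    agent1.sources.web-src.selector.mapping.us-west = ch-west
    agent1.sources.web-src.selector.default = ch-east

    agent1.channels.ch-east.type = memory
    agent1.channels.ch-west.type = memory

    # Logger sinks for brevity; each drains its own channel
    agent1.sinks.snk-east.type = logger
    agent1.sinks.snk-east.channel = ch-east
    agent1.sinks.snk-west.type = logger
    agent1.sinks.snk-west.channel = ch-west

The default selector is the replicating type, which copies every event to all of the source's channels; multiplexing is the one to reach for when different events should take different paths.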

By the end, you should be able to construct a series of Flume agents to transport your streaming data and logs from your systems into Hadoop in near real time.
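
For instance, a common tiered layout chains two agents: a collector on each application host forwards events over Avro to an aggregator that writes into HDFS. The sketch below assumes hypothetical hostnames, ports, and paths:

    # Agent "collector" on the application host: tail a log and forward over Avro
    collector.sources  = app-src
    collector.channels = file-ch
    collector.sinks    = avro-fwd

    collector.sources.app-src.type = exec
    collector.sources.app-src.command = tail -F /var/log/app/app.log
    collector.sources.app-src.channels = file-ch

    # Durable file channel so buffered events survive an agent restart
    collector.channels.file-ch.type = file

    collector.sinks.avro-fwd.type = avro
    collector.sinks.avro-fwd.channel = file-ch
    collector.sinks.avro-fwd.hostname = aggregator.example.com
    collector.sinks.avro-fwd.port = 4141

    # Agent "aggregator" on the Hadoop edge node: receive Avro, write to HDFS
    aggregator.sources  = avro-in
    aggregator.channels = file-ch
    aggregator.sinks    = hdfs-out

    aggregator.sources.avro-in.type = avro
    aggregator.sources.avro-in.bind = 0.0.0.0
    aggregator.sources.avro-in.port = 4141
    aggregator.sources.avro-in.channels = file-ch

    aggregator.channels.file-ch.type = file

    aggregator.sinks.hdfs-out.type = hdfs
    aggregator.sinks.hdfs-out.channel = file-ch
    aggregator.sinks.hdfs-out.hdfs.path = hdfs://namenode/flume/%Y/%m/%d
    aggregator.sinks.hdfs-out.hdfs.useLocalTimeStamp = true

Using a file channel on both tiers trades some throughput for durability, so events that are in flight are not lost if an agent restarts.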


About the authors

Steve Hoffman has 32 years of experience in software development, ranging from embedded software development to the design and implementation of large-scale, service-oriented, object-oriented systems. For the last 5 years, he has focused on infrastructure as code, including automated Hadoop and HBase implementations and data ingestion using Apache Flume. Steve holds a BS in computer engineering from the University of Illinois at Urbana-Champaign and an MS in computer science from DePaul University. He is currently a senior principal engineer at Orbitz Worldwide (https://orbitz.com/). More information on Steve can be found at https://bit.ly/bacoboy and on Twitter at @bacoboy. This is the first update to Steve's first book, Apache Flume: Distributed Log Collection for Hadoop, Packt Publishing.
