Hadoop Beginner's Guide. Get your mountain of data under control with Hadoop. This guide requires no prior knowledge of the software or cloud services – just a willingness to learn the basics from this practical step-by-step tutorial
- Authors:
- Gerald Turkington, Kevin A. McGrail
- Pages:
- 398
- Available formats:
- PDF, ePub, Mobi
Ebook description: Hadoop Beginner's Guide. Get your mountain of data under control with Hadoop. This guide requires no prior knowledge of the software or cloud services – just a willingness to learn the basics from this practical step-by-step tutorial
About the author
You can read the "Hadoop Beginner's Guide" ebook on:
- Inkbook, Kindle, PocketBook, Onyx Boox, and other e-readers
- Windows, macOS, and other systems
- Windows, Android, iOS, and HarmonyOS
- any device or application that supports the PDF, ePub, and Mobi formats
Ebook details
- Original title:
- Hadoop Beginner's Guide. Get your mountain of data under control with Hadoop. This guide requires no prior knowledge of the software or cloud services – just a willingness to learn the basics from this practical step-by-step tutorial.
- Ebook ISBN:
- 978-1-84951-731-7 (9781849517317)
- Ebook publication date:
- 2013-02-22. The ebook publication date is often the day the title went on sale and may not match the publication date of the paper book. Additional information can be found in the free sample. If in doubt, contact us at sklep@ebookpoint.pl.
- Publication language:
- English
- PDF file size:
- 4.4MB
- ePub file size:
- 9.9MB
- Mobi file size:
- 15.3MB
Ebook table of contents
- Hadoop Beginner's Guide
- Table of Contents
- Hadoop Beginner's Guide
- Credits
- About the Author
- About the Reviewers
- www.PacktPub.com
- Support files, eBooks, discount offers and more
- Why Subscribe?
- Free Access for Packt account holders
- Preface
- What this book covers
- What you need for this book
- Who this book is for
- Conventions
- Time for action heading
- What just happened?
- Pop quiz heading
- Have a go hero heading
- Reader feedback
- Customer support
- Downloading the example code
- Errata
- Piracy
- Questions
- 1. What It's All About
- Big data processing
- The value of data
- Historically for the few and not the many
- Classic data processing systems
- Scale-up
- Early approaches to scale-out
- Limiting factors
- A different approach
- All roads lead to scale-out
- Share nothing
- Expect failure
- Smart software, dumb hardware
- Move processing, not data
- Build applications, not infrastructure
- Hadoop
- Thanks, Google
- Thanks, Doug
- Thanks, Yahoo
- Parts of Hadoop
- Common building blocks
- HDFS
- MapReduce
- Better together
- Common architecture
- What it is and isn't good for
- Cloud computing with Amazon Web Services
- Too many clouds
- A third way
- Different types of costs
- AWS infrastructure on demand from Amazon
- Elastic Compute Cloud (EC2)
- Simple Storage Service (S3)
- Elastic MapReduce (EMR)
- What this book covers
- A dual approach
- Summary
- 2. Getting Hadoop Up and Running
- Hadoop on a local Ubuntu host
- Other operating systems
- Time for action checking the prerequisites
- What just happened?
- Setting up Hadoop
- A note on versions
- Time for action downloading Hadoop
- What just happened?
- Time for action setting up SSH
- What just happened?
- Configuring and running Hadoop
- Time for action using Hadoop to calculate Pi
- What just happened?
- Three modes
- Time for action configuring the pseudo-distributed mode
- What just happened?
- Configuring the base directory and formatting the filesystem
- Time for action changing the base HDFS directory
- What just happened?
- Time for action formatting the NameNode
- What just happened?
- Starting and using Hadoop
- Time for action starting Hadoop
- What just happened?
- Time for action using HDFS
- What just happened?
- Time for action WordCount, the Hello World of MapReduce
- What just happened?
- Have a go hero WordCount on a larger body of text
- Monitoring Hadoop from the browser
- The HDFS web UI
- The MapReduce web UI
- Using Elastic MapReduce
- Setting up an account in Amazon Web Services
- Creating an AWS account
- Signing up for the necessary services
- Time for action WordCount on EMR using the management console
- What just happened?
- Have a go hero other EMR sample applications
- Other ways of using EMR
- AWS credentials
- The EMR command-line tools
- The AWS ecosystem
- Comparison of local versus EMR Hadoop
- Summary
- 3. Understanding MapReduce
- Key/value pairs
- What it means
- Why key/value data?
- Some real-world examples
- MapReduce as a series of key/value transformations
- Pop quiz key/value pairs
- The Hadoop Java API for MapReduce
- The 0.20 MapReduce Java API
- The Mapper class
- The Reducer class
- The Driver class
- Writing MapReduce programs
- Time for action setting up the classpath
- What just happened?
- Time for action implementing WordCount
- What just happened?
- Time for action building a JAR file
- What just happened?
- Time for action running WordCount on a local Hadoop cluster
- What just happened?
- Time for action running WordCount on EMR
- What just happened?
- The pre-0.20 Java MapReduce API
- Hadoop-provided mapper and reducer implementations
- Time for action WordCount the easy way
- What just happened?
- Walking through a run of WordCount
- Startup
- Splitting the input
- Task assignment
- Task startup
- Ongoing JobTracker monitoring
- Mapper input
- Mapper execution
- Mapper output and reduce input
- Partitioning
- The optional partition function
- Reducer input
- Reducer execution
- Reducer output
- Shutdown
- That's all there is to it!
- Apart from the combiner... maybe
- Why have a combiner?
- Time for action WordCount with a combiner
- What just happened?
- When you can use the reducer as the combiner
- What just happened?
- Time for action fixing WordCount to work with a combiner
- What just happened?
- Reuse is your friend
- Pop quiz MapReduce mechanics
- Hadoop-specific data types
- The Writable and WritableComparable interfaces
- Introducing the wrapper classes
- Primitive wrapper classes
- Array wrapper classes
- Map wrapper classes
- Time for action using the Writable wrapper classes
- What just happened?
- Other wrapper classes
- What just happened?
- Have a go hero playing with Writables
- Making your own
- Input/output
- Files, splits, and records
- InputFormat and RecordReader
- Hadoop-provided InputFormat
- Hadoop-provided RecordReader
- OutputFormat and RecordWriter
- Hadoop-provided OutputFormat
- Don't forget Sequence files
- Summary
- 4. Developing MapReduce Programs
- Using languages other than Java with Hadoop
- How Hadoop Streaming works
- Why to use Hadoop Streaming
- Time for action implementing WordCount using Streaming
- What just happened?
- Differences in jobs when using Streaming
- Analyzing a large dataset
- Getting the UFO sighting dataset
- Getting a feel for the dataset
- Time for action summarizing the UFO data
- What just happened?
- Examining UFO shapes
- What just happened?
- Time for action summarizing the shape data
- What just happened?
- Time for action correlating of sighting duration to UFO shape
- What just happened?
- Using Streaming scripts outside Hadoop
- What just happened?
- Time for action performing the shape/time analysis from the command line
- What just happened?
- Java shape and location analysis
- Time for action using ChainMapper for field validation/analysis
- What just happened?
- Have a go hero
- Too many abbreviations
- Using the Distributed Cache
- Time for action using the Distributed Cache to improve location output
- What just happened?
- Counters, status, and other output
- Time for action creating counters, task states, and writing log output
- What just happened?
- Too much information!
- Summary
- 5. Advanced MapReduce Techniques
- Simple, advanced, and in-between
- Joins
- When this is a bad idea
- Map-side versus reduce-side joins
- Matching account and sales information
- Time for action reduce-side join using MultipleInputs
- What just happened?
- DataJoinMapper and TaggedMapperOutput
- What just happened?
- Implementing map-side joins
- Using the Distributed Cache
- Have a go hero - Implementing map-side joins
- Pruning data to fit in the cache
- Using a data representation instead of raw data
- Using multiple mappers
- To join or not to join...
- Graph algorithms
- Graph 101
- Graphs and MapReduce – a match made somewhere
- Representing a graph
- Time for action representing the graph
- What just happened?
- Overview of the algorithm
- The mapper
- The reducer
- Iterative application
- Time for action creating the source code
- What just happened?
- Time for action the first run
- What just happened?
- Time for action the second run
- What just happened?
- Time for action the third run
- What just happened?
- Time for action the fourth and last run
- What just happened?
- Running multiple jobs
- Final thoughts on graphs
- Using language-independent data structures
- Candidate technologies
- Introducing Avro
- Time for action getting and installing Avro
- What just happened?
- Avro and schemas
- Time for action defining the schema
- What just happened?
- Time for action creating the source Avro data with Ruby
- What just happened?
- Time for action consuming the Avro data with Java
- What just happened?
- Using Avro within MapReduce
- Time for action generating shape summaries in MapReduce
- What just happened?
- Time for action examining the output data with Ruby
- What just happened?
- Time for action examining the output data with Java
- What just happened?
- Have a go hero graphs in Avro
- Going forward with Avro
- Summary
- 6. When Things Break
- Failure
- Embrace failure
- Or at least don't fear it
- Don't try this at home
- Types of failure
- Hadoop node failure
- The dfsadmin command
- Cluster setup, test files, and block sizes
- Fault tolerance and Elastic MapReduce
- Time for action killing a DataNode process
- What just happened?
- NameNode and DataNode communication
- What just happened?
- Have a go hero NameNode log delving
- Time for action the replication factor in action
- What just happened?
- Time for action intentionally causing missing blocks
- What just happened?
- When data may be lost
- Block corruption
- What just happened?
- Time for action killing a TaskTracker process
- What just happened?
- Comparing the DataNode and TaskTracker failures
- Permanent failure
- What just happened?
- Killing the cluster masters
- Time for action killing the JobTracker
- What just happened?
- Starting a replacement JobTracker
- What just happened?
- Have a go hero moving the JobTracker to a new host
- Time for action killing the NameNode process
- What just happened?
- Starting a replacement NameNode
- The role of the NameNode in more detail
- File systems, files, blocks, and nodes
- The single most important piece of data in the cluster – fsimage
- DataNode startup
- Safe mode
- SecondaryNameNode
- So what to do when the NameNode process has a critical failure?
- BackupNode/CheckpointNode and NameNode HA
- Hardware failure
- Host failure
- Host corruption
- The risk of correlated failures
- What just happened?
- Task failure due to software
- Failure of slow running tasks
- Time for action causing task failure
- What just happened?
- Have a go hero HDFS programmatic access
- Hadoop's handling of slow-running tasks
- Speculative execution
- Hadoop's handling of failing tasks
- Have a go hero causing tasks to fail
- Task failure due to data
- Handling dirty data through code
- Using Hadoop's skip mode
- Time for action handling dirty data by using skip mode
- What just happened?
- To skip or not to skip...
- What just happened?
- Summary
- 7. Keeping Things Running
- A note on EMR
- Hadoop configuration properties
- Default values
- Time for action browsing default properties
- What just happened?
- Additional property elements
- Default storage location
- Where to set properties
- Setting up a cluster
- How many hosts?
- Calculating usable space on a node
- Location of the master nodes
- Sizing hardware
- Processor / memory / storage ratio
- EMR as a prototyping platform
- Special node requirements
- Storage types
- Commodity versus enterprise class storage
- Single disk versus RAID
- Finding the balance
- Network storage
- Hadoop networking configuration
- How blocks are placed
- Rack awareness
- The rack-awareness script
- Time for action examining the default rack configuration
- What just happened?
- Time for action adding a rack awareness script
- What just happened?
- What is commodity hardware anyway?
- Pop quiz setting up a cluster
- Cluster access control
- The Hadoop security model
- Time for action demonstrating the default security
- What just happened?
- User identity
- The super user
- More granular access control
- What just happened?
- Working around the security model via physical access control
- Managing the NameNode
- Configuring multiple locations for the fsimage class
- Time for action adding an additional fsimage location
- What just happened?
- Where to write the fsimage copies
- What just happened?
- Swapping to another NameNode host
- Having things ready before disaster strikes
- Time for action swapping to a new NameNode host
- What just happened?
- Don't celebrate quite yet!
- What about MapReduce?
- What just happened?
- Have a go hero swapping to a new NameNode host
- Managing HDFS
- Where to write data
- Using balancer
- When to rebalance
- MapReduce management
- Command line job management
- Have a go hero command line job management
- Job priorities and scheduling
- Time for action changing job priorities and killing a job
- What just happened?
- Alternative schedulers
- Capacity Scheduler
- Fair Scheduler
- Enabling alternative schedulers
- When to use alternative schedulers
- Scaling
- Adding capacity to a local Hadoop cluster
- Have a go hero adding a node and running balancer
- Adding capacity to an EMR job flow
- Expanding a running job flow
- Summary
- 8. A Relational View on Data with Hive
- Overview of Hive
- Why use Hive?
- Thanks, Facebook!
- Setting up Hive
- Prerequisites
- Getting Hive
- Time for action installing Hive
- What just happened?
- Using Hive
- Time for action creating a table for the UFO data
- What just happened?
- Time for action inserting the UFO data
- What just happened?
- Validating the data
- Time for action validating the table
- What just happened?
- Time for action redefining the table with the correct column separator
- What just happened?
- Hive tables – real or not?
- Time for action creating a table from an existing file
- What just happened?
- Time for action performing a join
- What just happened?
- Have a go hero improve the join to use regular expressions
- Hive and SQL views
- Time for action using views
- What just happened?
- Handling dirty data in Hive
- Have a go hero do it!
- Time for action exporting query output
- What just happened?
- Partitioning the table
- Time for action making a partitioned UFO sighting table
- What just happened?
- Bucketing, clustering, and sorting... oh my!
- User-Defined Function
- Time for action adding a new User Defined Function (UDF)
- What just happened?
- To preprocess or not to preprocess...
- Hive versus Pig
- What we didn't cover
- Hive on Amazon Web Services
- Time for action running UFO analysis on EMR
- What just happened?
- Using interactive job flows for development
- Have a go hero using an interactive EMR cluster
- Integration with other AWS products
- Summary
- 9. Working with Relational Databases
- Common data paths
- Hadoop as an archive store
- Hadoop as a preprocessing step
- Hadoop as a data input tool
- The serpent eats its own tail
- Setting up MySQL
- Time for action installing and setting up MySQL
- What just happened?
- Did it have to be so hard?
- Time for action configuring MySQL to allow remote connections
- What just happened?
- Don't do this in production!
- Time for action setting up the employee database
- What just happened?
- Be careful with data file access rights
- Getting data into Hadoop
- Using MySQL tools and manual import
- Have a go hero exporting the employee table into HDFS
- Accessing the database from the mapper
- A better way – introducing Sqoop
- Time for action downloading and configuring Sqoop
- What just happened?
- Sqoop and Hadoop versions
- Sqoop and HDFS
- What just happened?
- Time for action exporting data from MySQL to HDFS
- What just happened?
- Mappers and primary key columns
- Other options
- Sqoop's architecture
- What just happened?
- Importing data into Hive using Sqoop
- Time for action exporting data from MySQL into Hive
- What just happened?
- Time for action a more selective import
- What just happened?
- Datatype issues
- What just happened?
- Time for action using a type mapping
- What just happened?
- Time for action importing data from a raw query
- What just happened?
- Have a go hero
- Sqoop and Hive partitions
- Field and line terminators
- Getting data out of Hadoop
- Writing data from within the reducer
- Writing SQL import files from the reducer
- A better way – Sqoop again
- Time for action importing data from Hadoop into MySQL
- What just happened?
- Differences between Sqoop imports and exports
- Inserts versus updates
- What just happened?
- Have a go hero
- Sqoop and Hive exports
- Time for action importing Hive data into MySQL
- What just happened?
- Time for action fixing the mapping and re-running the export
- What just happened?
- Other Sqoop features
- Incremental merge
- Avoiding partial exports
- Sqoop as a code generator
- What just happened?
- AWS considerations
- Considering RDS
- Summary
- 10. Data Collection with Flume
- A note about AWS
- Data, data everywhere...
- Types of data
- Getting network traffic into Hadoop
- Time for action getting web server data into Hadoop
- What just happened?
- Have a go hero
- Getting files into Hadoop
- Hidden issues
- Keeping network data on the network
- Hadoop dependencies
- Reliability
- Re-creating the wheel
- A common framework approach
- Introducing Apache Flume
- A note on versioning
- Time for action installing and configuring Flume
- What just happened?
- Using Flume to capture network data
- Time for action capturing network traffic in a log file
- What just happened?
- Time for action logging to the console
- What just happened?
- Writing network data to log files
- Time for action capturing the output of a command to a flat file
- What just happened?
- Logs versus files
- What just happened?
- Time for action capturing a remote file in a local flat file
- What just happened?
- Sources, sinks, and channels
- Sources
- Sinks
- Channels
- Or roll your own
- Understanding the Flume configuration files
- Have a go hero
- It's all about events
- Time for action writing network traffic onto HDFS
- What just happened?
- Time for action adding timestamps
- What just happened?
- To Sqoop or to Flume...
- Time for action multi level Flume networks
- What just happened?
- Time for action writing to multiple sinks
- What just happened?
- Selectors replicating and multiplexing
- Handling sink failure
- Have a go hero - Handling sink failure
- Next, the world
- Have a go hero - Next, the world
- The bigger picture
- Data lifecycle
- Staging data
- Scheduling
- Summary
- 11. Where to Go Next
- What we did and didn't cover in this book
- Upcoming Hadoop changes
- Alternative distributions
- Why alternative distributions?
- Bundling
- Free and commercial extensions
- Cloudera Distribution for Hadoop
- Hortonworks Data Platform
- MapR
- IBM InfoSphere Big Insights
- Choosing a distribution
- Other Apache projects
- HBase
- Oozie
- Whir
- Mahout
- MRUnit
- Other programming abstractions
- Pig
- Cascading
- AWS resources
- HBase on EMR
- SimpleDB
- DynamoDB
- Sources of information
- Source code
- Mailing lists and forums
- LinkedIn groups
- HUGs
- Conferences
- Summary
- A. Pop Quiz Answers
- Chapter 3, Understanding MapReduce
- Pop quiz key/value pairs
- Pop quiz walking through a run of WordCount
- Chapter 7, Keeping Things Running
- Pop quiz setting up a cluster
- Index
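The chapters above build almost everything around a single running example: WordCount, introduced in Chapter 2 as "the Hello World of MapReduce" and revisited in Chapters 3 and 4. As a rough taste of the Streaming style covered in Chapter 4, a word count can be sketched as a mapper that emits (word, 1) pairs and a reducer that sums them per word. This is our own illustrative sketch, not code from the book, and all function names are ours:

```python
# Minimal sketch of the WordCount pattern the book develops in Chapters 2-4.
# In real Hadoop Streaming the mapper and reducer run as separate processes
# reading stdin and writing stdout; here they are plain functions so the
# data flow is easy to see. Illustrative only, not code from the book.
from collections import defaultdict


def map_line(line):
    """Mapper: emit a (word, 1) pair for every whitespace-separated word."""
    return [(word, 1) for word in line.split()]


def reduce_pairs(pairs):
    """Reducer: sum the counts per word (Hadoop's shuffle groups keys for us)."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)


if __name__ == "__main__":
    lines = ["hello hadoop", "hello world"]
    pairs = [pair for line in lines for pair in map_line(line)]
    print(reduce_pairs(pairs))  # {'hello': 2, 'hadoop': 1, 'world': 1}
```

On a cluster, the same logic would be wrapped in two small scripts and submitted through the hadoop-streaming JAR, with Hadoop handling the splitting, shuffling, and grouping between the two stages.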