Deep Learning. A Practitioner's Approach
- Authors:
- Josh Patterson, Adam Gibson
![Deep Learning. A Practitioner's Approach Josh Patterson, Adam Gibson - ebook cover](https://static01.helion.com.pl/global/okladki/326x466/e_0lv6.png)
![Deep Learning. A Practitioner's Approach Josh Patterson, Adam Gibson - ebook back cover](https://static01.helion.com.pl/global/okladki-tyl/326x466/e_0lv6.png)
- Pages:
- 532
- Available formats:
- ePub, Mobi
Ebook description: Deep Learning. A Practitioner's Approach
Although interest in machine learning has reached a high point, lofty expectations often scuttle projects before they get very far. How can machine learning—especially deep neural networks—make a real difference in your organization? This hands-on guide not only provides the most practical information available on the subject, but also helps you get started building efficient deep learning networks.
Authors Adam Gibson and Josh Patterson provide theory on deep learning before introducing their open-source Deeplearning4j (DL4J) library for developing production-class workflows. Through real-world examples, you’ll learn methods and strategies for training deep network architectures and running deep learning workflows on Spark and Hadoop with DL4J.
- Dive into machine learning concepts in general, as well as deep learning in particular
- Understand how deep networks evolved from neural network fundamentals
- Explore the major deep network architectures, including convolutional and recurrent networks
- Learn how to map specific deep networks to the right problem
- Walk through the fundamentals of tuning general neural networks and specific deep network architectures
- Use vectorization techniques for different data types with DataVec, DL4J’s workflow tool
- Learn how to use DL4J natively on Spark and Hadoop (a minimal configuration sketch follows this list)
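To give a flavor of the DL4J API the book is built around, here is a minimal, illustrative sketch of configuring and initializing a small feed-forward classifier. It is a generic sketch assuming a DL4J-style builder API; the seed, layer sizes, and activation choices are arbitrary placeholders, not values taken from the book:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class MinimalDl4jSketch {
    public static void main(String[] args) {
        // Hypothetical sizes: 4 input features, 10 hidden units, 3 output classes.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(123) // fixed seed so runs are reproducible
                .list()
                .layer(0, new DenseLayer.Builder()
                        .nIn(4).nOut(10)
                        .activation(Activation.RELU) // hidden-layer activation
                        .build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(10).nOut(3)
                        .activation(Activation.SOFTMAX) // multiclass classification output
                        .build())
                .build();

        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init(); // allocate and initialize the network's parameters
        System.out.println("Parameter count: " + model.numParams());
        // model.fit(iterator) would then train on a DataSetIterator built with DataVec.
    }
}
```

The book itself walks through complete, runnable variants of this pattern for multilayer perceptrons, CNNs, and LSTMs, including the Spark and Hadoop execution paths.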
Selected bestsellers

- Building models is a small part of the story when it comes to deploying machine learning applications. The entire process involves developing, orchestrating, deploying, and running scalable and portable machine learning workloads--a process Kubeflow makes much easier. This practical book shows da...
  174.48 zł / 219.00 zł (-20%), lowest 30-day price: 174.58 zł
- This book is a handy guide to machine learning and neural networks. It contains practical information that any programmer taking their first steps in the field will appreciate. It presents the basics of deep learning and explains concepts such as network tuning, multilayer...
  38.50 zł / 77.00 zł (-50%), lowest 30-day price: 38.50 zł
- This fourth, updated edition is an excellent guide to applying machine learning to solving real-world problems in data analysis. The book will teach you everything you need to know about data preprocessing, finding key insights, ...
  Uczenie maszynowe w języku R. Tworzenie i doskonalenie modeli - od przygotowania danych po dostrajanie, ewaluację i pracę z big data. Wydanie IV
  83.40 zł / 139.00 zł (-40%), lowest 30-day price: 83.40 zł
- With this book you will easily absorb the theoretical foundations and start applying them smoothly in real-world scenarios. You will learn how causal thinking helps in solving problems, and you will get to know Pearl's concepts, such as structural causal models, interventions, counterfactuals, and more. ...
  Wnioskowanie i związki przyczynowe w Pythonie. Nowoczesne uczenie maszynowe z wykorzystaniem bibliotek DoWhy, EconML, PyTorch i nie tylko
  65.40 zł / 109.00 zł (-40%), lowest 30-day price: 65.40 zł
- Here is an updated edition of a popular guide offering more than two hundred proven recipes based on the latest releases of the Python libraries. Just copy the code and adapt it to your needs. You can also run and test it with a sample data...
  Uczenie maszynowe w Pythonie. Receptury. Od przygotowania danych do deep learningu. Wydanie II
  53.40 zł / 89.00 zł (-40%), lowest 30-day price: 53.40 zł
- Statistics is a field of knowledge built on data: it concerns methods of collecting and presenting data and, above all, of analyzing it. It has been gaining popularity in recent years, and today almost every university in Poland offers a degree program related...
  Statystyka. Kurs video. Przewodnik dla studentów kierunków ścisłych
  83.85 zł / 129.00 zł (-35%), lowest 30-day price: 70.95 zł
- Mastering data transformation is essential for enhancing your data models and business intelligence. The Definitive Guide to Power Query equips you with the knowledge and skills to master the tool while leveraging its remarkable capabilities.
  The Definitive Guide to Power Query (M). Mastering complex data transformation with Power Query
  Gregory Deckler, Rick de Groot, Melissa de Korte, Brian Julius
- If you work with data, you surely know that quite a few tools have been created for the purpose. No wonder: with the amount of data we encounter in today's digital world, the ability to analyze it efficiently and draw conclusions ...
  Grafana. Kurs video. Monitorowanie, analiza i wizualizacja danych w czasie rzeczywistym
  62.55 zł / 139.00 zł (-55%), lowest 30-day price: 39.90 zł
- Here is a practical guide to data science in the workplace. You will learn everything that matters at the start of your path as a data scientist: from the personalities you will work with, through the details of data analysis, to the mathematics behind algorithms and machine learning. You will learn to think ...
  Analityk danych. Przewodnik po data science, statystyce i uczeniu maszynowym
  41.40 zł / 69.00 zł (-40%), lowest 30-day price: 41.40 zł
- Deep neural networks have incredible potential. The achievements of recent years have given deep learning an entirely new quality. Today even programmers unfamiliar with the technology can use simple and remarkably effective tools for efficiently implementing ...
  Uczenie maszynowe z użyciem Scikit-Learn, Keras i TensorFlow. Wydanie III
  107.40 zł / 179.00 zł (-40%), lowest 30-day price: 107.40 zł
About the authors
Josh Patterson is a recognized authority on big data processing, machine learning, and deep learning. He is an active open source contributor, participating in projects such as DL4J, Apache Mahout, Metronome, IterativeReduce, openPDC, and JMotif.
Adam Gibson specializes in deep learning. He has extensive experience in building systems for processing large volumes of data in real time. His solutions are used by Fortune 500 companies, insurance companies, public relations firms, and startups, among others.
Buy the Polish edition:
Deep Learning. Praktyczne wprowadzenie
- Authors:
- Josh Patterson, Adam Gibson
38.50 zł / 77.00 zł (lowest 30-day price: 38.50 zł)
Ebooka "Deep Learning. A Practitioner's Approach" przeczytasz na:
-
czytnikach Inkbook, Kindle, Pocketbook, Onyx Boox i innych
-
systemach Windows, MacOS i innych
-
systemach Windows, Android, iOS, HarmonyOS
-
na dowolnych urządzeniach i aplikacjach obsługujących formaty: PDF, EPub, Mobi
Ebook details
- Ebook ISBN:
- 978-1-491-91421-2
- Ebook release date:
- 2017-07-28
The ebook release date is often the day the title goes on sale and may not be the same as the release date of the printed book. You can find additional information in the free sample. If in doubt, contact us at sklep@ebookpoint.pl.
- Publication language:
- English
- ePub file size:
- 17.3 MB
- Mobi file size:
- 38.6 MB
Ebook table of contents
- Preface
- What's in This Book?
- Who Is The Practitioner?
- Who Should Read This Book?
- The Enterprise Machine Learning Practitioner
- The practicing data scientist
- The Java engineer
- The Enterprise Executive
- The Academic
- Conventions Used in This Book
- Using Code Examples
- Administrative Notes
- O'Reilly Safari
- How to Contact Us
- Acknowledgments
- Josh
- Adam
- 1. A Review of Machine Learning
- The Learning Machines
- How Can Machines Learn?
- Biological Inspiration
- What Is Deep Learning?
- Going Down the Rabbit Hole
- Framing the Questions
- The Math Behind Machine Learning: Linear Algebra
- Scalars
- Vectors
- Matrices
- Tensors
- Hyperplanes
- Relevant Mathematical Operations
- Dot product
- Element-wise product
- Outer product
- Converting Data Into Vectors
- Solving Systems of Equations
- Methods for solving systems of linear equations
- Iterative methods
- Iterative methods and linear algebra
- The Math Behind Machine Learning: Statistics
- Probability
- Conditional Probabilities
- Posterior Probability
- Distributions
- Samples Versus Population
- Resampling Methods
- Selection Bias
- Likelihood
- How Does Machine Learning Work?
- Regression
- Setting up the model
- Visualizing linear regression
- Relating the linear regression model
- Classification
- Clustering
- Underfitting and Overfitting
- Optimization
- Convex Optimization
- Gradient Descent
- Stochastic Gradient Descent
- Mini-batch training and SGD
- Quasi-Newton Optimization Methods
- Generative Versus Discriminative Models
- Logistic Regression
- The Logistic Function
- Understanding Logistic Regression Output
- Evaluating Models
- The Confusion Matrix
- Sensitivity versus specificity
- Accuracy
- Precision
- Recall
- F1
- Context and interpreting scores
- Building an Understanding of Machine Learning
- 2. Foundations of Neural Networks and Deep Learning
- Neural Networks
- The Biological Neuron
- Synapses
- Dendrites
- Axons
- Information flow across the biological neuron
- From biological to artificial
- The Perceptron
- History of the perceptron
- Definition of the perceptron
- The perceptron learning algorithm
- Limitations of the early perceptron
- Multilayer Feed-Forward Networks
- Evolution of the artificial neuron
- Artificial neuron input
- Connection weights
- Biases
- Activation functions
- Comparing the biological neuron and the artificial neuron
- Feed-forward neural network architecture
- Input layer
- Hidden layer
- Output layer
- Connections between layers
- Training Neural Networks
- Backpropagation Learning
- Algorithm intuition
- A closer look at backpropagation
- Understanding backpropagation pseudocode
- Updating the output layer weights
- Further expressing the error term
- The new propagation rule for the error value
- Updating the hidden layers
- Activation Functions
- Linear
- Sigmoid
- Tanh
- Hard Tanh
- Softmax
- Rectified Linear
- Leaky ReLU
- Softplus
- Loss Functions
- Loss Function Notation
- Loss Functions for Regression
- Mean squared error loss
- Other loss functions for regression
- Mean absolute error loss
- Mean squared log error loss
- Mean absolute percentage error loss
- Regression loss function discussion
- Loss Functions for Classification
- Hinge loss
- Logistic loss
- Negative log likelihood
- Loss Functions for Reconstruction
- Hyperparameters
- Learning Rate
- Regularization
- Momentum
- Sparsity
- 3. Fundamentals of Deep Networks
- Defining Deep Learning
- What Is Deep Learning?
- Defining deep networks
- Evolutionary progress and resurgence
- Advances in network architecture
- Advances in layer types
- Advances in neuron types
- Hybrid architectures
- From feature engineering to automated feature learning
- Feature engineering
- Feature learning
- Generative modeling
- Inceptionism
- Modeling artistic style
- GANs
- Recurrent Neural Networks
- The Tao of deep learning
- Organization of This Chapter
- Common Architectural Principles of Deep Networks
- Parameters
- Layers
- Activation Functions
- Activation functions for general architecture
- Hidden layer activation functions
- Output layer for regression
- Output layer for binary classification
- Output layer for multiclass classification
- Loss Functions
- Reconstruction cross-entropy
- Optimization Algorithms
- First-order methods
- Second-order methods
- L-BFGS
- Conjugate gradient
- Hessian-free
- Hyperparameters
- Layer size
- Magnitude hyperparameters
- Learning rate
- Nesterov's momentum
- AdaGrad
- RMSProp
- AdaDelta
- ADAM
- Regularization
- Dropout
- DropConnect
- L1
- L2
- Mini-batching
- Summary
- Building Blocks of Deep Networks
- RBMs
- Network layout
- Visible and hidden layers
- Connections and weights
- Biases
- Training
- Reconstruction
- Other uses of RBMs
- Autoencoders
- Similarities to multilayer perceptrons
- Defining features of autoencoders
- Unsupervised learning of unlabeled data
- Learning to reproduce the input data
- Training autoencoders
- Common variants of autoencoders
- Compression autoencoders
- Denoising autoencoders
- Applications of autoencoders
- Variational Autoencoders
- 4. Major Architectures of Deep Networks
- Unsupervised Pretrained Networks
- Deep Belief Networks
- Feature Extraction with RBM Layers
- Learning higher-order features automatically
- Initializing the feed-forward network
- Fine-tuning a DBN with a feed-forward multilayer neural network
- Gentle backpropagation
- The output layer
- Current state of DBNs
- Generative Adversarial Networks
- Training generative models, unsupervised learning, and GANs
- The discriminator network
- The generative network
- Building generative models and Deep Convolutional Generative Adversarial Networks
- Conditional GANs
- Comparing GANs and variational autoencoders
- Convolutional Neural Networks (CNNs)
- Biological Inspiration
- Intuition
- CNN Architecture Overview
- Neuron spatial arrangements
- Evolution of the connections between layers
- Input Layers
- Convolutional Layers
- Convolution
- Filters
- Activation maps
- Parameter sharing
- Learned filters and renders
- ReLU activation functions as layers
- Convolutional layer hyperparameters
- Filter size
- Output depth
- Stride
- Zero-padding
- Batch normalization and layers
- Pooling Layers
- Fully Connected Layers
- Other Applications of CNNs
- CNNs of Note
- Summary
- Recurrent Neural Networks
- Modeling the Time Dimension
- Lost in time
- Temporal feedback and loops in connections
- Sequences and time-series data
- Understanding model input and output
- 3D Volumetric Input
- Uneven time-series and masking
- Why Not Markov Models?
- General Recurrent Neural Network Architecture
- Recurrent Neural Network architecture and time-steps
- LSTM Networks
- Properties of LSTM networks
- LSTM network architecture
- LSTM units
- LSTM layers
- Training
- BPTT and truncated BPTT
- Domain-Specific Applications and Blended Networks
- Recursive Neural Networks
- Network Architecture
- Varieties of Recursive Neural Networks
- Applications of Recursive Neural Networks
- Summary and Discussion
- Will Deep Learning Make Other Algorithms Obsolete?
- Different Problems Have Different Best Methods
- When Do I Need Deep Learning?
- When to use deep learning
- When to stick with traditional machine learning
- 5. Building Deep Networks
- Matching Deep Networks to the Right Problem
- Columnar Data and Multilayer Perceptrons
- Images and Convolutional Neural Networks
- Time-series Sequences and Recurrent Neural Networks
- Using Hybrid Networks
- The DL4J Suite of Tools
- Vectorization and DataVec
- Runtimes and ND4J
- ND4J and the need for speed
- JavaCPP
- CPU backends
- GPU backends
- Benchmarking ND4J and DL4J
- Basic Concepts of the DL4J API
- Loading and Saving Models
- Writing a trained model to disk
- Writing to HDFS
- Reading a saved model from disk
- Reading from HDFS
- Getting Input for the Model
- Loading data during training
- Setting Up Model Architecture
- Building layer-oriented architectures
- Hyperparameters
- Training and Evaluation
- Making a prediction
- Training, validation, and test data
- Modeling CSV Data with Multilayer Perceptron Networks
- Setting Up Input Data
- Determining Network Architecture
- General hyperparameters
- First hidden layer
- Output layer for classification
- Training the Model
- Evaluating the Model
- Modeling Handwritten Images Using CNNs
- Java Code Listing for the LeNet CNN
- Loading and Vectorizing the Input Images
- Network Architecture for LeNet in DL4J
- General hyperparameters
- Convolution layers
- Max-pooling layers
- Output layer
- Training the CNN
- Modeling Sequence Data by Using Recurrent Neural Networks
- Generating Shakespeare via LSTMs
- High-level modeling workflow
- Java code for modeling Shakespeare
- Setting up input data and vectorization
- LSTM network architecture
- General comments on hyperparameters
- Training the LSTM network
- Generating Shakespeare samples
- Classifying Sensor Time-series Sequences Using LSTMs
- Java code listing for recurrent classification example
- Setting up input data and vectorization
- Network architecture and training
- Using Autoencoders for Anomaly Detection
- Java Code Listing for Autoencoder Example
- Setting Up Input Data
- Autoencoder Network Architecture and Training
- Evaluating the Model
- Using Variational Autoencoders to Reconstruct MNIST Digits
- Code Listing to Reconstruct MNIST Digits
- Examining the VAE Model
- Understanding the scatterplot
- Understanding the generated images
- Applications of Deep Learning in Natural Language Processing
- Learning Word Embedding Using Word2Vec
- The Word2Vec model and algorithm
- Modeling context
- Learning similar meaning and semantic relationships
- Vector arithmetic and word embedding
- Java code listing for Word2Vec example
- Understanding the Word2Vec example
- Other practical uses of Word2Vec
- Distributed Representations of Sentences with Paragraph Vectors
- Building paragraph vectors
- Understanding the paragraph vectors example
- Using Paragraph Vectors for Document Classification
- Understanding the paragraph vectors classification example
- Further exploration of the Word2Vec approach
- Extensions into specific domains: Gov2Vec
- Graphs and Node2Vec
- Recommendation engines and Item2Vec
- Computer vision and FaceNet
- 6. Tuning Deep Networks
- Basic Concepts in Tuning Deep Networks
- An Intuition for Building Deep Networks
- Building the Intuition as a Step-by-Step Process
- Matching Input Data and Network Architectures
- Summary
- Relating Model Goal and Output Layers
- Regression Model Output Layer
- Classification Model Output Layer
- Single-label classification models
- Models with more than two labels
- Multiclass classification models
- Multilabel classification models
- Working with Layer Count, Parameter Count, and Memory
- Feed-Forward Multilayer Neural Networks
- Determining hidden-layer count
- Determining neuron count per layer
- Controlling Layer and Parameter Counts
- Getting the parameter count for a network
- Estimating Network Memory Requirements
- Weight Initialization Strategies
- Using Activation Functions
- Summary Table for Activation Functions
- Applying Loss Functions
- Understanding Learning Rates
- Using the Ratio of Updates-to-Parameters
- Specific Recommendations for Learning Rates
- How Sparsity Affects Learning
- Applying Methods of Optimization
- SGD Best Practices
- Using Parallelization and GPUs for Faster Training
- Online Learning and Parallel Iterative Algorithms
- Task parallelism
- Data parallelism
- Parallelizing SGD in DL4J
- Parallel SGD execution
- GPUs
- Controlling Epochs and Mini-Batch Size
- Understanding Mini-Batch Size Trade-Offs
- How to Use Regularization
- Priors as Regularizers
- Max-Norm Regularization
- Dropout
- Issues with dropout
- Other Regularization Topics
- Working with Class Imbalance
- Methods for Sampling Classes
- Weighted Loss Functions
- Dealing with Overfitting
- Using Network Statistics from the Tuning UI
- Detecting Poor Weight Initialization
- Detecting Nonshuffled Data
- Detecting Issues with Regularization
- 7. Tuning Specific Deep Network Architectures
- Convolutional Neural Networks (CNNs)
- Common Convolutional Architectural Patterns
- Configuring Convolutional Layers
- Setting the stride for filters
- Using padding
- Choosing the number of filters
- Configuring filter size
- Convolution mode and calculating spatial size of output volume
- Configuring Pooling Layers
- Transfer Learning
- An alternative to training from scratch
- When to consider trying transfer learning
- Recurrent Neural Networks
- Network Input Data and Input Layers
- Output Layers and RnnOutputLayer
- Training the Network
- Initializing weights
- Backpropagation through time
- Regularization
- Debugging Common Issues with LSTMs
- Padding and Masking
- Applying padding and masking to volumetric input
- Evaluation and Scoring With Masking
- Classification using the evaluation class
- Scoring new data with MultiLayerNetwork
- Variants of Recurrent Network Architectures
- Restricted Boltzmann Machines
- Hidden Units and Modeling Available Information
- Using Different Units
- Using Regularization with RBMs
- DBNs
- Using Momentum
- Using Regularization
- Dropout
- Determining Hidden Unit Count
- 8. Vectorization
- Introduction to Vectorization in Machine Learning
- Why Do We Need to Vectorize Data?
- Strategies for Dealing with Columnar Raw Data Attributes
- Nominal
- Ordinal
- Interval
- Ratio
- Feature Engineering and Normalization Techniques
- Feature copying
- Normalization
- Standardization and zero mean, unit variance
- Min-max scaling
- Whitening and principal component analysis
- Applying normalization in Recurrent Neural Networks and CNNs
- Normalization for regression models
- Binarization
- Using DataVec for ETL and Vectorization
- Vectorizing Image Data
- Image Data Representation in DL4J
- Image Data and Vector Normalization with DataVec
- Working with Sequential Data in Vectorization
- Major Variations of Sequential Data Sources
- Vectorizing Sequential Data with DataVec
- Converting time-series to a single vector
- Converting sequential data to a DataSet object in local mode
- Building custom DataSets from sequential data
- Working with Text in Vectorization
- Bag of Words
- TF-IDF
- TF
- IDF
- Computing the full TF-IDF score
- Comparing Word2Vec and VSM
- Working with Graphs
- 9. Using Deep Learning and DL4J on Spark
- Introduction to Using DL4J with Spark and Hadoop
- Operating Spark from the Command Line
- spark-submit
- Working with Hadoop security and Kerberos
- Uploading the Spark assembly
- Initializing Kerberos
- Configuring and Tuning Spark Execution
- Running Spark on Mesos
- Running Spark on YARN
- Comparing Spark execution modes
- General Spark Tuning Guide
- Setting the number of executors
- Spark executors and CPU cores
- Spark executors and memory
- Spark and YARN container resource allocation
- Understanding executor memory requests in YARN
- Understanding Spark, the JVM, and garbage collection
- Dealing with slowing garbage collection efficiency or pauses
- Selecting a garbage collector for the JVM and Spark
- Tuning DL4J Jobs on Spark
- Tuning the number of executors
- Tuning the amount of memory for executors
- Setting Up a Maven Project Object Model for Spark and DL4J
- A pom.xml File Dependency Template
- Setting Up a POM File for CDH 5.X
- Setting Up a POM File for HDP 2.4
- Troubleshooting Spark and Hadoop
- Common Issues with ND4J
- ND4J and Kryo serialization
- jnind4j and java.library.path
- DL4J Parallel Execution on Spark
- A Minimal Spark Training Example
- DL4J API Best Practices for Spark
- Multilayer Perceptron Spark Example
- Setting Up MLP Network Architecture for Spark
- Distributed Training and Model Evaluation
- Building and Executing a DL4J Spark Job
- Generating Shakespeare Text with Spark and Long Short-Term Memory
- Setting Up the LSTM Network Architecture
- Training, Tracking Progress, and Understanding Results
- Modeling MNIST with a Convolutional Neural Network on Spark
- Configuring the Spark Job and Loading MNIST Data
- Setting Up the LeNet CNN Architecture and Training
- A. What Is Artificial Intelligence?
- The Story So Far
- Defining Deep Learning
- Defining Artificial Intelligence
- The study of intelligence
- Cognitive dissonance and modern definitions
- What AI is not
- Moving the goal posts
- Segmenting the definitions of AI
- Critical commentary on segments
- A fifth aspirational definition of AI
- The AI winters
- AI Winter I: 1974–1980
- AI Winter II: Late 1980s
- The common patterns of AI Winters
- What Is Driving Interest Today in AI?
- Winter Is Coming
- B. RL4J and Reinforcement Learning
- Preliminaries
- Markov Decision Process
- Terminology
- Different Settings
- Model-Free
- Observation Setting
- Single-Player and Adversarial Games
- Q-Learning
- From Policy to Neural Networks
- Policy Iteration
- Exploration Versus Exploitation
- Bellman Equation
- Initial State Sampling
- Q-Learning Implementation
- Modeling Q(s,a)
- Experience Replay
- Compression
- Convolutional Layers and Image Preprocessing
- Image processing
- History Processing
- Double Q-Learning
- Clipping
- Scaling Rewards
- Prioritized Replay
- Graph, Visualization, and Mean-Q
- RL4J
- Conclusion
- C. Numbers Everyone Should Know
- D. Neural Networks and Backpropagation: A Mathematical Approach
- Introduction
- Backpropagation in a Multilayer Perceptron
- E. Using the ND4J API
- Design and Basic Usage
- Understanding NDArrays
- ND4J General Syntax
- The Basics of Working with NDArrays
- The ND4J class
- Nd4j.zeros( int ... )
- Nd4j.ones( int ... )
- Initializing with other values
- Initializing with random numbers
- Controlling the shape of NDArrays
- Creating basic arrays
- Example: create a 2 x 2 NDArray
- Example: add two 2 x 2 NDArrays together
- Creating NDArrays from Java arrays
- Getting and setting individual NDArray values
- Working with NDArray rows
- Get a single row
- Get multiple rows
- Setting a single row
- Quick reference for determining the size/dimensions of NDArrays
- Dataset
- Relationship to NDArray
- Common uses
- Creating Input Vectors
- Basics of Vector Creation
- Sizing the vector
- Setting feature values
- Setting the label
- Single-label output
- Multiple-label output
- Regression output
- Using MLLibUtil
- Converting from INDArray to MLLib Vector
- Converting from MLLib Vector to INDArray
- Making Model Predictions with DL4J
- Using DL4J and ND4J Together
- Differences between output vector depending on output layer type
- Logistic output layer for binary classification
- Softmax output layer for multilabel classification
- Linear output layer for regression output
- Getting the predicted label from the returned INDArray
- F. Using DataVec
- Loading Data for Machine Learning
- Loading CSV Data for Multilayer Perceptrons
- Loading Image Data for Convolutional Neural Networks
- Loading Sequence Data for Recurrent Neural Networks
- Transforming Data: Data Wrangling with DataVec
- DataVec Transforms: Key Concepts
- DataVec Transform Functionality: An Example
- G. Working with DL4J from Source
- Verifying Git Is Installed
- Cloning Key DL4J GitHub Projects
- Downloading Source via Zip File
- Using Maven to Build Source Code
- H. Setting Up DL4J Projects
- Creating a New DL4J Project
- Java
- Working with Maven
- A minimal Project Object Model file
- Project Object Model explanation
- IDEs
- Quickstart a DL4J project by using IntelliJ
- Setting Up Other Maven POMs
- ND4J and Maven
- I. Setting Up GPUs for DL4J Projects
- Switching Backends to GPU
- Picking a GPU
- Training on a Multiple GPU System
- CUDA on Different Platforms
- Monitoring GPU Performance
- NVIDIA System Management Interface
- J. Troubleshooting DL4J Installations
- Previous Installation
- Memory Errors When Installing From Source
- Older Versions of Maven
- Maven and PATH Variables
- Bad JDK Versions
- C++ and Other Development Tools
- Windows and Include Paths
- Monitoring GPUs
- Using the JVisualVM
- Working with Clojure
- OS X and Float Support
- Fork-Join Bug in Java 7
- Precautions
- Other Local Repositories
- Check Maven Dependencies
- Reinstall Dependencies
- If All Else Fails
- Different Platforms
- OS X
- Windows
- Setting up Visual Studio
- Working with Windows on 64-bit platforms
- Linux
- Ubuntu
- CentOS
- Index
O'Reilly Media - other books
- Keeping up with the Python ecosystem can be daunting. Its developer tooling doesn't provide the out-of-the-box experience native to languages like Rust and Go. When it comes to long-term project maintenance or collaborating with others, every Python project faces the same problem: how to build re...
  200.68 zł / 239.00 zł (-16%), lowest 30-day price: 200.88 zł
- Bringing a deep-learning project into production at scale is quite challenging. To successfully scale your project, a foundational understanding of full stack deep learning, including the knowledge that lies at the intersection of hardware, software, data, and algorithms, is required. This book il...
  241.16 zł / 289.00 zł (-17%), lowest 30-day price: 241.21 zł
- Frontend developers have to consider many things: browser compatibility, usability, performance, scalability, SEO, and other best practices. But the most fundamental aspect of creating websites is one that often falls short: accessibility. Accessibility is the cornerstone of any website, and if a...
  199.49 zł / 239.00 zł (-17%), lowest 30-day price: 199.59 zł
- In this insightful and comprehensive guide, Addy Osmani shares more than a decade of experience working on the Chrome team at Google, uncovering secrets to engineering effectiveness, efficiency, and team success. Engineers and engineering leaders looking to scale their effectiveness and drive tra...
  114.33 zł / 149.00 zł (-23%), lowest 30-day price: 114.38 zł
- Data modeling is the single most overlooked feature in Power BI Desktop, yet it's what sets Power BI apart from other tools on the market. This practical book serves as your fast-forward button for data modeling with Power BI, Analysis Services tabular, and SQL databases. It serves as a starting ...
  198.68 zł / 239.00 zł (-17%), lowest 30-day price: 198.78 zł
- C# is undeniably one of the most versatile programming languages available to engineers today. With this comprehensive guide, you'll learn just how powerful the combination of C# and .NET can be. Author Ian Griffiths guides you through C# 12.0 and .NET 8 fundamentals and techniques for building c...
  240.22 zł / 289.00 zł (-17%), lowest 30-day price: 240.72 zł
- Learn how to get started with Futures Thinking. With this practical guide, Phil Balagtas, founder of the Design Futures Initiative and the global Speculative Futures network, shows you how designers and futurists have made futures work at companies such as Atari, IBM, Apple, Disney, Autodesk, Luf...
  147.80 zł / 179.00 zł (-17%), lowest 30-day price: 147.90 zł
- Augmented Analytics isn't just another book on data and analytics; it's a holistic resource for reimagining the way your entire organization interacts with information to become insight-driven. Moving beyond traditional, limited ways of making sense of data, Augmented Analytics provides a dynamic,...
  173.84 zł / 219.00 zł (-21%), lowest 30-day price: 174.34 zł
- Learn how to prepare for—and pass—the Kubernetes and Cloud Native Associate (KCNA) certification exam. This practical guide serves as both a study guide and point of entry for practitioners looking to explore and adopt cloud native technologies. Adrián González Sánchez ...
  Kubernetes and Cloud Native Associate (KCNA) Study Guide
  177.65 zł / 199.00 zł (-11%), lowest 30-day price: 169.14 zł
- Python is an excellent way to get started in programming, and this clear, concise guide walks you through Python a step at a time—beginning with basic programming concepts before moving on to functions, data structures, and object-oriented design. This revised third edition reflects the gro...
  139.89 zł / 179.00 zł (-22%), lowest 30-day price: 139.94 zł
Thanks to the "Print on demand" option, Helion Group titles that attracted strong interest but whose print runs have sold out are returning to sale.
For our readers we have printed an additional pool of copies using digital printing.
What you should know about the "Print on demand" service:
- the service covers only the list of titles visible below, which we update on an ongoing basis;
- the price of a book may be higher than its original retail price, due to the cost of digital printing (higher than the cost of traditional offset printing). The current price is always given on the book's web page;
- the content of the book, together with any extras (CD or DVD), matches its original edition in full;
- the service does not cover books printed in color.
Have a question about a specific title? Write to us: sklep[at]helion.pl.
The book you want to order comes from the end of a print run, which means minor defects may appear (scuffs, scratches, creases).
What you should know about the "Końcówka nakładu" (end of print run) service:
- the service covers only books tagged "Końcówka nakładu";
- the defects mentioned above are not subject to complaint claims.
Have a question about a specific title? Write to us: sklep[at]helion.pl.
Printed book
Customer ratings and reviews: Deep Learning. A Practitioner's Approach, Josh Patterson, Adam Gibson (0)
Reviews are verified against the order history on the account of the user posting the review. The user may have received points for publishing the review, entitling them to a discount under the Points Program.