Ebook description: Deep Learning at Scale
Bringing a deep-learning project into production at scale is quite challenging. To successfully scale your project, a foundational understanding of full stack deep learning, including the knowledge that lies at the intersection of hardware, software, data, and algorithms, is required.
This book illustrates complex concepts of full stack deep learning and reinforces them through hands-on exercises to arm you with tools and techniques to scale your project. A scaling effort is only beneficial when it's effective and efficient. To that end, this guide explains the intricate concepts and techniques that will help you scale effectively and efficiently.
You'll gain a thorough understanding of:
- How data flows through a deep-learning network and the role computation graphs play in building your model
- How accelerated computing speeds up your training and how best you can utilize the resources at your disposal
- How to train your model using distributed training paradigms such as data, model, and pipeline parallelism
- How to leverage PyTorch ecosystems in conjunction with NVIDIA libraries and Triton to scale your model training
- Debugging, monitoring, and investigating the undesirable bottlenecks that slow down your model training
- How to expedite the training lifecycle and streamline your feedback loop to iterate model development
- A set of data tricks and techniques and how to apply them to scale your model training
- How to select the right tools and techniques for your deep-learning project
- Options for managing the compute infrastructure when running at scale
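The distributed training paradigms mentioned above are covered in depth in Part II of the book. As a taste of the core idea, here is a minimal, hypothetical sketch (not code from the book) of synchronous data parallelism: two simulated "workers" hold identical model replicas, each computes gradients on its own shard of the batch, and averaging the per-shard gradients (the all-reduce step that PyTorch's DistributedDataParallel performs across devices) reproduces the full-batch gradient, here simulated on a single CPU.

```python
# Conceptual simulation of synchronous data parallelism on one CPU.
# Each "worker" holds a replica of the model, computes gradients on its
# shard of the batch, and the gradients are averaged (the all-reduce step)
# as if by torch.distributed before a shared optimizer update.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(8, 4)          # full batch of 8 samples
y = torch.randn(8, 1)

def make_model() -> nn.Linear:
    torch.manual_seed(42)      # identical initialization on every replica
    return nn.Linear(4, 1)

# Single-worker baseline: gradient of the mean loss over the full batch.
ref = make_model()
nn.functional.mse_loss(ref(X), y).backward()

# Two "workers", each backpropagating on half of the batch.
replicas = [make_model() for _ in range(2)]
shards = [(X[:4], y[:4]), (X[4:], y[4:])]
for model, (xs, ys) in zip(replicas, shards):
    nn.functional.mse_loss(model(xs), ys).backward()

# All-reduce: average each parameter's gradient across the replicas.
n_params = len(list(ref.parameters()))
avg_grads = [
    torch.stack([list(m.parameters())[i].grad for m in replicas]).mean(0)
    for i in range(n_params)
]

# The averaged shard gradients match the full-batch gradient.
for p_ref, g_avg in zip(ref.parameters(), avg_grads):
    assert torch.allclose(p_ref.grad, g_avg, atol=1e-5)
print("data-parallel averaged gradients == full-batch gradients")
```

With equal shard sizes and a mean-reduced loss, averaging the workers' gradients is mathematically equivalent to computing the gradient on the full batch (up to floating-point rounding), which is why data parallelism preserves the single-worker optimization trajectory.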
Ebook details
- Ebook ISBN:
- 978-1-098-14524-8 (9781098145248)
- Ebook publication date:
- 2024-06-18
- Publication language:
- English
- ePub file size:
- 15.6MB
- Mobi file size:
- 15.6MB
Ebook table of contents
- Preface
- Why Scaling Matters
- Who This Book Is For
- How This Book Is Organized
- Introduction
- Part I: Foundational Concepts of Deep Learning
- Part II: Distributed Training
- Part III: Extreme Scaling
- What You Need to Use This Book
- Setting Up Your Environment for Hands-on Exercises
- Using Code Examples
- Conventions Used in This Book
- O'Reilly Online Learning
- How to Contact Us
- Acknowledgments
- 1. What Nature and History Have Taught Us About Scale
- The Philosophy of Scaling
- The General Law of Scaling
- History of Scaling Law
- Scalable Systems
- Nature as a Scalable System
- Our Visual System: A Biological Inspiration
- Artificial Intelligence: The Evolution of Learnable Systems
- It Takes Four to Tango
- The hardware
- The data
- The software
- The (deep learning) algorithms
- Evolving Deep Learning Trends
- General evolution of deep learning
- Evolution in specialized domains
- Math and compute
- Protein folding
- Simulated world
- Scale in the Context of Deep Learning
- Six Development Considerations
- Well-defined problem
- Domain knowledge (a.k.a. the constraints)
- Ground truth
- Model development
- Deployment
- Feedback
- Scaling Considerations
- Questions to ask before scaling
- Characteristics of scalable systems
- Reliability
- Availability
- Adaptability
- Performance
- Considerations of scalable systems
- Avoiding single points of failure
- Designing for high availability
- Scaling paradigms
- Coordination and communication
- Caching and intermittent storage
- Process state
- Graceful recovery and checkpointing
- Maintainability and observability
- Scaling effectively
- Summary
- I. Foundational Concepts of Deep Learning
- 2. Deep Learning
- The Role of Data in Deep Learning
- Data Flow in Deep Learning
- Hands-On Exercise #1: Implementing Minimalistic Deep Learning
- Developing the Model
- Model input data and pipeline
- Model
- Training loop
- Loss
- Metrics
- The Embedded/Latent Space
- A Word of Caution
- The Learning Rate and Loss Landscape
- Scaling Consideration
- Profiling
- Hands-On Exercise #2: Getting Complex with PyTorch
- Model Input Data and Pipeline
- Model
- Auxiliary Utilities
- Callbacks
- Loggers
- Profilers
- Putting It All Together
- Computation Graphs
- Inference
- Summary
- 3. The Computational Side of Deep Learning
- The Higgs Boson of the Digital World
- Floating-Point Numbers: The Faux Continuous Numbers
- Floating-point encoding
- Floating-point standards
- Units of Data Measurement
- Data Storage Formats: The Trade-off of Latency and Throughput
- Computer Architecture
- The Birth of the Electromechanical Engine
- Memory and Persistence
- Virtual memory
- Input/output
- Memory and Moore's law
- Computation and Memory Combined
- The Scaling Laws of Electronics
- Scaling Out Computation with Parallelization
- Threads Versus Processes: The Unit of Parallelization
- Simultaneous multithreading
- Scenario walkthrough: A web crawler to curate a links dataset
- Hardware-Optimized Libraries for Acceleration
- Parallel Computer Architectures: Flynn's and Duncan's Taxonomies
- Accelerated Computing
- Popular Accelerated Devices for Deep Learning
- Graphics processing units (GPUs)
- GPU microarchitecture
- CUDA
- NVIDIA's dominance: The competition landscape
- Application-specific integrated circuits (ASICs)
- Tensor Processing Units (TPUs)
- Intelligence Processing Units (IPUs)
- Field programmable gate arrays (FPGAs)
- Wafer Scale Engines (WSEs)
- Accelerator Benchmarking
- Summary
- 4. Putting It All Together: Efficient Deep Learning
- Hands-On Exercise #1: GPT-2
- Exercise Objectives
- Model Architecture
- Key contributors to scale
- Transformer attention block
- Unsupervised training
- Zero-shot learning
- Parameter scale
- Implementation
- model.py
- dataset.py
- app.py
- Running the Example
- Experiment Tracking
- Measuring to Understand the Limitations and Scale Out
- Running on a CPU
- Running on a GPU
- Transitioning from Language to Vision
- Hands-On Exercise #2: Vision Model with Convolution
- Model Architecture
- Key contributors to scale in the scene parsing exercise
- Scaling with convolutions
- Scaling with EfficientNet
- Implementation
- Model Architecture
- Running the Example
- Observations
- Graph Compilation Using PyTorch 2.0
- New Components of PyTorch 2.0
- Graph Execution in PyTorch 2.0
- Graph acquisition
- Graph lowering
- Graph compilation
- Modeling Techniques to Scale Training on a Single Device
- Graph Compilation
- Reduced- and Mixed-Precision Training
- Mixed precision
- The effect of precision on gradients
- Gradient scaling
- Gradient clipping
- 8-bit optimizers and quantization
- A mixed-precision algorithm
- Memory Tricks for Efficiency
- Memory layout
- Feature compression
- Meta and fake tensors
- Optimizer Efficiencies
- Stochastic gradient descent (SGD)
- Gradient accumulation
- Gradient checkpointing
- Patch Gradient Descent
- Learning rate and weight decay
- Model Input Pipeline Tricks
- Writing Custom Kernels in PyTorch 2.0 with Triton
- Summary
- II. Distributed Training
- 5. Distributed Systems and Communications
- Distributed Systems
- The Eight Fallacies of Distributed Computing
- The Consistency, Availability, and Partition Tolerance (CAP) Theorem
- The Scaling Law of Distributed Systems
- Types of Distributed Systems
- Centralized
- Decentralized
- Communication in Distributed Systems
- Communication Paradigm
- Communication Patterns
- Basic communication patterns
- Collective communication patterns
- Communication Technologies
- RPC
- MPI
- NCCL
- Communication technology summary
- Communication Initialization: Rendezvous
- Hands-On Exercise
- Scaling Compute Capacity
- Infrastructure Setup Options
- Private cloud (on-premise/DIY data centers)
- Public cloud
- Hybrid cloud
- Multicloud
- Federation
- Provisioning of Accelerated Devices
- Workload Management
- Slurm
- Kubernetes
- Ray
- Distributed memory layer
- Asynchronous model
- Amazon SageMaker
- Google Vertex AI
- Deep Learning Infrastructure Review
- Overview of Leading Deep Learning Clusters
- Similarities Between Today's Most Powerful Systems
- Summary
- 6. Theoretical Foundations of Distributed Deep Learning
- Distributed Deep Learning
- Centralized DDL
- Parameter server configurations
- Subtypes of centralized DDL
- Synchronous centralized DDL
- Asynchronous centralized DDL
- Decentralized DDL
- Limiting divergence
- Subtypes of decentralized DDL
- Synchronous decentralized DDL
- Asynchronous decentralized DDL
- Dimensions of Scaling Distributed Deep Learning
- Partitioning Dimensions of Distributed Deep Learning
- Types of Distributed Deep Learning Techniques
- Ensembling
- Data parallelism
- Model parallelism
- Pipeline parallelism
- Tensor parallelism
- Hybrid parallelism
- Federation/collaborative learning
- Choosing a Scaling Technique
- Measuring Scale
- End-to-End Metrics and Benchmarks
- Time to convergence
- Cost to train
- Multilevel benchmarks
- Measuring Incrementally in a Reproducible Environment
- Summary
- 7. Data Parallelism
- Data Partitioning
- Implications of Data Sampling Strategies
- Working with Remote Datasets
- Introduction to Data Parallel Techniques
- Hands-On Exercise #1: Centralized Parameter Server Using RPC
- Setup
- Observations
- Inspecting involved processes
- Inspecting connections
- Communication patterns
- Discussion
- Hands-On Exercise #2: Centralized Gradient-Partitioned Joint Worker/Server Distributed Training
- Setup
- Observations
- Communication patterns
- Discussion
- Hands-On Exercise #3: Decentralized Asynchronous Distributed Training
- Setup
- Observations
- Communication patterns
- Discussion
- Centralized Synchronous Data Parallel Strategies
- Data Parallel (DP)
- Distributed Data Parallel (DDP)
- Devil in the details
- Distributed Data Parallel 2 (DDP2)
- Zero Redundancy Optimizer-Powered Data Parallelism (ZeRO-DP)
- Fault-Tolerant Training
- Hands-On Exercise #4: Scene Parsing with DDP
- Setup
- Observations
- Baseline
- Multi-GPU training
- Multinode
- Mixed-precision training
- Hands-On Exercise #5: Distributed Sharded DDP (ZeRO)
- Setup
- Runtime configuration
- Observations
- Discussion
- Building Efficient Pipelines
- Dataset Format
- Local Versus Remote
- Staging
- Threads Versus Processes: Scaling Your Pipelines
- Memory Tricks
- Data Augmentations: CPU Versus GPU
- JIT Acceleration
- Hands-On Exercise #6: Pipeline Efficiency with FFCV
- Setup
- Runtime configuration
- Observations
- Summary
- 8. Scaling Beyond Data Parallelism: Model, Pipeline, Tensor, and Hybrid Parallelism
- Questions to Ask Before Scaling Vertically
- Theoretical Foundations of Vertical Scaling
- Revisiting the Dimensions of Scaling
- Implementing tensor parallelism
- Implementing model parallelism
- Choosing a scaling dimension
- Operators Perspective of Parallelism Dimensions
- Data Flow and Communications in Vertical Scaling
- Tensor parallelism
- Model parallelism
- Pipeline parallelism: An evolution of model parallelism
- GPipe
- PipeDream
- Hybrid parallelism
- 2D hybrid parallelism
- 3D hybrid parallelism
- Basic Building Blocks for Scaling Beyond DP
- PyTorch Primitives for Vertical Scaling
- Device mesh: Mapping model architecture to physical devices
- Distributed tensors: Tensors with sharding and replication
- Sharding and replication examples
- Partial tensors
- Logical tensors: Representation without materialization
- Meta tensors
- Fake tensors
- Working with Larger Models
- Distributed Checkpointing: Saving the Partitioned Model
- Summary
- 9. Gaining Practical Expertise with Scaling Across All Dimensions
- Hands-On Exercises: Model, Tensor, Pipeline, and Hybrid Parallelism
- The Dataset
- Hands-On Exercise #1: Baseline DeepFM
- Training
- Observations
- Hands-On Exercise #2: Model Parallel DeepFM
- Implementation details
- Observations
- Hands-On Exercise #3: Pipeline Parallel DeepFM
- Implementation details
- Observations
- Hands-On Exercise #4: Pipeline Parallel DeepFM with RPC
- Implementation details
- Observations
- Hands-On Exercise #5: Tensor Parallel DeepFM
- Implementation details
- Observations
- Hands-On Exercise #6: Hybrid Parallel DeepFM
- Implementation details
- Observations
- Tools and Libraries for Vertical Scaling
- OneFlow
- FairScale
- DeepSpeed
- FSDP
- Overview and Comparison
- Hands-On Exercise #7: Automatic Vertical Scaling with DeepSpeed
- Observations
- Summary
- III. Extreme Scaling
- 10. Data-Centric Scaling
- The Seven Vs of Data Through a Deep Learning Lens
- The Scaling Law of Data
- Data Quality
- Validity
- Variety
- Handling too much variety
- Heuristic-based pruning
- Algorithmic outlier pruning
- Hands-on exercise #1: Outlier detection
- Scaling outlier detection
- Handling too-low variety
- Data augmentation
- Advanced data augmentation
- Automated augmentation
- Synthetic data generation
- Handling imbalance
- Sampling
- Hands-on exercise #2: Handling imbalance in a multilabel dataset
- Loss tricks
- Veracity
- Reasons for error in labels
- Approaches to labeling
- Techniques to increase veracity/decrease noise
- Using heuristics to identify noise
- Using inter-label information, such as ontology
- Continuous feedback
- Handling disagreements from multiple annotators
- Identifying noisy samples by loss gradients
- Hands-on exercise #3: Loss tricks to find noisy samples
- Using confident learning
- Summary of veracity tactics
- Value and Volume
- Core principles driving value
- Volume reduction via compression and pruning
- Volume reduction via dimensionality reduction
- Volume reduction via approximation
- Volume reduction via distillation
- Value via regularization
- The Data Engine and Continual Learning
- Volatility
- Velocity
- Summary
- 11. Scaling Experiments: Effective Planning and Management
- Model Development Is Iterative
- Planning for Experiments and Execution
- Simplify the Complex
- Fast Iteration for Fast Feedback
- Decoupled Iterations
- Feasibility Testing
- Developing and Scaling a Minimal Viable Solution
- Setting Up for Iterative Execution
- Techniques to Scale Your Experiments
- Accelerating Model Convergence
- Using transfer learning
- Retraining
- Fine tuning
- Pretraining
- Knowledge distillation
- Accelerating Learning Via Optimization and Automation
- Hyperparameter optimization
- AutoML
- Neural architecture search
- Model validation
- Simulating optimization behavior with Daydream
- Accelerating Learning by Increasing Expertise
- Continuous learning
- Learning to learn via meta-learning
- Curriculum learning
- Mixture of experts
- Learning with Scarce Supervision
- Self-supervised learning
- Contrastive learning
- Hands-On Exercises
- Hands-On Exercise #1: Transfer Learning
- Hands-On Exercise #2: Hyperparameter Optimization
- Hands-On Exercise #3: Knowledge Distillation
- Hands-On Exercise #4: Mixture of Experts
- Mock MoE
- DeepSpeed-MoE
- Hands-On Exercise #5: Contrastive Learning
- Hands-On Exercise #6: Meta-Learning
- Summary
- 12. Efficient Fine-Tuning of Large Models
- Review of Fine-Tuning Techniques
- Standard Fine Tuning
- Meta-Learning (Zero-/Few-Shot Learning)
- Adapter-Based Fine Tuning
- Low-Rank Tuning
- LoRA: Parameter-Efficient Fine Tuning
- Quantized LoRA (QLoRA)
- Hands-on Exercise: QLoRA-Based Fine Tuning
- Implementation Details
- Inference
- Exercise Summary
- Summary
- 13. Foundation Models
- What Are Foundation Models?
- The Evolution of Foundation Models
- Challenges Involved in Developing Foundation Models
- Measurement Complexity
- Deployment Challenges
- Propagation of Defects to All Downstream Models
- Legal and Ethical Considerations
- Ensuring Consistency and Coherency
- Multimodal Large Language Models
- Projection
- Gated Cross-Attention
- Query-Based Encoding
- Further Exploration
- Summary
- Index
Customer ratings and reviews: Deep Learning at Scale by Suneeta Mall (0 reviews)