Material Type | Monograph |
---|---|
Title/Statement of Responsibility | Deep Learning with TensorFlow : Explore neural networks and build intelligent systems with Python, 2nd Edition. |
Personal Authors | Zaccone, Giancarlo; Karim, Md. Rezaul |
Edition | 2nd ed. |
Publication | Birmingham : Packt Publishing, 2018. |
Physical Description | 1 online resource (483 pages). |
Additional Physical Form | Print version: Zaccone, Giancarlo. Deep Learning with TensorFlow : Explore neural networks and build intelligent systems with Python, 2nd Edition. Birmingham : Packt Publishing, ©2018 |
ISBN | 9781788831833; 1788831837 |
Other Standard Identifier | 9781788831109 |
General Note | How does an autoencoder work? |
Contents | Cover; Copyright; Packt Upsell; Contributors; Table of Contents; Preface; Chapter 1: Getting Started with Deep Learning; A soft introduction to machine learning; Supervised learning; Unbalanced data; Unsupervised learning; Reinforcement learning; What is deep learning?; Artificial neural networks; The biological neurons; The artificial neuron; How does an ANN learn?; ANNs and the backpropagation algorithm; Weight optimization; Stochastic gradient descent; Neural network architectures; Deep Neural Networks (DNNs); Multilayer perceptron; Deep Belief Networks (DBNs); Convolutional Neural Networks (CNNs); AutoEncoders; Recurrent Neural Networks (RNNs); Emergent architectures; Deep learning frameworks; Summary; Chapter 2: A First Look at TensorFlow; A general overview of TensorFlow; What's new in TensorFlow v1.6?; Nvidia GPU support optimized; Introducing TensorFlow Lite; Eager execution; Optimized Accelerated Linear Algebra (XLA); Installing and configuring TensorFlow; TensorFlow computational graph; TensorFlow code structure; Eager execution with TensorFlow; Data model in TensorFlow; Tensor; Rank and shape; Data type; Variables; Fetches; Feeds and placeholders; Visualizing computations through TensorBoard; How does TensorBoard work?; Linear regression and beyond; Linear regression revisited for a real dataset; Summary; Chapter 3: Feed-Forward Neural Networks with TensorFlow; Feed-forward neural networks (FFNNs); Feed-forward and backpropagation; Weights and biases; Activation functions; Using sigmoid; Using tanh; Using ReLU; Using softmax; Implementing a feed-forward neural network; Exploring the MNIST dataset; Softmax classifier; Implementing a multilayer perceptron (MLP); Training an MLP; Using MLPs; Dataset description; Preprocessing; A TensorFlow implementation of MLP for client-subscription assessment; Deep Belief Networks (DBNs); Restricted Boltzmann Machines (RBMs); Construction of a simple DBN; Unsupervised pre-training; Supervised fine-tuning; Implementing a DBN with TensorFlow for client-subscription assessment; Tuning hyperparameters and advanced FFNNs; Tuning FFNN hyperparameters; Number of hidden layers; Number of neurons per hidden layer; Weight and biases initialization; Selecting the most suitable optimizer; GridSearch and randomized search for hyperparameters tuning; Regularization; Dropout optimization; Summary; Chapter 4: Convolutional Neural Networks; Main concepts of CNNs; CNNs in action; LeNet5; Implementing a LeNet-5 step by step; AlexNet; Transfer learning; Pretrained AlexNet; Dataset preparation; Fine-tuning implementation; VGG; Artistic style learning with VGG-19; Input images; Content extractor and loss; Style extractor and loss; Merger and total loss; Training; Inception-v3; Exploring Inception with TensorFlow; Emotion recognition with CNNs; Testing the model on your own image; Source code; Summary; Chapter 5: Optimizing TensorFlow Autoencoders. |
Summary | Compliant with TensorFlow 1.7, this book introduces the core concepts of deep learning, provides implementation and research details on cutting-edge architectures, and shows how to apply advanced concepts to your own projects. Develop your knowledge of deep neural networks through hands-on model building and examples of real-world data collection. |
Subjects | Machine learning. Artificial intelligence. Python (Computer program language) COMPUTERS / General. |
Language | English |