<p align="center"><img width="40%" src="logo/DLBookCover2.png" /></p>
<h1 id="DeepLearningWithTF20" align="center" >Deep Learning with TensorFlow 2.0</h1>
<p align="center">
<a href="https://colab.research.google.com/github/adhiraiyan/DeepLearningWithTF2.0/blob/master/notebooks/Index.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Google Colab">
</a>
<a href="https://hub.mybinder.org/user/adhiraiyan-deeplearningwithtf2.0-h0jryg50/notebooks/notebooks/Index.ipynb">
<img src="https://mybinder.org/badge_logo.svg" alt="Binder">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/cocoapods/l/AFNetworking.svg" alt="License: MIT">
</a>
<a href="https://www.python.org/downloads/release/python-360/">
<img src="https://img.shields.io/badge/Python-3.6-blue.svg" alt="Python 3.6">
</a>
<a href="https://www.tensorflow.org/alpha">
<img src="https://img.shields.io/badge/Tensorflow-2.0-orange.svg" alt="TensorFlow 2.0">
</a>
<a><img src="https://img.shields.io/badge/Status-Work_In_Progress-yellow.svg" alt="WorkInProgress"></a>
<a href="https://www.adhiraiyan.org/">
<img src="https://img.shields.io/badge/Adhiraiyan AI Blog-red.svg?" alt="Adhiraiyan AI Blog">
</a>
</p>
<p align="center">
<a href="#clipboard-getting-started">Getting Started</a> •
<a href="#about">About</a> •
<a href="#table-of-contents">Table of Contents</a> •
<a href="#donate">Donate</a> •
<a href="#acknowledgment">Acknowledgment</a> •
<a href="#speech_balloon-faq">FAQ</a>
</p>
<h6 align="center">Made by Mukesh Mithrakumar • :milky_way: <a href="https://mukeshmithrakumar.com">https://mukeshmithrakumar.com</a></h6>
This is the GitHub version of Deep Learning with TensorFlow 2.0 by Mukesh Mithrakumar. Feel free to watch this repository for updates, and follow me to get notified when I make a new post.
<!-- Open Source runs on love, laughter and a whole lot of coffee. Consider buying me one if you find this content useful ☕️😉.
<a href="https://www.buymeacoffee.com/mmukesh"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Buy Me A Coffee" style="height: auto !important;width: auto !important;" ></a>
You can also support my work via Patron:
<span class="badge-patreon"><a href="https://www.patreon.com/bePatron?u=19664301"
title="Donate to this project using Patreon"><img src="logo/patron button.png" width="175"
alt="Patreon donate button" /></a></span> -->
<h2 align="center">:clipboard: Getting Started</h2>
- Read the book in its entirety online at https://www.adhiraiyan.org/DeepLearningWithTensorflow.html
- Run the code using the Jupyter notebooks available in this repository's `notebooks` directory.
- Launch executable versions of these notebooks using Google Colab.
- Launch a live notebook server with these notebooks using Binder.
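
To run the notebooks on your own machine instead, a minimal setup might look like the following. This is a sketch, not an official install script: the exact TensorFlow 2.0 alpha package name is an assumption based on the Python 3.6 and TensorFlow 2.0 badges above, and you may prefer a virtual environment.

```shell
# Clone the book's repository
git clone https://github.com/adhiraiyan/DeepLearningWithTF2.0.git
cd DeepLearningWithTF2.0

# Install dependencies (assumed versions; adjust as needed for your platform)
pip install tensorflow==2.0.0-alpha0 jupyter

# Open the index notebook, which links to every chapter
jupyter notebook notebooks/Index.ipynb
```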
<h2 align="center">About</h2>
This book is a practical guide to Deep Learning with TensorFlow 2.0, using the Deep Learning Book by Ian Goodfellow as our guide. Ian Goodfellow's Deep Learning Book is the most comprehensive textbook on deep learning I have found so far, but it can be challenging: it is a highly theoretical work written as an academic text, and the best way to learn its concepts is by practicing them, working on problems, and solving programming examples. That is why I wrote Deep Learning with TensorFlow 2.0 as a practical guide, with explanations for complex concepts, summaries for others, and practical examples and exercises in TensorFlow 2.0, to help anyone with a limited mathematics, machine learning, and programming background get started.
Read more about the book in the Introduction.
Finally, I would like to ask for your help. This book is for you, and I would love to hear from you. If you need more explanations or have doubts about certain sections, many others will feel the same, so please feel free to reach out to me via:
<a href="https://www.facebook.com/adhiraiyan/">
<img src="https://img.shields.io/badge/Facebook-brightgreen.svg?" alt="Facebook">
</a>
<a href="https://www.linkedin.com/in/mukesh-mithrakumar/">
<img src="https://img.shields.io/badge/LinkedIn-blue.svg?" alt="LinkedIn">
</a>
<a href="https://twitter.com/MMithrakumar">
<img src="https://img.shields.io/badge/Twitter-orange.svg?" alt="Twitter">
</a>
<a href="https://www.instagram.com/adhiraiyan/">
<img src="https://img.shields.io/badge/Instagram-blueviolet.svg?" alt="Instagram">
</a>
with your questions, comments or even if you just want to say Hi.
<h2 align="center">Table of Contents</h2>
<p align="right"><a href="#DeepLearningWithTF20"><sup>▴ Back to top</sup></a></p>
<li>01.00 Preface</li>
<li>01.01 Introduction</li>
<li>01.02 Who should read this book</li>
<li>01.03 A Short History of Deep Learning</li>
<li>02.01 Scalars, Vectors, Matrices and Tensors</li>
<li>02.02 Multiplying Matrices and Vectors</li>
<li>02.03 Identity and Inverse Matrices</li>
<li>02.04 Linear Dependence and Span</li>
<li>02.05 Norms</li>
<li>02.06 Special Kinds of Matrices and Vectors</li>
<li>02.07 Eigendecomposition</li>
<li>02.08 Singular Value Decomposition</li>
<li>02.09 The Moore-Penrose Pseudoinverse</li>
<li>02.10 The Trace Operator</li>
<li>02.11 The Determinant</li>
<li>02.12 Example: Principal Components Analysis</li>
<li>03.01 Why Probability?</li>
<li>03.02 Random Variables</li>
<li>03.03 Probability Distributions</li>
<li>03.04 Marginal Probability</li>
<li>03.05 Conditional Probability</li>
<li>03.06 The Chain Rule of Conditional Probabilities</li>
<li>03.07 Independence and Conditional Independence</li>
<li>03.08 Expectation, Variance and Covariance</li>
<li>03.09 Common Probability Distributions</li>
<li>03.10 Useful Properties of Common Functions</li>
<li>03.11 Bayes' Rule</li>
<li>03.12 Technical Details of Continuous Variables</li>
<li>03.13 Information Theory</li>
<li>03.14 Structured Probabilistic Models</li>
<li>04.01 Overflow and Underflow</li>
<li>04.02 Poor Conditioning</li>
<li>04.03 Gradient-Based Optimization</li>
<li>04.04 Constrained Optimization</li>
<li>04.05 Example: Linear Least Squares</li>
<li>05.01 Learning Algorithms</li>
<li>05.02 Capacity, Overfitting and Underfitting</li>
<li>05.03 Hyperparameters and Validation Sets</li>
<li>05.04 Estimators, Bias and Variance</li>
<li>05.05 Maximum Likelihood Estimation</li>
<li>05.06 Bayesian Statistics</li>
<li>05.07 Supervised Learning Algorithms</li>
<li>05.08 Unsupervised Learning Algorithms</li>
<li>05.09 Stochastic Gradient Descent</li>
<li>05.10 Building a Machine Learning Algorithm</li>
<li>05.11 Challenges Motivating Deep Learning</li>
<li>06.01 Example: Learning XOR</li>
<li>06.02 Gradient-Based Learning</li>
<li>06.03 Hidden Units</li>
<li>06.04 Architecture Design</li>
<li>06.05 Back-Propagation and Other Differentiation Algorithms</li>
<li>06.06 Historical Notes</li>
<li>07.01 Parameter Norm Penalties</li>
<li>07.02 Norm Penalties as Constrained Optimization</li>
<li>07.03 Regularization and Under-Constrained Problems</li>
<li>07.04 Dataset Augmentation</li>
<li>07.05 Noise Robustness</li>
<li>07.06 Semi-Supervised Learning</li>
<li>07.07 Multitask Learning</li>
<li>07.08 Early Stopping</li>
<li>07.09 Parameter Tying and Parameter Sharing</li>
<li>07.10 Sparse Representations</li>
<li>07.11 Bagging and Other Ensemble Methods</li>
<li>07.12 Dropout</li>
<li>07.13 Adversarial Training</li>
<li>07.14 Tangent Distance, Tangent Prop and Manifold Tangent Classifier</li>
<li>08.01 How Learning Differs from Pure Optimization</li>
<li>08.02 Challenges in Neural Network Optimization</li>
<li>08.03 Basic Algorithms</li>
<li>08.04 Parameter Initialization Strategies</li>
<li>08.05 Algorithms with Adaptive Learning Rates</li>
<li>08.06 Approximate Second-Order Methods</li>
<li>08.07 Optimization Strategies and Meta-Algorithms</li>