Background

Deep Learning Engineering

From Biological Neurons to Artificial Neurons
Logical Computations with Neurons
Single-Layer Perceptron
Sequential Modelling
Multi-Layer Perceptron
Activation Functions
Loss Functions
Vanishing/Exploding Gradient Problems
Batch Normalization
Learning Rates
Train, Test, and Validation Splits
Overfitting and Underfitting Problems
Dealing with Data Augmentation
One Hot Encoding
Dropout
Gradient Clipping
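A few of the topics above (One-Hot Encoding, Activation Functions, Gradient Clipping) can be sketched in plain NumPy. This is an illustrative sketch for the syllabus, not code from any particular library; the function names are made up here:

```python
import numpy as np

def one_hot(labels, num_classes):
    """One-hot encoding: map integer class labels to 0/1 row vectors."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

def relu(x):
    """ReLU activation: pass positive values through, zero out negatives."""
    return np.maximum(0.0, x)

def clip_gradients(grads, max_norm):
    """Gradient clipping by norm: rescale if the L2 norm exceeds max_norm."""
    norm = np.linalg.norm(grads)
    if norm > max_norm:
        grads = grads * (max_norm / norm)
    return grads
```

For example, `one_hot([0, 2, 1], 3)` yields the identity-like rows `[1,0,0]`, `[0,0,1]`, `[0,1,0]`, and clipping `[3, 4]` to `max_norm=1.0` rescales it to unit length while preserving its direction.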


Intuition
Zero Padding
Convolution Layers
Max Pooling
Backpropagation
Weights and Their Importance
Classification MLPs
Backpropagation
Dealing with Augmented Data
Reusing Pretrained Layers
Transfer Learning with Keras
Unsupervised Pretraining
Faster Optimizers
Momentum Optimization
Batch Size
Max-Norm Regularization
Fine Tuning
CNN Architectures
Self-Organizing Maps
Boltzmann Machines
Autoencoders


Intuition
RNN
Bidirectional RNNs
LSTM
Memory Requirements
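The core idea behind the RNN topics above is a single recurrent step reused across time. A minimal NumPy sketch with hypothetical shapes (input size 3, hidden size 2), shown only to illustrate the unrolling, not a production cell:

```python
import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    """One recurrent step: new hidden state from current input and previous state."""
    return np.tanh(x_t @ Wx + h_prev @ Wh + b)

rng = np.random.default_rng(0)
Wx = rng.normal(size=(3, 2)) * 0.1   # input-to-hidden weights
Wh = rng.normal(size=(2, 2)) * 0.1   # hidden-to-hidden (recurrent) weights
b = np.zeros(2)

h = np.zeros(2)                      # initial hidden state
for x_t in rng.normal(size=(5, 3)):  # sequence of length 5
    h = rnn_step(x_t, h, Wx, Wh, b)  # same weights reused at every step
```

Because the same `Wh` is multiplied in at every step, repeated products of it are exactly where vanishing/exploding gradients arise, and why LSTMs add gated memory cells.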


Discussion on LeNet-5
Discussion on AlexNet
Discussion on GoogLeNet
Discussion on VGGNet
Discussion on ResNet
Using Pretrained Models from Keras
Pretrained Models for Transfer Learning
Classification and Localization
Object Detection
Fully Convolutional Networks (FCNs)
You Only Look Once (YOLO)
Time Series Analysis
Generative Adversarial Networks
Deploying Deep Learning Models using Django
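A building block shared by the Object Detection and YOLO topics above is Intersection over Union (IoU), which scores how well a predicted bounding box matches a ground-truth box. A small illustrative sketch (boxes given as `(x1, y1, x2, y2)` corners, a common but not universal convention):

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) axis-aligned boxes."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

For two 2x2 boxes offset by one unit, the intersection area is 1 and the union is 7, so the IoU is 1/7; detectors typically count a prediction as correct only above a threshold such as 0.5.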


TensorFlow and Keras Initiation
Tensors and Operations
Tensors with NumPy
Placeholders
Type Conversions
Variables
Data Structures in Depth
TensorFlow Functions and Graphs
Building an Image Classifier
Using the Sequential API to Build a Regression MLP
Using the Sequential API to Build Complex Models
Using the Subclassing API
Saving and Restoring a Model
Implementing Callbacks
Visualization Using TensorBoard
Fine-Tuning Neural Network Hyperparameters
Hidden Layers
Learning Rate, Batch Size and Other Hyperparameters
Customizing Metrics, Layers, Training Loops, Models, and Training Algorithms
Custom Loss Functions
Autograph and Tracing
TF Function Rules
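The Customizing Training Loops and Custom Loss Functions topics above boil down to: compute predictions, evaluate a loss, compute gradients, and apply an update. As a library-free analogy of what a `tf.GradientTape` loop does, here is a hand-derived gradient-descent loop for linear regression in NumPy (the data, learning rate, and epoch count are illustrative choices):

```python
import numpy as np

# Toy data: y = 3x + 2 plus a little noise.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + 2.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0
lr = 0.1
for epoch in range(200):
    pred = w * X + b
    err = pred - y
    loss = np.mean(err ** 2)       # custom loss: mean squared error
    grad_w = 2 * np.mean(err * X)  # hand-derived gradient w.r.t. w
    grad_b = 2 * np.mean(err)      # hand-derived gradient w.r.t. b
    w -= lr * grad_w               # plain gradient-descent update
    b -= lr * grad_b
```

In Keras, the same four steps are the body of a custom training loop: the forward pass and loss go inside a `tf.GradientTape` block, `tape.gradient` replaces the hand derivation, and an optimizer's `apply_gradients` replaces the manual update.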