Machine Learning and Deep Learning Study Index
Advanced CNN
Activation Function
- KDnuggets: Neural Network Foundations, Explained: Activation Function
- Machine Learning Mastery: How to Choose Activation Function
- Stats StackExchange: Activation Function for First Layer Nodes in an ANN
- Medium: Activation Functions in Neural Networks: Sigmoid, Tanh, ReLU, Leaky ReLU, Parametric ReLU, ELU, Softmax, GELU
- Machine Learning Mastery: Using Activation Functions in Neural Networks
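For quick reference, a minimal NumPy sketch of several of the activations the links above cover (the definitions are standard; GELU and ELU are omitted for brevity):

```python
import numpy as np

def sigmoid(x):
    # Squashes input into (0, 1); saturates for large |x|.
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centered relative of sigmoid; output in (-1, 1).
    return np.tanh(x)

def relu(x):
    # max(0, x): cheap to compute, no saturation for x > 0.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small slope for x < 0 keeps gradients from dying entirely.
    return np.where(x > 0, x, alpha * x)

def softmax(z):
    # Subtract the max for numerical stability, then normalize.
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)
```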
CNN
Deep Learning
- Towards Data Science: Deep Learning Illustrated, Part 3: Convolutional Neural Networks (Shreya Rao)
- Towards Data Science: Deep Learning Illustrated, Part 4: Recurrent Neural Networks (Shreya Rao)
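As a companion to the CNN walkthrough, a minimal PyTorch sketch of a two-stage convolutional classifier (the layer sizes and MNIST-like 28x28 input are illustrative assumptions, not taken from the articles):

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # -> 14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(4, 1, 28, 28))  # -> shape (4, 10)
```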
Decision Tree
Image Augmentation
- d2l.ai: Image Augmentation Chapter 13
- Nanonets: Data Augmentation
- Roboflow: Why and How to Implement Random Rotate Data Augmentation
- Paperspace: Data Augmentation for Bounding Boxes: Rotation and Shearing
- Shorten and Khoshgoftaar, A Survey on Image Data Augmentation for Deep Learning
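A typical training-time augmentation pipeline using torchvision; the specific transforms and magnitudes here are illustrative choices, not recommendations from the links above:

```python
from torchvision import transforms

train_augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),  # random scale/crop
    transforms.RandomHorizontalFlip(p=0.5),               # mirror half the time
    transforms.RandomRotation(degrees=15),                # random rotate
    transforms.ColorJitter(brightness=0.2, contrast=0.2), # photometric noise
    transforms.ToTensor(),
])
# augmented = train_augment(pil_image)  # apply to a PIL image at load time
```

Augmentation is applied on the fly during training only; validation and test images are left unaugmented so evaluation stays deterministic.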
Linear Regression and Softmax Regression
- d2l.ai: Linear Regression Chapter 3.1
- d2l.ai: Linear Regression Implementation from Scratch Chapter 3.2
- d2l.ai: Concise Implementation of Linear Regression Chapter 3.3
- d2l.ai: Concise Implementation of Softmax Regression Chapter 3.7
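A from-scratch sketch of linear regression by batch gradient descent, in the spirit of Chapter 3.2 (the synthetic weights w = [2, -3.4] and b = 4.2 follow the d2l.ai example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w, true_b = np.array([2.0, -3.4]), 4.2
y = X @ true_w + true_b + rng.normal(scale=0.01, size=100)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):
    err = X @ w + b - y             # prediction error on the whole batch
    w -= lr * (X.T @ err) / len(y)  # gradient of mean squared loss w.r.t. w
    b -= lr * err.mean()            # ... and w.r.t. b
# w converges to roughly [2.0, -3.4] and b to roughly 4.2
```

Softmax regression replaces the scalar output with one logit per class and the squared loss with cross-entropy over softmax probabilities; the training loop is otherwise the same.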
Loss Function
- The Difference between MSE Error and Cross-entropy Error in NN
- MSE vs Cross-entropy Loss Function
- MSE vs Cross-entropy (Reddit)
- Stack Overflow: In which cases is the Cross-entropy preferred over the MSE?
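A small numeric illustration of the point these links debate: on a confidently wrong classification, cross-entropy produces a much larger loss (and hence gradient) than MSE, which is one reason it is usually preferred for classification:

```python
import numpy as np

y_true = np.array([1.0, 0.0])    # one-hot target
y_pred = np.array([0.01, 0.99])  # confidently wrong prediction

mse = np.mean((y_true - y_pred) ** 2)             # ~0.98, a bounded penalty
cross_entropy = -np.sum(y_true * np.log(y_pred))  # ~4.61, grows without bound
```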
Model-related topics
- d2l.ai: The Image Classification Dataset Chapter 3.5
- d2l.ai: Model Selection, Underfitting, and Overfitting Chapter 4.4
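A minimal sketch of model selection with a held-out validation set, in the spirit of Chapter 4.4 (the polynomial-fitting setup is an illustrative assumption): pick the capacity with the lowest validation error, not the lowest training error.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=60)
x_tr, y_tr, x_val, y_val = x[:40], y[:40], x[40:], y[40:]

for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x_tr, y_tr, degree)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(degree, val_mse)  # too low a degree underfits, too high overfits
```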
ReLU
Regularization (L1 and L2)
- L1 and L2 Regularization
- L1 vs L2 Loss Function
- L1 vs L2 Regularization: Which one is better in Fighting Overfitting?
- The Difference between L1 and L2 Regularization
- Codebasics: L1 and L2 Regularization
- Krish Naik: Ridge and Lasso Regression In-depth Intuition
- Andrew Ng: Regularization
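The two penalties side by side, as a minimal NumPy sketch (lam is the regularization strength): L2 (ridge) shrinks weights smoothly toward zero, while L1 (lasso) pushes some weights exactly to zero, yielding sparsity.

```python
import numpy as np

def l2_penalty(w, lam):
    # Added to the data loss; gradient contribution is 2 * lam * w.
    return lam * np.sum(w ** 2)

def l1_penalty(w, lam):
    # Added to the data loss; (sub)gradient contribution is lam * np.sign(w).
    return lam * np.sum(np.abs(w))

# In a training loop the penalty simply joins the data loss, e.g.:
# total_loss = mse_loss + l2_penalty(w, lam)
# w -= lr * (data_grad + 2 * lam * w)   # "weight decay" form of L2
```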
Step Function
Vanishing Gradient Problems (VGP) and Exploding Gradient Problems (EGP)
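A minimal numeric illustration of the vanishing gradient problem: backpropagating through a deep chain of sigmoids multiplies per-layer derivatives that are each at most 0.25, so the gradient shrinks exponentially with depth. Exploding gradients are the mirror case, where per-layer factors above 1 make the product blow up instead.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x, grad = 0.5, 1.0
for layer in range(30):
    s = sigmoid(x)
    grad *= s * (1 - s)  # sigmoid'(x) = s * (1 - s), at most 0.25
    x = s
print(grad)  # on the order of 1e-20: almost no signal reaches early layers
```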