
$660

94% off

Courses: 4 | Lessons: 204


Bayesian Machine Learning in Python: A/B Testing

$120 Value

Deep Learning: GANs and Variational Autoencoders

$180 Value

Advanced AI: Deep Reinforcement Learning in Python

$180 Value

Artificial Intelligence: Reinforcement Learning in Python

$180 Value

Access: Lifetime | Content: 3.5 hours | Lessons: 40

By Lazy Programmer | in Online Courses

A/B testing is used everywhere: marketing, retail, news feeds, online advertising, and much more. If you're a data scientist and you want to tell the rest of the company, "logo A is better than logo B," you'll need numbers and stats to prove it. That's where A/B testing comes in. In this course, you'll start with traditional A/B testing in order to appreciate its complexity as you work your way up to the Bayesian machine learning way of doing things.
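As a taste of the traditional approach the course starts with, comparing two click-through rates can be sketched as a two-proportion z-test in plain Python. This is an illustrative sketch, not the course's code, and the click counts below are made up:

```python
import math

def two_proportion_ztest(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)            # pooled CTR under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))              # two-sided p-value
    return z, p_value

# Hypothetical data: logo A got 200 clicks in 1000 impressions, logo B got 250.
z, p = two_proportion_ztest(200, 1000, 250, 1000)
print(f"z = {z:.3f}, p = {p:.4f}")
```

If p falls below your chosen significance level, you declare a winner; the course covers the t-test and chi-square variants of this recipe, and then shows what this frequentist framing leaves out.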

- Access 40 lectures & 3.5 hours of content 24/7
- Improve on traditional A/B testing w/ adaptive methods
- Learn about epsilon-greedy algorithm & improve upon it w/ a similar algorithm called UCB1
- Understand how to use a fully Bayesian approach to A/B testing
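To illustrate the adaptive flavor of the methods above, here is a minimal epsilon-greedy bandit simulation in plain Python. The CTR values and trial count are hypothetical; the course develops this properly and then improves on it with UCB1 and the fully Bayesian (Thompson sampling) approach:

```python
import random

def run_epsilon_greedy(true_ctrs, n_trials=20000, eps=0.1, seed=0):
    """Adaptively split traffic between variants: explore with probability
    eps, otherwise exploit the variant with the best estimated CTR."""
    rng = random.Random(seed)
    shows = [0] * len(true_ctrs)
    clicks = [0] * len(true_ctrs)
    for _ in range(n_trials):
        if rng.random() < eps:
            arm = rng.randrange(len(true_ctrs))             # explore
        else:                                               # exploit best estimate
            arm = max(range(len(true_ctrs)),
                      key=lambda a: clicks[a] / shows[a] if shows[a] else float("inf"))
        shows[arm] += 1
        clicks[arm] += rng.random() < true_ctrs[arm]        # simulate a click
    return shows, clicks

shows, clicks = run_epsilon_greedy([0.02, 0.05])            # hypothetical true CTRs
print("traffic per variant:", shows)
```

Unlike a fixed 50/50 split, most of the traffic drifts toward the better variant while the test is still running, which is the "adaptive" advantage the course explores.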

The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

He has worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. He has created new big data pipelines using Hadoop/Pig/MapReduce, and created machine learning models to predict click-through rate, news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering and validated the results using A/B testing.

He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.


Details & Requirements

- Length of time users can access this course: lifetime
- Access options: web streaming, mobile streaming
- Certification of completion not included
- Redemption deadline: redeem your code within 30 days of purchase
- Experience level required: all levels, but knowledge of calculus, probability, Python, Numpy, Scipy, and Matplotlib is expected
- All code for this course is available for download *here*, in the directory ab_testing

Compatibility

- Internet required

- Introduction and Outline
- What's this course all about? (2:18)
- Where to get the code for this course (1:17)
- How to succeed in this course (3:26)

- Bayes Rule and Probability Review
- Bayes Rule Review (9:28)
- Simple Probability Problem (2:03)
- The Monty Hall Problem (3:57)
- Imbalanced Classes (4:40)
- Maximum Likelihood - Mean of a Gaussian (4:52)
- Maximum Likelihood - Click-Through Rate (4:23)
- Confidence Intervals (10:17)
- What is the Bayesian Paradigm? (5:46)

- Traditional A/B Testing
- A/B Testing Problem Setup (4:26)
- Simple A/B Testing Recipe (5:07)
- P-Values (3:53)
- Test Characteristics, Assumptions, and Modifications (6:45)
- t-test in Code (3:23)
- 0.01 vs 0.011 - Why should we care? (1:46)
- A/B Test for Click-Through Rates (Chi-Square Test) (6:04)
- CTR A/B Test in Code (8:50)
- A/B/C/D/… Testing - The Bonferroni Correction (2:20)
- Statistical Power (3:08)
- A/B Testing Pitfalls (4:01)
- Traditional A/B Testing Summary (3:42)

- Bayesian A/B Testing
- Explore vs. Exploit (4:00)
- The Epsilon-Greedy Solution (2:58)
- UCB1 (4:35)
- Conjugate Priors (7:04)
- Bayesian A/B Testing (4:10)
- Bayesian A/B Testing in Code (8:50)
- The Online Nature of Bayesian A/B Testing (2:31)
- Finding a Threshold Without P-Values (4:52)
- Thompson Sampling Convergence Demo (4:01)
- Confidence Interval Approximation vs. Beta Posterior (5:41)

- Practice Makes Perfect
- Exercise: Compare different strategies (2:06)
- Exercise: Die Roll (2:38)
- Exercise: Multivariate Gaussian Likelihood (5:41)

- Appendix
- How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:32)
- How to Code by Yourself (part 1) (15:54)
- How to Code by Yourself (part 2) (9:23)
- Where to get Udemy coupons and FREE deep learning material (2:20)

Access: Lifetime | Content: 5.5 hours | Lessons: 41

By Lazy Programmer | in Online Courses

Variational autoencoders and GANs have been two of the most interesting recent developments in deep learning and machine learning. GAN stands for generative adversarial network: two neural networks competing with each other. Both are unsupervised methods, meaning you're not trying to map input data to targets; you're just trying to learn the structure of that input data. In this course, you'll learn to model the structure of data in order to generate new samples that resemble the original data.

- Access 41 lectures & 5.5 hours of content 24/7
- Incorporate ideas from Bayesian Machine Learning, Reinforcement Learning, & Game Theory
- Discuss variational autoencoder architecture
- Discover GAN basics
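The "learn the structure, then sample it" idea above can be sketched with the simplest possible generative model: a single Gaussian fit to 1-D data. This is purely illustrative (the dataset is made up); the course builds the same idea up through Bayes classifiers, Gaussian mixture models, and finally neural networks:

```python
import random
import statistics

# Hypothetical 1-D "dataset" drawn from an unknown source (e.g. heights in cm).
rng = random.Random(42)
data = [rng.gauss(170.0, 8.0) for _ in range(5000)]

# "Learn the structure": estimate the parameters of a Gaussian model.
mu = statistics.fmean(data)
sigma = statistics.stdev(data)

# "Generate new data that resembles the original": sample the fitted model.
generated = [rng.gauss(mu, sigma) for _ in range(5000)]

print(f"fitted mu = {mu:.1f}, sigma = {sigma:.1f}")
print(f"generated sample mean = {statistics.fmean(generated):.1f}")
```

VAEs and GANs do conceptually the same thing, except the "model" is a deep neural network and the data is far higher-dimensional (e.g. images).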



Details & Requirements

- Length of time users can access this course: lifetime
- Access options: web streaming, mobile streaming
- Certification of completion not included
- Redemption deadline: redeem your code within 30 days of purchase
- Experience level required: all levels, but knowledge of calculus, probability, object-oriented programming, Python, Numpy, linear regression, gradient descent, and how to build a feedforward and convolutional neural network in Theano and TensorFlow is expected
- All code for this course is available for download *here*, in the directory unsupervised_class3

Compatibility

- Internet required

- Introduction and Outline
- Welcome (4:33)
- Where does this course fit into your deep learning studies? (5:00)
- Where to get the code and data (3:51)
- How to succeed in this course (5:19)

- Generative Modeling Review
- What does it mean to Sample? (4:57)
- Sampling Demo: Bayes Classifier (3:57)
- Gaussian Mixture Model Review (10:31)
- Sampling Demo: Bayes Classifier with GMM (3:54)
- Why do we care about generating samples?
- Neural Network and Autoencoder Review (7:26)
- Tensorflow Warmup (4:07)
- Theano Warmup (4:54)

- Variational Autoencoders
- Variational Autoencoders Section Introduction (5:39)
- Variational Autoencoder Architecture (5:57)
- Parameterizing a Gaussian with a Neural Network (8:00)
- The Latent Space, Predictive Distributions and Samples (5:13)
- Cost Function (7:28)
- Tensorflow Implementation (pt 1) (7:18)
- Tensorflow Implementation (pt 2) (2:29)
- Tensorflow Implementation (pt 3) (9:55)
- The Reparameterization Trick (5:05)
- Theano Implementation (10:52)
- Visualizing the Latent Space (3:09)
- Bayesian Perspective (3:09)
- Variational Autoencoder Section Summary (4:02)

- Generative Adversarial Networks (GANs)
- GAN - Basic Principles (5:13)
- GAN Cost Function (pt 1) (7:23)
- GAN Cost Function (pt 2) (4:56)
- DCGAN (7:38)
- Batch Normalization Review (8:01)
- Fractionally-Strided Convolution (8:35)
- Tensorflow Implementation Notes (13:23)
- Tensorflow Implementation (18:13)
- Theano Implementation Notes (7:26)
- Theano Implementation (19:47)
- GAN Summary (9:43)

- Appendix
- How to install Numpy, Theano, Tensorflow, etc... (17:32)
- How to Succeed in this Course (Long Version) (5:55)
- How to Code by Yourself (part 1) (15:54)
- How to Code by Yourself (part 2) (9:23)
- Where to get discount coupons and FREE deep learning material (2:20)

Access: Lifetime | Content: 5 hours | Lessons: 52

By Lazy Programmer | in Online Courses

This course is all about the application of deep learning and neural networks to reinforcement learning. The combination of deep learning with reinforcement learning has led to AlphaGo beating a world champion at the strategy game Go, to self-driving cars, and to machines that can play video games at a superhuman level. Unlike supervised and unsupervised learning algorithms, reinforcement learning agents have an impetus: they want to reach a goal. In this course, you'll work with more complex environments, specifically those provided by the OpenAI Gym.

- Access 52 lectures & 5 hours of content 24/7
- Extend your knowledge of temporal difference learning by looking at the TD Lambda algorithm
- Explore a special type of neural network called the RBF network
- Look at the policy gradient method
- Examine Deep Q-Learning
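The Q-learning update at the heart of the last bullet can be sketched in tabular form on a toy environment. The 5-state "corridor" below is a made-up example, not from the course; Deep Q-learning replaces the table with a neural network, but the update rule is the same:

```python
import random

# Toy corridor: states 0..4, actions LEFT/RIGHT, reward 1 for reaching state 4.
N_STATES = 5
LEFT, RIGHT = 0, 1
GAMMA, ALPHA = 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]
rng = random.Random(0)

def step(s, a):
    s2 = max(0, s - 1) if a == LEFT else min(N_STATES - 1, s + 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

for _ in range(2000):                         # episodes
    s = rng.randrange(N_STATES - 1)           # random start state
    done = False
    while not done:
        a = rng.choice((LEFT, RIGHT))         # behave randomly; Q-learning is off-policy
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[s2])   # TD target
        Q[s][a] += ALPHA * (target - Q[s][a])            # Q-learning update
        s = s2

# The greedy policy recovered from the table should always move right.
policy = ["LR"[Q[s][RIGHT] > Q[s][LEFT]] for s in range(N_STATES - 1)]
print("greedy policy:", policy)
```

Even though the agent behaved randomly while learning, the greedy policy read off the Q-table heads straight for the goal, which is exactly the off-policy property that makes Deep Q-Learning work.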



Details & Requirements

- Length of time users can access this course: lifetime
- Access options: web streaming, mobile streaming
- Certification of completion not included
- Redemption deadline: redeem your code within 30 days of purchase
- Experience level required: all levels, but knowledge of calculus, probability, object-oriented programming, Python, Numpy, linear regression, gradient descent, how to build a feedforward and convolutional neural network in Theano and TensorFlow, Markov Decision Processes, and how to implement Dynamic Programming, Monte Carlo, and Temporal Difference is expected
- All code for this course is available for download *here*, in the directory rl2

Compatibility

- Internet required

- Introduction and Logistics
- Introduction and Outline (9:57)
- Where to get the Code (3:14)
- How to Succeed in this Course (8:45)

- Background Review
- Review Intro (2:41)
- Review of Markov Decision Processes (7:47)
- Review of Dynamic Programming (4:12)
- Review of Monte Carlo Methods (3:55)
- Review of Temporal Difference Learning (4:41)
- Review of Approximation Methods for Reinforcement Learning (2:19)
- Review of Deep Learning (6:47)

- OpenAI Gym and Basic Reinforcement Learning Techniques
- OpenAI Gym Tutorial (5:43)
- Random Search (5:48)
- Saving a Video (2:18)
- CartPole with Bins (Theory) (3:51)
- CartPole with Bins (Code) (6:25)
- RBF Neural Networks
- RBF Networks with Mountain Car (Code) (5:28)
- RBF Networks with CartPole (Theory) (1:54)
- RBF Networks with CartPole (Code) (3:11)
- Theano Warmup (3:04)
- Tensorflow Warmup (2:25)
- Plugging in a Neural Network (3:39)
- OpenAI Gym Section Summary (3:28)

- TD Lambda
- N-Step Methods (3:14)
- N-Step in Code (3:40)
- TD Lambda (7:36)
- TD Lambda in Code (3:00)
- TD Lambda Summary (2:21)

- Policy Gradients
- Policy Gradient Methods (11:38)
- Policy Gradient in TensorFlow for CartPole (7:19)
- Policy Gradient in Theano for CartPole (4:14)
- Continuous Action Spaces (4:16)
- Mountain Car Continuous Specifics (4:12)
- Mountain Car Continuous Theano (7:31)
- Mountain Car Continuous Tensorflow (8:07)
- Mountain Car Continuous Tensorflow (v2) (6:11)
- Mountain Car Continuous Theano (v2) (7:31)
- Policy Gradient Section Summary (1:36)

- Deep Q-Learning
- Deep Q-Learning Intro (3:52)
- Deep Q-Learning Techniques (9:13)
- Deep Q-Learning in Tensorflow for CartPole (5:09)
- Deep Q-Learning in Theano for CartPole (4:48)
- Additional Implementation Details for Atari (5:36)
- Deep Q-Learning in Tensorflow for Breakout (5:58)
- Deep Q-Learning in Theano for Breakout (6:42)
- Partially Observable MDPs (4:52)
- Deep Q-Learning Section Summary (4:45)
- Course Summary (4:57)

- Appendix
- Environment Setup (17:32)
- How to Code by Yourself (part 1) (15:54)
- How to Code by Yourself (part 2) (9:23)
- Where to get Udemy coupons and FREE deep learning material (2:20)

Access: Lifetime | Content: 5.5 hours | Lessons: 71

By Lazy Programmer | in Online Courses

When people talk about artificial intelligence, they usually don't mean supervised and unsupervised machine learning. Those tasks are pretty trivial compared to what we imagine AIs doing: playing chess and Go, driving cars, and so on. Reinforcement learning has recently become popular for doing all of that and more. It opens up a whole new world, and it has led to new and amazing insights in both behavioral psychology and neuroscience. It's the closest thing we have so far to a true general artificial intelligence, and this course will be your introduction.

- Access 71 lectures & 5.5 hours of content 24/7
- Discuss the multi-armed bandit problem & the explore-exploit dilemma
- Learn ways to calculate means & moving averages and their relationship to stochastic gradient descent
- Explore Markov Decision Processes, Dynamic Programming, Monte Carlo, & Temporal Difference Learning
- Understand approximation methods
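The mean/moving-average connection in the bullets above can be sketched in a few lines (the data here is illustrative). The incremental sample-mean update is a stochastic-gradient step on the squared-error loss with step size 1/n; using a constant step size instead yields a moving average that weights recent data more:

```python
import random
import statistics

rng = random.Random(1)
xs = [rng.gauss(5.0, 2.0) for _ in range(10000)]   # illustrative reward stream

# Incremental sample mean: mean_n = mean_{n-1} + (x_n - mean_{n-1}) / n.
mean = 0.0
for n, x in enumerate(xs, start=1):
    mean += (x - mean) / n               # step size 1/n -> exact sample mean

# Constant step size -> exponentially-weighted moving average.
ewma = 0.0
for x in xs:
    ewma += 0.1 * (x - ewma)             # recent samples weighted more heavily

print(f"incremental mean = {mean:.3f}")
print(f"batch mean       = {statistics.fmean(xs):.3f}")
```

The constant-step form is what makes nonstationary bandits (also covered in the curriculum) tractable: when the true value drifts, an estimate that never stops adapting beats the exact sample mean.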


Details & Requirements

- Length of time users can access this course: lifetime
- Access options: web streaming, mobile streaming
- Certification of completion not included
- Redemption deadline: redeem your code within 30 days of purchase
- Experience level required: all levels, but knowledge of calculus, probability, object-oriented programming, Python, Numpy, linear regression, and gradient descent is expected
- All code for this course is available for download *here*, in the directory rl

Compatibility

- Internet required

- Introduction and Outline
- Introduction and outline (6:22)
- What is Reinforcement Learning? (13:46)
- Where to get the Code (2:41)
- Strategy for Passing the Course (5:56)

- Return of the Multi-Armed Bandit
- Problem Setup and The Explore-Exploit Dilemma (3:55)
- Epsilon-Greedy (1:48)
- Updating a Sample Mean (1:22)
- Comparing Different Epsilons (4:06)
- Optimistic Initial Values (2:56)
- UCB1 (4:56)
- Bayesian / Thompson Sampling (9:52)
- Thompson Sampling vs. Epsilon-Greedy vs. Optimistic Initial Values vs. UCB1 (5:11)
- Nonstationary Bandits (4:51)

- Build an Intelligent Tic-Tac-Toe Agent
- Naive Solution to Tic-Tac-Toe (3:50)
- Components of a Reinforcement Learning System (8:00)
- Notes on Assigning Rewards (2:41)
- The Value Function and Your First Reinforcement Learning Algorithm (16:33)
- Tic Tac Toe Code: Outline (3:16)
- Tic Tac Toe Code: Representing States (2:56)
- Tic Tac Toe Code: Enumerating States Recursively (6:14)
- Tic Tac Toe Code: The Environment (6:36)
- Tic Tac Toe Code: The Agent (5:48)
- Tic Tac Toe Code: Main Loop and Demo (6:02)
- Tic Tac Toe Summary (5:25)

- Markov Decision Processes
- Gridworld (2:13)
- The Markov Property (4:36)
- Defining and Formalizing the MDP (4:10)
- Future Rewards (3:16)
- Value Functions (4:38)
- Optimal Policy and Optimal Value Function (4:09)
- MDP Summary (1:35)

- Dynamic Programming
- Intro to Dynamic Programming and Iterative Policy Evaluation (3:06)
- Gridworld in Code (5:47)
- Iterative Policy Evaluation in Code (6:24)
- Policy Improvement (2:51)
- Policy Iteration (2:00)
- Policy Iteration in Code (3:46)
- Policy Iteration in Windy Gridworld (4:57)
- Value Iteration (3:58)
- Value Iteration in Code (2:14)
- Dynamic Programming Summary (5:14)

- Monte Carlo
- Monte Carlo Intro (3:10)
- Monte Carlo Policy Evaluation (5:45)
- Monte Carlo Policy Evaluation in Code (3:35)
- Policy Evaluation in Windy Gridworld (3:38)
- Monte Carlo Control (5:59)
- Monte Carlo Control in Code (4:04)
- Monte Carlo Control without Exploring Starts (2:58)
- Monte Carlo Control without Exploring Starts in Code (2:51)
- Monte Carlo Summary (3:42)

- Temporal Difference Learning
- Temporal Difference Intro (1:42)
- TD(0) Prediction (3:46)
- TD(0) Prediction in Code (2:27)
- SARSA (5:15)
- SARSA in Code (3:38)
- Q Learning (3:05)
- Q Learning in Code (2:13)
- TD Summary (2:34)

- Approximation Methods
- Approximation Intro (4:11)
- Linear Models for Reinforcement Learning (4:16)
- Features (4:02)
- Monte Carlo Prediction with Approximation (1:54)
- Monte Carlo Prediction with Approximation in Code (2:58)
- TD(0) Semi-Gradient Prediction (4:22)
- Semi-Gradient SARSA (3:08)
- Semi-Gradient SARSA in Code (4:08)
- Course Summary and Next Steps (8:38)

- Appendix
- How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:32)
- How to Code by Yourself (part 1) (15:54)
- How to Code by Yourself (part 2) (9:23)
- Where to get discount coupons and FREE deep learning material (2:20)

- Instant digital redemption