Recently Published
Markov Decision Processes
Finite Markov decision processes. This problem involves both evaluative feedback and an associative aspect: the best action depends on the situation the agent is in. Refer to Chapter 3, "Finite Markov Decision Processes", of Sutton and Barto's Reinforcement Learning: An Introduction.
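As a minimal sketch of the finite-MDP setting (not the repository's code), value iteration repeatedly applies the Bellman optimality backup until the state values converge. The two-state MDP below, with its transitions and rewards, is invented purely for illustration:

```python
import numpy as np

# Hypothetical 2-state, 2-action finite MDP (states, actions, and rewards
# are made up for this example). P[s][a] is a list of
# (probability, next_state, reward) transition tuples.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.7, 1, 1.0), (0.3, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9  # discount factor

# Value iteration: V(s) <- max_a sum_{s'} p(s',r|s,a) * (r + gamma * V(s')).
V = np.zeros(len(P))
for _ in range(200):
    V = np.array([
        max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in P[s])
        for s in P
    ])
print(V)  # approximate optimal state values
```

Here state 1 can loop on itself with reward 2, so its optimal value converges to 2 / (1 - 0.9) = 20.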
Exploration vs Exploitation RL Algorithms
A collection of exploration-vs-exploitation algorithms for reinforcement learning. The Upper Confidence Bound (UCB) algorithm balances the quest for knowledge against exploitation with confidence-driven bonuses; the epsilon-greedy algorithm offers a simple, effective toggle between exploring and exploiting; gradient-based methods leverage derivatives to search continuous action spaces; and Thompson sampling takes a Bayesian perspective, maintaining a distribution over possible models for principled decision-making under uncertainty. Each algorithm targets a different setting, so together they form a versatile toolkit for exploration and exploitation strategies across diverse applications.
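As a minimal sketch of one of these strategies (the arm means, noise model, and function name here are illustrative, not the repository's implementation), epsilon-greedy explores a random arm with probability epsilon and otherwise exploits the arm with the highest estimated reward:

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy on a k-armed bandit with Gaussian reward noise."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k       # number of pulls per arm
    estimates = [0.0] * k  # sample-average reward estimate per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(k)                       # explore: random arm
        else:
            a = max(range(k), key=lambda i: estimates[i])  # exploit: best estimate
        reward = rng.gauss(true_means[a], 1.0)         # noisy reward draw
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # incremental mean
    return estimates, counts

estimates, counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

With enough steps, the arm with the highest true mean (the last one here) accumulates the most pulls, while the epsilon fraction of random pulls keeps the estimates of the other arms from going stale.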
Inverse-Transform Method Application
This is an app built for the Developing Data Products class.
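As a minimal sketch of the inverse-transform method itself (independent of the app), one draws U ~ Uniform(0, 1) and applies the inverse CDF of the target distribution. For the exponential distribution the inverse has a closed form, which makes it the standard textbook example:

```python
import math
import random

def sample_exponential(lam, n, seed=0):
    """Inverse-transform sampling for Exponential(lam).

    The exponential CDF F(x) = 1 - exp(-lam * x) inverts to
    F^{-1}(u) = -ln(1 - u) / lam, so X = F^{-1}(U) with
    U ~ Uniform(0, 1) is exponentially distributed.
    """
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = sample_exponential(lam=2.0, n=100_000)
sample_mean = sum(samples) / len(samples)  # should approach 1 / lam = 0.5
```

The same recipe works for any distribution whose CDF can be inverted, either analytically or numerically.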
Maximum Likelihood Estimation
Maximum likelihood estimation, kernel density estimation, and interactive graphs created with Plotly.
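As a minimal sketch of maximum likelihood estimation (not the project's code; the Gaussian data below is simulated for illustration), the Gaussian case has closed-form MLEs: the sample mean and the 1/n sample variance jointly maximize the log-likelihood:

```python
import random

def normal_mle(data):
    """Closed-form MLE for a Gaussian: the sample mean and the
    biased (1/n) sample variance maximize the log-likelihood."""
    n = len(data)
    mu_hat = sum(data) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n
    return mu_hat, sigma2_hat

# Simulated data from N(mean=3, sd=2); the estimates should recover
# mu ~ 3 and sigma^2 ~ 4 as the sample grows.
rng = random.Random(0)
data = [rng.gauss(3.0, 2.0) for _ in range(50_000)]
mu_hat, sigma2_hat = normal_mle(data)
```

For models without closed-form solutions, the same principle applies but the log-likelihood is maximized numerically.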