Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments.
This book covers classic results and recent developments on both Bayesian and frequentist bandit problems.
Since the first bandit problem was posed by Thompson in 1933 in the context of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains.