    Hao Qin

    PhD student in Statistics at the University of Arizona


Multi-Armed Bandits with Bounded Rewards: A Short Survey


Here is a short survey covering the most commonly seen multi-armed bandit (MAB) algorithms. You can download the full survey here.
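The details live in the linked PDF, but as a taste of the kind of algorithm such a survey typically covers, here is a minimal sketch of UCB1, a classic index policy for bandits with rewards bounded in [0, 1]. The function names and the Bernoulli test arms below are illustrative choices, not taken from the survey itself.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Run UCB1 for `horizon` rounds on `n_arms` arms.

    `pull(arm)` must return a reward bounded in [0, 1].
    Returns (pull counts, empirical mean rewards) per arm.
    """
    counts = [0] * n_arms    # times each arm has been pulled
    means = [0.0] * n_arms   # empirical mean reward of each arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1      # initialization: pull each arm once
        else:
            # UCB index: empirical mean plus an exploration bonus
            # that shrinks as an arm is pulled more often.
            arm = max(
                range(n_arms),
                key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]),
            )
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update

    return counts, means

if __name__ == "__main__":
    # Illustrative usage: three Bernoulli arms with success
    # probabilities 0.2, 0.5, 0.8 (hypothetical, for demo only).
    random.seed(0)
    probs = [0.2, 0.5, 0.8]
    counts, means = ucb1(
        lambda a: 1.0 if random.random() < probs[a] else 0.0,
        n_arms=3,
        horizon=2000,
    )
    print(counts, [round(m, 2) for m in means])
```

Over enough rounds the exploration bonus decays and UCB1 concentrates its pulls on the best arm, which is the behavior the regret bounds discussed in surveys like this one formalize.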

    Categories: Survey

    Updated: January 29, 2024

    © 2024 Hao Qin. Powered by Jekyll & Minimal Mistakes.