Multi-Armed Bandits

This is an umbrella project for several related efforts at Microsoft Research Silicon Valley that address various Multi-Armed Bandit (MAB) formulations motivated by web search and ad placement. The MAB problem is a classical paradigm in Machine Learning in which an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies.

This page has been inactive since the closure of MSR-SVC in September 2014.

The name “multi-armed bandits” comes from a whimsical scenario in which a gambler faces several slot machines, a.k.a. “one-armed bandits”, that look identical at first but produce different expected winnings. The crucial issue here is the trade-off between acquiring new information (exploration) and capitalizing on the information available so far (exploitation). While MAB problems have been studied extensively in Machine Learning, Operations Research, and Economics, many exciting questions remain open. One aspect that we are particularly interested in is modeling, and efficiently using, the various types of side information that may be available to the algorithm.
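The exploration/exploitation trade-off can be made concrete with a minimal simulation. The sketch below uses the simple epsilon-greedy strategy (one of many standard MAB heuristics, chosen here only for illustration): with a small probability the algorithm pulls a random arm to gather information, and otherwise it pulls the arm with the best empirical mean. The arm payout rates and parameter values are hypothetical.

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, horizon=10000, seed=0):
    """Play a Bernoulli multi-armed bandit with epsilon-greedy exploration.

    With probability epsilon, pull a uniformly random arm (exploration);
    otherwise pull the arm with the best empirical mean so far (exploitation).
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms    # number of pulls per arm
    totals = [0.0] * n_arms  # cumulative reward per arm
    reward_sum = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(n_arms)  # explore (also until every arm is tried)
        else:
            # exploit: arm with the highest empirical mean reward
            arm = max(range(n_arms), key=lambda a: totals[a] / counts[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
        reward_sum += reward
    return reward_sum, counts

# Three slot machines with payout rates unknown to the algorithm.
total, pulls = epsilon_greedy([0.2, 0.5, 0.8])
```

After enough trials the algorithm concentrates its pulls on the best arm while still spending a small, fixed fraction of trials on exploration; more refined algorithms (e.g. UCB-style index policies) adapt the amount of exploration per arm.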

Contact: Alex Slivkins.

Research directions

  • MAB with similarity information
  • MAB in a changing environment
  • Explore-exploit tradeoff in mechanism design
  • Explore-exploit learning with limited resources
  • Risk vs. reward tradeoff in MAB

External visitors and collaborators

Prof. Sébastien Bubeck (Princeton)
Prof. Robert Kleinberg (Cornell)
Filip Radlinski (MSR Cambridge)
Prof. Eli Upfal (Brown)

Former interns
Yogi Sharma (Cornell → Facebook; intern at MSR-SV in summer 2008)
Umar Syed (Princeton → Google; intern at MSR-SV in summer 2008)
Shaddin Dughmi (Stanford → USC; intern at MSR-SV in summer 2010)
Ashwinkumar Badanidiyuru (Cornell → Google; intern at MSR-SV in summer 2012)

MAB problems with similarity information

MAB problems in a changing environment

Explore-exploit tradeoff in mechanism design

Explore-exploit learning with limited resources

  • Dynamic pricing with limited supply
    Moshe Babaioff, Shaddin Dughmi, Robert Kleinberg and Alex Slivkins (EC 2012)
    Abstract: We consider dynamic pricing with limited supply and unknown demand distribution. We extend MAB techniques to the limited supply setting, and obtain optimal regret rates.
  • Bandits with Knapsacks
    Ashwinkumar Badanidiyuru, Robert Kleinberg and Alex Slivkins (FOCS 2013)
    Abstract: We define a broad class of explore-exploit problems with knapsack-style resource constraints, which subsumes dynamic pricing, dynamic procurement, pay-per-click ad allocation, and many other problems. Our algorithms achieve optimal regret w.r.t. the optimal dynamic policy.
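The knapsack-constrained setting above can be illustrated with a toy variant of epsilon-greedy: each arm pull consumes some of a shared budget, and play stops when the budget runs out. This is only a sketch of the problem setting; the greedy step ranks arms by empirical reward per unit of cost, a bang-per-buck heuristic, and is not the LP-based algorithm from the paper. All numbers are hypothetical.

```python
import random

def budgeted_bandit(means, costs, budget, epsilon=0.1, seed=0):
    """Toy epsilon-greedy heuristic for a bandit with a knapsack constraint.

    Pulling arm a yields a Bernoulli reward with mean means[a] and consumes
    costs[a] units of a shared budget; play stops when the budget can no
    longer cover any pull. Exploitation picks the arm with the best
    empirical reward-per-unit-cost (a heuristic, not the paper's algorithm).
    """
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n
    totals = [0.0] * n
    reward = 0.0
    while budget >= max(costs):  # stop once some arm may be unaffordable
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(n)  # explore
        else:
            # exploit: highest empirical reward per unit of budget spent
            arm = max(range(n), key=lambda a: (totals[a] / counts[a]) / costs[a])
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += r
        reward += r
        budget -= costs[arm]
    return reward

# Arm 0: cheap but mediocre; arm 1: twice the cost but much higher payoff.
total = budgeted_bandit(means=[0.3, 0.9], costs=[1.0, 2.0], budget=500.0)
```

Note the change of objective relative to the classic setting: with a resource constraint, the right comparison per pull is reward per unit of resource consumed, not raw reward, which is why the exploit step divides by cost.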

Risk vs. reward tradeoff in MAB