10/14/2021 Interdisciplinary Science Seminar

10/14/2021 6:42 pm - 8:42 pm

Title: D3C: Reducing the Price of Anarchy in Multi-Agent Learning

Abstract: In multi-agent systems, the complex interaction of fixed incentives can lead agents to outcomes that are poor (inefficient) not only for the group, but also for each individual agent. The price of anarchy is a technical, game-theoretic measure introduced to quantify the inefficiency arising in these scenarios: it compares the welfare that can be achieved through perfect coordination against that achieved by self-interested agents at a Nash equilibrium. We derive a differentiable upper bound on a price of anarchy that agents can cheaply estimate during learning. Equipped with this estimator, agents can adjust their incentives in a way that improves the efficiency of outcomes at a Nash equilibrium. Agents adjust their incentives by learning to mix their reward (equivalently, negative loss) with that of other agents, following the gradient of our derived upper bound. We refer to this approach as D3C. In the case where agent incentives are differentiable, D3C resembles the celebrated Win-Stay, Lose-Shift strategy from behavioral game theory, thereby establishing a connection between the global goal of maximum welfare and an established agent-centric learning rule. In the non-differentiable setting, as is common in multi-agent reinforcement learning, we show the upper bound can be reduced via evolutionary strategies until a compromise is reached in a distributed fashion. We demonstrate that D3C improves outcomes for each agent and for the group as a whole on several social dilemmas, including a traffic network exhibiting Braess’s paradox, a prisoner’s dilemma, and several reinforcement learning domains.
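
For readers unfamiliar with the term: in welfare-maximization settings, the price of anarchy is conventionally defined as the ratio of the best achievable social welfare to the welfare of the worst Nash equilibrium (the notation W for social welfare over joint strategies s is ours, not the speaker's):

    \mathrm{PoA} = \frac{\max_{s} W(s)}{\min_{s \in \mathrm{NE}} W(s)} \ge 1

A PoA of 1 means self-interested play loses nothing relative to perfect coordination; larger values mean greater inefficiency.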
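
To make the reward-mixing mechanism concrete, below is a minimal NumPy sketch under assumptions of ours: a two-player prisoner's dilemma, a simple (1+1)-style evolutionary hill climb on the mixing weights (standing in for the evolutionary strategies mentioned above), and the welfare of the worst pure Nash equilibrium as a crude proxy for the derived upper bound, whose exact form is not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prisoner's dilemma: action 0 = cooperate, action 1 = defect.
# PAYOFF[a1, a2] = (reward to agent 1, reward to agent 2).
PAYOFF = np.array([[[3.0, 3.0], [0.0, 5.0]],
                   [[5.0, 0.0], [1.0, 1.0]]])

def transformed(W):
    """Reward mixing: agent i receives f_i = sum_j W[i, j] * r_j."""
    return np.einsum('ij,abj->abi', W, PAYOFF)

def nash_welfare(W):
    """Original-reward welfare of the worst pure Nash equilibrium of the
    mixed game; returns 0 (a penalty) if no pure equilibrium exists.
    A crude stand-in for the differentiable bound derived in the talk."""
    F = transformed(W)
    worst = None
    for a1 in range(2):
        for a2 in range(2):
            br1 = F[a1, a2, 0] >= F[1 - a1, a2, 0]  # agent 1 can't improve
            br2 = F[a1, a2, 1] >= F[a1, 1 - a2, 1]  # agent 2 can't improve
            if br1 and br2:
                w = PAYOFF[a1, a2].sum()
                worst = w if worst is None else min(worst, w)
    return 0.0 if worst is None else worst

def project(W):
    """Clip and renormalize so each row is a distribution over agents."""
    W = np.clip(W, 1e-6, None)
    return W / W.sum(axis=1, keepdims=True)

# (1+1)-style evolutionary hill climb: perturb the mixing weights and
# keep any candidate that does not lower worst-case Nash welfare.
W = np.eye(2)  # start fully selfish: each agent weighs only its own reward
for _ in range(300):
    cand = project(W + rng.normal(scale=0.05, size=W.shape))
    if nash_welfare(cand) >= nash_welfare(W):
        W = cand

# Welfare should climb from 2 (mutual defection) toward 6 (mutual
# cooperation) as the agents learn to share reward.
print(W, nash_welfare(W))
```

The acceptance rule permits welfare-neutral moves, which lets the weights drift away from the selfish corner before strictly better equilibria appear; per the abstract, the full method performs this adaptation in a distributed fashion, with each agent adjusting its own mixing weights.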