Social Laws for Multi-Agent Systems: Logic and Games
Professor Thomas Ågotnes, University of Bergen
Thomas Ågotnes is a professor at the Department of Information Science and Media Studies at the University of Bergen, Norway. Since his PhD in 2004 from the University of Bergen, Ågotnes has worked and published extensively in multi-agent systems and related areas. He won the best paper award at AAMAS 2009 for work on the logical analysis of normative systems. Ågotnes is an active member of the international multi-agent systems community. He is a member of the board of directors of the European Association for Multi-Agent Systems (EURAMAS), was the general chair of STAIRS 2010, and has served on the program committees of the main conferences and workshops in his field, including the senior program committees of AAMAS and IJCAI.
Introduction to the lecture series:
Coordination is a central problem in multi-agent systems. Social laws (or normative systems) have emerged as a natural and powerful paradigm for coordinating multi-agent systems. The social laws paradigm exposes the whole spectrum between fully centralised and fully decentralised coordination mechanisms. A social law is, intuitively, a constraint on the behaviour of agents, which ensures that their individual behaviours are compatible. Typically, a social law is imposed off-line, minimising the chances of on-line conflict or the need to negotiate. The lecture series gives an introduction and overview of the state of the art in the use of social laws for coordination. It discusses questions such as: how can a social law that ensures some particular global behaviour be automatically constructed? If two social laws achieve the same objective, which one should we use? How can we construct a social law that works even if some agents do not comply? Which agents are most important for a social law to achieve its objective? It turns out that to answer questions like these, we can apply a suite of tools available from the interdisciplinary tool chest of multi-agent systems, combining, e.g., formal logic, game theory, voting theory and complexity theory.
Lecture 1: Specifying and verifying state-transition models for multi-agent Systems
Lecture 2: Social laws for coordination
Lecture 3: Dealing with non-compliance
Lecture 4: Coordinating self-interested agents
Lecture 5: Social laws design as an optimisation problem, and as a mechanism design problem
Lecture 6: Reasoning about social laws
Lecture 7: Strategic reasoning under imperfect information
I review state-transition models of multi-agent systems. Coordination is concerned with the behaviour of a system as a whole, with the global properties of the system. To this end, I introduce the language of Computation Tree Logic (CTL), with its natural branching time semantics to model possible computations of distributed systems. I then extend CTL to Alternating-time Temporal Logic, proposed by Alur, Henzinger and Kupferman in 1998 as an `agentised' extension of CTL. I will focus on model checking, including computational complexity issues, which will be important in the following lectures.
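To make the model-checking perspective concrete, here is a minimal sketch of explicit-state CTL model checking over a Kripke structure given as a transition relation and sets of states labelled with propositions. The two-state example system, the state names and the propositions are invented for illustration; they are not from the lecture.

```python
def sat_EX(states, trans, sat_p):
    # EX p holds in s iff some successor of s satisfies p
    return {s for s in states if any(t in sat_p for t in trans.get(s, ()))}

def sat_EG(states, trans, sat_p):
    # EG p: greatest fixpoint -- start from all p-states and repeatedly
    # prune states with no successor remaining inside the set
    z = set(sat_p)
    changed = True
    while changed:
        nz = {s for s in z if any(t in z for t in trans.get(s, ()))}
        changed = nz != z
        z = nz
    return z

# Illustrative two-state system
states = {"s0", "s1"}
trans = {"s0": {"s0", "s1"}, "s1": {"s1"}}
p = {"s0", "s1"}   # proposition p holds everywhere
q = {"s1"}         # proposition q holds only in s1

print(sat_EX(states, trans, q))  # states with a q-successor
print(sat_EG(states, trans, p))  # states with an infinite p-path
```

ATL model checking follows the same fixpoint pattern, but replaces "some successor" with "some joint choice of the coalition, for all choices of the others".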
I present a by now standard framework for social laws, introduced by Shoham and Tennenholtz in the early 1990s. The framework is illustrated by examples from the robotics and communication protocols domains. A key assumption is that the designer (or analyst) of the multi-agent system has an objective, a property he or she wants the global behaviour of the system to satisfy. Key problems involving social laws are discussed, including: the effectiveness problem - will a given social law ensure the objective? The feasibility problem - given an objective, does there exist an effective social law? The synthesis problem - given an objective, construct an effective social law. I discuss different languages for expressing objectives, including Shoham and Tennenholtz's original language as well as CTL and ATL. In particular, I show how these problems can be framed as model checking problems, allowing standard model checkers from multi-agent systems to be used to solve them, and I discuss the computational complexity of the problems.

A social law is effective if it guarantees that the system objective holds. In general, there might be several effective social laws for the same multi-agent system. I will discuss some natural criteria for choosing between alternative effective social laws. In particular, I will focus on two. The first is called minimality. Intuitively, a minimal social law attempts to minimise the constraints put on the agents and thus guarantees maximal individual flexibility. The second criterion is called simplicity. Intuitively, a social law is simple if it is easy for an agent to follow it. I will discuss computational issues related to minimal and simple social laws, and the relationship between the two.
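The effectiveness problem can be sketched very directly for a safety objective: model a social law as a set of forbidden transitions, and check that no bad state is reachable once those transitions are removed. The system, the law and the "bad" state below are invented for illustration, and the sketch covers only safety objectives of the form "always not bad".

```python
def restrict(trans, forbidden):
    # Imposing a social law = deleting the forbidden transitions
    return {s: {t for t in ts if (s, t) not in forbidden} for s, ts in trans.items()}

def reachable(init, trans):
    seen, stack = set(), [init]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(trans.get(s, ()))
    return seen

def effective(init, trans, law, bad):
    # The law is effective for the objective AG(not bad) iff no bad
    # state is reachable in the restricted system
    return reachable(init, restrict(trans, law)).isdisjoint(bad)

trans = {"s0": {"s1", "bad"}, "s1": {"s0"}, "bad": set()}
law = {("s0", "bad")}  # the social law forbids entering the bad state

print(effective("s0", trans, set(), {"bad"}))  # empty law: objective fails
print(effective("s0", trans, law, {"bad"}))    # with the law: objective holds
```

The feasibility and synthesis problems then ask whether, and how, such a set of forbidden transitions can be found for a given objective.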
In many cases it might be that some agents choose not to comply with a given social law. There are many possible causes of non-compliance; it could be deliberate because the agent does not consider it to be in his best interest to comply, or it could be that a component in the system fails. In this lecture I discuss how to analyse the properties of a social law under possible non-compliance. In particular, I look at how robust the social law is, and try to identify the agents that are most important for the correct functioning of the system. We say that the social law is robust if the objective is still achieved if only a small number of the agents choose not to comply. Key problems here are: which agents are necessary, in the sense that the objective does not hold unless they comply? Does there exist a social law that is robustly feasible in the sense that compliance of a given group (or number) of agents is sufficient to ensure the objective? I further analyse the relative importance of agents by employing power indices, such as the Banzhaf index, to measure the influence an agent has on satisfaction of the objective in terms of choosing to comply with the social law or not. For example, I discuss how we can ensure that power is distributed evenly amongst the agents in a system, so as to avoid bottlenecks or single points of failure, or to understand where the key risks or vulnerabilities in a social law lie. Computational issues are discussed.
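The power-index analysis mentioned above can be sketched as follows, assuming (purely for illustration) that the objective is given as a boolean function of the set of complying agents. The Banzhaf index of an agent counts the fraction of coalitions for which that agent's compliance is pivotal; the example objective, "the objective holds iff agent 1 and at least one other agent comply", is made up.

```python
from itertools import combinations

def banzhaf(agents, holds):
    # holds(coalition) -> bool: does the objective hold if exactly
    # the agents in `coalition` comply with the social law?
    n = len(agents)
    idx = {}
    for a in agents:
        others = [b for b in agents if b != a]
        swings = 0
        for r in range(len(others) + 1):
            for coal in combinations(others, r):
                c = set(coal)
                if holds(c | {a}) and not holds(c):
                    swings += 1          # a's compliance is pivotal here
        idx[a] = swings / 2 ** (n - 1)   # fraction of coalitions where a swings
    return idx

agents = [1, 2, 3]
holds = lambda c: 1 in c and len(c) >= 2  # illustrative objective
print(banzhaf(agents, holds))
```

In this toy example agent 1 is a single point of failure (the objective never holds without it), which the index makes visible as a much larger share of the power.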
I look more closely at one particular type of possible non-compliance: deliberate non-compliance by rational, self-interested agents. Thus we shift from the perspective of the designer to the perspective of the agent, and assume that each agent also has his or her own objective. Will an agent with a given objective comply with a given social law? As satisfaction of the objective depends upon whether or not the other agents in the system comply, this is a game-theoretic scenario. Key problems here include: does there exist a social law all agents would be better off complying with (as opposed to not complying)? Does there exist a social law that is a Nash implementation, in the sense that complying forms a Nash equilibrium? Related computational issues are discussed.
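The Nash implementation condition can be sketched as a simple equilibrium check, under the (illustrative) assumption that each agent's utility is 1 if its goal holds under the compliance profile and 0 otherwise. Universal compliance is a Nash equilibrium iff no agent gains by unilaterally deviating. The goals below are invented examples.

```python
def complying_is_nash(agents, goal):
    # goal[a](profile) -> bool: does agent a's goal hold under `profile`,
    # a dict mapping each agent to True (comply) / False (deviate)?
    full = {a: True for a in agents}
    for a in agents:
        dev = dict(full)
        dev[a] = False
        # a profitable unilateral deviation breaks the equilibrium
        if goal[a](dev) > goal[a](full):
            return False
    return True

agents = [1, 2]
# each agent's goal holds iff both agents comply with the social law
goal = {a: (lambda prof: all(prof.values())) for a in agents}
print(complying_is_nash(agents, goal))
```

A negative instance is just as easy to construct: if some agent's goal holds only when it deviates, universal compliance is not an equilibrium.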
The assumption that the designer has one objective is a useful abstraction, but for some applications it is not sophisticated enough. In some situations, the designer may have multiple (possibly conflicting) objectives, with different priorities. Moreover, social laws, as well as bringing benefits, may also have implementation costs: imposing a social law often cannot be done at zero cost. In this lecture, I extend the model of social laws to take into account both the fact that the designer of a social law may have multiple differently valued objectives, and that the implementation of a social law is not cost-neutral. In this setting, designing a social law becomes an optimisation problem, in which a designer must take into account both the benefits and costs of a social law. I show how the problem of designing an optimal social law can be formulated as an integer linear program and solved by standard tools. As usual, I will discuss computational issues. In this lecture I will also consider the design of normative systems as a social choice process. Again shifting the focus from the designer to the agents, assuming that agents have sets of (possibly conflicting, differently valued) goals, I define a number of social choice functions that we might wish to implement, and characterise their computational complexity. I then consider possible mechanisms to implement these functions with a particular focus on manipulability.
In this lecture I look more closely at how we can use formal logic to reason about social laws. The perspective is twofold. First, I discuss how variants of deontic logic can be used to reason about different social laws in the context of a multi-agent system, e.g., allowing us to say that something is permitted in one social law but forbidden in another. I also introduce and discuss a symbolic model representation language for implementing social laws. This language lets us write a description of the desired behaviour of a multi-agent system separately from its possible behaviour, and allows us to study the effect of different norms on the same system. Second, I show how standard logics can be extended in order to be able to frame not only the problems discussed in lecture 2 as model checking problems, but also more sophisticated problems such as those involving robustness properties.
Strategic reasoning under imperfect information
January 31, 2012 15:30-17:00 Room 2010, 20F
The specification logics CTL and ATL introduced in lecture 1 and used in the other lectures implicitly assume that agents have perfect information about the state of the system. In this lecture I discuss how these logics can be extended to specify and verify multi-agent systems where agents have imperfect information. I introduce modal epistemic logic, and discuss how it can be combined with ATL. This combination provides a rich formalism for reasoning about knowledge and strategic ability, but also raises a number of conceptual problems. I will discuss these problems, proposed solutions, and implications for social laws.
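The epistemic ingredient of this combination can be sketched in the same explicit-state style as before: an agent's imperfect information is modelled as an indistinguishability relation over states, and the agent knows p in a state iff p holds in every state it cannot distinguish from it. The three-state example and the partition are invented for illustration.

```python
def sat_K(states, indist, sat_p):
    # K_a p holds in s iff p holds in every state agent a cannot
    # distinguish from s (indist[s] is s's equivalence class)
    return {s for s in states if all(t in sat_p for t in indist[s])}

states = {"s0", "s1", "s2"}
indist = {"s0": {"s0", "s1"}, "s1": {"s0", "s1"}, "s2": {"s2"}}  # a confuses s0 and s1
p = {"s0", "s1"}

print(sat_K(states, indist, p))  # agent a knows p in s0 and s1
```

Combining this operator with ATL's coalition modalities is where the conceptual problems arise: for example, whether a strategy must be uniform, i.e., prescribe the same action in indistinguishable states.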