- Lecture 3
Dealing with non-compliance
December 27, 2011 15:30-17:00 Room 2001, 20F
- In many cases, some agents may choose not to comply with a given social law. Non-compliance can have many causes: it may be deliberate, because an agent does not consider compliance to be in his best interest, or accidental, because a component in the system fails. In this lecture I discuss how to analyse the properties of a social law under possible non-compliance. In particular, I look at how robust the social law is, and try to identify the agents that are most important for the correct functioning of the system. We say that a social law is robust if the objective is still achieved when only a small number of agents choose not to comply. Key problems here are: which agents are necessary, in the sense that the objective does not hold unless they comply? Does there exist a social law that is robustly feasible, in the sense that compliance of a given group (or number) of agents is sufficient to ensure the objective? I further analyse the relative importance of agents by employing power indices, such as the Banzhaf index, to measure the influence an agent has on satisfaction of the objective through his choice of whether or not to comply with the social law. For example, I discuss how we can ensure that power is distributed evenly amongst the agents in a system, so as to avoid bottlenecks or single points of failure, or to understand where the key risks or vulnerabilities in a social law lie. Computational issues are discussed.
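The Banzhaf index mentioned above can be computed by counting, for each agent, the coalitions of the other agents in which that agent's compliance is pivotal for the objective. A minimal brute-force sketch in Python; the three-agent objective `obj` is a hypothetical example for illustration, not one from the lectures:

```python
from itertools import combinations

def banzhaf(agents, objective):
    """Banzhaf index of each agent w.r.t. an objective that maps a
    set of compliant agents to True/False (assumed monotone)."""
    index = {}
    for a in agents:
        others = [b for b in agents if b != a]
        swings = total = 0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                total += 1
                s = set(coalition)
                # a is pivotal if adding its compliance flips the objective
                if objective(s | {a}) and not objective(s):
                    swings += 1
        index[a] = swings / total
    return index

# Hypothetical objective: the system works iff agent 0 complies
# together with at least one of agents 1 and 2.
obj = lambda s: 0 in s and (1 in s or 2 in s)
print(banzhaf([0, 1, 2], obj))  # -> {0: 0.75, 1: 0.25, 2: 0.25}
```

Agent 0 here is a single point of failure (it is pivotal in three of four coalitions), exactly the kind of vulnerability the power-index analysis is meant to expose. Note the enumeration is exponential in the number of agents, which foreshadows the computational issues discussed in the lecture.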
- Lecture 4
Coordinating self-interested agents
- January 10, 2012 15:30-17:00 Room 2001, 20F
- I look more closely at one particular type of possible non-compliance: deliberate non-compliance by rational, self-interested agents. Thus we shift from the perspective of the designer to the perspective of the agent, and assume that each agent also has his own objective. Will an agent with a given objective comply with a given social law? Since satisfaction of that objective depends upon whether or not the other agents in the system comply, this is a game-theoretic scenario. Key problems here include: does there exist a social law that all agents would be better off complying with (as opposed to not complying)? Does there exist a social law that is a Nash implementation, in the sense that universal compliance forms a Nash equilibrium? Related computational issues are discussed.
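The Nash-implementation question can be made concrete with a simple check: universal compliance forms a Nash equilibrium if no agent can satisfy his own objective by unilaterally deviating while everyone else complies. A sketch under the simplifying assumption that an agent's utility is just whether his objective is met; the two-agent example is hypothetical:

```python
def compliance_is_nash(agents, objectives):
    """True iff universal compliance with the social law is a Nash
    equilibrium.  objectives[a](S) is True iff agent a's objective
    is met when exactly the agents in S comply."""
    everyone = set(agents)
    for a in agents:
        u_comply = objectives[a](everyone)
        u_deviate = objectives[a](everyone - {a})
        if u_deviate and not u_comply:
            return False  # a strictly prefers to deviate
    return True

# Hypothetical 2-agent example: each agent's objective requires the
# *other* agent to comply, so neither gains by deviating alone.
objs = {1: lambda s: 2 in s, 2: lambda s: 1 in s}
print(compliance_is_nash([1, 2], objs))  # True
```

The check only rules out unilateral deviations, as Nash equilibrium requires; whether agents are *better off* complying than under no law at all is the separate first question above.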
- Lecture 5
Social law design as an optimisation problem, and as a mechanism design problem
January 17, 2012 15:30-17:00 Room 2010, 20F
- The assumption that the designer has a single objective is a useful abstraction, but for some applications it is not sophisticated enough. In some situations, the designer may have multiple (possibly conflicting) objectives, with different priorities. Moreover, as well as bringing benefits, social laws have implementation costs: imposing a social law can rarely be done at zero cost. In this lecture, I extend the model of social laws to take into account both the fact that the designer of a social law may have multiple, differently valued objectives, and the fact that implementing a social law is not cost-neutral. In this setting, designing a social law becomes an optimisation problem, in which a designer must weigh the benefits of a social law against its costs. I show how the problem of designing an optimal social law can be formulated as an integer linear program and solved with standard tools. As usual, I will discuss computational issues. In this lecture I will also consider the design of normative systems as a social choice process. Again shifting the focus from the designer to the agents, and assuming that agents have sets of (possibly conflicting, differently valued) goals, I define a number of social choice functions that we might wish to implement, and characterise their computational complexity. I then consider possible mechanisms for implementing these functions, with a particular focus on manipulability.
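The optimisation framing can be illustrated with a toy brute-force search over candidate social laws. It maximises the same net objective an integer linear program would encode (total value of the objectives achieved minus total implementation cost), but by enumeration rather than with an ILP solver; all names and numbers below are illustrative assumptions:

```python
from itertools import combinations

def best_social_law(constraints, cost, achieved, values):
    """Exhaustively search for the social law (set of constraints)
    maximising value of achieved objectives minus implementation cost.
    cost[c]: cost of imposing constraint c.
    achieved(S): set of designer objectives that hold under law S.
    values[o]: value of objective o to the designer."""
    best, best_net = frozenset(), float("-inf")
    for r in range(len(constraints) + 1):
        for s in combinations(constraints, r):
            s = frozenset(s)
            net = (sum(values[o] for o in achieved(s))
                   - sum(cost[c] for c in s))
            if net > best_net:
                best, best_net = s, net
    return best, best_net

# Hypothetical instance: constraint 'a' alone achieves objective o1;
# 'a' and 'b' together additionally achieve o2, but 'b' is expensive.
def achieved(s):
    out = set()
    if 'a' in s:
        out.add('o1')
    if 'a' in s and 'b' in s:
        out.add('o2')
    return out

law, net = best_social_law(['a', 'b'], {'a': 1, 'b': 3}, achieved, {'o1': 5, 'o2': 2})
print(law, net)  # frozenset({'a'}) 4
```

Note that the costly constraint 'b' is dropped even though it would achieve a further objective: its value (2) does not cover its cost (3). An ILP encoding replaces this exponential enumeration with binary decision variables for constraints and objectives.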
- Lecture 6
Reasoning about social laws
January 24, 2012 15:30-17:00 Room 2010, 20F
- In this lecture I look more closely at how we can use formal logic to reason about social laws. The perspective is twofold. First, I discuss how variants of deontic logic can be used to reason about different social laws in the context of a multi-agent system, e.g., allowing us to say that something is permitted in one social law but forbidden in another. I also introduce and discuss a symbolic model representation language for implementing social laws. This language lets us write a description of the desired behaviour of a multi-agent system separately from its possible behaviour, and allows us to study the effect of different norms on the same system. Second, I show how standard logics can be extended in order to be able to frame not only the problems discussed in lecture 2 as model checking problems, but also more sophisticated problems such as those involving robustness properties.
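One way the verification problems above become model checking problems can be sketched concretely: imposing a social law amounts to deleting forbidden transitions from the system, after which checking the objective reduces to a reachability check over the restricted system, here an AG-style invariant. A minimal sketch with a hypothetical four-state system:

```python
from collections import deque

def ag_holds(initial, transitions, good):
    """Check AG(good): every state reachable from the initial states,
    under the transitions the social law permits, satisfies the
    objective predicate `good`.  The social law is modelled simply by
    deleting forbidden transitions before the check."""
    seen, frontier = set(initial), deque(initial)
    while frontier:
        s = frontier.popleft()
        if not good(s):
            return False
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True

# Hypothetical system: state 3 violates the objective, and the social
# law forbids the transition 1 -> 3 that could otherwise reach it.
lawful = {0: [1], 1: [2], 2: [0]}
print(ag_holds([0], lawful, lambda s: s != 3))  # True
```

Robustness-style properties extend this idea: instead of checking one restricted system, one checks the systems obtained when various subsets of agents ignore the law.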
- Lecture 7
Strategic reasoning under imperfect information
January 31, 2012 15:30-17:00 Room 2010, 20F
- The specification logics CTL and ATL introduced in lecture 1 and used in the other lectures implicitly assume that agents have perfect information about the state of the system. In this lecture I discuss how these logics can be extended to specify and verify multi-agent systems where agents have imperfect information. I introduce modal epistemic logic, and discuss how it can be combined with ATL. This combination provides a rich formalism for reasoning about knowledge and strategic ability, but also raises a number of conceptual problems. I will discuss these problems, proposed solutions, and implications for social laws.
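The epistemic side can be illustrated with the standard semantics of the knowledge operator K_a: an agent knows a proposition at a state iff the proposition holds in every state the agent cannot distinguish from it. A minimal sketch; representing the indistinguishability relation as explicit equivalence classes is an assumption made here for illustration:

```python
def knows(agent, state, indist, prop):
    """Modal knowledge operator K_a: agent knows `prop` at `state` iff
    prop holds throughout the agent's indistinguishability class.
    indist[agent] is a list of sets of states (equivalence classes)."""
    for cell in indist[agent]:
        if state in cell:
            return all(prop(s) for s in cell)
    return prop(state)  # state in no listed class: perfect information there

# Hypothetical model: agent 'a' cannot tell states s0 and s1 apart.
indist = {'a': [{'s0', 's1'}]}
print(knows('a', 's0', indist, lambda s: s in {'s0', 's1'}))  # True
print(knows('a', 's0', indist, lambda s: s == 's0'))          # False
```

The second query fails because s1 remains epistemically possible for the agent. This is the root of the conceptual problems the lecture discusses: an agent's strategy under imperfect information can only depend on such equivalence classes, not on the actual state.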