Advances in Preference Handling

Multidisciplinary Working Group affiliated to EURO

Windmill on Thera

This year’s meeting of the working group will be at ADT 2021, which is planned for Toulouse, France, on Nov. 3-5. Abstracts are due by April 30.

Apr 7, 2021

EXPLANATIONS AND PREFERENCES


Explainable artificial intelligence (XAI) has become a popular topic, and we may ask which kinds of interconnections exist between preferences and explanations. Explanations have been considered in artificial intelligence for a long time, as an intelligent system should not only be able to solve problems, but also be capable of explaining its solutions. Explanations are particularly important if the solution proposed by an intelligent system is not convenient for the human user. The explanation should allow the user to identify the elements on which the solution is based. The user can then provide additional information to the system that alters some of these elements, which in turn will trigger the generation of a new solution. Explanations thus have a clear purpose and should permit a revision of the problem.

As this idea is quite general, it has been pursued in many areas of artificial intelligence and applied to different kinds of systems. Many different kinds of explanations have been proposed, as well as methods to compute them. Some approaches provide ad-hoc methods for finding explanations, whereas others give an explicit formulation of an explanation problem. Given a problem to solve and a solution to this problem, we can indeed formulate a second problem that consists in finding an explanation for the given solution of the first problem.

And this brings us back to the initial question about possible interconnections between preferences and explanations. Many problems have multiple solutions, and a convenient way to choose one of them is to define a preference relation over the solution space. We can apply the same principle to explanation problems: there may be many explanations, and a convenient way to choose one of them is to define a preference relation over the space of possible explanations. This leads to the notion of a preferred explanation, and many approaches to explainable AI use some criterion to evaluate the explanations with the purpose of choosing a best one.
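
To illustrate, here is a minimal sketch of our own (not taken from any particular system): given a set of candidate explanations and a strict preference relation over them, a preferred explanation is simply a maximal element of the candidate set.

```python
def preferred_explanations(candidates, prefers):
    """Return the maximal candidates w.r.t. the (possibly partial)
    strict relation prefers(a, b), read as 'a is preferred to b'."""
    return [e for e in candidates
            if not any(prefers(other, e) for other in candidates if other is not e)]

# Illustrative criterion: an explanation that uses strictly fewer
# elements is preferred (sets compare by strict inclusion with <).
candidates = [{"rain", "no_umbrella"}, {"rain"}, {"storm", "wind"}]
print(preferred_explanations(candidates, lambda a, b: a < b))
# -> [{'rain'}, {'storm', 'wind'}]  (both are maximal; a best one is either)
```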

Rather than using a static criterion such as size or cost, a user may provide preference information about the elements of the explanations, which is then used to define a preference relation over explanations. If the explanation is not convenient, the user may provide additional preference information, thus triggering the generation of a new explanation. This leads to an even richer user interaction than sketched above. Given an explanation, the user may provide either additional information concerning the original problem, which will trigger the generation of a new solution, or additional information about the corresponding explanation problem, which will trigger the generation of a new explanation.
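
This two-level interaction could be organized roughly as follows; again only a sketch, where solve, explain, and ask_user stand for hypothetical components of the system.

```python
from collections import namedtuple

# kind is "accept", "revise_problem", or "revise_expl_prefs"
Feedback = namedtuple("Feedback", ["kind", "payload"])

def interactive_session(problem, expl_prefs, solve, explain, ask_user):
    """Sketch of the two-level loop: the user may revise the original
    problem (new solution) or the explanation preferences (new explanation)."""
    while True:
        solution = solve(problem)
        while True:
            explanation = explain(problem, solution, expl_prefs)
            feedback = ask_user(solution, explanation)
            if feedback.kind != "revise_expl_prefs":
                break
            expl_prefs = feedback.payload       # same solution, new explanation
        if feedback.kind == "accept":
            return solution, explanation
        problem = feedback.payload              # revised problem, new solution
```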

So far, we have seen that preferences may be used to choose explanations, which gives a first interconnection between the two topics. A second interconnection is obtained by doing the inverse, namely by using explanations for problems that involve preferences. For example, the original problem may be a decision-making problem, whose solution is a decision that is a best one according to the given preferences. As described above, a user may ask for an explanation of this solution of the decision-making problem. This explanation will give the reasons why the other decisions have not been chosen. If some other decision is preferred to the chosen decision, then the explanation should describe why the choice of this other decision was infeasible. If some other decision is less preferred than the chosen decision, then the explanation should simply state this preference. Finally, if some other decision is incomparable to the chosen decision, or there is indifference between the two, then the explanation will reveal that the system simply picked one of these alternatives. If the proposed decision is inconvenient for the user, then the user may change some of the preferences reported in the explanation and thus trigger the recommendation of a new decision by the system.
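
This case analysis can be written down directly. The following sketch, with purely illustrative data and function names of our choosing, assembles such an explanation of optimality.

```python
def explain_choice(chosen, alternatives, prefers, infeasible):
    """Justify `chosen` against each alternative.  prefers(a, b) is a
    strict preference 'a over b'; infeasible(d) returns a reason string
    if decision d is infeasible, and None otherwise."""
    explanation = {}
    for d in alternatives:
        if d == chosen:
            continue
        if prefers(d, chosen):
            # A more preferred decision can only have been rejected as infeasible.
            explanation[d] = f"preferred, but infeasible: {infeasible(d)}"
        elif prefers(chosen, d):
            explanation[d] = f"less preferred than {chosen}"
        else:
            # Incomparable or indifferent: the system simply picked one.
            explanation[d] = f"neither is preferred; the system picked {chosen}"
    return explanation

# Illustrative data: plane > train > car, bike incomparable to the rest.
order = {("plane", "train"), ("plane", "car"), ("train", "car")}
print(explain_choice("train", ["plane", "train", "car", "bike"],
                     prefers=lambda a, b: (a, b) in order,
                     infeasible=lambda d: "no airport nearby" if d == "plane" else None))
```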

Preferences thus are crucial elements of explanations of optimality, and they identify the kind of information that the user may change if the proposed decision is inconvenient. However, not all the preferences reported in an explanation may be revisable by the user, as some of them may have been entailed by more elementary preferences. In that case we encounter yet another problem, namely that of deriving a preference relation from elementary preference information according to a preference model. As before, the user may ask for explanations of the derived preferences, for example why one decision is preferred to another. The explanation should identify the elementary preferences involved, which can then be changed by the user if the derived preference is not convenient.
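
As one concrete instance (our own illustration, assuming the simplest preference model, in which the derived relation is the transitive closure of the elementary statements), a derived preference can be explained by a chain of elementary preferences:

```python
from collections import defaultdict, deque

def explain_derived(elementary, a, b):
    """If 'a is preferred to b' follows by transitivity from the
    elementary pairs, return a shortest entailing chain (via BFS);
    otherwise return None."""
    succ = defaultdict(list)
    for x, y in elementary:
        succ[x].append(y)
    parent, queue = {a: None}, deque([a])
    while queue:
        x = queue.popleft()
        if x == b:                        # walk back to reconstruct the chain
            chain = []
            while parent[x] is not None:
                chain.append((parent[x], x))
                x = parent[x]
            return chain[::-1]
        for y in succ[x]:
            if y not in parent:
                parent[y] = x
                queue.append(y)
    return None

elementary = [("a", "b"), ("b", "c"), ("c", "d")]
print(explain_derived(elementary, "a", "c"))  # -> [('a', 'b'), ('b', 'c')]
```

The returned chain exposes exactly the revisable information: the user may change any elementary pair in it to overturn the derived preference.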

This very general discussion shows that there are many interconnections between explanations and preferences, on several levels. Given some problem, we have seen that we may define a preference relation over its solution space and that deriving this preference relation from user-specified information may itself be another problem. Furthermore, we have seen that we can set up an explanation problem once a solution of the original problem is given. Hence, we can define an explanation problem and a preference-modeling problem for each original problem, and we can repeat this principle for these dependent problems as well. The literature on preference handling shows some concrete examples of these interconnections between explanations and preferences, and we hope that more will be explored at future events.

This year, the working group on Advances in Preference Handling will meet at the Seventh International Conference on Algorithmic Decision Theory ADT 2021, which will be held on November 3-5, 2021 in Toulouse, France. The program chairs are David Rios Insua and Dimitris Fotakis, and the local organizing committee consists of Umberto Grandi, Sylvie Doutre, Laurent Perrussel, Pascale Zaraté, and Rachael Colley. The abstract submission deadline is April 30, 2021, and full papers are due one week later. Like the previous editions, ADT 2021 will cover a broad range of topics including computational social choice, decision analysis, game theory, machine learning and adversarial machine learning, multi-agent systems, multiple criteria decision aiding, optimization, preference modeling, risk analysis and adversarial risk analysis, utility theory, and more.

Whereas ADT 2021 is planned as a physical meeting, most of the other events of interest to the preference handling community will be held virtually. These include the 20th International Conference on Autonomous Agents and Multiagent Systems AAMAS 2021 on May 3-7, 2021, as well as the Fourth AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society AIES 2021 on May 19-21, 2021.

Moving forward in time, the next two events are planned as hybrid events, allowing both physical and online presence. This format has been chosen for the 8th International Workshop on Computational Social Choice COMSOC-2021, which will be held in Haifa, Israel on June 7-10, 2021. Larger conferences will also try out this format: the 31st European Conference on Operational Research EURO 2021 will be held as a hybrid conference on July 11-14, 2021 in Athens, Greece.

Some weeks later, the 30th International Joint Conference on Artificial Intelligence IJCAI 2021 is planned to be held in Montreal, Canada on August 21-26, 2021.

We conclude this article with a brief report on last year’s events. Due to the pandemic, all of them were held digitally, including the 12th Multidisciplinary Workshop on Advances in Preference Handling M-PREF 2020, which had been planned for Santiago de Compostela, Galicia, Spain, and the Fifth Workshop on “From Multiple Criteria Decision Aid to Preference Learning” DA2PL 2020, which had been planned for Trento, Italy.

M-PREF 2020 was held on August 29, 2020 as an ECAI 2020 workshop, and we are grateful to the ECAI organizers and their partners for their great support and for making workshop participation free of charge. Running this workshop digitally was a new experience and a quite demanding task, but thanks to good preparation, excellent talks, and interesting questions from the audience, everything went well. You may find a detailed report about the workshop in the December issue of the IFORS Newsletter. ECAI also featured a nice social program, providing many insights into Santiago de Compostela and the Spanish region of Galicia, leaving us all eager to visit this wonderful city and region once this is possible again.

CALENDAR

MAY 2021
AAMAS 2021, virtual, May 3-7
AIES 2021, virtual, May 19-21

JUNE 2021
COMSOC-2021, Haifa, Israel (hybrid), June 7-10

JULY 2021
EURO 2021, Athens, Greece (hybrid), July 11-14

AUG 2021
IJCAI 2021, Montreal, Canada, August 21-26

NOV 2021
ADT 2021, Toulouse, France, November 3-5

LinkedIn Group on Preference Handling
Mail to the Working Group on Advances in Preference Handling