Advances in Preference Handling

Multidisciplinary Working Group affiliated to EURO

Human-centered AI needs to be able to provide advice that meets user preferences even when those preferences change. This may require revising some of the choices.

Mail to the Working Group on Advances in Preference Handling
LinkedIn Group on Preference Handling

Oct 5, 2019

HOW TO REVISE THE CHOICES?

Decision policies are everywhere in the society of the early 21st century. They are used for a large variety of tasks, ranging from loan approval, shopping recommendation, and simple query answering in chatbots to the configuration of cars and PCs. They may be based on explicit rules, extrapolated from data, or formulated as constrained optimization problems. However, even though these policies differ in their mathematical nature, they all have one thing in common: they map each case to a single decision in a way that depends only on the characteristics of the case. This deterministic decision may or may not be convenient. In the following, we discuss what happens when it is not convenient for everybody.
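
To make the notion of a fixed policy concrete, here is a minimal sketch in Python of a rule-based policy for the loan-approval task; the rules, thresholds, and case attributes are invented purely for illustration.

    # Hypothetical rule-based decision policy: every case with the same
    # characteristics is mapped to the same single decision, independently
    # of any preference information supplied at decision time.
    def loan_policy(case: dict) -> str:
        if case["debt_ratio"] > 0.45:
            return "reject"
        if case["income"] >= 30_000 and case["years_employed"] >= 2:
            return "approve"
        return "manual review"

    print(loan_policy({"income": 42_000, "debt_ratio": 0.30, "years_employed": 5}))
    # -> approve

Whoever submits a case with these characteristics always receives the same answer; there is no way to pass a preference or an objection into the function.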

Let us consider a scenario where such a policy makes a personalized decision, such as recommending a promotional item. We suppose here that no explicit preference information is supplied as input to the policy. Instead, the policy has been conceived as a personalized policy and thus has our preference information baked in:

  • If the policy is based on rules, then each rule must recommend a preferred decision for all the cases to which it applies. 
  • If the policy has been extrapolated from data, then those data need to pair cases with the decisions preferred in those cases.
  • If the policy is formulated as a constrained optimization problem, then the optimization objective must be based on our preferences (see the sketch below).
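
As an illustration of the last point, here is a minimal Python sketch of the optimization variant; the catalogue of promotional items, the budget constraint, and the weights of the utility function are all invented, the weights standing in for the baked-in preference trade-off between rating and price.

    # Hypothetical personalized policy as constrained optimization: the user's
    # preferences are frozen inside the objective's weights.
    items = [
        {"name": "headphones", "price": 60, "rating": 4.6},
        {"name": "e-reader",   "price": 90, "rating": 4.8},
        {"name": "smart bulb", "price": 25, "rating": 3.5},
    ]
    BUDGET = 80  # hard constraint on the promotional item's price

    def utility(item):
        # Baked-in preference: how much a unit of price is worth in rating points.
        return item["rating"] - 0.02 * item["price"]

    def recommend(items):
        feasible = [i for i in items if i["price"] <= BUDGET]
        return max(feasible, key=utility)

    print(recommend(items)["name"])  # -> headphones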

According to this approach, the policy should respect our preferences. This method works if the user preferences are exhaustively known and invariant over time. Indeed, economists usually assume that user preferences are fixed and that only the available options may change. However, the elicited preferences may be incorrect or incomplete. Preferences may be conditional and depend on hidden factors that can change. Moreover, our preferences themselves may change over longer time periods. Hence, even a personalized policy may make recommendations that we do not like, or that we no longer like. What can be done? Is it possible to revise the decision in light of new preference information?

As the policy does not accept any complaint or objection from the user as input, it would be necessary to revise the whole policy such that it no longer maps the given case to the undesired choice, but to the decision we now prefer. Revising the policy based on a single user critique requires different techniques depending on the mathematical nature of the policy:

  • If the policy is formulated as a constrained optimization problem, it is sufficient to revise the objective.
  • If it has been extrapolated from data, new data need to be acquired and the learning process has to be redone from the beginning.
  • If it is represented by rules, some of the rules need to be revised or overridden by more specific rules, as in the sketch below.
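
To illustrate the rule-based case, here is a sketch of how the hypothetical loan policy shown earlier could be revised after a single user objection: the new preference can only be honoured by editing the policy itself and redeploying it. The added rule and its thresholds are again invented.

    # Revised version of the hypothetical rule-based policy: a more specific
    # rule is inserted in front of the original ones to override them.
    def loan_policy_v2(case: dict) -> str:
        # New rule added after a user objection: a long employment history
        # now outweighs a moderately high debt ratio.
        if case["years_employed"] >= 10 and case["debt_ratio"] <= 0.55:
            return "approve"
        # Original rules, unchanged:
        if case["debt_ratio"] > 0.45:
            return "reject"
        if case["income"] >= 30_000 and case["years_employed"] >= 2:
            return "approve"
        return "manual review"

    print(loan_policy_v2({"income": 40_000, "debt_ratio": 0.50, "years_employed": 12}))
    # -> approve (the original policy would have rejected this case)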

In all cases, revising the policy appears to be a costly process, even when only a single user preference has been added or changed. It should be noted that the relationship between the preferences and the policy is complex. The preferences have been “baked” into the policy in some way, but it is not evident which part of the policy is impacted by which preference. As a consequence, the change of a single preference may impact several parts of the policy, in ways that are far from obvious.

Although personalized policies may thus respect our preferences, they are difficult to revise when our preferences evolve. A better approach is to use decision-making methods that take explicit preference statements as input and that are able to revise their decisions when new preferences are acquired. The idea is to add flexibility to decision-making methods so that they can react to user feedback and change their recommendations in response. Instead of revising a fixed decision-making method, we want a flexible decision-making method that is capable of revising its decisions.

Of course, such a flexible decision-making method needs a representation of preferences as well as ways to acquire them from the user and to reason about them in order to recommend decisions that best satisfy these preferences. This is exactly the topic of preference handling. Preference-based methods are able to take user feedback in the form of new preference statements into account, to adapt their internal preference representations accordingly, and to make revised recommendations in turn. They are thus able to react to user feedback immediately, without going through a costly policy revision process. This revision capability is therefore a major advantage of preference-based methods over fixed policies like those listed at the beginning of this article. It is worth highlighting this capability when presenting preference-based methods to a general audience.
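
By way of contrast, here is a minimal sketch of such a preference-based method: explicit pairwise preference statements are the input, the recommendation is derived from their transitive closure, and a new statement revises the recommendation immediately, without rebuilding anything. The options and preference statements are invented for illustration.

    # Hypothetical preference-based recommender over a small set of options.
    options = ["headphones", "e-reader", "smart bulb"]

    def dominates(prefs, a, b):
        """True if a is preferred to b under the transitive closure of prefs."""
        reachable = {a}  # a itself plus everything a is (transitively) preferred to
        changed = True
        while changed:
            changed = False
            for better, worse in prefs:
                if better in reachable and worse not in reachable:
                    reachable.add(worse)
                    changed = True
        return b != a and b in reachable

    def recommend(options, prefs):
        """Return the options that no other option dominates."""
        return [o for o in options
                if not any(dominates(prefs, other, o) for other in options)]

    prefs = [("e-reader", "smart bulb")]      # preference statements elicited so far
    print(recommend(options, prefs))          # -> ['headphones', 'e-reader']

    prefs.append(("headphones", "e-reader"))  # new user feedback
    print(recommend(options, prefs))          # -> ['headphones'] (revised at once)

The revision step here is nothing more than re-evaluating the same preference relation over the same options; nothing comparable to retraining or redeploying a policy is needed.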

With this in mind, we give a quick outlook on forthcoming events related to preference handling. First up is the 6th International Conference on Algorithmic Decision Theory ADT 2019, which will be held at Duke University, Durham, NC, USA on October 25-27, 2019. The program covers topics such as computational social choice, decision analysis, game theory, multi-agent systems, multi-criteria decision aiding, multi-objective optimization, preference aggregation, and preference elicitation.

Preference handling is also a topic at major conferences on Artificial Intelligence. The first one in 2020 is the Thirty-Fourth AAAI Conference on Artificial Intelligence AAAI-20, which will be held on February 7-12 in New York, USA. Some months later, the 24th European Conference on Artificial Intelligence ECAI 2020 will follow on June 8-12 in Santiago de Compostela, a historic city in Galicia, Spain. Abstracts for the ECAI main program are expected by November 15 and full papers by November 19, 2019. One month later, the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence IJCAI-PRICAI 2020 will take place on July 11-17 in Yokohama, Japan. Abstracts are due by January 15, 2020 and full papers by January 21, 2020.

Furthermore, there are several specialized conferences of interest to the preference handling community. Several research groups, including LAMSADE and DIMACS, will organize a workshop on Social Responsibility of Algorithms SRA 2019 on December 12-13, 2019 at the University Paris Dauphine. The workshop will investigate ethical questions concerning fairness, trust, privacy, and liability with regard to algorithmic decision making. Another topic is the explicability and interpretability of algorithms. These topics will also be the focus of the Third AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society AIES 2020, which constitutes a multi-disciplinary effort to address ethical questions about AI systems by bringing together experts from computer science and diverse social sciences. AIES will be held on February 7-8, 2020 in New York, USA in conjunction with AAAI. Submissions are expected by November 4, 2019.

Finally, we would like to mention the 8th International Workshop on Computational Social Choice COMSOC-2020, which will take place at the Weizmann Institute of Science in Rehovot, Israel, on July 26-29, 2020. The paper submission deadline is April 1, 2020.

LinkedIn Group on Preference Handling
Mail to the Working Group on Advances in Preference Handling