Advances in Preference Handling

Multidisciplinary Working Group affiliated to EURO

The next meeting of the working group will be at ADT 2019, which will be held in Durham, NC, USA on October 25-27, 2019. Abstracts are due by April 14.

Mar 2, 2019

MACHINE LEARNING: WHERE ARE THE CHOICES?

The world wide web touches a profound, elementary human need: the need to receive, emit, and display information. This was unexpected. Technologists often struggle to realize their dreams and rarely ask what societal impact their technology will have once it works. As such, even the inventors of the world wide web cannot fully explain why it has had such a deep impact.

While we are still trying to understand the reasons for this profound societal impact, the next technological wave is arriving. Multi-layer perceptrons have been rebranded as deep neural networks and have started to produce unexpectedly high accuracy when trained with huge amounts of data in the right way. Seven years ago, only a few mastered this technology; nowadays many industrial and political leaders are putting machine learning on their agenda. While the world wide web has given new communication capabilities to the individual, machine learning promises to provide new capabilities to those who govern.

Machine learning is applied with great success to classification and prediction tasks. But does this mean that it can be used for decision making in the same way? In this article, we argue that there is a fundamental difference between using machine learning for predicting behavior and using it for making decisions. Whereas the technology as such may be agnostic with respect to this concern, societal aspects such as accountability are not. Indeed, the technology in its current form is not accountable: it neither envisions the consequences of decisions nor allows an easy revision of them.

What are the reasons for this?

Classification is a task where a given case is mapped to a category. The system may produce a wrong result. If a user criticizes this result, all the user can do is provide the expected classification of the case. This new information allows a relearning of the classifier so that it classifies the case correctly. The user does not need to understand why the system produced the wrong result; it is the task of the designer of the learning algorithm to make sure that the learned system behaves in the intended way. This is because classification is considered a task that is unaffected by any preferences the user may have. After all, a photo either shows a cat or it does not, so no preference information can influence the classification result.
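
To make this relearning loop concrete, here is a minimal sketch (our own illustration with a scikit-learn-style classifier; the toy features and labels are assumed for the example, not taken from the text):

    from sklearn.linear_model import LogisticRegression

    # Toy training data: feature vectors with labels (1 = "cat", 0 = "no cat").
    X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
    y = [1, 1, 0, 0]
    clf = LogisticRegression().fit(X, y)

    # A user criticizes a result and provides the expected classification.
    # Note that the user never has to explain *why* the result was wrong.
    case, expected_label = [0.55, 0.45], 1

    # Relearning: add the corrected case to the data and retrain,
    # so that the classifier now handles this case correctly.
    X.append(case)
    y.append(expected_label)
    clf = LogisticRegression().fit(X, y)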

The situation is different for decision making. A policy that makes a decision for a given case needs to respect the preferences of multiple stakeholders. Information about those preferences is usually incomplete, meaning that the decision-making process should be interactive. Indeed, it may be outright impossible to get a complete picture of the preferences of each stakeholder. Those preferences may not even exist a priori, but may be constructed during the decision-making process. Consequently, decision making needs to be revisable, and in this respect it differs from classification.

Two scenarios may arise when a decision is made for a particular case:

  • All stakeholders are satisfied by the recommendation and accept it.
  • One or more stakeholders are not satisfied and ask for explanations. Basically, they want to know the choices on which the recommendation is based so that they can challenge some of those choices. For example, they may argue that there are better choices and give the reason for this in the form of preferences between choices.

This is the essence of accountability: if the results are not acceptable, a different action should have been taken, and this requires the ability to change the choices. Accountability is one of the topics addressed by the AI, Ethics, and Society Conference, among others, and is also at the heart of many ongoing initiatives such as the Ethics Guidelines for Trustworthy AI proposed by the EU High-Level Expert Group on AI.

Changing the choices is relatively easy if the system makes its choices explicitly. However, the now-popular neural networks simply apply a function that has been learned from a huge amount of data. Explanations should spot the elements that can be changed if the result is not satisfactory. So which elements of a neural network can be changed? Which elements are based on a choice?

Of course, there are numerous choices at different levels that ultimately determine the recommendation made by a neural network. Many of those choices are made by the designer of the network, including choices about the architecture, the form of the mathematical model, and the hyper-parameters. Other choices are made by the learning algorithm (e.g., the direction of gradient descent). Finally, there are the choices that inform the learning algorithm what to learn. We are mainly interested in the latter.
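
As an illustration of these levels, the following sketch (a hypothetical setup of ours, using scikit-learn's MLPClassifier) marks where each kind of choice enters a typical supervised-learning script:

    from sklearn.neural_network import MLPClassifier

    # Choices that inform the learning algorithm *what* to learn:
    # the labels attached to the training data.
    X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
    y = [0, 1, 1, 1]  # each label is, potentially, a choice

    clf = MLPClassifier(
        hidden_layer_sizes=(8, 8),  # designer's choice: architecture
        activation="relu",          # designer's choice: mathematical model
        solver="sgd",               # designer's choice: learning algorithm
        learning_rate_init=0.01,    # designer's choice: hyper-parameter
        random_state=0,             # pins down the algorithm's internal choices
        max_iter=2000,
    )
    # During fitting, the learning algorithm makes its own choices,
    # e.g. the direction and size of each gradient-descent step.
    clf.fit(X, y)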

As far as supervised learning is concerned, these choices about the expected behavior are in the data:

  • There are tasks where the data labels have the character of observations of something that is defined elsewhere. As a consequence, the learning will result in a classifier that has the character of an empirical law.
  • There are other tasks where the data labels have the character of choices. As a consequence, the learning will result in a decision policy.

Choices made for some fixed case may change, whereas observations of a fixed case should not. As such, the second learning task is subject to change, whereas the first one is not. In particular, if the neural network does not produce the expected behavior, it will not be sufficient to change the choices of the learning algorithm and those of the designer; it will be necessary to change the choices in the data. In other words, as long as the data are wrong, the learning algorithm will not be able to produce any neural network that disregards these wrong data. Hence, it will not be possible to change the recommendations made by the learned system unless the choices in the data that caused a given recommendation can be identified.

How can those choices be found? This is an important question, which could be addressed by analysing machine learning from a decision-theoretic perspective!
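
As a naive baseline, and purely as our own illustration (not a method proposed here), one can retrain the model with each training label left out and record which removals flip the recommendation for the case at hand:

    from sklearn.linear_model import LogisticRegression

    def pivotal_choices(X, y, case):
        """Indices of training labels whose removal flips the model's
        recommendation for `case` (naive leave-one-out retraining)."""
        base = LogisticRegression().fit(X, y).predict([case])[0]
        pivotal = []
        for i in range(len(X)):
            # Retrain without example i; assumes every class still occurs
            # in the remaining data.
            X_i, y_i = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
            if LogisticRegression().fit(X_i, y_i).predict([case])[0] != base:
                pivotal.append(i)  # label i helped cause this recommendation
        return pivotal

Retraining once per training example is clearly infeasible at scale, which underlines why a more principled, decision-theoretic account of the choices in the data would be valuable.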

With this in mind, we give a rapid outlook on this year's events related to preference handling. First of all, we would like to mention the 6th International Conference on Algorithmic Decision Theory ADT 2019, which will serve as this year's meeting of the working group on preference handling. ADT 2019 will be held at Duke University, Durham, NC, USA on October 25-27, 2019. The conference organizers are Vince Conitzer, Sasa Pekec, Alexis Tsoukiàs, and Brent Venable. The conference covers a vast range of topics related to algorithmic aspects of decision theory such as Argumentation Theory, Computational Social Choice, Decision Analysis, Game Theory, Machine Learning, Multi-agent Systems, Multiple Criteria Decision Aiding, Risk Management, and Utility Theory. The submission deadline for titles and abstracts is April 14, 2019 (extended), and full papers are expected one week later.

There are a couple of other events related to preference handling. In our last blog entry, we already mentioned the 18th International Conference on Autonomous Agents and Multiagent Systems AAMAS 2019, which will be held on May 13-17, 2019 in Montreal, Canada. The 30th European Conference on Operational Research EURO 2019 will be held on June 23-26, 2019 in Dublin, Ireland. The 28th International Joint Conference on Artificial Intelligence IJCAI-19 will be held on August 10-16, 2019 in Macao, China. New on the list is the Thirty-Fourth AAAI Conference on Artificial Intelligence AAAI-20, which will be held from February 7 to February 12, 2020 in New York, NY, USA.


LinkedIn Group on Preference Handling
Mail to the Working Group on Advances in Preference Handling