This year, the working group will meet at the M-PREF 2020 workshop in Santiago de Compostela, Spain on August 29. Please submit your papers by March 6.
Scientific inquiry is concerned with the discovery of universally valid theories. These can be mathematical theories, whose universality is established by proofs. Or they can be theories about the world, in which case the inquiry needs to go through a process that seeks to show experimentally that the theory is wrong. A theory is adopted if it is able to explain the observed phenomena and all efforts to falsify it have failed. However, as soon as some new observations succeed in falsifying the theory, the theory is abandoned.
Scientific knowledge thus needs to be universally valid, but does this hold for human knowledge in general? Is human reasoning scientifically rigorous?
Of course, everybody knows that this is not the case, and many methods have been explored in the field of knowledge representation to deal with the particular character of human knowledge and with the fact that human reasoning may lead to wrong conclusions in certain cases. Human reasoning is inherently based on assumptions, shortcuts, and simplifications, which are valid in many situations, but not in all of them. As such, human reasoning may rule out possibilities that are rare, too complex to explore, or in conflict with beliefs.
All these imperfections do not matter as long as this knowledge permits one to solve daily problems. However, the limited validity of this knowledge is revealed in situations where conclusions contradict each other or contradict the facts. Will humans then throw this knowledge away, as happens in science, and search for more accurate knowledge?
Usually, the knowledge is used for solving daily problems, and those problems call for an immediate solution. If they cannot be solved with the existing knowledge, a reflection process starts that seeks to identify the assumptions causing the failure, to retract some of those assumptions, and to search for alternative solutions that do not use the discarded assumptions. The purpose of this reflection process is to diagnose and to repair. The repair consists of discarding the problematic knowledge within the current situation. Once the exceptional situation is over, this knowledge might be used again. Hence, human reasoning is able to use knowledge in a dynamic and flexible way.
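This diagnose-and-repair cycle can be sketched in a few lines of Python. The sketch below is purely illustrative (not a system from the literature): assumptions and observed facts are represented as string literals, a conflict is a literal occurring together with its negation, and the repair retracts a minimal set of assumptions for the current situation only, so they remain available once the exceptional situation is over.

```python
from itertools import combinations

def consistent(assumptions, facts):
    """A set of literals is consistent if it contains no p together with 'not p'."""
    literals = assumptions | facts
    return not any(("not " + p) in literals for p in literals)

def diagnose_and_repair(assumptions, facts):
    """Diagnose: find a minimal set of assumptions whose retraction restores
    consistency with the observed facts. Repair: suspend exactly those
    assumptions for the current situation (they are not deleted)."""
    for k in range(len(assumptions) + 1):
        for retracted in combinations(sorted(assumptions), k):
            kept = assumptions - set(retracted)
            if consistent(kept, facts):
                return kept, set(retracted)
    return set(), assumptions

# Everyday knowledge versus an exceptional observation (hypothetical example):
assumptions = {"tweety flies", "tweety has wings"}
facts = {"not tweety flies"}  # observation: Tweety turns out to be a penguin
kept, retracted = diagnose_and_repair(assumptions, facts)
# "tweety flies" is suspended; "tweety has wings" survives the repair
```

The minimality search is exponential in the worst case, which is precisely why efficient diagnosis and repair algorithms are a research question in their own right.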
This capability of discarding knowledge depending on the context is often forgotten in the current debate between proponents of symbolic AI and proponents of artificial neural networks. The latter critique the rigidity of logical rules, which manipulate symbolic structures, and argue that they need to be replaced by continuous functions defined over vector spaces.
From a purely formal point of view, the two approaches are polar opposites: the logical approach represents composable descriptions by making them independent of all context, whereas the neural network approach takes all the available context into account when producing a result. Hence, the logical approach lacks the capability of taking context into account, whereas the neural network approach lacks composability. There is a research program that addresses the limits of neural networks, but is there also a research program that addresses the limits of the logical approach?
The answer is yes, but this program mainly addresses theoretical questions and does not sufficiently test those approaches on real-world problems. This research is carried out in the field of knowledge representation and reasoning and studies methods for dealing with assumptions and exceptions. This covers methods for defeasible reasoning, known as non-monotonic logic, as well as methods for argumentation.
Whereas these basic capabilities have been well explored and understood, they have not been sufficiently brought to practice and tested. Those tests are necessary to refine the theories and to address supplementary questions that need to be answered for successful application. One such question is finding efficient algorithms for conducting the diagnosis and the repair, although progress has already been made on this front.
Moreover, there are questions that call for additional knowledge. If multiple assumptions are in conflict, which one should be discarded in which situation? Preferences between assumptions provide an answer. For example, preferences have been used with much success in non-monotonic logic to express which default rules are more important than others, thus causing the retraction of the less preferred default rules when several default rules conflict with each other.
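The effect of such preferences can be illustrated with a small sketch in Python. This is a deliberately simplified model of prioritized defaults (the encoding and the rank convention are assumptions of this example, not a standard formalism): defaults are applied in order of preference, and a less preferred default is retracted whenever its conclusion contradicts what more preferred rules have already derived.

```python
def apply_prioritized_defaults(facts, defaults):
    """Apply defaults (rank, premise, conclusion) in order of preference,
    lower rank meaning more preferred. A default fires only if its premise
    holds and its conclusion does not contradict what is already derived;
    otherwise the less preferred default is retracted (skipped)."""
    def negate(lit):
        return lit[4:] if lit.startswith("not ") else "not " + lit

    derived = set(facts)
    for rank, premise, conclusion in sorted(defaults):
        if premise in derived and negate(conclusion) not in derived:
            derived.add(conclusion)
    return derived

# The classic example: the specific default about penguins is preferred
# over the general default about birds.
facts = {"penguin", "bird"}
defaults = [
    (1, "penguin", "not fly"),  # more specific, more preferred
    (2, "bird", "fly"),         # general rule, retracted on conflict
]
derived = apply_prioritized_defaults(facts, defaults)
# "not fly" is derived; the less preferred default "bird => fly" is blocked
```

The preferences here are exactly the higher-order knowledge discussed next: the ranks are not part of the domain knowledge itself, but knowledge about which pieces of domain knowledge should prevail.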
These preferences constitute higher-order knowledge and need to be acquired in addition to the domain knowledge. Hierarchies of knowledge? Shouldn't a single formalism be sufficient? Well, even if we strive for simplicity, a one-size-fits-all formalism also imposes a strong constraint on the representation and reasoning capabilities. After all, why limit ourselves if the combination of different methods opens up new possibilities that cannot be achieved by any single one?
Whereas preferences for defeasible reasoning have been studied extensively, it is important to test those methods on real-world problems. The multidisciplinary workshop on advances in preference handling provides an opportunity to present such investigations and to gather feedback from researchers in the field.
The next event in this series is the 12th multidisciplinary workshop on Advances in Preference Handling, M-PREF 2020. It will be organized by Markus Endres, Ulrich Junker, and Andreas Pfandler and held in Santiago de Compostela, Spain on August 29, 2020 in conjunction with ECAI 2020. As in previous editions, the workshop will address all computational aspects of preference handling and also serve as the yearly meeting of the working group on Advances in Preference Handling. Submissions are expected by March 6, 2020. Please consult the workshop web site for details.
Later in the year, Vincent Mousseau and Andrea Passerini will organize the fifth workshop on “From Multiple Criteria Decision Aid to Preference Learning” (DA2PL'2020) on November 5-6, 2020 in Trento, Italy. The submission deadline is August 22, 2020. This workshop will bring together researchers from decision analysis and machine learning and allow them to discuss topics such as data-driven preference modeling and elicitation, as well as the use of decision-analytical methods for machine learning.