
ICCL Summer School 2010  -  Course Program

Cognition, Language and Neural Computation

Jerome Feldman    (ICSI, Berkeley, USA)

This course will present an overview of a Unified Cognitive Science, integrating all levels from language and thought down to detailed neurobiology. The key computational methodology involves structured connectionist models as the bridge between symbolic behavior and neural computation. The textbook will be four years old in 2010, so the course will mostly focus on newer developments.

Slides: Overview (docx)

dresden1 (ppt)
dresden2 (ppt)
dresden3 (pptx)
dresden4 (ppt)
dresden5 (pptx)

Other materials:

The Binding Problem(s) (pdf)
Embodied meaning in a neural theory of language (pdf)
Mind Changes (pdf) (*** UPDATE 7.9.2010 ***)
A Neural Theory of Language (pdf)
Embodied language, ... (pdf)
Reasoning about actions ... (pdf)

Neuro-Symbolic Cognitive Reasoning

Artur d'Avila Garcez    (City University London, UK)

Three notable hallmarks of intelligent cognition are the ability to draw rational conclusions, the ability to make plausible assumptions, and the ability to generalise from experience. Although human cognition often involves the interaction of these three abilities, in artificial intelligence they are typically studied in isolation. In our research programme, we seek to integrate the three abilities within neural computation, offering a unified framework for learning and reasoning that exploits the parallelism and robustness of connectionism. A neural network can be the machine for computation, inductive learning, and effective reasoning, while logic provides rigour, modularity, and explanation capability to the network. We call such systems, which combine a connectionist learning component with a logical reasoning component, "neural-symbolic learning systems".

In this course, I review the work on neural-symbolic learning systems, starting with logic programming, which has already contributed to problems in bioinformatics and engineering. I then look at how to represent modal logic and other forms of non-classical reasoning in neural networks. The model consists of a network ensemble, with each network representing the knowledge of an agent (or possible world) at a particular time point. Ensembles may be seen as being at different levels of abstraction, so that networks may be fibred onto (combined with) other networks to form a modular structure combining different logical systems or, for example, object-level and meta-level knowledge. Networks may also be combined to represent (and learn) relations between objects, with interesting applications in graph mining and link analysis in biology and social networks. We claim that this quite powerful yet simple structure offers a basis for an expressive yet computationally tractable cognitive model of integrated reasoning and robust learning.
The material is part of the book "Neural-Symbolic Cognitive Reasoning", Springer, 2009.
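The translation from logic programs to networks described above can be illustrated with a toy sketch. This is my own illustration of the general idea, not the book's implementation: each rule becomes a hidden AND-like unit, each atom an OR-like output unit over the hidden units of its rules, and recurrent feedback lets the network settle into a stable state; the example program and all names are invented for the illustration.

```python
# Toy sketch of translating a propositional logic program into a
# recurrent network of Boolean threshold-like units.
# A rule is (head, positive_body_atoms, negative_body_atoms).

def build_network(program, atoms):
    def step(state):
        # One hidden unit per rule: fires iff every positive body atom
        # is active and no negative body atom is.
        hidden = [(head,
                   all(state[a] for a in pos) and not any(state[a] for a in neg))
                  for head, pos, neg in program]
        # Each output atom is the OR of the hidden units of its rules.
        return {a: any(h == a and fired for h, fired in hidden) for a in atoms}
    return step

# Program: p <- .   q <- p, not r.
program = [("p", [], []), ("q", ["p"], ["r"])]
atoms = ["p", "q", "r"]
net = build_network(program, atoms)

state = {a: False for a in atoms}   # start from the empty interpretation
for _ in range(3):                  # recurrent relaxation to a stable state
    state = net(state)
print(state)  # {'p': True, 'q': True, 'r': False}
```

The stable state the network settles into corresponds to the intended model of the program, which is the sense in which the network computes the program's semantics.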


Lecture 1 (pdf)
Lecture 2 (pdf)
Lecture 3 (pdf)
Lecture 4 (pdf)
Lecture 5 (pdf)

Connectionist Model Generation

Steffen Hölldobler    (Technische Universität Dresden)

The course will focus on connectionist models for deductive processes. After reviewing existing approaches (McCulloch and Pitts networks and their relationship to finite automata, Hopfield networks and their relationship to the propositional satisfiability problem, and spreading activation networks and reflexive reasoning), the core method will be studied in detail. The core method, short for connectionist model generation using recurrent networks with a feed-forward core, is based on three main ideas. First, the semantics of a logic program can be defined by means of its immediate consequence operator. Second, if this operator is continuous, it can be approximated by a feed-forward connectionist network. Third, if such a network is turned into a recurrent one and initialised with the empty interpretation, it admits a stable state that corresponds to the least model of the underlying logic program. Moreover, if the operator is a contraction, the recurrent network admits a unique stable state independently of the initial interpretation. In the final part, the core method is applied to human reasoning problems and shown to be cognitively adequate.
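The first of the three ideas can be made concrete in a few lines. The following is a minimal sketch of the immediate consequence operator for a definite propositional program, iterated from the empty interpretation until it reaches its least fixed point; the example program and function names are my own, not the course's code.

```python
# A program is a list of rules (head, body), where body is the set of
# atoms that must all hold for the head to be derived.

def t_p(program, interpretation):
    """One application of the immediate consequence operator T_P:
    the set of heads whose bodies are satisfied by the interpretation."""
    return {head for head, body in program if body <= interpretation}

def least_model(program):
    """Iterate T_P from the empty interpretation to its least fixed point."""
    interp = set()
    while True:
        nxt = t_p(program, interp)
        if nxt == interp:
            return interp
        interp = nxt

# Program: p <- .   q <- p.   r <- q.
program = [("p", set()), ("q", {"p"}), ("r", {"q"})]
print(least_model(program))  # {'p', 'q', 'r'}
```

The fixed point reached here is exactly the least model mentioned in the abstract; the core method's contribution is computing this same iteration with a recurrent network whose feed-forward core approximates T_P.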


Introduction (pdf)
History (pdf)
Propositional CORE (pdf)
ReasoningCORE (pdf)
First-order CORE (pdf)

Complex Networks of Mindful Entities

Luís Moniz Pereira    (Universidade Nova de Lisboa, Portugal)

In this course we want to understand and explain how certain collective social behaviours emerge from individual agents' cognitive abilities, in communities where individuals are nodes of complex adaptive networks that self-organise as a result of those agents' cognition. We need to investigate how different cognitive abilities impinge on the emergence of population properties and, as a result, which cognitive capacities are required for a given collective social behaviour to emerge. The key innovation thus consists in the articulation of two distinct levels of simulation, individual and societal, and in their combined dynamics. This must be achieved at both the modelling level and the computational implementation level.
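The two-level setup (individual cognition below, population dynamics above) can be sketched with a deliberately tiny toy model. This is my own illustration under strong simplifying assumptions, not the course's model: agents on a ring network play a prisoner's dilemma with their two neighbours and then imitate their best-scoring neighbour, from which a population-level pattern emerges.

```python
def payoff(me, other):
    # Standard prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
    return {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}[(me, other)]

def step(strategies):
    """One generation: score each agent against its two ring neighbours,
    then let each agent adopt the strategy of its best-scoring neighbour
    (keeping its own strategy on ties)."""
    n = len(strategies)
    scores = [sum(payoff(strategies[i], strategies[(i + d) % n]) for d in (-1, 1))
              for i in range(n)]
    return [strategies[max([i, (i - 1) % n, (i + 1) % n],
                           key=lambda j: scores[j])]
            for i in range(n)]

# A single defector invading a ring of cooperators:
print(step(["C", "C", "D", "C", "C"]))  # ['C', 'D', 'D', 'D', 'C']
```

Even this crude rule already shows the point of the course: which collective outcome emerges (here, spreading defection) depends entirely on the individual-level cognitive rule the agents apply.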

Slides and lecture materials (updated: evening of August 30):

Main file
why_we_cooperate.htm (HTML)

Audio recordings:

lecture_1.dss    lecture_2.dss    lecture_3.dss    lecture_4.dss    lecture_5.dss

Supporting papers:

IDT evolution.pdf

Cognitive Complexity in Deductive Reasoning

Marco Ragni    (University of Freiburg)

The ability to gain new insights from given knowledge is certainly one of the most fundamental cognitive abilities of humans and a central aspect of modern psychological research. From a formal perspective, questions concerning the difficulty of human reasoning - its cognitive complexity - are becoming increasingly important. Artificial Intelligence offers methods for representing the different cognitive theories that explain the human deduction process, for example the mental model theory, the theory of mental logic, and probabilistic theories.
In this course I will first introduce the three main psychological theories that attempt to explain the human deduction process. This will form the foundation for reviewing important results and effects in reasoning, from the well-known context effect in conditional reasoning, over fallacies and illusions in reasoning (Johnson-Laird & Yang, 2002), to relational complexity (Halford et al., 1998). Special emphasis will be placed on examples demonstrating reasoning difficulty and fallacies.
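One way to make the "cognitive complexity" intuition behind the mental model theory concrete is to count the possibilities a reasoner would have to keep in mind. The sketch below is my own illustration, not the course's material: it enumerates the truth assignments (models) consistent with a set of premises, and the model counts suggest why some inferences feel easier than others.

```python
from itertools import product

def models(atoms, premises):
    """All truth assignments over `atoms` satisfying every premise
    (premises are Boolean functions over an assignment dict)."""
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))
            if all(p(dict(zip(atoms, vals))) for p in premises)]

atoms = ["A", "B"]
# Premises "if A then B" and "A": a single model, an easy (modus ponens) inference.
easy = models(atoms, [lambda m: (not m["A"]) or m["B"], lambda m: m["A"]])
# Premise "A or B" alone: three models to represent, a harder case.
hard = models(atoms, [lambda m: m["A"] or m["B"]])
print(len(easy), len(hard))  # 1 3
```

On the mental model account, difficulty grows with the number of models that must be constructed and held simultaneously, which is one formal handle on the complexity questions raised above.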

Slides and other materials:

Some problems to solve
Lecture 1 (pdf)
Lecture 2 (pdf)
Lecture 3 (pdf)
Lecture 4 (pdf)

Computing Event Structures with Logic Programs

Fritz Hamm / Fabian Schlotterbeck    (Universität Tübingen)

This course will cover planning formalisms as applied to human cognition. After a brief general introduction to planning formalisms, we will focus on the approach of van Lambalgen & Hamm (2005), which argues that the human conscious conception of time is based on our planning abilities. The formalism used is an event calculus, treated as a constraint logic program that computes event structures and their temporal relations. We will discuss applications of this calculus in Artificial Intelligence, Cognitive Psychology, and Formal Semantics. If time allows, we will introduce some alternatives to and possible generalisations of this approach, e.g. answer set programming.
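The inertia rule at the heart of the event calculus can be sketched in a few lines. This is a minimal illustration under my own simplifying assumptions (discrete time, no initial conditions, no ramifications), not the course's formalisation: a fluent holds at time t if some earlier event initiated it and no intervening event terminated it.

```python
def holds_at(fluent, t, events, initiates, terminates):
    """events: list of (time, action) pairs;
    initiates/terminates: maps from action to the fluents it starts/stops."""
    state = False
    for time, action in sorted(events):
        if time >= t:
            break
        if fluent in initiates.get(action, ()):
            state = True   # fluent initiated before t
        if fluent in terminates.get(action, ()):
            state = False  # fluent clipped before t
    return state

# A Yale-shooting-style narrative: load the gun at time 1, shoot at time 3.
events = [(1, "load"), (3, "shoot")]
initiates = {"load": {"loaded"}}
terminates = {"shoot": {"loaded"}}
print(holds_at("loaded", 2, events, initiates, terminates))  # True
print(holds_at("loaded", 4, events, initiates, terminates))  # False
```

The constraint-logic-programming version covered in the course computes the same kind of event structure declaratively, with temporal relations handled as constraints rather than by this explicit timeline scan.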

Slides (preliminary versions, subject to further improvements):

Answer Sets (pdf)
Event Calculus (pdf)
The Suppression Task (pdf)
Graphics (pdf)
Handout: Yale Shooting Scenario (pdf)

Workshop Presentations