%0 Journal Article
%D 2010
%T Extracting Reduced Logic Programs from Artificial Neural Networks
%A Jens Lehmann
%A Sebastian Bader
%A Pascal Hitzler
%K artificial neural networks
%K reduced logic programs
%X Artificial neural networks can be trained to perform excellently in many application areas. While they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are *black boxes*. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as *simple* as possible - where *simple* is understood in some clearly defined and meaningful way.
%G eng
%0 Journal Article
%J Neurocomputing
%D 2008
%T Connectionist Model Generation: A First-Order Approach
%A Sebastian Bader
%A Steffen Hölldobler
%A Pascal Hitzler
%K Connectionist Model Generation
%K First-Order Logic Programs
%K Neural-Symbolic Integration
%K Recurrent RBF Networks
%X Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes as expressed, e.g., by means of first-order predicate logic, it is not at all obvious what neural-symbolic systems would look like such that they are truly connectionist, are able to learn, and allow for a declarative reading and logical reasoning at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feed-forward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to do reasoning over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.
%P 2420-2432
%G eng
%0 Book Section
%D 2007
%T The Core Method: Connectionist Model Generation for First-Order Logic Programs
%A Sebastian Bader
%A Steffen Hölldobler
%A Andreas Witzel
%A Pascal Hitzler
%K Artificial Intelligence
%X In Artificial Intelligence, knowledge representation studies the formalisation of knowledge and its processing within machines. Techniques of automated reasoning allow a computer system to draw conclusions from knowledge represented in a machine-interpretable form. Recently, ontologies have evolved in computer science as computational artefacts to provide computer systems with a conceptual yet computational model of a particular domain of interest. In this way, computer systems can base decisions on reasoning about domain knowledge, similar to humans. This chapter gives an overview of basic knowledge representation aspects and of ontologies as used within computer systems. After introducing ontologies in terms of their appearance, usage and classification, it addresses concrete ontology languages that are particularly important in the context of the Semantic Web. The most recent and predominant ontology languages and formalisms are presented in relation to each other, and a selection of them is discussed in more detail.
%G eng
%0 Conference Paper
%B Twentieth International Joint Conference on Artificial Intelligence, IJCAI-07
%D 2007
%T A Fully Connectionist Model Generator for Covered First-Order Logic Programs
%A Sebastian Bader
%A Steffen Hölldobler
%A Andreas Witzel
%A Pascal Hitzler
%X We present a fully connectionist system for the learning of first-order logic programs and the generation of corresponding models: Given a program and a set of training examples, we embed the associated semantic operator into a feed-forward network and train the network using the examples. This results in the learning of first-order knowledge while damaged or noisy data is handled gracefully.
%C Hyderabad, India
%P 666-671
%G eng
%0 Conference Paper
%B Eighteenth International Florida Artificial Intelligence Research Symposium Conference
%D 2005
%T Computing First-Order Logic Programs by Fibring Artificial Neural Networks
%A Sebastian Bader
%A Artur S. D'Avila Garcez
%A Pascal Hitzler
%X The integration of symbolic and neural-network-based artificial intelligence paradigms constitutes a very challenging area of research. The overall aim is to merge these two very different major approaches to intelligent systems engineering while retaining their respective strengths. For symbolic paradigms that use the syntax of some first-order language this appears to be particularly difficult. In this paper, we build on an idea proposed by Garcez and Gabbay (2004) and show how first-order logic programs can be represented by fibred neural networks. The idea is to use a neural network to iterate a global counter *n*. For each clause C_{i} in the logic program, this counter is combined (fibred) with another neural network, which determines whether C_{i} outputs an atom of level *n* for a given interpretation *I*. As a result, the fibred network computes the single-step operator T_{P} of the logic program, thus capturing the semantics of the program.
%C Clearwater Beach, Florida, USA
%G eng
%0 Book Section
%D 2005
%T Dimensions of Neural-Symbolic Integration - A Structured Survey
%A Sebastian Bader
%A Pascal Hitzler
%G eng
%0 Conference Paper
%D 2005
%T Extracting Reduced Logic Programs from Artificial Neural Networks
%A Jens Lehmann
%A Sebastian Bader
%X Artificial neural networks can be trained to perform excellently in many application areas. While they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are *black boxes*. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as *simple* as possible - where *simple* is understood in some clearly defined and meaningful way.
%G eng
%0 Conference Paper
%B Ontology Learning as a Use Case for Neural-Symbolic Integration
%D 2005
%T Ontology Learning as a Use Case for Neural-Symbolic Integration
%A Sebastian Bader
%A Pascal Hitzler
%A Artur S. D'Avila Garcez
%X We argue that the field of neural-symbolic integration is in need of identifying application scenarios for guiding further research. We furthermore argue that ontology learning - as occurring in the context of semantic technologies - provides such an application scenario with potential for success and high impact on neural-symbolic integration.
%G eng
%0 Conference Paper
%B Third International Conference on Information
%D 2004
%T The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence
%A Sebastian Bader
%A Steffen Hölldobler
%A Pascal Hitzler
%X Intelligent systems based on first-order logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust neural networking machinery with symbolic knowledge representation and reasoning paradigms like logic programming in such a way that the strengths of either paradigm will be retained. Current state-of-the-art research, however, falls far short of this ultimate goal. As one of the main obstacles to be overcome we perceive the question of how symbolic knowledge can be encoded by means of connectionist systems: satisfactory answers to this will naturally lead the way to knowledge extraction algorithms and to integrated neural-symbolic systems.
%C Tokyo, Japan
%G eng
%0 Journal Article
%J Journal of Applied Logic
%D 2004
%T Logic Programs, Iterated Function Systems, and Recurrent Radial Basis Function Networks
%A Sebastian Bader
%A Pascal Hitzler
%K iterated functions
%X Graphs of the single-step operator for first-order logic programs - displayed in the real plane - exhibit self-similar structures known from topological dynamics, i.e. they appear to be *fractals*, or more precisely, attractors of iterated function systems. We show that this observation can be made mathematically precise. In particular, we give conditions which ensure that those graphs coincide with attractors of suitably chosen iterated function systems, and conditions which allow the approximation of such graphs by iterated function systems or by fractal interpolation. Since iterated function systems can easily be encoded using recurrent radial basis function networks, we eventually obtain connectionist systems which approximate logic programs in the presence of function symbols.
%P 273-300
%G eng