Technical Reports Abstracts


  • #DECSAI-93-02-xx, A. Cano, J.E. Cano and S. Moral, Simulation algorithms for convex sets of probabilities, (31 pages), 1993. (full paper)

    Propagation of convex sets of probabilities in dependence structures poses additional computational problems on top of the propagation of a single probability distribution. If we want to calculate the maximum and minimum of the conditional probabilities for a given event, we have to search the space of possible probability distributions. In a directed acyclic graph, the size of this space is combinatorial in the specification of the problem, which makes the computation very hard in the general case. In this report we propose different simulation algorithms, based on Gibbs sampling techniques, which allow us to obtain good approximations to the solutions. It is assumed that the graph is such that a simple probabilistic propagation is computationally feasible.
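
    As a concrete illustration of the basic machinery, the following is a minimal sketch of Gibbs sampling on a toy two-variable joint distribution (the table, numbers and function names are ours, not the report's); the report's algorithms would, in addition, search over the extreme points of the convex set of probabilities.

```python
import random

# Hypothetical joint distribution P(x, y) over two binary variables.
# Gibbs sampling alternates draws from the full conditionals P(x|y) and
# P(y|x); the visited states approximate the joint, so any marginal or
# conditional probability can be estimated by counting.
JOINT = {(0, 0): 0.30, (0, 1): 0.10, (1, 0): 0.20, (1, 1): 0.40}

def p_x1_given_y(y):
    num = JOINT[(1, y)]
    return num / (JOINT[(0, y)] + num)        # P(X=1 | Y=y)

def p_y1_given_x(x):
    num = JOINT[(x, 1)]
    return num / (JOINT[(x, 0)] + num)        # P(Y=1 | X=x)

def gibbs_marginal_x(n_samples=50000, burn_in=1000, seed=0):
    rng = random.Random(seed)
    x, y = 0, 0
    count_x1 = 0
    for i in range(n_samples + burn_in):
        x = 1 if rng.random() < p_x1_given_y(y) else 0
        y = 1 if rng.random() < p_y1_given_x(x) else 0
        if i >= burn_in and x == 1:
            count_x1 += 1
    return count_x1 / n_samples

# The true marginal is P(X=1) = 0.20 + 0.40 = 0.60; the estimate should be close.
```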

  • #DECSAI-94214, L.M. de Campos, Independency Relationships in Singly Connected Networks, (29 pages) 1994.

    Graphical structures such as causal networks or Markov networks are very useful tools for representing irrelevance or independency relationships. Singly connected networks are important specific cases where there is no more than one undirected path connecting each pair of variables. The aim of this paper is to investigate the kind of properties that a dependency model must verify in order to be equivalent to a singly connected graph structure, either via the d-separation criterion for directed acyclic graphs or via the separation criterion for undirected graphs. The main results are the characterizations of those dependency models which are isomorphic to singly connected graphs, as well as the development of efficient algorithms for learning singly connected graph representations of dependency models.

  • #DECSAI-950219, Luis M. de Campos, Juan F. Huete, Efficient Algorithms for Learning Simple Belief Networks, (10 pages) July, 1995. (full paper)

    Keywords: Belief Networks, Independence Relationships, Simple Graphs, Learning algorithms.

    Belief networks are graphical structures able to represent dependence and independence relationships among variables in a given domain of knowledge. We focus on the problem of automatically learning these structures from data, and restrict our study to a specific type of belief network: Simple Graphs. We develop an efficient algorithm that recovers simple graphs using only zero- and first-order conditional independence relationships, which overcomes some of the practical difficulties of the existing algorithms.

  • #DECSAI-95-02-27, Luis M. de Campos, Serafín Moral, Removing partial inconsistency in valuation based systems, (24 pages) September, 1995. (full paper)

    This paper presents an abstract definition of partial inconsistency and of the operator that removes it: normalization. When there is partial inconsistency in the combination of two pieces of information, this partial inconsistency is propagated to all the information in our system, degrading it. To avoid this effect, it is necessary to apply normalization. Four different formalisms are studied as particular cases of the axiomatic framework presented in this paper: probability theory, infinitesimal probabilities, possibility theory, and symbolic evidence theory. It is shown that in one of these theories (probability) normalization is not an important problem: a property which is verified in this case makes all the different normalization strategies equivalent. Things are very different for the other three theories, where there are a number of different normalization procedures. The main objective of this paper is to determine conditions, based on general principles, indicating how and when the normalization operator should be applied.

  • #DECSAI-95-02-42, Luis D. Hernández, S. Moral, Mixing exact and importance sampling algorithms in dependence graphs, (19 pages) December, 1995. (full paper)

    In this paper a new algorithm for the propagation of probabilities in junction trees is presented. It is based on a hybrid methodology: given a junction tree, some of the nodes carry out an exact calculation, and the others an approximation by Monte-Carlo methods. A general class of importance sampling algorithms is used for the Monte-Carlo estimation. The basic algorithm and some variations of it are presented. An experimental evaluation is carried out, comparing their performance with the well-known likelihood weighting approximation algorithm.
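
    The importance sampling idea underlying the approximate nodes can be sketched in isolation (a toy example with a single binary variable; the distributions and names are ours, not the report's junction-tree construction): draws come from a sampling distribution q and are reweighted by p/q.

```python
import random

# Estimate P(X = 1) under a target distribution p by drawing from a
# different sampling distribution q and weighting each draw by p(x)/q(x).
p = {0: 0.1, 1: 0.9}   # target distribution
q = {0: 0.5, 1: 0.5}   # sampling (proposal) distribution

def importance_estimate(n=100000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = 1 if rng.random() < q[1] else 0
        weight = p[x] / q[x]           # importance weight
        if x == 1:
            total += weight
    return total / n

# The estimate converges to the target value 0.9 even though q is uniform.
```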

  • #DECSAI-96-02-01, Serafín Moral, Nic Wilson, Importance sampling Monte-Carlo algorithms for the calculation of Dempster-Shafer belief, (18 pages) January, 1996. (full paper)

    This paper presents importance sampling Monte-Carlo algorithms for the calculation of belief function combination. When the conflict between the pieces of evidence is not very high, a simple Monte-Carlo algorithm can produce good-quality estimations; for the case of highly conflicting evidence, a Markov chain Monte-Carlo algorithm has also been proposed. In this paper, a new class of importance sampling based algorithms is presented, and their performance is compared in experimental tests.

  • #DECSAI-960204, Luis M. de Campos, Independency Relationships and Learning Algorithms for Singly Connected Networks, (30 pages) February, 1996. (full paper)

    Graphical structures such as causal networks or Markov networks are very useful tools for representing irrelevance or independency relationships, and they may be used to efficiently perform reasoning tasks. Singly connected networks are important specific cases where there is no more than one undirected path connecting each pair of variables. The aim of this paper is to investigate the kind of properties that a dependency model must verify in order to be equivalent to a singly connected graph structure, as a way of driving the automated discovery and construction of singly connected networks from data. The main results are the characterizations of those dependency models which are isomorphic to singly connected graphs (either via the d-separation criterion for directed acyclic graphs or via the separation criterion for undirected graphs), as well as the development of efficient algorithms for learning singly connected graph representations of dependency models.

  • #DECSAI-96-02-11, Silvia Acid, Luis M. de Campos, BENEDICT: An Algorithm for Learning Probabilistic Belief Networks, (14 pages) March, 1996. (full paper)

    We develop a system that, given a database containing instances of the variables in a domain of knowledge, captures many of the dependence relationships constrained by those data and represents them as a graph. If we enlarge this graph by adding numerical values, corresponding to the conditional probabilities of every node given its parent nodes, we obtain a belief network. The resultant belief network provides an insight into the probabilistic dependencies that exist among the variables, and it can be used as an expert system to perform tasks of diagnosis, classification, decision making, etc. To obtain the network structure, we have designed a new learning algorithm, called BENEDICT, which has been implemented and incorporated as a module within the system. The numerical component, i.e., the conditional probability tables, is estimated directly from the database. We have tested the system on databases generated from simulated networks by using probabilistic sampling, including an extensive database corresponding to the well-known Alarm Monitoring System. These databases were used as inputs for the learning module, and the networks obtained, compared with the originals, were consistently similar.

  • #DECSAI-96-02-14, Silvia Acid, Luis M. de Campos, An algorithm for finding minimum d-separating sets in belief networks, (22 pages) March, 1996. (full paper)

    The criterion commonly used in directed acyclic graphs (dags) for testing graphical independence is the well-known d-separation criterion. It allows us to build graphical representations of dependency models (usually probabilistic dependency models) in the form of belief networks, which make possible an easy interpretation and management of independence relationships, without reference to numerical parameters (conditional probabilities). In this paper we study the following combinatorial problem: finding the minimum d-separating set for two nodes in a dag. This set represents the minimum information necessary to prevent these two nodes from influencing each other. The solution of this basic problem and some of its extensions can be useful in several ways, as we will see later. Our solution is based on a two-step process: first, we reduce the original problem to the simpler one of finding a minimum separating set in an undirected graph, and second, we develop an algorithm for solving it.
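
    The reduction step can be sketched as follows (a brute-force minimal version of our own, suitable only for tiny graphs; the report develops an efficient algorithm instead): moralize the ancestral graph of the two nodes, then search for a minimum vertex separator in the resulting undirected graph.

```python
from itertools import combinations

# A dag is given as a dict mapping each node to the set of its parents.
def ancestors(dag, nodes):
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in dag.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def moral_graph(dag, keep):
    adj = {n: set() for n in keep}
    for n in keep:
        parents = [p for p in dag.get(n, ()) if p in keep]
        for p in parents:                      # parent-child edges
            adj[n].add(p); adj[p].add(n)
        for a, b in combinations(parents, 2):  # "marry" the parents
            adj[a].add(b); adj[b].add(a)
    return adj

def connected(adj, x, y, removed):
    frontier, seen = [x], {x}
    while frontier:
        n = frontier.pop()
        if n == y:
            return True
        for m in adj[n]:
            if m not in seen and m not in removed:
                seen.add(m); frontier.append(m)
    return False

def min_separator(dag, x, y):
    keep = ancestors(dag, {x, y})
    adj = moral_graph(dag, keep)
    candidates = sorted(keep - {x, y})
    for size in range(len(candidates) + 1):    # brute force, smallest first
        for subset in combinations(candidates, size):
            if not connected(adj, x, y, set(subset)):
                return set(subset)
    return None                                # adjacent: no separator exists
```

    For the chain A -> B -> C the minimum d-separating set of A and C is {B}, while for the v-structure A -> C <- B the empty set already d-separates A and B.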

  • #DECSAI-960225, Luis M. de Campos, Characterizations of Decomposable Dependency Models, (10 pages), 1996. (full paper)

    Decomposable models are dependency models that can be represented by means of both undirected and directed graphical models, and they possess a number of important and useful properties. Decomposable models can be characterized as the kind of dependency models that are isomorphic to chordal graphs. The aim of this paper is to find characterizations of decomposable models in terms of independence relationships. This is achieved by adding a single property to the well-known set of axioms characterizing dependency models isomorphic to undirected graphs.

  • #DECSAI-960227, Luis M. de Campos, Juan F. Huete, On the Use of Independence Relationships for Learning Simplified Belief Networks, (23 pages) June, 1996. (full paper)

    Belief networks are graphical structures able to represent dependence and independence relationships among variables in a given domain of knowledge. We focus on the problem of automatically learning these structures from data, and restrict our study to a specific type of belief network: Simple Graphs, i.e., directed acyclic graphs where, for every pair of nodes with a common direct child, the two nodes have no common ancestor and neither is an ancestor of the other. Our study is based on an analysis of the independences that can be represented by means of simple graphs. This theoretical study, which includes new characterizations of simple graphs in terms of independence relationships, is the basis of an efficient algorithm that recovers simple graphs using only zero- and first-order conditional independence tests, thus overcoming some of the practical difficulties of the existing algorithms.

  • #DECSAI-97-02-06, Luis D. Hernández, Serafín Moral and Antonio Salmerón, A Monte-Carlo algorithm for probabilistic stratified simulation techniques, (31 pages) January, 1997. (full paper)

    Keywords: Belief networks, simulation, importance sampling, stratified sampling, approximate precomputation

    A class of Monte-Carlo algorithms for probability propagation in belief networks is given. The simulation is based on a two-step procedure. The first step is a node deletion technique to calculate the a posteriori distribution on a variable, with the particularity that, when exact computations are too costly, they are carried out in an approximate way. In the second step, the computations done in the first are used to obtain random configurations for the variables of interest. These configurations are weighted according to the importance sampling methodology. Different particular algorithms are obtained depending on the approximation procedure used in the first step and on the way the random configurations are obtained. In this last case, a stratified sampling technique is used, which has been adapted so that it can be applied to very large networks without problems with rounding errors.
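
    The variance-reduction idea behind stratified sampling can be illustrated in isolation (a toy univariate example of our own, not the report's network algorithm): instead of n independent uniform draws, one draw is taken from each of n equal strata, so the sample cannot cluster by chance.

```python
import random

def f(u):
    return u * u          # any smooth integrand; the exact mean is 1/3

def plain(n, rng):
    # n independent uniform draws on [0, 1)
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified(n, rng):
    # one draw per stratum [k/n, (k+1)/n): much lower variance for smooth f
    return sum(f((k + rng.random()) / n) for k in range(n)) / n
```

    For the same number of function evaluations, the stratified estimate of E[f(U)] = 1/3 is typically several orders of magnitude more accurate than the plain one.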

  • #DECSAI-97-02-07, Serafín Moral and José del Sagrado, Aggregating Imprecise Probabilities, (27 pages) February, 1997. (full paper)

    Keywords: Imprecise probabilities, convex sets, experts' opinions aggregation, discounting opinions.

    Methods to aggregate convex sets of probabilities are proposed. Source reliability is taken into account by transforming the given information and making it less precise. An important property of the aggregation is that the precision of the result depends on the initial compatibility of the sources. Special attention is paid to the particular case of probability intervals, giving adaptations of the aggregation procedures.

  • #DECSAI-97-02-08, Luis D. Hernández, Serafín Moral, Inference with idempotent valuations, (21 pages) February, 1997. (full paper)

    Valuation based systems verifying an idempotent property are studied. A partial order is defined between the valuations, giving them a lattice structure. Two different strategies are then introduced to represent valuations: as the infimum of the most informative valuations, or as the supremum of the least informative ones. We study how to carry out computations efficiently with both representations. The particular cases of finite sets and convex polytopes are considered.

  • #DECSAI-97-02-31, Peter Walley, Serafín Moral, Upper Probabilities based only on the Likelihood Function, (32 pages) December, 1997. (full paper)

    In the problem of parametric statistical inference with a finite parameter space, we study some simple rules for defining posterior upper and lower probabilities directly from the observed likelihood function, without using any prior probabilities. The rules satisfy the likelihood principle and a basic consistency principle ("avoiding sure loss"), they produce vacuous inferences when the likelihood function is constant, and they have other symmetry, monotonicity and continuity properties. The rules can be used to eliminate nuisance parameters, and to interpret the likelihood function and use it in making decisions. To compare the rules, they are applied to the problem of sampling from a finite population. Our results indicate that there are objective statistical methods which can reconcile two general approaches to statistical inference: likelihood inference and coherent inference.
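
    One simple rule of the kind studied in the paper is the relative likelihood rule, sketched here for a small grid of parameter values (the grid and numbers are ours): the posterior upper probability of a hypothesis is the maximum normalised likelihood over it, and the lower probability is one minus the upper probability of its complement. A constant likelihood then yields the vacuous interval [0, 1].

```python
def upper_lower(likelihood, hypothesis):
    top = max(likelihood.values())
    rel = {t: l / top for t, l in likelihood.items()}   # normalised likelihood
    upper = max(rel[t] for t in hypothesis)
    complement = [t for t in likelihood if t not in hypothesis]
    lower = 1.0 - max(rel[t] for t in complement) if complement else 1.0
    return lower, upper

# Binomial example: 3 heads in 10 tosses, theta restricted to a small grid.
like = {0.1: 0.1**3 * 0.9**7, 0.3: 0.3**3 * 0.7**7, 0.5: 0.5**3 * 0.5**7}
lo, up = upper_lower(like, {0.1, 0.5})   # bounds on "theta is 0.1 or 0.5"
```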

  • #DECSAI-98-02-20, A. Salmerón, A. Cano and S. Moral, Importance Sampling in Bayesian Networks Using Probability Trees, (36 pages) June, 1998. (full paper)

    In this paper a new Monte-Carlo algorithm for the propagation of probabilities in Bayesian networks is proposed. This algorithm has two stages: in the first, an approximate propagation is carried out by means of a deletion sequence of the variables; in the second, a sample is obtained using the calculations of the first stage as the sampling distribution. The different configurations of the sample are weighted according to the importance sampling technique. One important characteristic of the procedure is the use of probability trees to store and approximate probability potentials, showing a significant difference with respect to the case in which potentials are represented by means of tables.
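
    A minimal probability tree of our own devising (not the authors' data structure) shows why trees can be more compact than tables: a subtree whose leaves are nearly equal can be collapsed into a single leaf holding their average.

```python
# Internal nodes branch on a variable's values; leaves hold probability values.
class Leaf:
    def __init__(self, value):
        self.value = value

class Node:
    def __init__(self, var, children):
        self.var = var                 # variable this node branches on
        self.children = children       # one subtree per value of var

def lookup(tree, assignment):
    # Follow the branches selected by the assignment down to a leaf.
    while isinstance(tree, Node):
        tree = tree.children[assignment[tree.var]]
    return tree.value

def prune(tree, tol):
    # Collapse a node whose leaf values differ by at most tol into one leaf.
    if isinstance(tree, Leaf):
        return tree
    children = [prune(c, tol) for c in tree.children]
    if all(isinstance(c, Leaf) for c in children):
        vals = [c.value for c in children]
        if max(vals) - min(vals) <= tol:
            return Leaf(sum(vals) / len(vals))
    return Node(tree.var, children)

# A potential over X, Y that does not depend on Y when X = 1:
tree = Node('X', [Node('Y', [Leaf(0.2), Leaf(0.8)]),
                  Node('Y', [Leaf(0.5), Leaf(0.5)])])
pruned = prune(tree, 0.01)   # the X = 1 branch collapses to a single leaf
```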

  • #DECSAI-98-02-25, Peter Walley, Reconciling Frequentist Properties with the Likelihood Principle, (45 pages) October, 1998. (full paper)

    For the general problem of parametric statistical inference, several frequentist principles are formulated, including principles which apply to the problems of hypothesis testing, set estimation, predictive inference and conditional inference. These principles give a frequentist interpretation to measures of uncertainty and they guarantee that, whatever the true parameter value, statistical procedures have little chance of producing misleading inferences. The frequentist principles are shown to be compatible with the likelihood principle and with principles of coherence. Two general methods which satisfy both the likelihood and frequentist principles are studied. One method produces posterior upper and lower probabilities from a very large set of prior probability measures. The second method derives inferences from a renormalised version of the observed likelihood function. Because inferences from the two methods encompass a wide range of frequentist, likelihood and Bayesian inferences, they are conservative and they have relatively low power. More powerful methods can be obtained by weakening the frequentist principles and making weak assumptions about the sampling model. The example of Bernoulli trials is used to demonstrate that there are methods of inference which satisfy the likelihood principle, are coherent, and have good frequentist properties under a wide range of sampling models.

  • #DECSAI-99-02-01, L.M. de Campos, J.A. Gámez, S. Moral, Simplifying explanations in Bayesian belief networks, (10 pages) January, 1999. (full paper)

    Abductive inference in Bayesian belief networks is the process of generating the K most probable configurations given the observed evidence. These configurations are called explanations, and in most of the approaches found in the literature all the explanations have the same number of literals. In this paper we study how to simplify the explanations in such a way that the resulting configurations still account for the observed facts.

  • #DECSAI-990212, Luis M. de Campos, Juan F. Huete, Approximating causal orderings for Bayesian networks using genetic algorithms and simulated annealing, (9 pages) May, 1999. (full paper)

    The structure of a Bayesian network depends heavily on the ordering of its variables: given any ordering, it is always possible to build a Bayesian network (i.e., a minimal independence map of the corresponding dependence model) whose arcs are consistent with the initial ordering; however, the topology of the network, and therefore the number of conditional independence relationships that may be explicitly represented, can vary greatly from one ordering to another. As a sparse representation is always preferable to a denser representation of the same model, the task of determining the ordering that gives rise to the network with the minimum number of arcs is important. Finding such an optimal ordering for the variables in a Bayesian network is a complex problem, which requires using a lot of information about the model. In this work we propose methods to obtain a good approximation to the optimal ordering using only partial information. More precisely, we only use conditional independence relationships of order zero and one, and search for the ordering which best preserves this information. The search process is guided by genetic algorithms and simulated annealing.
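
    The simulated annealing part can be sketched generically (the cost function below is a stand-in that counts pairwise disagreements with a hidden target ordering, whereas the paper scores orderings by how well they preserve zero- and first-order conditional independences):

```python
import random, math

def cost(order, target_pos):
    # Number of pairs placed in the opposite relative order to the target.
    return sum(1 for i in range(len(order)) for j in range(i + 1, len(order))
               if target_pos[order[i]] > target_pos[order[j]])

def anneal(variables, target_pos, steps=20000, t0=2.0, seed=3):
    rng = random.Random(seed)
    current = list(variables)
    rng.shuffle(current)
    best = current[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6     # linear cooling schedule
        i, j = rng.sample(range(len(current)), 2)
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = cost(candidate, target_pos) - cost(current, target_pos)
        # Always accept improvements; accept worsenings with prob exp(-delta/t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
            if cost(current, target_pos) < cost(best, target_pos):
                best = current[:]
    return best

target_pos = {v: i for i, v in enumerate('ABCDEF')}
best = anneal('ABCDEF', target_pos)   # recovers a zero-cost ordering
```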

  • #DECSAI-00-02-01, Andrés Cano, Serafín Moral and Antonio Salmerón, Penniless Propagation in Join Trees, (32 pages) February, 2000. (full paper)

    This paper presents non-random algorithms for approximate computation in Bayesian networks. They are based on the use of probability trees to represent probability potentials, using the Kullback-Leibler cross entropy as a measure of the error of the approximation. Different alternatives are presented and tested in experiments with difficult propagation problems. The results show how it is possible to find good approximations in a short time with respect to the Hugin algorithm.
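
    The error measure itself is standard and easy to state (toy tables of our own): the Kullback-Leibler cross entropy between an exact potential p and its approximation q is zero exactly when they coincide, and grows as the approximation degrades.

```python
import math

def kl(p, q):
    # KL(p, q) = sum_i p_i * log(p_i / q_i); terms with p_i = 0 contribute 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

exact = [0.5, 0.3, 0.12, 0.08]
approx = [0.5, 0.3, 0.10, 0.10]   # last two values collapsed to their average
```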

  • #DECSAI-00-02-02, Luis M. de Campos, José A. Gámez and Serafín Moral, Partial Abductive Inference in Bayesian Belief Networks: An Evolutionary Computation Approach by Using Problem Specific Genetic Operators, (34 pages) February, 2000. (full paper)

    Abductive inference in Bayesian belief networks (BBNs) is the process of generating the K most probable configurations given the observed evidence. When we are only interested in a subset of the network variables, this problem is called partial abductive inference. Both problems are NP-hard, so exact computation is not always possible. In this paper a genetic algorithm is used to perform partial abductive inference in BBNs. The main contribution is the introduction of new genetic operators specifically designed for this problem. By using these genetic operators we try to take advantage of the calculations previously carried out when a new individual is being evaluated. The algorithm is tested using the Alarm belief network and compared with a previous genetic algorithm based on classical genetic operators. From the experimental results we conclude that the new genetic operators improve the behaviour of the operators previously used.

  • #DECSAI-00-02-14, A. Cano, S. Moral, Using probability trees to compute marginals with imprecise probabilities, (43 pages) May, 2000. (full paper)

    This paper presents an approximate algorithm to obtain a posteriori intervals of probability, when the available information is also given as intervals. The algorithm uses probability trees as a means of representing and computing with the convex sets of probabilities associated with the intervals.
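
    For reachable (consistent) probability intervals, the bounds on an event's probability have a well-known closed form, sketched here with illustrative numbers of our own: the upper bound is capped both by the sum of the upper limits over the event and by what the lower limits of the complement leave available, and symmetrically for the lower bound.

```python
# Upper and lower probability of an event under probability intervals
# [l_i, u_i] on the elementary outcomes, assumed reachable (consistent).
def interval_bounds(lower, upper, event):
    up = min(sum(upper[i] for i in event),
             1.0 - sum(lower[i] for i in lower if i not in event))
    lo = max(sum(lower[i] for i in event),
             1.0 - sum(upper[i] for i in upper if i not in event))
    return lo, up

lower = {'a': 0.1, 'b': 0.2, 'c': 0.3}
upper = {'a': 0.4, 'b': 0.5, 'c': 0.6}
lo, up = interval_bounds(lower, upper, {'a'})   # bounds on P({a})
```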

  • #DECSAI-00-02-15, A. Cano, S. Moral, A. Salmerón, Lazy Evaluation in Penniless Propagation over Join Trees, (25 pages) July, 2000. (full paper)

    In this paper, we investigate the application of the ideas behind Lazy propagation to the Penniless propagation scheme. In addition to the use of probability trees to represent and approximate potentials, both in the messages and in the nodes of the join tree, those potentials are not combined to obtain the joint potential over a node of the join tree or over a message. Rather, those joint potentials are represented in a factorized way, and the combinations are postponed until they are required for the deletion of a variable. Here we test two variations of the basic Lazy scheme. One is based on keeping a hash table of combined potentials so that computations are not repeated. The other consists of using heuristics to determine an order of combination for a list of potentials.