In [1]:
%matplotlib inline
from pylab import *
import matplotlib.pyplot as plt
import os

Creating your first Bayesian Network with pyAgrum

(This example is based on an OpenBayes [closed] website tutorial)

A Bayesian network (BN) is composed of random variables (nodes) and their conditional dependencies (arcs) which, together, form a directed acyclic graph (DAG). A conditional probability table (CPT) is associated with each node. It contains the conditional probability distribution of the node given its parents in the DAG:

Such a BN makes it possible to manipulate the joint probability $P(C,S,R,W)$ using this decomposition:
$P(C,S,R,W)=\prod_X P(X | Parents_X) = P(C) \cdot P(S | C) \cdot P(R | C) \cdot P(W | S,R)$

Imagine you want to create your first Bayesian network, say for example the 'Water Sprinkler' network. This is an easy example. All the nodes are Boolean (only 2 possible values). You can proceed as follows.

Import the pyAgrum package

In [2]:
import pyAgrum as gum

Create the network topology

Create the BN

The next line creates an empty BN with a 'name' property.

In [3]:
bn=gum.BayesNet('WaterSprinkler')
print(bn)
BN{nodes: 0, arcs: 0, domainSize: 1, parameters: 0, compression ratio: 100-10^-inf% }

Create the variables

pyAgrum (aGrUM) provides 3 types of variables:

  • LabelizedVariable
  • RangeVariable
  • DiscretizedVariable
In this tutorial, we will use LabelizedVariable, a variable whose domain is a finite set of labels. The next line creates a variable named 'c', with 2 values, described as 'cloudy ?', and adds it to the BN. The value returned is the id of the node in the graphical structure (the DAG). pyAgrum actually distinguishes the random variable (here the LabelizedVariable) from its node in the DAG: the latter is identified by a numeric id. Of course, pyAgrum provides functions to get the id of a node given the corresponding variable and conversely (a short example follows the next cell).

In [4]:
c=bn.add(gum.LabelizedVariable('c','cloudy ?',2))
print(c)
0
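A short sketch of the mapping between variables, names and node ids (idFromName and variable are standard pyAgrum accessors; this is not an original cell of the notebook):

print(bn.idFromName('c'))      # id of the node holding the variable named 'c' -> 0
print(bn.variable(c).name())   # name of the variable attached to node id 0   -> 'c'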

You can go on adding nodes in the network this way. Let us use Python to make the code a bit more compact:

In [5]:
s, r, w = [ bn.add(name, 2) for name in "srw" ] #bn.add(name, 2) === bn.add(gum.LabelizedVariable(name, name, 2))
print (s,r,w)
print (bn)
1 2 3
BN{nodes: 4, arcs: 0, domainSize: 16, parameters: 8, compression ratio: 50% }

Create the arcs

Now we have to connect nodes, i.e., to add arcs linking the nodes. Remember that c and s are ids for nodes:

In [6]:
bn.addArc(c,s)

Once again, Python can help us:

In [7]:
for link in [(c,r),(s,w),(r,w)]:
    bn.addArc(*link)
print(bn)
BN{nodes: 4, arcs: 4, domainSize: 16, parameters: 18, compression ratio: -12% }

pyAgrum provides tools to display BNs in a more user-friendly fashion.
Notably, pyAgrum.lib is a set of tools written with pyAgrum to ease the use of aGrUM in Python. pyAgrum.lib.notebook adds dedicated functions for IPython/Jupyter notebooks.

In [8]:
import pyAgrum.lib.notebook as gnb
bn
Out[8]:
[Rendered DAG: c -> s, c -> r, s -> w, r -> w]
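If the implicit HTML rendering above is not available, pyAgrum.lib.notebook also offers an explicit helper; a minimal sketch:

# explicit display of the DAG (same picture as above)
gnb.showBN(bn)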

Create the probability tables

Once the network topology is constructed, we must initialize the conditional probability tables (CPT) distributions. Each CPT is considered as a Potential object in pyAgrum. There are several ways to fill such an object.

To get the CPT of a variable, use the cpt method of your BayesNet instance with the variable's id as parameter.

Now we are ready to fill in the parameters of each node in our network.

Low-level way

In [9]:
bn.cpt(c).fillWith([0.5,0.5])
Out[9]:
 c=0    | c=1
 0.5000 | 0.5000

Most of the methods that take a node id will also accept the name of the random variable.

In [10]:
bn.cpt("c").fillWith([0.4,0.6])
Out[10]:
 c=0    | c=1
 0.4000 | 0.6000

Using the order of variables

In [11]:
bn.cpt(s).var_names
Out[11]:
['c', 's']
In [12]:
bn.cpt(s)[:]=[ [0.5,0.5],[0.9,0.1]]

Then $P(S | C=0)=[0.5,0.5]$
and $P(S | C=1)=[0.9,0.1]$.

In [13]:
print(bn.cpt(s)[1])
[ 0.9  0.1]

The same process can be performed in several steps:

In [14]:
bn.cpt(s)[0,:]=0.5 # equivalent to [0.5,0.5]
bn.cpt(s)[1,:]=[0.9,0.1]
In [15]:
bn.cpt(w).var_names
Out[15]:
['r', 's', 'w']
In [16]:
bn.cpt(w)[0,0,:] = [1, 0] # r=0,s=0
bn.cpt(w)[0,1,:] = [0.1, 0.9] # r=0,s=1
bn.cpt(w)[1,0,:] = [0.1, 0.9] # r=1,s=0
bn.cpt(w)[1,1,:] = [0.01, 0.99] # r=1,s=1

Using a dictionary

This is probably the most convenient way:

In [17]:
bn.cpt(w)[{'r': 0, 's': 0}] = [1, 0]
bn.cpt(w)[{'r': 0, 's': 1}] = [0.1, 0.9]
bn.cpt(w)[{'r': 1, 's': 0}] = [0.1, 0.9]
bn.cpt(w)[{'r': 1, 's': 1}] = [0.01, 0.99]
bn.cpt(w)
Out[17]:
 s | r | w=0    | w=1
 0 | 0 | 1.0000 | 0.0000
 0 | 1 | 0.1000 | 0.9000
 1 | 0 | 0.1000 | 0.9000
 1 | 1 | 0.0100 | 0.9900

The use of dictionaries is a feature borrowed from OpenBayes. It makes the code easier to read and avoids the common errors that happen when data are entered in the wrong places. For instance, the order of the keys does not matter (see the short check below).
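A quick check of the key-order independence (not an original cell of this notebook):

# both expressions address the same slice of P(W | S, R)
print(bn.cpt(w)[{'r': 0, 's': 1}])   # -> [ 0.1  0.9]
print(bn.cpt(w)[{'s': 1, 'r': 0}])   # -> [ 0.1  0.9]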

In [18]:
bn.cpt(r)[{'c':0}]=[0.8,0.2]
bn.cpt(r)[{'c':1}]=[0.2,0.8]
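At this point all the CPTs are filled, so we can illustrate the factorization $P(C,S,R,W)=P(C) \cdot P(S | C) \cdot P(R | C) \cdot P(W | S,R)$ given in the introduction on a single configuration. A small sketch, assuming (as shown above) that indexing a CPT with a fully instantiated dictionary returns the corresponding probability:

# P(c=1, s=0, r=1, w=1) computed from the factorization
p = ( bn.cpt("c")[{'c': 1}]
    * bn.cpt("s")[{'c': 1, 's': 0}]
    * bn.cpt("r")[{'c': 1, 'r': 1}]
    * bn.cpt("w")[{'s': 0, 'r': 1, 'w': 1}] )
print(p)   # 0.6 * 0.9 * 0.8 * 0.9 = 0.3888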

Input/output

Now our BN is complete. It can be saved in different formats:

In [19]:
print(gum.availableBNExts())
bif|dsl|net|bifxml|o3prm|uai

We can save a BN using the BIF format:

In [20]:
gum.saveBN(bn,os.path.join("out","WaterSprinkler.bif"))
In [21]:
with open(os.path.join("out","WaterSprinkler.bif"),"r") as out:
    print(out.read())
network "WaterSprinkler" {
   property software aGrUM;
}

variable c {
   type discrete[2] {0, 1};
}

variable s {
   type discrete[2] {0, 1};
}

variable r {
   type discrete[2] {0, 1};
}

variable w {
   type discrete[2] {0, 1};
}

probability (c) {
   default 0.4 0.6;
}
probability (s | c) {
   (0) 0.5 0.5;
   (1) 0.9 0.1;
}
probability (r | c) {
   (0) 0.8 0.2;
   (1) 0.2 0.8;
}
probability (w | s, r) {
   (0, 0) 1 0;
   (1, 0) 0.1 0.9;
   (0, 1) 0.1 0.9;
   (1, 1) 0.01 0.99;
}


In [22]:
bn2=gum.loadBN(os.path.join("out","WaterSprinkler.bif"))
out/WaterSprinkler.bif:3: 27 : warning : Warning : Properties are not supported yet
   property software aGrUM;
                          ^

We can also save and load it in other formats

In [23]:
gum.saveBN(bn,os.path.join("out","WaterSprinkler.net"))
with open(os.path.join("out","WaterSprinkler.net"),"r") as out:
    print(out.read())
bn3=gum.loadBN(os.path.join("out","WaterSprinkler.net"))
net {
  name = WaterSprinkler;
  software = "aGrUM ";
  node_size = (50 50);
}

node c {
   states = (0 1 );
   label = "c";
   ID = "c";
}

node s {
   states = (0 1 );
   label = "s";
   ID = "s";
}

node r {
   states = (0 1 );
   label = "r";
   ID = "r";
}

node w {
   states = (0 1 );
   label = "w";
   ID = "w";
}

potential (c) {
   data = (  0.4 0.6);
}
potential ( s | c   ) {
   data = 
   ((   0.5   0.5)
   (   0.9   0.1));
}
potential ( r | c   ) {
   data = 
   ((   0.8   0.2)
   (   0.2   0.8));
}
potential ( w | s   r   ) {
   data = 
   (((   1   0)
   (   0.1   0.9))
   ((   0.1   0.9)
   (   0.01   0.99)));
}



Inference in Bayesian Networks

We have to choose an inference engine to perform calculations for us. Two inference engines are currently available in pyAgrum:

  • LazyPropagation: an exact inference method that transforms the Bayesian network into a hypergraph called a join tree or a junction tree. This tree is constructed in order to optimize inference computations.
  • Gibbs: an approximate inference engine that uses the Gibbs sampling algorithm to generate a sequence of samples from the joint probability distribution (a minimal sketch follows this list).
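A minimal sketch of the approximate engine; the class name gum.GibbsSampling is assumed here (older pyAgrum releases expose the Gibbs engine under a slightly different name):

# approximate inference with Gibbs sampling (results fluctuate around the exact posterior)
ieg = gum.GibbsSampling(bn)
ieg.setEpsilon(1e-2)     # stopping criterion on the estimated error
ieg.makeInference()
print(ieg.posterior(w))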
In [24]:
ie=gum.LazyPropagation(bn)

Inference without evidence

In [25]:
ie.makeInference()
print (ie.posterior(w))
<w:0> :: 0.33328 /<w:1> :: 0.66672

In our BN, $P(W) = [0.3333,\ 0.6667]$.

With notebooks, it can be viewed as an HTML table

In [26]:
ie.posterior(w)
Out[26]:
 w=0    | w=1
 0.3333 | 0.6667

Inference with evidence

Suppose now that you know that the sprinkler is off and that it is not cloudy, and you wonder what the probability of the grass being wet is, i.e., you are interested in the distribution $P(W|S=0,C=0)$.
The new knowledge you have (the sprinkler is off and it is not cloudy) is called evidence. Evidence is entered using a dictionary. When you know precisely the value taken by a random variable, the evidence is called hard evidence. This is the case, for instance, when I know for sure that the sprinkler is off. In this case, the knowledge is entered in the dictionary as 'variable name':label.

In [27]:
ie.setEvidence({'s':0, 'c': 0})
ie.makeInference()
ie.posterior(w)
Out[27]:
 w=0    | w=1
 0.8200 | 0.1800

When you have incomplete knowledge about the value of a random variable, this is called soft evidence. In this case, the evidence is entered as the belief you have over the possible values that the random variable can take, in other words, as P(evidence|true value of the variable). Imagine for instance that you think that if the sprinkler is off, you have only a 50% chance of knowing it, but if it is on, you are sure to know it. Then, your belief about the state of the sprinkler is [0.5, 1] and you should enter this knowledge as shown below. Of course, hard evidence is a special case of soft evidence in which the beliefs over all but one of the values of the random variable are equal to 0.

In [28]:
ie.setEvidence({'s': [0.5, 1], 'c': [1, 0]})
ie.makeInference()
ie.posterior(w) # using gnb's feature
Out[28]:
 w=0    | w=1
 0.3280 | 0.6720
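Evidence can also be modified or removed on an existing engine; a brief sketch (chgEvidence and eraseAllEvidence are part of the pyAgrum inference API):

# replace the soft evidence on s by the hard evidence s=1, recompute, then start from scratch
ie.chgEvidence('s', 1)
ie.makeInference()
print(ie.posterior(w))
ie.eraseAllEvidence()    # back to inference without evidence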

The pyAgrum.lib.notebook utility provides functions to display distributions graphically.

In [29]:
%matplotlib inline
gnb.showProba(ie.posterior(w))
In [30]:
gnb.showPosterior(bn,{'s':1,'c':0},'w')

Inference in the whole Bayes net

In [31]:
gnb.showInference(bn,evs={})

Inference with evidence

In [32]:
gnb.showInference(bn,evs={'s':1,'c':0})

Inference with soft and hard evidence

In [33]:
gnb.showInference(bn,evs={'s':1,'c':[0.3,0.9]})

Inference with partial targets

In [34]:
gnb.showInference(bn,evs={'c':[0.3,0.9]},targets={'c','w'})
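Targets can also be restricted directly on the inference engine itself; a minimal sketch (addTarget and eraseAllTargets belong to the LazyPropagation API):

# only ask the engine for the posterior of w
ie2 = gum.LazyPropagation(bn)
ie2.eraseAllTargets()
ie2.addTarget(w)
ie2.setEvidence({'c': [0.3, 0.9]})
ie2.makeInference()
print(ie2.posterior(w))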