
# Relevance Reasoning with pyAgrum

Relevance reasoning is the analysis of how evidence influences the other variables of a Bayesian network.

In this notebook, we explain what relevance reasoning is and how to perform it with pyAgrum.

In [1]:
import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

import time
import os
%matplotlib inline
from pylab import *
import matplotlib.pyplot as plt


## Multiple inferences

In the well-known 'alarm' BN, how can we analyze the influence on 'VENTALV' of a soft evidence on 'MINVOLSET'?

In [2]:
bn=gum.loadBN(os.path.join("res","alarm.dsl"))
gnb.showBN(bn,size="6")


We propose to plot the posterior of 'VENTALV' for the evidence $$\forall x \in [0,1],\quad e_{MINVOLSET}=[0,x,0.5]$$

To do so, we perform a large number of inferences and plot the posteriors.

In [3]:
K=1000
r=range(0,K)
xs=[x/K for x in r]

def getPlot(xs,ys,K,duration):
    p=plot(xs,ys)
    legend(p,[bn.variableFromName('VENTALV').label(i)
              for i in range(bn.variableFromName('VENTALV').domainSize())],loc=7);
    title('VENTALV ({} inferences in {} s)'.format(K,duration));
    ylabel('posterior probability');
    xlabel('Evidence on MINVOLSET : [0,x,0.5]');


## First try: classical lazy propagation

In [4]:
tf=time.time()
ys=[]
for x in r:
    ie=gum.LazyPropagation(bn)
    ie.setEvidence({'MINVOLSET':[0,x/K,0.5]})
    ie.makeInference()
    ys.append(ie.posterior('VENTALV').tolist())
delta1=time.time()-tf
getPlot(xs,ys,K,delta1)


## Second try: classical variable elimination

Note that we need only one posterior. This is a case where VariableElimination should give better results.

In [5]:
tf=time.time()
ys=[]
for x in r:
    ie=gum.VariableElimination(bn)
    ie.setEvidence({'MINVOLSET':[0,x/K,0.5]})
    ie.makeInference()
    ys.append(ie.posterior('VENTALV').tolist())
delta2=time.time()-tf
getPlot(xs,ys,K,delta2)


pyAgrum provides the function gum.getPosterior to do this same job more easily.

In [6]:
tf=time.time()
ys=[gum.getPosterior(bn,{'MINVOLSET':[0,x/K,0.5]},'VENTALV').tolist()
    for x in r]
getPlot(xs,ys,K,time.time()-tf)


## Last try: optimized lazy propagation with relevance reasoning and incremental inference

Optimized inference in aGrUM can use the targets and the evidence to reduce the amount of computation. This is called relevance reasoning.

Moreover, if the values of the evidence change but the structure of the query does not (same target nodes, same hard-evidence nodes, same soft-evidence nodes), aGrUM may re-use some computations from one query to the next. This is called incremental inference.

In [7]:
tf=time.time()
ie=gum.LazyPropagation(bn)
ie.addEvidence('MINVOLSET',[1,1,1])  # create the evidence once; only its value changes afterwards
ie.addTarget('VENTALV')              # relevance reasoning: only this posterior is needed
ys=[]
for x in r:
    ie.chgEvidence('MINVOLSET',[0,x/K,0.5])
    ie.makeInference()
    ys.append(ie.posterior('VENTALV').tolist())
delta3=time.time()-tf
getPlot(xs,ys,K,delta3)

In [8]:
print("Mean duration of a lazy propagation            : {:5.3f}ms".format(1000*delta1/K))
print("Mean duration of a variable elimination        : {:5.3f}ms".format(1000*delta2/K))
print("Mean duration of an optimized lazy propagation : {:5.3f}ms".format(1000*delta3/K))

Mean duration of a lazy propagation            : 3.922ms
Mean duration of a variable elimination        : 0.641ms
Mean duration of an optimized lazy propagation : 0.658ms


# How it works

In [9]:
bn=gum.fastBN("Y->X->T1;Z2->X;Z1->X;Z1->T1;Z1->Z3->T2")
ie=gum.LazyPropagation(bn)

gnb.sideBySide(bn,
               bn.cpt("X"),
               gnb.getJunctionTree(bn),
               captions=["BN","potential","Junction Tree"])

*(Output, side by side: the BN, the potential $P(X \mid Y, Z2, Z1)$, and the junction tree. The CPT values:)*

| Y | Z2 | Z1 | X=0 | X=1 |
|---|----|----|--------|--------|
| 0 | 0  | 0  | 0.1849 | 0.8151 |
| 1 | 0  | 0  | 0.5903 | 0.4097 |
| 0 | 1  | 0  | 0.4653 | 0.5347 |
| 1 | 1  | 0  | 0.4945 | 0.5055 |
| 0 | 0  | 1  | 0.6849 | 0.3151 |
| 1 | 0  | 1  | 0.3675 | 0.6325 |
| 0 | 1  | 1  | 0.7355 | 0.2645 |
| 1 | 1  | 1  | 0.4719 | 0.5281 |

### aGrUM/pyAgrum uses relevance reasoning techniques as much as possible to reduce the complexity of inference.

In [10]:
ie.setEvidence({"X":0})
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),
               captions=["","Join tree optimized for hard evidence on X"])

*(Output: the inference state (hard evidence on X, all nodes targeted) and the join tree optimized for hard evidence on X.)*
In [11]:
ie.updateEvidence({"X":[0.1,0.9]})
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),
               captions=["","Join tree optimized for soft evidence on X"])

*(Output: the inference state (soft evidence on X, all nodes targeted) and the join tree optimized for soft evidence on X.)*
In [12]:
ie.updateEvidence({"Y":0,"X":0,3:[0.1,0.9],"Z1":[0.4,0.6]})  # node id 3 is Z2
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),
               captions=["","Join tree optimized for hard evidence on X and Y, soft on Z2 and Z1"])

*(Output: the inference state (hard evidence on Y and X, soft evidence on Z2 and Z1, all nodes targeted) and the join tree optimized for hard evidence on X and Y, soft on Z2 and Z1.)*
In [13]:
ie.setEvidence({"X":0})
ie.setTargets({"T1","Z1"})
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),
               captions=["","Join tree optimized for hard evidence on X and targets T1,Z1"])

*(Output: the inference state (hard evidence on X, targets T1 and Z1) and the join tree optimized for hard evidence on X and targets T1, Z1.)*
In [14]:
ie.updateEvidence({"Y":0,"X":0,3:[0.1,0.9],"Z1":[0.4,0.6]})  # node id 3 is Z2
ie.addJointTarget(["Z2","Z1","T1"])  # register the joint target used below

gnb.sideBySide(ie,
               gnb.getDot(ie.joinTree().toDotWithNames(bn)),
               captions=["","Join tree optimized for hard evidence on X and targets T1,Z1"])

*(Output: the inference state (hard evidence on Y and X, soft evidence on Z2 and Z1, targets T1 and Z1, joint target [Z2, Z1, T1]) and the corresponding join tree.)*
In [15]:
ie.makeInference()
ie.jointPosterior(["Z2","Z1","T1"])

Out[15]:

| Z2 | T1 | Z1=0 | Z1=1 |
|----|----|--------|--------|
| 0  | 0  | 0.0016 | 0.0197 |
| 1  | 0  | 0.0741 | 0.3838 |
| 0  | 1  | 0.0037 | 0.0171 |
| 1  | 1  | 0.1673 | 0.3329 |
In [16]:
ie.jointPosterior(["Z2","Z1"])

Out[16]:

| Z2 | Z1=0 | Z1=1 |
|----|--------|--------|
| 0  | 0.0053 | 0.0367 |
| 1  | 0.2413 | 0.7166 |
In [17]:
# this will not work:
# ie.jointPosterior(["Z3","Z1"])
# raises UndefinedElement: no joint target containing {4,5} could be found

In [18]:
ie.addJointTarget(["Z2","Z1"])
gnb.sideBySide(ie,
               gnb.getDot(ie.joinTree().toDotWithNames(bn)),
               captions=['','JoinTree'])

*(Output: the inference state and the join tree, which are unchanged since the existing joint target [Z2, Z1, T1] already contains {Z2, Z1}.)*