
# Approximate inference in aGrUM (pyAgrum)

There are several approximate inference algorithms for BNs in aGrUM (pyAgrum). They share the same API as exact inference.

• Loopy Belief Propagation: LBP is an approximate inference that applies the exact message-passing scheme of trees even when the BN is not a tree. LBP is a special case of inference: the algorithm may not converge and, even when it does, it may converge to something other than the exact posterior. LBP is however fast and usually gives reasonably good results.
• Sampling inference: sampling inference uses sampling to estimate the posterior. Sampling may be (very) slow, but these algorithms converge to the exact distribution. aGrUM implements:
• Monte Carlo sampling,
• Weighted sampling,
• Importance sampling,
• Gibbs sampling.
• Finally, aGrUM proposes the so-called 'loopy versions' of the sampling algorithms: the idea is to use LBP as a Dirichlet prior for the sampling algorithm. A loopy version of each sampling algorithm is provided.
In [1]:
import os

%matplotlib inline
from pylab import *
import matplotlib.pyplot as plt

def unsharpen(bn):
    """
    Force the parameters of the BN to be a bit further from 0 and 1
    """
    for nod in bn.nodes():
        bn.cpt(nod).translate(bn.maxParam() / 10).normalizeAsCPT()

def compareInference(ie,ie2,ax=None):
    """
    Compare two inferences by plotting all the points (posterior(ie), posterior(ie2))
    """
    exact=[]
    appro=[]
    errmax=0
    for node in bn.nodes():
        # posteriors as lists
        exact+=ie.posterior(node).tolist()
        appro+=ie2.posterior(node).tolist()
        errmax=max(errmax,
                   (ie.posterior(node)-ie2.posterior(node)).abs().max())

    if errmax<1e-10: errmax=0
    if ax is None:
        ax=plt.gca() # default axis for plt

    ax.plot(exact,appro,'ro')
    ax.set_title("{} vs {}\n {}\nMax error {:2.4} in {:2.4} seconds".format(
        str(type(ie)).split(".")[2].split("_")[0][0:-2], # name of the first inference
        str(type(ie2)).split(".")[2].split("_")[0][0:-2], # name of the second inference
        ie2.messageApproximationScheme(),
        errmax,
        ie2.currentTime())
    )

In [2]:
import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

bn=gum.loadBN("res/alarm.dsl") # the ALARM network (path assumed)
unsharpen(bn)

ie=gum.LazyPropagation(bn)
ie.makeInference()

In [3]:
gnb.showBN(bn,size='8')


### First, an exact inference

In [4]:
gnb.showInference(bn,size="18") # using LazyPropagation by default
print(ie.posterior("KINKEDTUBE"))

<KINKEDTUBE:TRUE> :: 0.116667 /<KINKEDTUBE:FALSE> :: 0.883333


# Gibbs Inference

### Gibbs inference with default parameters

Gibbs inference iterations can be stopped:

• by the value of the error (epsilon),
• by the rate of change of epsilon (MinEpsilonRate),
• by the number of iterations (MaxIteration),
• by the duration of the algorithm (MaxTime).
In [5]:
ie2=gum.GibbsSampling(bn)
ie2.setEpsilon(1e-2)
gnb.showInference(bn,engine=ie2,size="18")
print(ie2.posterior("KINKEDTUBE"))
print(ie2.messageApproximationScheme())
compareInference(ie,ie2)