Click here to hide/show the list of notebooks.
  pyAgrum on notebooks   pyAgrum jupyter
☰  klForBns 
pyAgrum 0.16.2   
Zipped notebooks   
generation: 2019-10-02 10:58  

Creative Commons License
This pyAgrum's notebook is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

In [1]:
import os

%matplotlib inline

from pylab import *
import matplotlib.pyplot as plt

Initialisation

  • importing pyAgrum
  • importing pyAgrum.lib tools
  • loading a BN
In [2]:
import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

Create a first BN : bn

In [3]:
bn=gum.loadBN(os.path.join("res","asia.bif"))
# randomly re-generate parameters for every Conditional Probability Table
bn.generateCPTs() 
bn
Out[3]:
G visit_to_Asia? visit_to_Asia? tuberculosis? tuberculosis? visit_to_Asia?->tuberculosis? tuberculos_or_cancer? tuberculos_or_cancer? tuberculosis?->tuberculos_or_cancer? positive_XraY? positive_XraY? tuberculos_or_cancer?->positive_XraY? dyspnoea? dyspnoea? tuberculos_or_cancer?->dyspnoea? lung_cancer? lung_cancer? lung_cancer?->tuberculos_or_cancer? smoking? smoking? smoking?->lung_cancer? bronchitis? bronchitis? smoking?->bronchitis? bronchitis?->dyspnoea?

Create a second BN : bn2

In [4]:
bn2=gum.loadBN(os.path.join("res","asia.bif"))
bn2.generateCPTs()
bn2
Out[4]:
G visit_to_Asia? visit_to_Asia? tuberculosis? tuberculosis? visit_to_Asia?->tuberculosis? tuberculos_or_cancer? tuberculos_or_cancer? tuberculosis?->tuberculos_or_cancer? positive_XraY? positive_XraY? tuberculos_or_cancer?->positive_XraY? dyspnoea? dyspnoea? tuberculos_or_cancer?->dyspnoea? lung_cancer? lung_cancer? lung_cancer?->tuberculos_or_cancer? smoking? smoking? smoking?->lung_cancer? bronchitis? bronchitis? smoking?->bronchitis? bronchitis?->dyspnoea?

bn vs bn2 : different parameters

In [5]:
gnb.sideBySide(bn.cpt(3),bn2.cpt(3),
              captions=["a CPT in bn","same CPT in bn2"])
positive_XraY?
tuberculos_or_cancer?
0
1
0
0.51350.4865
1
0.36580.6342
positive_XraY?
tuberculos_or_cancer?
0
1
0
0.20840.7916
1
0.47730.5227
a CPT in bn
same CPT in bn2

Exact and (Gibbs) approximated KL-divergence

In order to compute KL-divergence, we just need to be sure that the 2 distributions are defined on the same domain (same variables, etc.)

Exact KL

In [6]:
g1=gum.ExactBNdistance(bn,bn2)
print(g1.compute())
{'klPQ': 2.998589759908435, 'errorPQ': 0, 'klQP': 2.534769888639164, 'errorQP': 0, 'hellinger': 0.8452231099587589, 'bhattacharya': 0.4419232829610945, 'jensen-shannon': 0.43612137405177476}

If the models are not on the same domain :

In [7]:
bn_different_domain=gum.loadBN(os.path.join("res","alarm.dsl"))

# g=gum.BruteForceKL(bn,bn_different_domain) # a KL-divergence between asia and alarm ... :(
#
# would cause
#---------------------------------------------------------------------------
#OperationNotAllowed                       Traceback (most recent call last)
#
#OperationNotAllowed: this operation is not allowed : KL : the 2 BNs are not compatible (not the same vars : visit_to_Asia?)

Gibbs-approximated KL

In [8]:
g=gum.GibbsBNdistance(bn,bn2)
g.setVerbosity(True)
g.setMaxTime(120)
g.setBurnIn(5000)
g.setEpsilon(1e-7)
g.setPeriodSize(500)
In [9]:
print(g.compute())
print("Computed in {0} s".format(g.currentTime()))
{'klPQ': 3.07072983730804, 'errorPQ': 0, 'klQP': 2.311620545587494, 'errorQP': 0, 'hellinger': 0.8330372487066457, 'bhattacharya': 0.4533404216877611, 'jensen-shannon': 0.4241746155502716}
Computed in 3.631138 s
In [10]:
print("--")

print(g.messageApproximationScheme())
print("--")

print("Temps de calcul : {0}".format(g.currentTime()))
print("Nombre d'itérations : {0}".format(g.nbrIterations()))

p=plot(g.history(), 'g')
--
stopped with epsilon=1e-07
--
Temps de calcul : 3.631138
Nombre d'itérations : 203000

Animation of Gibbs KL

Since it may be difficult to know what happens during approximation algorithm, pyAgrum allows to follow the iteration using animated matplotlib figure

In [11]:
g=gum.GibbsBNdistance(bn,bn2)
g.setMaxTime(60)
g.setBurnIn(500)
g.setEpsilon(1e-7)
g.setPeriodSize(5000)
In [12]:
gnb.animApproximationScheme(g) # logarithmique scale for Y
g.compute()
Out[12]:
{'klPQ': 2.9898389048329075,
 'errorPQ': 0,
 'klQP': 2.5815216365200375,
 'errorQP': 0,
 'hellinger': 0.8481365593676411,
 'bhattacharya': 0.4383083282199043,
 'jensen-shannon': 0.43902246565305747}
In [ ]: