pyAgrum on notebooks
DirichletPriorAndWeigthedDatabase
 pyAgrum 0.15.1 generation: 2019-06-16 19:06

# Dirichlet prior as database

BNLearner gives access to many priors for parameter and structure learning. One of them is the Dirichlet prior, which needs a prior value for every possible parameter in a BN. aGrUM/pyAgrum allows using a database as the source of this Dirichlet prior.
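The idea can be sketched in plain Python (illustrative names only, not the pyAgrum API): a Dirichlet prior acts as pseudo-counts added to the observed counts, so a prior database simply contributes its own counts, scaled by a chosen weight.

```python
from collections import Counter

def estimate(data_counts, prior_counts, prior_weight=1.0):
    """Estimate P(X) from observed counts plus weighted Dirichlet pseudo-counts."""
    values = set(data_counts) | set(prior_counts)
    total = sum(data_counts.values()) + prior_weight * sum(prior_counts.values())
    return {v: (data_counts.get(v, 0) + prior_weight * prior_counts.get(v, 0)) / total
            for v in values}

data = Counter({"yes": 18, "no": 2})    # counts from the observation database
prior = Counter({"yes": 5, "no": 5})    # counts from the prior database

print(estimate(data, prior, prior_weight=1.0))  # pulls the estimate toward the prior
```

With `prior_weight=0` the prior database is ignored; increasing it pulls the estimate toward the prior's proportions — exactly the trade-off explored with `ratio` below.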

In [1]:
%matplotlib inline
from pylab import *
import matplotlib.pyplot as plt

import os

import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

sizePrior=30000
sizeData=20000

#the bases will be saved in "out/dirichlet_database.csv" and "out/observation_database.csv"
dirichletDatabase=os.path.join("out","dirichlet_database.csv")
obsDatabase=os.path.join("out","observation_database.csv")


## Generating databases for the Dirichlet prior and for the learning

In [2]:
bnPrior = gum.fastBN("A->B;C;D")
bnData = gum.fastBN("A->B->C->D")
bnData.cpt("B").fillWith([0.99,0.01,
                          0.01,0.99])
bnData.cpt("C").fillWith([0.9,0.1,
                          0.1,0.9])
bnData.cpt("D").fillWith([0.9,0.1,
                          0.1,0.9])
bnPrior.cpt("B").fillWith(bnData.cpt("B"))

gum.generateCSV(bnPrior, dirichletDatabase, sizePrior, with_labels=True,random_order=True)

gum.generateCSV(bnData, obsDatabase, sizeData, with_labels=True,random_order=False)

gnb.sideBySide(bnData,bnPrior,
               captions=[f"Database ({sizeData} cases)",f"Prior ({sizePrior} cases)"])

[side-by-side graphs — Database (20000 cases): A→B→C→D; Prior (30000 cases): A→B, C, D]

## Learning databases

In [3]:
# learn a structure independently from each database
learnerData = gum.BNLearner(obsDatabase)
learnerPrior = gum.BNLearner(dirichletDatabase)
learnerData.useScoreBIC()
learnerPrior.useScoreBIC()
gnb.sideBySide(learnerData.learnBN(),learnerPrior.learnBN(),
               captions=["Learning from Data","Learning from Prior"])

[side-by-side learned graphs — Learning from Data: B→A, C→B, C→D; Learning from Prior: A→B, C, D]

## Learning with Dirichlet prior

Now we use the Dirichlet prior. To get an idea of its influence, we vary the pair of weights (data, prior) from $(1,0)$ to $(0,1)$ using a $ratio \in [0,1]$: the data receive weight $(1-ratio) \cdot sizeData$ and the prior receives weight $ratio \cdot sizePrior$.
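This weighting scheme, and the "datasize" values shown in the captions below, can be checked with a few lines of plain Python (a sketch of the formula only, not the pyAgrum API):

```python
sizePrior, sizeData = 30000, 20000

def effective_weights(ratio):
    """Weights given to the Dirichlet prior and to the data for a given ratio."""
    return ratio * sizePrior, (1 - ratio) * sizeData

for r in (0.0, 0.5, 1.0):
    prior_w, data_w = effective_weights(r)
    # at ratio=0 only the data count; at ratio=1 only the prior counts
    print(f"ratio={r}: prior={prior_w}, data={data_w}, total={prior_w + data_w}")
```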

In [4]:
def learnWithRatio(ratio):
    # bnPrior is used to give the variables and their domains
    learner = gum.BNLearner(obsDatabase, bnPrior)
    learner.useAprioriDirichlet(dirichletDatabase,ratio*sizePrior)
    learner.setDatabaseWeight((1-ratio)*sizeData)
    learner.useScoreBIC() # or another score with no included prior
    return learner.learnBN()

ratios=[0.0,0.01,0.05,0.2,0.5,0.8,0.9,0.95,0.99,1.0]
bns=[learnWithRatio(r) for r in ratios]
gnb.sideBySide(*bns,
               captions=[f"with ratio {r}<br/> [datasize : {r*sizePrior+(1-r)*sizeData}]" for r in ratios])

[10 learned graphs, one per ratio, captioned "with ratio r [datasize : r·sizePrior+(1−r)·sizeData]" for each ratio from 0.0 to 1.0]

The BNs learned when mixing the two data sources (with $ratio \in [0.01,0.99]$) look much more complex than both the data structure and the Dirichlet structure. This may seem odd. However, if one looks at the mutual information:

In [5]:
gnb.sideBySide(*[gnb.getInformation(bn) for bn in bns],
               captions=[f"with ratio {r}<br/> [datasize : {r*sizePrior+(1-r)*sizeData}]" for r in ratios],
               valign="bottom")

[the same 10 learned graphs annotated with arc mutual information, one per ratio, same captions]

These extra arcs represent weak, spurious correlations induced by the mixing of probability distributions (see Wellman and Peacock (1999)); they become weaker as the weight of the prior increases.
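The mixing effect can be reproduced in a few lines of plain Python (an illustration, not the pyAgrum API): two distributions in which $A$ and $B$ are each independent can produce a mixture in which they are dependent, i.e. with strictly positive mutual information.

```python
import itertools, math

def joint(pa, pb):
    """Joint distribution over (A,B) with A and B independent Bernoullis."""
    return {(a, b): (pa if a else 1-pa) * (pb if b else 1-pb)
            for a, b in itertools.product((0, 1), repeat=2)}

p1 = joint(0.9, 0.9)   # A ⟂ B in each component...
p2 = joint(0.1, 0.1)
mix = {k: 0.5*p1[k] + 0.5*p2[k] for k in p1}   # ...but not in the mixture

# mutual information of the mixture (in bits)
pa = {a: sum(v for (x, y), v in mix.items() if x == a) for a in (0, 1)}
pb = {b: sum(v for (x, y), v in mix.items() if y == b) for b in (0, 1)}
mi = sum(v * math.log2(v / (pa[a]*pb[b])) for (a, b), v in mix.items())
print(mi)  # strictly positive: mixing created a spurious A–B correlation
```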

Another way to look at the mixing is to plot the Kullback-Leibler divergences between the learned BNs and the two templates ($bnData$ and $bnPrior$).

In [6]:
def kls(i):
    kl=gum.ExactBNdistance(bnPrior,bns[i])
    y1=kl.compute()
    kl=gum.ExactBNdistance(bnData,bns[i])
    y2=kl.compute()
    return y1['klPQ'],y2['klPQ'],y1['klQP'],y2['klQP']

fig=figure(figsize=(10,6))
ax=fig.add_subplot(1,1,1)

x=ratios
y1,y2,y3,y4=zip(*[kls(i) for i in range(len(ratios))])
ax.plot(x,y1,label="M-projection with bnPrior")
ax.plot(x,y3,label="I-projection with bnPrior")
ax.plot(x,y2,label="M-projection with bnData")
ax.plot(x,y4,label="I-projection with bnData")
ax.set_xticks(ratios)
ax.tick_params(rotation=90)
ax.set_xlabel("weight ratio between data and prior")
ax.set_ylabel("KL")
ax.legend(bbox_to_anchor=(0.15, 0.88, 0.7, .102), loc=3,ncol=2, mode="expand", borderaxespad=0.)
t=ax.set_title("Weight ratio's Impact on KLs")


We can use other divergences (or distances)

In [7]:
def distances(i):
    kl=gum.ExactBNdistance(bnPrior,bns[i])
    y1=kl.compute()
    kl=gum.ExactBNdistance(bnData,bns[i])
    y2=kl.compute()
    return y1['hellinger'],y2['hellinger'],y1['bhattacharya'],y2['bhattacharya'],y1['jensen-shannon'],y2['jensen-shannon']

fig=figure(figsize=(10,6))
ax=fig.add_subplot(1,1,1)

x=ratios
y1,y2,y3,y4,y5,y6=zip(*[distances(i) for i in range(len(ratios))])
ax.plot(x,y1,label="Hellinger with bnPrior")
ax.plot(x,y3,label="Bhattacharya with bnPrior")
ax.plot(x,y5,label="Jensen-Shannon with bnPrior")
ax.plot(x,y2,label="Hellinger with bnData")
ax.plot(x,y4,label="Bhattacharya with bnData")
ax.plot(x,y6,label="Jensen-Shannon with bnData")
ax.set_xticks(ratios)
ax.tick_params(rotation=90)
ax.set_xlabel("weight ratio between data and prior")
ax.set_ylabel("distances")
ax.legend(bbox_to_anchor=(0.15, 0.85, 0.7, .102), loc=3,ncol=2, mode="expand", borderaxespad=0.)
t=ax.set_title("Weight ratio's Impact on distances")


Less informative but still possible: we can plot the structural scores (precision, recall, etc.) from a pyAgrum.lib.bn_vs_bn.GraphicalBNComparator (see 07-ComparingBN for more).

In [8]:
import pyAgrum.lib.bn_vs_bn as gcm

def scores(i):
    cmp=gcm.GraphicalBNComparator(bnPrior,bns[i])
    y1=cmp.scores()
    cmp=gcm.GraphicalBNComparator(bnData,bns[i])
    y2=cmp.scores()
    return y1['recall'],y2['recall'],y1['precision'],y2['precision'],y1['fscore'],y2['fscore'],y1['dist2opt'],y2['dist2opt']

fig=figure(figsize=(20,6))
ax1=fig.add_subplot(1,2,1)
ax2=fig.add_subplot(1,2,2)

x=ratios
y1,y2,y3,y4,y5,y6,y7,y8=zip(*[scores(i) for i in range(len(ratios))])
ax1.plot(x,y1,label="recall with bnPrior")
ax1.plot(x,y3,label="precision with bnPrior")
ax1.plot(x,y5,label="fscore with bnPrior")
ax1.plot(x,y7,label="dist2opt with bnPrior")

ax2.plot(x,y2,label="recall with bnData")
ax2.plot(x,y4,label="precision with bnData")
ax2.plot(x,y6,label="fscore with bnData")
ax2.plot(x,y8,label="dist2opt with bnData")

ax1.set_xticks(ratios)
ax1.tick_params(rotation=90)
ax1.set_xlabel("weight ratio between data and prior")
ax1.set_ylabel("score")
ax1.legend(bbox_to_anchor=(0.15, 0.88, 0.7, .102), loc=3,ncol=2, mode="expand", borderaxespad=0.)
ax1.set_title("Weight ratio's Impact on scores")

ax2.set_xticks(ratios)
ax2.tick_params(rotation=90)
ax2.set_xlabel("weight ratio between data and prior")
ax2.set_ylabel("score")
ax2.legend(bbox_to_anchor=(0.15, 0.88, 0.7, .102), loc=3,ncol=2, mode="expand", borderaxespad=0.)
ax2.set_title("Weight ratio's Impact on scores");


### Weighted database and records

A database can be weighted, as done above. But you can also set the weight record by record in the database. Note that the weight of the database is then the sum of the weights of its records. Hence

learner.setDatabaseWeight(2.5)


is equivalent to

siz=learner.nbRows()
for i in range(siz):
    learner.setRecordWeight(i,2.5/siz)
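A point worth noticing, sketched here in plain Python (not the pyAgrum API): rescaling all record weights uniformly leaves the parameter estimates unchanged — the database weight only matters relative to another weighted source, such as the Dirichlet prior above.

```python
records = [(1, 0), (0, 1), (0, 1), (0, 0), (1, 0), (0, 1), (1, 1), (0, 1)]

def p_x0(weights):
    """Weighted estimate of P(X=0) from (X,Y) records."""
    total = sum(weights)
    return sum(w for (x, _), w in zip(records, weights) if x == 0) / total

uniform = [1.0] * len(records)                  # default: each record has weight 1
rescaled = [2.5 / len(records)] * len(records)  # the setDatabaseWeight(2.5) spread

print(p_x0(uniform), p_x0(rescaled))  # identical estimates
```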

In [9]:
bn=gum.fastBN("X->Y")

#the base will be saved in basefile="out/dataW.csv"
basefile=os.path.join("out","dataW.csv")


In the next two cells, we compute the parameters of bn using two databases: the first contains 8 rows, each of weight 1; the second contains only 4 rows, but two of them carry different weights. The sum of the weights in this second database is 8 as well.

So the parameters are exactly the same.

In [10]:
data="""X,Y
1,0
0,1
0,1
0,0
1,0
0,1
1,1
0,1
"""
with open(basefile,"w") as src:
    src.write(data)
learner=gum.BNLearner(basefile)
bn1=learner.learnParameters(bn.dag())
gnb.sideBySide(bn1.cpt("X"),bn1.cpt("Y"))

P(X):        X=0     X=1
             0.6250  0.3750

P(Y|X):      Y=0     Y=1
     X=0     0.2000  0.8000
     X=1     0.6667  0.3333
In [11]:
data="""X,Y
0,0
1,0
0,1
1,1
"""
with open(basefile,"w") as src:
    src.write(data)
learner=gum.BNLearner(basefile)

learner.setRecordWeight(1,2.0)
learner.setRecordWeight(2,4.0)

bn2=learner.learnParameters(bn.dag())
gnb.sideBySide(bn2.cpt("X"),bn2.cpt("Y"))

P(X):        X=0     X=1
             0.6250  0.3750

P(Y|X):      Y=0     Y=1
     X=0     0.2000  0.8000
     X=1     0.6667  0.3333
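The identity of the two results can be verified by hand with weighted counts (plain Python, not the pyAgrum API): both databases induce the same weighted counts, hence the same CPTs.

```python
# base 1: the 8 unit-weight rows of the first database
base1 = [((1,0),1),((0,1),1),((0,1),1),((0,0),1),((1,0),1),((0,1),1),((1,1),1),((0,1),1)]
# base 2: 4 rows, rows 1 and 2 reweighted to 2 and 4 (total weight 8 as well)
base2 = [((0,0),1),((1,0),2),((0,1),4),((1,1),1)]

def cpts(base):
    """Weighted maximum-likelihood estimates of P(X) and P(Y|X)."""
    wx = {0: 0.0, 1: 0.0}
    wxy = {(x, y): 0.0 for x in (0, 1) for y in (0, 1)}
    for (x, y), w in base:
        wx[x] += w
        wxy[(x, y)] += w
    total = sum(wx.values())
    px = {x: wx[x]/total for x in wx}
    py_given_x = {(x, y): wxy[(x, y)]/wx[x] for (x, y) in wxy}
    return px, py_given_x

print(cpts(base1) == cpts(base2))  # True: same weighted counts, same parameters
```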