import math
import os

import numpy as np
import pandas

import pyAgrum as gum
import pyAgrum.lib.notebook as gnb
from pyAgrum.lib.bn2roc import showROC_PR
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix
This notebook is an introduction to the Kaggle Titanic challenge. The goal here is not to produce the best possible classifier, at least not yet, but to show how pyAgrum and Bayesian networks can be used to easily and quickly explore and understand data.
To understand this notebook, basic knowledge of Bayesian networks is required. If you are looking for an introduction to pyAgrum, check this notebook.
This notebook presents three different Bayesian network techniques to answer the Kaggle Titanic challenge. In the first approach, we will answer the challenge without using the training set, relying only on our prior knowledge about shipwrecks. In the second approach, we will use only the training set with pyAgrum's machine learning algorithms. Finally, in the third approach, we will use both prior knowledge about shipwrecks and machine learning.
Before we start, some disclaimers about aGrUM and pyAgrum.
aGrUM is a C++ library designed for easily building applications using graphical models such as Bayesian networks, influence diagrams, decision trees or Markov decision processes.
pyAgrum is a Python wrapper for the C++ aGrUM library. It provides a high-level interface to the part of aGrUM that allows creating, handling and making computations with Bayesian networks. The module is mainly an application of the SWIG interface generator. Custom-written code is added to simplify and extend the aGrUM API.
Both projects are open source and can be freely downloaded from aGrUM's gitlab repository or installed using pip or anaconda.
If you have questions, remarks or suggestions, feel free to contact us at info@agrum.org.
traindf=pandas.read_csv('res/titanic/train.csv')
testdf=pandas.merge(pandas.read_csv('res/titanic/test.csv'),
                    pandas.read_csv('res/titanic/gender_submission.csv'),
                    on="PassengerId")
This merges the test base with the fact that a passenger has survived or not.
for k in traindf.keys():
    print(f'{k}: {len(traindf[k].unique())}')
PassengerId: 891
Survived: 2
Pclass: 3
Name: 891
Sex: 2
Age: 89
SibSp: 7
Parch: 7
Ticket: 681
Fare: 248
Cabin: 148
Embarked: 4
Looking at the number of unique values for each variable is necessary since Bayesian networks are discrete models. We will want to reduce the domain size of some discrete variables (like Age) and discretize continuous variables (like Fare).
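For example, a continuous variable such as Fare could be discretized into quantile bins. A minimal pure-Python sketch of the idea (the quartile edges and bin labels here are illustrative choices, not part of the challenge):

```python
# Hypothetical sketch: bin a continuous variable (fares) into quartiles.
def quartile_edges(values):
    """Return three quartile cut points for a list of values."""
    s = sorted(values)
    n = len(s)
    return [s[n // 4], s[n // 2], s[(3 * n) // 4]]

def to_bin(value, edges):
    """Map a value to one of four ordered labels using the cut points."""
    labels = ['low', 'medium', 'high', 'very_high']
    for label, edge in zip(labels, edges):
        if value <= edge:
            return label
    return labels[-1]

fares = [7.25, 8.05, 13.0, 26.55, 71.28, 151.55, 512.33, 7.9, 10.5, 30.0]
edges = quartile_edges(fares)
binned = [to_bin(f, edges) for f in fares]
```

In practice pandas.qcut does the same job in one call; the point is only that continuous variables must be mapped to a small set of labels before learning a discrete Bayesian network.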
For starters, you can filter out variables with a large number of values. Choosing a large threshold will have an impact on performance, which boils down to how much CPU and RAM you have at your disposal. Here, we choose to filter out any variable with more than 15 different outcomes.
for k in traindf.keys():
    if len(traindf[k].unique())<=15:
        print(k)
Survived
Pclass
Sex
SibSp
Parch
Embarked
This leaves us with 6 variables, not many, but still enough to learn a Bayesian network. We will just add one more variable by reducing the cardinality of the Age variable.
testdf=pandas.merge(pandas.read_csv('res/titanic/test.csv'),
                    pandas.read_csv('res/titanic/gender_submission.csv'),
                    on="PassengerId")
def forAge(row):
    try:
        age = float(row['Age'])
        if age < 1:
            # return '[0;1['
            return 'baby'
        elif age < 6:
            # return '[1;6['
            return 'toddler'
        elif age < 12:
            # return '[6;12['
            return 'kid'
        elif age < 21:
            # return '[12;21['
            return 'teen'
        elif age < 80:
            # return '[21;80['
            return 'adult'
        else:
            # return '[80;200]'
            return 'old'
    except ValueError:
        return np.nan
def forBoolean(row, col):
    try:
        val = int(row[col])
        if val >= 1:
            return "True"
        else:
            return "False"
    except ValueError:
        return "False"
def forGender(row):
    if row['Sex'] == "male":
        return "Male"
    else:
        return "Female"
testdf
PassengerId  Pclass  Name  Sex  Age  SibSp  Parch  Ticket  Fare  Cabin  Embarked  Survived  

0  892  3  Kelly, Mr. James  male  34.5  0  0  330911  7.8292  NaN  Q  0 
1  893  3  Wilkes, Mrs. James (Ellen Needs)  female  47.0  1  0  363272  7.0000  NaN  S  1 
2  894  2  Myles, Mr. Thomas Francis  male  62.0  0  0  240276  9.6875  NaN  Q  0 
3  895  3  Wirz, Mr. Albert  male  27.0  0  0  315154  8.6625  NaN  S  0 
4  896  3  Hirvonen, Mrs. Alexander (Helga E Lindqvist)  female  22.0  1  1  3101298  12.2875  NaN  S  1 
...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ...  ... 
413  1305  3  Spector, Mr. Woolf  male  NaN  0  0  A.5. 3236  8.0500  NaN  S  0 
414  1306  1  Oliva y Ocana, Dona. Fermina  female  39.0  0  0  PC 17758  108.9000  C105  C  1 
415  1307  3  Saether, Mr. Simon Sivertsen  male  38.5  0  0  SOTON/O.Q. 3101262  7.2500  NaN  S  0 
416  1308  3  Ware, Mr. Frederick  male  NaN  0  0  359309  8.0500  NaN  S  0 
417  1309  3  Peter, Master. Michael J  male  NaN  1  1  2668  22.3583  NaN  C  0 
418 rows × 12 columns
When pretreating data, you will want to wrap your changes inside a function; this will help you keep track of your changes and easily compare them.
def pretreat(df):
    if 'Survived' in df.columns:
        df['Survived'] = df.apply(lambda row: forBoolean(row, 'Survived'), axis=1)
    df['Age'] = df.apply(forAge, axis=1)
    df['SibSp'] = df.apply(lambda row: forBoolean(row, 'SibSp'), axis=1)
    df['Parch'] = df.apply(lambda row: forBoolean(row, 'Parch'), axis=1)
    df['Sex'] = df.apply(forGender, axis=1)
    dropped_cols = [col for col in ['PassengerId', 'Name', 'Ticket', 'Fare', 'Cabin'] if col in df.columns]
    df = df.drop(dropped_cols, axis=1)
    df = df.rename(index=str, columns={'Sex': 'Gender', 'SibSp': 'Siblings', 'Parch': 'Parents'})
    df.dropna(inplace=True)
    return df
traindf = pandas.read_csv('res/titanic/train.csv')
testdf = pandas.merge(pandas.read_csv('res/titanic/test.csv'),
                      pandas.read_csv('res/titanic/gender_submission.csv'),
                      on="PassengerId")
traindf = pretreat(traindf)
testdf = pretreat(testdf)
We will need to save this intermediate learning database, since pyAgrum accepts only files as inputs. As a rule of thumb, save your CSV using commas as separators and do not quote values when you plan to use them with pyAgrum.
import csv
traindf.to_csv('res/titanic/post_train.csv', index=False)
testdf.to_csv('res/titanic/post_test.csv', index=False)
In some cases, we might not have any data to learn from. In such cases, we can rely on experts to provide the correlations between variables and the conditional probabilities.
It can be simpler to start with a simple topology, leaving room to add more complex correlations as the model is confronted with data. Here, we will use three hypotheses:
1. Age, Gender, Siblings and Parents each depend only on Survived (the variables are conditionally independent given Survived).
2. Women and children are more likely to survive.
3. Passengers travelling with siblings or parents aboard are less likely to survive.
The first assumption results in the following DAG for our Bayesian network:
bn = gum.BayesNet("Surviving Titanic")
bn = gum.fastBN("Age{baby|toddler|kid|teen|adult|old}<-Survived{False|True}->Gender{Female|Male};Siblings{False|True}<-Survived->Parents{False|True}")
print(bn.variable("Survived"))
print(bn.variable("Age"))
print(bn.variable("Gender"))
print(bn.variable("Siblings"))
print(bn.variable("Parents"))
bn
Survived:Labelized({False|True})
Age:Labelized({baby|toddler|kid|teen|adult|old})
Gender:Labelized({Female|Male})
Siblings:Labelized({False|True})
Parents:Labelized({False|True})
Hypotheses two and three can help us define the parameters of this Bayesian network. Remember that we assume we do not have any data to learn from, so we will use simple statements such as "a woman is 10 times more likely to survive than a man". We can then normalize the values to obtain a proper conditional probability distribution.
This technique may not be the most precise or scientifically sound; however, it has the advantage of being easy to use.
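The arithmetic behind this is easy to check by hand. A small sketch of what normalizeAsCPT() does to one row of expert-given odds (plain Python, not pyAgrum):

```python
# Divide each odds value by the row total to get a distribution summing to 1.
def normalize(odds):
    total = sum(odds)
    return [x / total for x in odds]

# "100 times more likely to die than to survive":
prior = normalize([100, 1])    # -> [100/101, 1/101], about [0.9901, 0.0099]
# "a woman is 10 times more likely to survive than a man":
gender = normalize([10, 1])    # -> [10/11, 1/11], about [0.9091, 0.0909]
```

These ratios are exactly the 0.9901/0.0099 and 0.9091/0.0909 values that appear in the CPTs.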
bn.cpt('Survived')[:] = [100, 1]
bn.cpt('Survived').normalizeAsCPT()
bn.cpt('Survived')

False   True
0.9901  0.0099
bn.cpt('Age')[{'Survived':0}] = [ 1, 1, 1, 10, 10, 1]
bn.cpt('Age')[{'Survived':1}] = [ 10, 10, 10, 1, 1, 10]
bn.cpt('Age').normalizeAsCPT()
bn.cpt('Age')

Survived  baby    toddler  kid     teen    adult   old
False     0.0417  0.0417   0.0417  0.4167  0.4167  0.0417
True      0.2381  0.2381   0.2381  0.0238  0.0238  0.2381
bn.cpt('Gender')[{'Survived':0}] = [ 1, 1]
bn.cpt('Gender')[{'Survived':1}] = [ 10, 1]
bn.cpt('Gender').normalizeAsCPT()
bn.cpt('Gender')

Survived  Female  Male
False     0.5000  0.5000
True      0.9091  0.0909
bn.cpt('Siblings')[{'Survived':0}] = [ 1, 10]
bn.cpt('Siblings')[{'Survived':1}] = [ 10, 1]
bn.cpt('Siblings').normalizeAsCPT()
bn.cpt('Siblings')

Survived  False   True
False     0.0909  0.9091
True      0.9091  0.0909
bn.cpt('Parents')[{'Survived':0}] = [ 1, 10]
bn.cpt('Parents')[{'Survived':1}] = [ 10, 1]
bn.cpt('Parents').normalizeAsCPT()
bn.cpt('Parents')

Survived  False   True
False     0.0909  0.9091
True      0.9091  0.0909
Now we can start using the Bayesian network and check that our hypotheses hold.
gnb.showInference(bn,size="10")
We can see here that most passengers (99% of them) will not survive and that we have almost as many women (50.4%) as men (49.6%). The majority of passengers are either teenagers or adults. Finally, most passengers had siblings or parents aboard.
Recall that we have not used any data to learn the Bayesian network's parameters, and our expert did not have any knowledge about the passengers aboard the Titanic.
gnb.showInference(bn,size="10", evs={'Survived':'False'})
gnb.showInference(bn,size="10", evs={'Survived':'True'})
Here, we can see that our second and third hypotheses hold: when we enter evidence that a passenger survived, that passenger is more likely to be a woman with no siblings or parents aboard. Conversely, if we observe that a passenger did not survive, that passenger is more likely to be a man with siblings or parents aboard.
gnb.showInference(bn,size="10", evs={'Survived':'True', 'Gender':'Male'})
gnb.showInference(bn,size="10", evs={'Gender':'Male'})
This validates our first hypothesis: if we know whether a passenger survived, then evidence about that passenger does not change our belief about the other variables. On the contrary, if we do not know whether a passenger survived, then evidence about that passenger will change our belief about the other variables, including whether he or she survived.
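This conditional-independence property of the naive Bayes structure can also be checked numerically outside pyAgrum. A hand-rolled enumeration on a toy version of the model, using the CPT values defined above (the helper posterior_gender is a hypothetical name, not pyAgrum API):

```python
# Naive-Bayes parameters taken from the CPTs defined earlier.
P_S = {True: 0.0099, False: 0.9901}              # P(Survived)
P_G = {True: {'F': 0.9091, 'M': 0.0909},         # P(Gender | Survived)
       False: {'F': 0.5, 'M': 0.5}}
P_Sib = {True: {'no': 0.9091, 'yes': 0.0909},    # P(Siblings | Survived)
         False: {'no': 0.0909, 'yes': 0.9091}}

def posterior_gender(s, sib=None):
    """P(Gender | Survived=s [, Siblings=sib]) by enumeration."""
    weights = {}
    for g in ('F', 'M'):
        w = P_S[s] * P_G[s][g]
        if sib is not None:
            w *= P_Sib[s][sib]   # the same factor for every g
        weights[g] = w
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}

# Once Survived is observed, adding Siblings evidence changes nothing:
p1 = posterior_gender(True)
p2 = posterior_gender(True, sib='yes')
# p1 and p2 agree (up to float rounding): Gender is independent of
# Siblings given Survived, as the DAG predicts.
```

The extra evidence multiplies every term by the same constant, which cancels in the normalization; that is exactly the d-separation of the two child nodes by their common parent.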
ie=gum.LazyPropagation(bn)
def init_belief(engine):
    # Initialize evidence on every variable except the target
    for var in engine.BN().names():
        if var != "Survived":
            engine.addEvidence(var, 0)