Fact Validation Ensemble Learner
The vision of this project is to explore the possibility of training a supervised machine learning algorithm on the results of several fact validation approaches.
To achieve this vision, this project offers:
- Software which can automatically
  - validate a dataset with multiple fact validation approaches,
  - use the results of those approaches to train a supervised machine learning algorithm (sketched below), and
  - validate the dataset with the trained machine learning model
- Two datasets that can be used for evaluation
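To make the ensemble idea concrete, here is a minimal, self-contained sketch: it assumes each fact validation approach yields one confidence score per triple, stacks those scores into a feature matrix, and trains a classifier on them. The synthetic data and the choice of scikit-learn's LogisticRegression are illustrative assumptions, not Favel's actual implementation.

```python
# Illustrative sketch of the ensemble idea (not Favel's actual code):
# each fact validation approach produces a confidence score per triple,
# and the scores become the features of a supervised classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in for the outputs of three fact validation approaches:
# one confidence score in [0, 1] per triple and per approach.
n_triples, n_approaches = 200, 3
scores = rng.random((n_triples, n_approaches))

# Stand-in gold labels (1 = true fact, 0 = false fact), derived from the
# mean score plus noise purely so that there is something learnable.
labels = (scores.mean(axis=1) + 0.1 * rng.standard_normal(n_triples) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    scores, labels, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy of the ensemble:", model.score(X_test, y_test))
```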
- Analysis: Simple script to plot diagrams based on the data in Evaluation/Overview (an illustrative sketch follows this list)
- Evaluation: The software saves results to this directory. It also contains preliminary results of our experiments.
- Datasets: The datasets, including a simple example. You can find the documentation here.
- Software: The software for exploring the vision
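As a rough illustration of what such an analysis script might do, the sketch below plots one metric per experiment from a CSV overview. The file path and column names are assumptions; the actual layout of Evaluation/Overview is determined by the software.

```python
# Illustrative sketch of plotting experiment results (assumed CSV layout;
# the real format of Evaluation/Overview is defined by the Favel software).
import pandas as pd
import matplotlib.pyplot as plt

overview = pd.read_csv("Evaluation/Overview.csv")  # hypothetical path

# "experiment" and "f1_score" are hypothetical column names.
overview.plot.bar(x="experiment", y="f1_score", legend=False)
plt.ylabel("F1 score")
plt.tight_layout()
plt.savefig("overview.png")
```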
Clone the repository and change into the Software directory:
```
git clone https://github.com/saschaTrippel/favel
cd favel/Software
```
- To conduct an experiment with the software, execute the following steps:
- Create a directory inside the Evaluation directory.
  The name of the directory is the name of the experiment.
  Example: favel/Evaluation/experiment42
- Create a configuration file favel.conf inside the experiment directory.
  The configuration file defines the set of fact validation approaches and the machine learning algorithm (an illustrative sketch follows these steps).
  A basic configuration file can be found here.
  For more advanced configuration options, look here.
  Example: favel/Evaluation/experiment42/favel.conf
- Execute the software.
  For the software to be able to use the fact validation approaches, these approaches might have to be started manually.
  An exhaustive description of how to run the software can be found in the following section.
  Results will be saved to the favel/Evaluation/ directory.
  Example: python3 favel/Software/Favel.py -d favel/FinalDataset_Hard -e experiment42
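As an illustration of such a configuration file, the sketch below shows what a favel.conf could look like. The section and key names are assumptions made for this example; the linked basic configuration file is the authoritative reference for the real format.

```
# Hypothetical favel.conf -- the real sections and keys are documented
# in the linked example configuration files.
[Approaches]
# fact validation approaches to include in the ensemble
approaches = approach_a, approach_b

[MLAlgorithm]
# supervised algorithm trained on the approaches' results
algorithm = RandomForest
```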
```
python3 Favel.py [options]

-e EXPERIMENT, --experiment EXPERIMENT
        name of the experiment; corresponds with the name of the
        experiment folder in the Evaluation directory
-b EXPERIMENT, --batch EXPERIMENT
        name of the experiment; corresponds with the name of the
        experiment folder in the Evaluation directory. The experiment
        will be run in batch mode, meaning that an experiment is
        executed with every subset of the specified set of fact
        validation approaches.
-d DATA, --data DATA
        path to the dataset to validate
-w, --write
        write everything to disk. If this flag is set, all possible
        outputs are written to disk, including models, normalizers,
        predicate encoders, and dataframes. If the flag is not set,
        only the overview is written to disk.
-c, --containers
        automatically start/stop containers which encapsulate the
        fact validation approaches
-a, --automl
        use the autoML system instead of manual algorithm selection
```
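To make the batch mode described above concrete: running an experiment with every subset of the configured approaches amounts to enumerating the (presumably non-empty) subsets of that set, as in this small Python sketch with placeholder approach names.

```python
# Sketch of the batch-mode semantics: one experiment per non-empty subset
# of the configured fact validation approaches (names are placeholders).
from itertools import combinations

approaches = ["approach_a", "approach_b", "approach_c"]

for size in range(1, len(approaches) + 1):
    for subset in combinations(approaches, size):
        print("would run an experiment with:", subset)
```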
To run the unit tests, execute:
```
python3 -m unittest
```
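For orientation, running python3 -m unittest with no arguments discovers tests in files matching test*.py. A minimal test case looks like the following sketch, which is illustrative only, not one of Favel's actual tests.

```python
# test_example.py -- minimal unittest case (illustrative, not part of Favel)
import unittest

class ScoreRangeTest(unittest.TestCase):
    def test_score_is_probability(self):
        score = 0.7  # stand-in for a fact validation result
        self.assertGreaterEqual(score, 0.0)
        self.assertLessEqual(score, 1.0)

if __name__ == "__main__":
    unittest.main()
```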
More information about the included datasets can be found here.
- https://github.com/saschaTrippel/knowledgestream offers multiple algorithms