Evaluation
Lettuce comes with benchmarking tools for evaluating mapping tasks. All the code for running the benchmarks can be found here.
| component | description |
|---|---|
| evaltypes | The framework uses abstract base classes and generics to ensure the right metrics are calculated for the right pipelines. The classes that make up the framework are defined here |
| eval_data_loaders | Data loaders feed data into pipelines, providing the interface expected by metrics and pipelines |
| metrics | Metrics that score pipeline results are defined here |
| pipelines | When running an evaluation, a pipeline generates a prediction from an input source term |
| eval_tests | A pipeline test pairs a pipeline with the metrics used to score it |
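To make the relationship between these components concrete, here is a minimal sketch of how abstract base classes and generics can tie pipelines, metrics, and tests together. All class and method names below (`Pipeline`, `Metric`, `PipelineTest`, `run`, `score`, `evaluate`, and the toy `UppercasePipeline`/`ExactMatch` implementations) are illustrative assumptions, not Lettuce's actual API:

```python
from abc import ABC, abstractmethod
from typing import Generic, List, Tuple, TypeVar

# The type of a pipeline's prediction; generics ensure a Metric[T]
# is only paired with a Pipeline[T] producing the same T.
T = TypeVar("T")


class Pipeline(ABC, Generic[T]):
    """Generates a prediction from an input source term."""

    @abstractmethod
    def run(self, source_term: str) -> T: ...


class Metric(ABC, Generic[T]):
    """Scores a single prediction against the expected answer."""

    @abstractmethod
    def score(self, prediction: T, expected: T) -> float: ...


class UppercasePipeline(Pipeline[str]):
    """Toy pipeline: 'predicts' the upper-cased source term."""

    def run(self, source_term: str) -> str:
        return source_term.upper()


class ExactMatch(Metric[str]):
    """1.0 if the prediction equals the expected string, else 0.0."""

    def score(self, prediction: str, expected: str) -> float:
        return 1.0 if prediction == expected else 0.0


class PipelineTest(Generic[T]):
    """A pipeline plus the metrics used to score it."""

    def __init__(self, pipeline: Pipeline[T], metrics: List[Metric[T]]):
        self.pipeline = pipeline
        self.metrics = metrics

    def evaluate(self, data: List[Tuple[str, T]]) -> List[List[float]]:
        # For each (source_term, expected) pair, run the pipeline
        # once and apply every metric to the prediction.
        results = []
        for source_term, expected in data:
            prediction = self.pipeline.run(source_term)
            results.append([m.score(prediction, expected) for m in self.metrics])
        return results


test = PipelineTest(UppercasePipeline(), [ExactMatch()])
print(test.evaluate([("aspirin", "ASPIRIN"), ("ibuprofen", "x")]))
# → [[1.0], [0.0]]
```

The type parameter is the key design choice: because `PipelineTest` is generic in `T`, a test can only combine a pipeline and metrics that agree on the prediction type, which is the guarantee the `evaltypes` module is described as providing.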