Evaluation

Lettuce comes with benchmarking tools for mapping tasks. All the code for running the benchmarks is found here. The table below summarises the components of the evaluation framework; a sketch of how they fit together follows the table.

| component | description |
|-----------|-------------|
| `evaltypes` | The framework uses abstract base classes and generics to ensure the right metrics are calculated for the right pipelines. The classes used for the framework are defined here. |
| `eval_data_loaders` | Data are loaded into pipelines through data loaders that provide the right interface for metrics and pipelines. |
| `metrics` | Metrics for pipeline results are defined here. |
| `pipelines` | When running an evaluation, a pipeline generates a prediction from an input source term. |
| `eval_tests` | A pipeline test is a pipeline and the metrics used to score that pipeline. |
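
To make the relationship between these components concrete, here is a minimal, self-contained sketch of how abstract base classes and generics can tie a pipeline's output type to the metrics that score it, and how a pipeline test bundles the two. The class and method names (`Pipeline`, `Metric`, `PipelineTest`, `run`, `calculate`) are illustrative assumptions for this example, not the actual classes defined in Lettuce's eval modules.

```python
from abc import ABC, abstractmethod
from typing import Generic, TypeVar

# Hypothetical sketch: a type variable ties a pipeline's output type
# to the metrics that are allowed to score it.
P = TypeVar("P")


class Pipeline(ABC, Generic[P]):
    """Generates a prediction from an input source term."""

    @abstractmethod
    def run(self, source_term: str) -> P: ...


class Metric(ABC, Generic[P]):
    """Scores a pipeline prediction against an expected answer."""

    @abstractmethod
    def calculate(self, prediction: P, expected: P) -> float: ...


class ExactMatch(Metric[str]):
    """Toy metric: 1.0 if the prediction matches the expected string."""

    def calculate(self, prediction: str, expected: str) -> float:
        return 1.0 if prediction.strip().lower() == expected.strip().lower() else 0.0


class UpperCasePipeline(Pipeline[str]):
    """Toy pipeline standing in for a real mapping pipeline."""

    def run(self, source_term: str) -> str:
        return source_term.upper()


class PipelineTest(Generic[P]):
    """A pipeline plus the metrics used to score that pipeline."""

    def __init__(self, pipeline: Pipeline[P], metrics: list[Metric[P]]):
        self.pipeline = pipeline
        self.metrics = metrics

    def evaluate(self, source_term: str, expected: P) -> dict[str, float]:
        prediction = self.pipeline.run(source_term)
        return {type(m).__name__: m.calculate(prediction, expected) for m in self.metrics}


if __name__ == "__main__":
    test = PipelineTest(UpperCasePipeline(), [ExactMatch()])
    print(test.evaluate("acetaminophen", "ACETAMINOPHEN"))  # {'ExactMatch': 1.0}
```

Because `PipelineTest` is generic over the pipeline's output type, a metric written for one kind of prediction cannot accidentally be paired with a pipeline that produces another, which is the role the abstract base classes and generics play in the framework.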