

see our preprint Schälte et al., bioRxiv 2021
Proof: Yep.
In practice: train a regression model $s: y \mapsto \lambda(\theta) = (\theta^1,\ldots,\theta^k)$ mapping data to (transformed) model parameters, and use its predictions as summary statistics.
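A minimal sketch of this idea, using a hypothetical linear toy model and plain least squares in place of pyABC's predictor classes (all names here are illustrative, not part of pyABC):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical toy model: data y depends linearly on parameter theta, plus noise
def simulate(theta):
    return theta * np.array([1.0, 2.0, 3.0]) + rng.normal(0.0, 0.1, size=3)

# training set of (parameter, data) pairs, e.g. from prior simulations
thetas = rng.uniform(0.0, 10.0, size=200)
ys = np.stack([simulate(t) for t in thetas])

# fit the regression model s: y -> theta by ordinary least squares (with intercept)
X = np.hstack([ys, np.ones((len(ys), 1))])
coef, *_ = np.linalg.lstsq(X, thetas, rcond=None)

def s(y):
    """Regression-based summary statistic: the predicted parameter value."""
    return np.append(y, 1.0) @ coef
```

In an ABC run, distances would then be computed between `s(y_observed)` and `s(y_simulated)` instead of between the raw data vectors.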
from pyabc import *

# adaptively weighted p-norm distance on regression-based summary statistics
distance: Distance = AdaptivePNormDistance(
    # summary statistics: regression models s: y -> parameters,
    # with automatic model selection among several predictor classes
    sumstat=ModelSelectionPredictorSumstat(
        predictors=[
            LinearPredictor(),
            GPPredictor(kernel=['RBF', 'WhiteKernel']),
            MLPPredictor(hidden_layer_sizes=(50, 50, 50)),
        ],
    ),
    # scale normalization of the statistics, here via root mean square error
    scale_function=rmse,
    # augment the raw data by identity and squares before regression
    pre=[lambda x: x, lambda x: x**2],
    # predict parameters and their squares (first and second moments)
    par_trafo=[lambda y: y, lambda y: y**2],
)
pyabc.rtfd.io/en/develop/examples/informative.html
sensitivity weights re-prioritize data points, in particular down-weighting uninformative ones
this also makes the approach applicable to outlier-corrupted data
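The sensitivity-weight idea can be sketched as follows, again with a hypothetical toy problem and a plain linear regression (for a linear model the sensitivity of the prediction to data coordinate $j$ is simply that coordinate's coefficient; pyABC's actual scheme works with general predictors):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: coordinate 0 is informative about theta, coordinate 1 is pure noise
thetas = rng.uniform(0.0, 10.0, size=500)
ys = np.stack(
    [np.array([t, 0.0]) + rng.normal(0.0, 0.1, size=2) for t in thetas]
)

# fit the regression model s: y -> theta by least squares (with intercept)
X = np.hstack([ys, np.ones((len(ys), 1))])
coef, *_ = np.linalg.lstsq(X, thetas, rcond=None)

# sensitivities |d s / d y_j|; for a linear model these are the coefficients
sens = np.abs(coef[:-1])  # drop the intercept term

# normalize to weights: uninformative coordinates get weight near zero
weights = sens / sens.sum()
```

In the distance function, these weights multiply the per-coordinate residuals, so uninformative (or outlier-prone) coordinates contribute little to the acceptance decision.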