This algorithm, based on the conformal inference framework, is used to obtain prediction intervals (i.e. to quantify uncertainty) when working with panel data in a regression setting.
You can install the package using pip:
```bash
pip install lpci
```
The package implements the Longitudinal Prediction Conformal Inference (LPCI) algorithm presented by Devesh Batra, Salvatore Mercuri & Raad Khraishi in the paper "Conformal Predictions for Longitudinal Data" (https://arxiv.org/abs/2310.02863).
The authors prove that the LPCI method asymptotically ensures both longitudinal conditional coverage and marginal cross-sectional coverage. In theory, with sufficient data points, both types of coverage should be at least equal to the confidence level.
Below we provide a mathematical overview, much of which draws directly from Batra et al. (2023). For a practical implementation guide, please refer to the notebook.
We consider a dataset consisting of observations $\left(X_t^{(g)}, Y_t^{(g)}\right)$, where:

- $Y_t^{(g)} \in \mathbb{R}$ is a continuous scalar representing the target variable for group $g$ at time $t$.
- $X_t^{(g)} \in \mathbb{R}^d$ consists of $d$-dimensional features associated with group $g$ at time $t$.
Data points are exchangeable if their joint probability distribution is invariant to any permutation of them. This assumption is typically what allows the conformal prediction framework to prove that the coverage guarantees are met and are at least equal to the specified confidence level. However, in a panel data setting with temporal dependence, the exchangeability assumption does not hold. LPCI is a framework that provides asymptotic coverage guarantees - cross-sectional (marginal) and longitudinal (conditional) - beyond the exchangeability assumption. The authors make the reasonable assumption that the groups themselves are exchangeable.
The LPCI algorithm uses the split (inductive) conformal inference method: the data should be separated into three sets - training, calibration and test. The main idea is to use the non-conformity scores (e.g. residuals) on the calibration set to obtain uncertainty intervals for the test points.
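The split-conformal idea can be sketched end-to-end on toy data. Everything below (the linear base model, sample sizes, the absolute-residual score) is illustrative and not the package's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise, split into training / calibration / test sets.
x = rng.normal(size=300)
y = 2 * x + rng.normal(scale=0.5, size=300)
x_tr, x_cal, x_te = x[:100], x[100:200], x[200:]
y_tr, y_cal, y_te = y[:100], y[100:200], y[200:]

# "Train" a base model on the training set (here, a least-squares slope).
slope = (x_tr @ y_tr) / (x_tr @ x_tr)

# Non-conformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - slope * x_cal)

# Split-conformal quantile of the scores at miscoverage level alpha,
# with the usual finite-sample (n + 1) correction.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((1 - alpha) * (n + 1)) / n)

# Prediction intervals for the test points.
lower = slope * x_te - q
upper = slope * x_te + q
coverage = np.mean((y_te >= lower) & (y_te <= upper))
```

Under exchangeability, `coverage` concentrates around the target level $1-\alpha$; LPCI replaces the single quantile `q` with feature-dependent quantile estimates so the guarantee can survive temporal dependence.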
Furthermore, similar to other conformal prediction methods, the approach is model-agnostic, i.e. it can be applied irrespective of the algorithm used to generate point predictions.
The general procedure is as follows:
Split the dataset into three sets - training, calibration and testing.
Train a model on the training set and generate point predictions.
Compute the non-conformity score for each observation in the calibration set; in our case, the residuals by default. The non-conformity score is a measure of how unusual or strange a prediction is according to the previous examples in the training data.
The non-conformity score (residuals) for each observation in the calibration set will form the target variable for training a Quantile Random Forest and generating prediction intervals.
The `prepare_df` method enables three types of features to be generated:
Lagged residuals are generated according to a specified window size w. Two options are available:
- Simple Lags: directly using lags of past residuals over the specified window. For each group, the previous $w$ residuals are included as features.
- Exponential Smoothing: optionally, the package can instead compute exponentially-weighted mean residuals for each group, giving more weight to recent residuals. For details on how the exponentially-weighted residuals are computed, refer to Batra et al. (2023).
Unique group identifiers should also be included as features. The only encoding method currently supported is one-hot encoding.
Additional exogenous features can be included if they are relevant to the modeling task.
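A rough sketch of these three feature types with pandas. Column names, the window size, and the smoothing parameter are hypothetical; this is not the package's `prepare_df` implementation:

```python
import pandas as pd

# Toy calibration residuals for two groups (illustrative values).
df = pd.DataFrame({
    "group": ["A"] * 5 + ["B"] * 5,
    "t": list(range(5)) * 2,
    "resid": [0.1, -0.2, 0.3, 0.0, 0.2, -0.1, 0.4, -0.3, 0.2, 0.1],
})

w = 2  # window size (hypothetical)

# Option 1: simple lags of past residuals, computed per group.
for lag in range(1, w + 1):
    df[f"resid_lag{lag}"] = df.groupby("group")["resid"].shift(lag)

# Option 2: exponentially-weighted mean of *past* residuals per group
# (shift(1) excludes the current residual; alpha=0.5 is an assumption).
df["resid_ewm"] = (
    df.groupby("group")["resid"]
      .transform(lambda s: s.shift(1).ewm(alpha=0.5).mean())
)

# Group identifiers via one-hot encoding.
df = pd.get_dummies(df, columns=["group"])
```

Exogenous features would simply be appended as additional columns at this stage.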
This step involves training a Quantile Regression Forest (QRF) model to estimate conditional quantiles of the residuals. The QRF is trained on the features generated in step 3.
Typically, machine-learning algorithms produce only a point prediction (e.g. the mean). By contrast, QRFs generate predictions of the conditional quantiles for a given input. Here, a QRF is used to model the distribution of the residuals.
The main finding of Batra et al. (2023) is that the estimated quantiles of the residual distribution yield prediction intervals that asymptotically achieve both longitudinal conditional coverage and cross-sectional marginal coverage.
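To make the idea concrete, the following sketch replaces the QRF with empirical conditional quantiles computed per group. This is a deliberate simplification for illustration; a QRF generalizes this to arbitrary feature vectors such as lagged residuals:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy residuals with group-dependent spread: group 0 is noisier than group 1.
groups = rng.integers(0, 2, size=2000)
resid = rng.normal(scale=np.where(groups == 0, 1.0, 0.2))

alpha = 0.1

# A QRF estimates conditional quantiles Q(p | features). As a stand-in,
# compute empirical conditional quantiles of the residuals per group.
q_lo = {g: np.quantile(resid[groups == g], alpha / 2) for g in (0, 1)}
q_hi = {g: np.quantile(resid[groups == g], 1 - alpha / 2) for g in (0, 1)}
```

The conditional estimates adapt to each group: the noisier group receives wider quantile bounds, which is exactly what a fixed marginal quantile cannot do.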
To optimize the QRF model, hyperparameters are tuned using standard Cross-Validation or the PanelSplit package (https://github.com/4Freye/panelsplit) which ensures robust tuning while avoiding information leakage.
Once the QRF is trained, prediction intervals are constructed for the test set. Recall that we already have point predictions $\hat{Y}_t^{(g)}$ for all observations in the test set.
We then obtain quantile estimates $\widetilde{Q}_{t,p}^{(g)}$ of the non-conformity score from the trained QRF for each test point.
The intervals combine point predictions from the base model with these quantile estimates.
For a test point $t$ in group $g$, the prediction interval is:

$$\widehat{C}_t^{(g)} = \left[\hat{Y}_t^{(g)} + \widetilde{Q}_{t,\beta}^{(g)},\ \hat{Y}_t^{(g)} + \widetilde{Q}_{t,1-\alpha+\beta}^{(g)}\right]$$

where:

- $\hat{Y}_t^{(g)}$: the point prediction from the base model.
- $\widetilde{Q}_{t,\beta}^{(g)}$ and $\widetilde{Q}_{t,1-\alpha+\beta}^{(g)}$: lower and upper bounds of the quantiles estimated by the QRF.

The parameter $\beta \in [0, \alpha]$ is chosen to minimize the interval width.
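A sketch of the interval construction, with empirical quantiles of calibration residuals standing in for the QRF's quantile estimates, and a simple grid search over $\beta \in [0, \alpha]$ to minimize the interval width (the grid resolution and toy values are assumptions):

```python
import numpy as np

alpha = 0.1
rng = np.random.default_rng(2)

# Stand-in for the QRF: empirical quantiles of toy calibration residuals.
resid = rng.normal(size=1000)
Q = lambda p: np.quantile(resid, p)

# Choose beta in [0, alpha] minimizing the interval width
# Q(1 - alpha + beta) - Q(beta) over a coarse grid.
betas = np.linspace(0, alpha, 21)
widths = [Q(1 - alpha + b) - Q(b) for b in betas]
beta = betas[int(np.argmin(widths))]

# Combine the base model's point prediction with the residual quantiles.
y_hat = 5.0  # hypothetical point prediction for one test point
interval = (y_hat + Q(beta), y_hat + Q(1 - alpha + beta))
```

For symmetric residuals the width-minimizing $\beta$ sits near $\alpha/2$, recovering a symmetric interval; for skewed residuals the search shifts the interval accordingly.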
In conformal inference, coverage measures how often the true outcome $Y$ falls within the predicted intervals. Besides overall coverage, in panel (longitudinal) data there are two further types of coverage useful for assessing uncertainty quantification performance:
- Cross-sectional (marginal) coverage: measures coverage across groups $g$ for a fixed time point $t$. Intuitively, at each time stamp, the fraction of groups whose outcomes lie within the prediction intervals should be at least $(1-\alpha)$.
- Longitudinal (conditional) coverage: focuses on how well the intervals capture outcomes over time for each individual group. Specifically, within each group $g$, the fraction of times the actual outcome lies within the interval (conditional on the features) should be at least $(1-\alpha)$.
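Both coverage types reduce to averaging an indicator matrix of "hits" along different axes. The sketch below uses toy outcomes and fixed-width intervals purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

G, T = 5, 200  # number of groups x number of time points (toy sizes)

# Toy outcomes and hypothetical prediction intervals.
y = rng.normal(size=(G, T))
lower = -1.7 * np.ones((G, T))
upper = 1.7 * np.ones((G, T))

# Indicator: did the outcome fall inside the interval?
hits = (y >= lower) & (y <= upper)

# Cross-sectional coverage: fraction of groups covered at each fixed time t.
cross_sectional = hits.mean(axis=0)  # one value per time point

# Longitudinal coverage: fraction of time points covered within each group.
longitudinal = hits.mean(axis=1)     # one value per group
```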
The conformal intervals achieve cross-sectional coverage if $\mathbb{P}\big(Y_t^{(g)} \in \widehat{C}_t^{(g)}\big) \geq 1-\alpha$ for all groups $g$. Cross-sectional coverage is marginal over the groups for a fixed, given (large enough) time point.
We say that the conformal intervals achieve longitudinal coverage if, within each group $g$, the fraction of time points at which the outcome falls inside the interval is at least $1-\alpha$ as $t \to \infty$. Longitudinal coverage is asymptotic in $t$ and conditional over the temporal dimension.
Work on LPCI algorithm started at EconAI in September 2024.
We welcome contributions from the community! Whether it's reporting a bug, suggesting a feature, improving documentation, or submitting a pull request, your input is highly valued.
This project is licensed under the MIT License. See the LICENSE file for details.