A Comparative Evaluation of TabNet-GPR and GNN-LSTM for Subseasonal Fire Radiative Power
DOI: https://doi.org/10.13021/jssr2025.5327

Abstract
Wildfire prediction models remain unreliable for subseasonal forecasting, contributing to 8.9 million acres burned in the U.S. in 2024. To address this, two pipelines were compared for predicting Fire Radiative Power (FRP): a spatio-temporal Graph Neural Network (GNN) + Long Short-Term Memory (LSTM) model and a data-driven TabNet neural network + Gaussian Process Regressor (GPR) model. Bayesian Optimization was applied to both pipelines to improve forecasts with minimal preprocessing, using 2-day lagged inputs from satellite, weather, and fire data. The GNN + LSTM modeled spatial and temporal patterns via graph embeddings and recursive rollout, with each node representing a fixed location defined by historical latitude and longitude data. The TabNet + GPR pipeline, meanwhile, used residual learning to refine predictions. Models were evaluated on a ~1 GB sample using MAE, RMSE, and R² against three baselines: persistence, climatology, and linear regression. The TabNet + GPR pipeline reduced MAE by 16% relative to the best baseline (0.1408 vs. 0.1668) and RMSE by 70.3% relative to default TabNet (0.2840 vs. 0.9758). Although its R² was low (max 0.0567) and unstable (NaN in the full pipeline), it consistently delivered the lowest prediction errors, indicating that residual learning improves tabular forecasting with reduced preprocessing. The GNN + LSTM pipeline achieved the highest R² at 0.1332, slightly above linear regression (0.1320) and far better than persistence (–0.3344), but its MAE was 2.5636, over 15× worse than that of optimized TabNet; this stemmed from node sparsity and GraphSAGE's static graph construction. In summary, TabNet + GPR achieved the lowest MAE and RMSE, while GNN + LSTM achieved the highest R². Results were significantly limited by computational constraints. Next steps include expanding the dataset to improve R², implementing quantile regression for uncertainty estimation, and replacing the GPR with LightGBM to lighten the residual-learning stage and boost efficiency.
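The residual-learning design described above (TabNet produces a base FRP forecast, then a GPR is fit to the remaining error) can be sketched as follows. This is a minimal illustration assuming NumPy feature/target arrays, an RBF + white-noise kernel, and the standard pytorch_tabnet and scikit-learn APIs; the feature set, kernel, and training settings are assumptions for illustration, not the authors' exact configuration.

```python
# Hedged sketch of a TabNet -> GPR residual-learning step, evaluated with
# MAE, RMSE, and R^2 as in the abstract. Hyperparameters are illustrative.
import numpy as np
from pytorch_tabnet.tab_model import TabNetRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score


def fit_residual_pipeline(X_train, y_train, X_test, y_test):
    """Train TabNet on lagged tabular features, then fit a GPR on its residuals."""
    # Stage 1: base TabNet regressor on the 2-day-lagged satellite/weather/fire features.
    tabnet = TabNetRegressor(seed=0)
    tabnet.fit(X_train, y_train.reshape(-1, 1), max_epochs=100, patience=10)
    base_train = tabnet.predict(X_train).ravel()
    base_test = tabnet.predict(X_test).ravel()

    # Stage 2: GPR models the residual (true FRP minus TabNet prediction).
    # In practice the GPR would likely be fit on a subsample for tractability.
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X_train, y_train - base_train)

    # Final forecast = TabNet output + predicted residual correction.
    y_pred = base_test + gpr.predict(X_test)

    return {
        "MAE": mean_absolute_error(y_test, y_pred),
        "RMSE": np.sqrt(mean_squared_error(y_test, y_pred)),
        "R2": r2_score(y_test, y_pred),
    }
```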
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.