Neural Partially Linear Additive Model Unveiled

Higher Education Press

Interpretability has drawn increasing attention in machine learning. Partially linear additive models provide an attractive middle ground between the simplicity of generalized linear models and the flexibility of generalized additive models, and are important tools for addressing two interpretability problems: feature selection and structure discovery. Existing partially linear additive models, however, still fall short in fitting ability.
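Concretely, a partially linear additive model expresses the response as the sum of a linear part and smooth nonlinear parts. The notation below is a generic sketch rather than the paper's own, with L and N denoting the index sets of features that enter linearly and nonlinearly, respectively:

    y = \beta_0 + \sum_{j \in \mathcal{L}} \beta_j x_j + \sum_{k \in \mathcal{N}} f_k(x_k) + \varepsilon

Feature selection decides which features appear in either sum at all, and structure discovery decides whether a retained feature belongs to the linear or the nonlinear group.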

To address these problems, a research team led by Han LI published their new research on 15 December 2024 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.

The team combined neural networks with the partially linear additive model to propose a Neural Partially Linear Additive Model (NPLAM), which automatically distinguishes insignificant, linear, and nonlinear features. On the one hand, the neural network construction fits data better than spline functions with the same number of parameters; on the other hand, the learnable gate design and sparsity regularization term preserve the ability to perform feature selection and structure discovery.
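To illustrate the first point, each nonlinear feature can be handled by its own small neural sub-network in place of a spline basis expansion. The following PyTorch-style sketch is only a plausible reconstruction under that assumption; the class name and layer widths are illustrative and not taken from the paper.

    import torch
    import torch.nn as nn

    class FeatureSubNet(nn.Module):
        """Small MLP mapping a single scalar feature to its additive contribution,
        playing the role that a spline basis expansion plays in a classical PLAM."""
        def __init__(self, hidden: int = 16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(1, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x_j: torch.Tensor) -> torch.Tensor:
            # x_j has shape (batch, 1): one column of the design matrix.
            return self.net(x_j)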

In the research, they analyze the hypothesis space and optimization problem of the partially linear additive model, and employ neural networks to build sub-models for each nonlinear feature without defining basis functions, which is much more efficient with little loss of accuracy. To identify which features are important and which are linear, they introduce a learnable feature selection gate and a structure discovery gate, and use a lasso penalty to address three forms of model selection: deciding which features are relevant to the model at all, deciding which of those features should be fit linearly versus nonlinearly, and encouraging sparsity of all weights in the neural network. In theory, they establish sample complexity error bounds for the proposed model via Rademacher complexity. They also provide empirical evidence through experiments showing that the proposed model tackles the interpretability problem by introducing the double gates and lasso penalty, and achieves excellent performance.
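A minimal sketch of how such a double-gate design with a lasso-style penalty might look in PyTorch is given below. The gate parameterization, the additive combination, and the penalty weights are assumptions made for illustration and do not reproduce the authors' exact formulation.

    import torch
    import torch.nn as nn

    class NPLAMSketch(nn.Module):
        """Illustrative partially linear additive model with learnable gates:
        s_j in (0, 1) selects whether feature j enters the model at all, and
        g_j in (0, 1) interpolates between a linear term and a neural sub-network."""
        def __init__(self, n_features: int, hidden: int = 16):
            super().__init__()
            self.selection_logits = nn.Parameter(torch.zeros(n_features))  # feature selection gate
            self.structure_logits = nn.Parameter(torch.zeros(n_features))  # linear vs. nonlinear gate
            self.linear = nn.Parameter(torch.zeros(n_features))            # per-feature linear weights
            self.bias = nn.Parameter(torch.zeros(1))
            self.subnets = nn.ModuleList(
                [nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
                 for _ in range(n_features)]
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            s = torch.sigmoid(self.selection_logits)   # which features are relevant
            g = torch.sigmoid(self.structure_logits)   # linear (g -> 0) vs. nonlinear (g -> 1)
            out = self.bias.expand(x.shape[0])
            for j, subnet in enumerate(self.subnets):
                x_j = x[:, j:j + 1]
                linear_part = self.linear[j] * x_j.squeeze(-1)
                nonlinear_part = subnet(x_j).squeeze(-1)
                out = out + s[j] * ((1 - g[j]) * linear_part + g[j] * nonlinear_part)
            return out

        def lasso_penalty(self, lam_gate: float = 1e-2, lam_weight: float = 1e-3) -> torch.Tensor:
            # L1-style terms encouraging sparse gates and sparse sub-network weights,
            # corresponding to the three forms of model selection described above.
            gate_term = torch.sigmoid(self.selection_logits).sum() \
                      + torch.sigmoid(self.structure_logits).sum()
            weight_term = sum(p.abs().sum() for p in self.subnets.parameters())
            return lam_gate * gate_term + lam_weight * weight_term

In training, the penalty would simply be added to the data-fitting loss, so that gates driven toward zero mark features as irrelevant or as purely linear.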

Future work can focus on introducing interactions between input features to improve the expressiveness of the neural partially linear additive model, and on designing a new neural partially linear additive model that takes feature ranking into account.

DOI: 10.1007/s11704-023-2662-3
