Aux-Drop: Handling Haphazard Inputs in Online Learning Using Auxiliary Dropouts
Permanent link
https://hdl.handle.net/10037/33194
Date
2023
Type
Journal article
Peer reviewed
Abstract
Many real-world applications based on online learning produce streaming data that is haphazard in nature, i.e., it contains missing features, features that become obsolete over time, new features that appear at later points in time, and no clarity on the total number of input features. These challenges make it hard to build a learnable system for such applications, and almost no work in deep learning addresses this issue. In this paper, we present Aux-Drop, an auxiliary dropout regularization strategy for online learning that handles haphazard input features in an effective manner. Aux-Drop adapts the conventional dropout regularization scheme to the haphazard input feature space, ensuring that the final output is minimally impacted by the chaotic appearance of such features. It helps prevent co-adaptation, especially between the auxiliary and base features, and reduces the strong dependence of the output on any individual auxiliary input of the model. This enables better learning in scenarios where certain features disappear over time or new features must be modelled. The efficacy of Aux-Drop has been demonstrated through extensive numerical experiments on SOTA benchmarking datasets, including Italy Power Demand, HIGGS, SUSY and multiple UCI datasets. The code is available at https://github.com/Rohit102497/Aux-Drop.
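To make the mechanism concrete, below is a minimal, illustrative PyTorch sketch of the auxiliary-dropout idea described in the abstract: each auxiliary feature is wired to one dedicated hidden unit, units whose auxiliary feature did not arrive are always dropped, and extra units are dropped at random so the effective dropout fraction stays at p. The class name, dimensions, and wiring are assumptions made here for illustration, not the authors' released implementation (see the GitHub link above for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxDropLayer(nn.Module):
    """Hypothetical sketch of an auxiliary-dropout layer.

    Base features feed the layer as usual; each auxiliary feature is
    tied to one dedicated hidden unit. Units tied to unobserved
    auxiliary features are always dropped, and further units are
    dropped at random so the overall dropout fraction stays at p.
    """

    def __init__(self, base_dim: int, max_aux: int, hidden_dim: int, p: float = 0.5):
        super().__init__()
        assert hidden_dim >= max_aux, "need one hidden unit per auxiliary feature"
        self.base = nn.Linear(base_dim, hidden_dim)
        # one scalar weight per auxiliary feature, feeding its dedicated unit
        self.aux_weight = nn.Parameter(torch.zeros(max_aux))
        self.p, self.max_aux, self.hidden_dim = p, max_aux, hidden_dim

    def forward(self, base_x, aux_x, aux_mask):
        # base_x: (B, base_dim); aux_x, aux_mask: (B, max_aux),
        # where aux_mask[i, j] = 1.0 iff auxiliary feature j is observed.
        h = self.base(base_x)
        # inject each observed auxiliary feature into its dedicated unit
        aux_in = aux_x * aux_mask * self.aux_weight
        h = h + F.pad(aux_in, (0, self.hidden_dim - self.max_aux))
        # dropout mask: units of unobserved auxiliary features are forced off
        keep = torch.ones_like(h)
        keep[:, : self.max_aux] = aux_mask
        # then drop extra units at random until a fraction p is dropped
        target = int(self.p * self.hidden_dim)
        for i in range(h.size(0)):
            alive = keep[i].nonzero(as_tuple=True)[0]
            extra = target - (self.hidden_dim - alive.numel())
            if extra > 0:
                drop = alive[torch.randperm(alive.numel())[:extra]]
                keep[i, drop] = 0.0
        # inverted-dropout scaling keeps the expected activation unchanged
        return torch.relu(h) * keep / (1.0 - self.p)

# Usage at one online round: aux_mask encodes which auxiliary features arrived,
# so a feature that disappears over time is simply never kept.
layer = AuxDropLayer(base_dim=4, max_aux=3, hidden_dim=16)
out = layer(torch.randn(2, 4), torch.randn(2, 3),
            torch.tensor([[1.0, 0.0, 1.0], [0.0, 0.0, 1.0]]))
```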
Description
Source at https://www.jmlr.org/tmlr/index.html.
Publisher
Transactions on Machine Learning Research (TMLR)
Citation
Agarwal R, Prasad DK, Horsch A, Gupta D. Aux-Drop: Handling Haphazard Inputs in Online Learning Using Auxiliary Dropouts. Transactions on Machine Learning Research (TMLR). 2023
Copyright 2023 The Author(s)