Authors:
Sebastian Kauschke, Johannes Fürnkranz
Abstract
In this work we present classifier patching, an approach for adapting an existing black-box classification model to new data. Instead of creating a new model, patching infers regions in the instance space where the existing model is error-prone by training a classifier on the previously misclassified data. It then learns a specific model to determine the error regions, which allows it to patch the old model’s predictions for them. Patching relies on a strong, albeit unchangeable, existing base classifier, and on the assumption that the true labels of seen instances become available in batches at some point after the original classification. We experimentally evaluate our approach and show that it meets the original design goals. Moreover, we compare our approach to existing methods from the domain of ensemble stream classification in both concept drift and transfer learning situations. Patching adapts quickly and achieves high classification accuracy, outperforming state-of-the-art competitors in either adaptation speed or accuracy in many scenarios.
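The mechanism described in the abstract (a frozen base model, a classifier that flags its error regions, and a local model that corrects predictions there) can be sketched roughly as follows. This is a hedged illustration using scikit-learn, not the authors' implementation; all class and variable names (`Patcher`, `region`, `patch`) are illustrative assumptions:

```python
# Illustrative sketch of classifier patching; not the authors' code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class Patcher:
    def __init__(self, base):
        # The base model is treated as an unchangeable black box.
        self.base = base

    def fit(self, X, y):
        # Find instances the base model misclassifies on the new batch.
        errors = self.base.predict(X) != y
        # Learn where the base model is error-prone (the "error regions").
        self.region = DecisionTreeClassifier(random_state=0).fit(X, errors)
        # Learn a patch model only on the misclassified instances.
        self.patch = (DecisionTreeClassifier(random_state=0)
                      .fit(X[errors], y[errors]) if errors.any() else None)
        return self

    def predict(self, X):
        pred = self.base.predict(X)
        if self.patch is not None:
            # Override the base prediction inside the inferred error regions.
            mask = self.region.predict(X).astype(bool)
            if mask.any():
                pred[mask] = self.patch.predict(X[mask])
        return pred
```

Under concept drift, for example, a base model trained on an old decision boundary keeps its predictions everywhere except in the region where the boundary has shifted, where the patch takes over.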
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
5 articles.
1. Rule Mining for Correcting Classification Models;2023 IEEE International Conference on Data Mining (ICDM);2023-12-01
2. DeepPatch: A Patching-Based Method for Repairing Deep Neural Networks;2023 IEEE/ACM International Workshop on Deep Learning for Testing and Testing for Deep Learning (DeepTest);2023-05
3. Safe-by-Repair: A Convex Optimization Approach for Repairing Unsafe Two-Level Lattice Neural Network Controllers;2022 IEEE 61st Conference on Decision and Control (CDC);2022-12-06
4. Cost-Effective Transfer Learning for Data Streams;2022 IEEE International Conference on Data Mining (ICDM);2022-11
5. Minimal Multi-Layer Modifications of Deep Neural Networks;Lecture Notes in Computer Science;2022