Abstract
Extreme multi-label classification (XMC) refers to supervised multi-label learning involving hundreds of thousands or even millions of labels.
In this paper, we develop a suite of algorithms, called Bonsai, which generalizes the notion of label representation in XMC and partitions the labels in the representation space to learn shallow trees.
We show three concrete realizations of this label representation space including: (i) the input space which is spanned by the input features, (ii) the output space spanned by label vectors based on their co-occurrence with other labels, and (iii) the joint space by combining the input and output representations. Furthermore, the constraint-free multi-way partitions learnt iteratively in these spaces lead to shallow trees.
By combining the effect of shallow trees and generalized label representation, Bonsai achieves the best of both worlds—fast training, comparable to state-of-the-art tree-based methods in XMC, and much better prediction accuracy, particularly on tail labels. On the benchmark Amazon-3M dataset with 3 million labels, Bonsai outperforms a state-of-the-art one-vs-rest method in prediction accuracy, while being approximately 200 times faster to train. The code for Bonsai is available at https://github.com/xmc-aalto/bonsai.
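The abstract describes three label representations and a constraint-free K-way partition of the labels. The following is a minimal sketch of those ideas on toy numpy data, not the paper's actual implementation: the matrices `X`, `Y`, the helper `kmeans_partition`, and all parameter choices are illustrative assumptions.

```python
import numpy as np

def kmeans_partition(V, k, n_iter=20, seed=0):
    """Plain k-means over label representations V (n_labels x dim),
    giving one level of a constraint-free K-way label partition."""
    rng = np.random.default_rng(seed)
    centers = V[rng.choice(len(V), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each label to its nearest center
        dists = ((V[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        # recompute centers, keeping the old one if a cluster empties
        for j in range(k):
            if (assign == j).any():
                centers[j] = V[assign == j].mean(axis=0)
    return assign

# Toy data: X is (n_instances x n_features), Y is a binary
# (n_instances x n_labels) label matrix.
rng = np.random.default_rng(0)
X = rng.random((100, 20))
Y = (rng.random((100, 50)) < 0.1).astype(float)

# (i) Input-space representation: each label is the sum of the
#     feature vectors of the instances tagged with it.
V_input = Y.T @ X
# (ii) Output-space representation: label co-occurrence rows.
V_output = Y.T @ Y
# (iii) Joint representation: concatenation of both views.
V_joint = np.hstack([V_input, V_output])

# One level of the K-way partition; applied recursively, this
# yields a shallow label tree.
partition = kmeans_partition(V_joint, k=8)
```

Each recursive application of the partition step adds one level to the tree, so a large K keeps the tree shallow, which is the source of the fast training the abstract refers to.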
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Software
References (50 articles)
1. Agrawal, R., Gupta, A., Prabhu, Y., & Varma, M. (2013). Multi-label learning with millions of labels: Recommending advertiser bid phrases for web pages. In World Wide Web conference.
2. Babbar, R., & Schölkopf, B. (2017). DiSMEC: Distributed sparse machines for extreme multi-label classification. In International conference on web search and data mining (pp. 721–729).
3. Babbar, R., & Schölkopf, B. (2019). Data scarcity, robustness and extreme multi-label classification. Machine Learning, 108(8–9), 1329–1351.
4. Babbar, R., Partalas, I., Gaussier, E., & Amini, M.R. (2013). On flat versus hierarchical classification in large-scale taxonomies. In Advances in neural information processing systems (pp. 1824–1832).
5. Babbar, R., Metzig, C., Partalas, I., Gaussier, E., & Amini, M.R. (2014). On power law distributions in large-scale taxonomies. In ACM SIGKDD explorations newsletter (pp. 47–56).
Cited by 86 articles.