Affiliations:
1. IBM Research Europe, Zurich, Switzerland
2. Queen's University Belfast, Belfast, UK
3. Syngenta Crop Protection AG, Basel, Switzerland
Abstract
Patents show how technology evolves over time in most scientific fields. Making the best use of this valuable knowledge base requires efficient and effective information retrieval and search for related prior art. Patent classification, that is, assigning a patent to one or more predefined categories, is a fundamental step towards synthesizing the information content of an invention. To this end, architectures based on Transformers, especially those derived from the BERT family, have already been proposed in the literature and have shown remarkable results, setting a new state-of-the-art performance for the classification task. Here, we study how domain adaptation can push the performance boundaries in patent classification by rigorously implementing and evaluating a collection of recent transfer learning techniques, for example, domain-adaptive pretraining and adapters. Our analysis shows how leveraging these advancements enables the development of state-of-the-art models with increased precision, recall, and F1-score. We base our evaluation both on standard patent classification datasets derived from the code hierarchies defined by patent offices and on more practical real-world use cases with labels from the agrochemical industrial domain. We also examine and evaluate the application of these domain-adaptation techniques to patent classification in a multilingual setting.
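To make the two techniques named in the abstract concrete, the sketch below shows, under stated assumptions, how they are commonly combined with a BERT-family model: first domain-adaptive pretraining (continued masked-language-model training on in-domain patent text, in the spirit of Gururangan et al.'s "Don't Stop Pretraining"), then adapter-based fine-tuning for the downstream classification task. This is not the authors' implementation; the base checkpoint, corpus variable, label count, and adapter name are illustrative assumptions, and the adapter step assumes the AdapterHub `adapters` package.

```python
# Minimal sketch (not the paper's code) of domain-adaptive pretraining
# followed by adapter-based fine-tuning for multi-label patent classification.
import adapters
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "bert-base-uncased"  # assumed BERT-family base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# --- Step 1: domain-adaptive pretraining (MLM) on raw patent text ----------
mlm_model = AutoModelForMaskedLM.from_pretrained(checkpoint)
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
mlm_trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="dapt-bert", num_train_epochs=1),
    data_collator=collator,
    # train_dataset=patent_corpus,  # hypothetical tokenized patent corpus
)
# mlm_trainer.train()
# mlm_trainer.save_model("dapt-bert")

# --- Step 2: adapter-based fine-tuning for classification ------------------
num_labels = 8  # assumed number of target categories (e.g. IPC sections)
cls_model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,  # in practice, load the DAPT checkpoint saved above ("dapt-bert")
    num_labels=num_labels,
    problem_type="multi_label_classification",
)
adapters.init(cls_model)  # retrofit the model with adapter support
cls_model.add_adapter("patent_cls", config="seq_bn")  # bottleneck adapters in each layer
cls_model.train_adapter("patent_cls")  # freeze the backbone; train only adapter + head
```

The design rationale behind the second step is that adapters update only a small fraction of the model's parameters while the pretrained backbone stays frozen, which is attractive when maintaining separate classifiers per taxonomy, domain, or language, as in the multilingual setting the abstract mentions.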