Radio Galaxy Zoo: towards building the first multipurpose foundation model for radio astronomy with self-supervised learning

Authors:

Inigo V. Slijepcevic^1, Anna M. M. Scaife^1,2, Mike Walmsley^1, Micah Bowles^1, O. Ivy Wong^3,4,5, Stanislav S. Shabala^5,6, Sarah V. White^7

Affiliation:

1. Department of Physics and Astronomy, University of Manchester, Manchester M13 9PL, UK

2. The Alan Turing Institute, Euston Road, London NW1 2DB, UK

3. CSIRO Space and Astronomy, PO Box 1130, Bentley, WA 6102, Australia

4. ICRAR-M468, University of Western Australia, Crawley, WA 6009, Australia

5. ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia

6. School of Natural Sciences, University of Tasmania, Private Bag 37, Hobart, TAS 7001, Australia

7. Department of Physics and Electronics, Rhodes University, PO Box 94, Grahamstown 6140, South Africa

Abstract

In this work, we apply self-supervised learning with instance differentiation to learn a robust, multipurpose representation for image analysis of resolved extragalactic continuum images. We train a multipurpose model which compresses our unlabelled data into a structured, low-dimensional representation that can be used for a variety of downstream tasks (e.g. classification, similarity search). We exceed baseline supervised Fanaroff–Riley classification performance by a statistically significant margin, with our model reducing the test-set error by up to half. Our model also maintains high classification accuracy with very few labels, with only 7.79 per cent error when using only 145 labels. We further demonstrate that, by using our foundation model, users can efficiently trade off compute, human-labelling cost, and test-set accuracy according to their respective budgets, allowing for efficient classification in a wide variety of scenarios. We highlight the generalizability of our model by showing that it enables accurate classification in a label-scarce regime with data from the new MIGHTEE survey without any hyperparameter tuning, where it improves upon the baseline by ~8 per cent. Visualizations of our labelled and unlabelled data show that our model's representation space is structured with respect to physical properties of the sources, such as angular source extent. We show that the learned representation is scientifically useful even when no labels are available by performing a similarity search, finding hybrid sources in the RGZ DR1 data set without any labels. We show that good augmentation design and hyperparameter choice can help achieve peak performance, while emphasizing that optimal hyperparameters are not required to obtain benefits from self-supervised pre-training.
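The label-free similarity search mentioned above amounts to ranking sources by distance between their learned embeddings. As an illustration only (the function name, the toy data, and the use of cosine similarity over a frozen embedding matrix are our assumptions, not details taken from the paper), such a search might be sketched as:

```python
import numpy as np

def cosine_similarity_search(embeddings, query_idx, top_k=5):
    """Rank sources by cosine similarity to one query embedding.

    embeddings: (n_sources, dim) array of frozen model representations.
    Returns the indices and similarity scores of the top_k nearest
    sources, excluding the query itself.
    """
    # L2-normalise rows so that dot products equal cosine similarities
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms
    sims = unit @ unit[query_idx]
    # Sort descending by similarity and drop the query itself
    order = np.argsort(-sims)
    order = order[order != query_idx]
    return order[:top_k], sims[order[:top_k]]

# Toy example: 100 hypothetical sources with 64-dimensional embeddings
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 64))
idx, scores = cosine_similarity_search(emb, query_idx=3)
print(idx, scores)
```

In practice the embeddings would come from the pre-trained encoder applied to survey cutouts; the same ranking could then surface candidate hybrid sources near a known example without any labels.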

Funder

Alan Turing Institute

Publisher

Oxford University Press (OUP)
