1. Alonso O (2019) The practice of crowdsourcing. Synth Lect Inf Concepts Retr Serv 11(1):1–149
2. Bennett J, Lanning S (2007) The Netflix prize. In: Proceedings of KDD cup and workshop, New York, NY, USA, vol 2007, p 35
3. Braschler M, Peters C (2004) Cross-language evaluation forum: objectives, results, achievements. Inf Retr 7(1–2):7–31
4. Cleverdon CW (1991) The significance of the Cranfield tests on index languages. In: Bookstein A, Chiaramella Y, Salton G, Raghavan VV (eds) Proceedings of the 14th annual international ACM SIGIR conference on research and development in information retrieval. Chicago, Illinois, USA, October 13–16, 1991 (Special Issue of the SIGIR Forum), ACM, pp 3–12.
https://doi.org/10.1145/122860.122861
5. Downie JS, Hu X, Lee JH, Choi K, Cunningham SJ, Hao Y (2014) Ten years of MIREX (music information retrieval evaluation exchange): reflections, challenges and opportunities. In: Wang H, Yang Y, Lee JH (eds) Proceedings of the 15th international society for music information retrieval conference, ISMIR 2014, Taipei, Taiwan, October 27–31, 2014, pp 657–662.
http://www.terasoft.com.tw/conf/ismir2014/proceedings/T119_342_Paper.pdf