1. Bai, T., Li, Y., Shen, Y., Zhang, X., Zhang, W., Cui, B.: Transfer learning for Bayesian optimization: a survey. arXiv preprint arXiv:2302.05927 (2023)
2. Balandat, M., et al.: BoTorch: a framework for efficient Monte-Carlo Bayesian optimization. Adv. Neural. Inf. Process. Syst. 33, 21524–21538 (2020)
3. Bansal, A., Stoll, D., Janowski, M., Zela, A., Hutter, F.: JAHS-Bench-201: a foundation for research on joint architecture and hyperparameter search. Adv. Neural. Inf. Process. Syst. 35, 38788–38802 (2022)
4. Chen, Y., et al.: Learning to learn without gradient descent by gradient descent. In: International Conference on Machine Learning, pp. 748–756. PMLR (2017)
5. Chen, Y., et al.: Towards learning universal hyperparameter optimizers with transformers. Adv. Neural. Inf. Process. Syst. 35, 32053–32068 (2022)