1. Bai, H., et al.: Segatron: segment-aware transformer for language modeling and understanding. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 12526–12534 (2021)
2. Chang, C.J., Lee, C.Y., Yang, Y.H.: Variable-length music score infilling via XLNet and musically specialized positional encoding. arXiv preprint arXiv:2108.05064 (2021)
3. Dai, S., Jin, Z., Gomes, C., Dannenberg, R.B.: Controllable deep melody generation via hierarchical music structure representation. arXiv preprint arXiv:2109.00663 (2021)
4. Dai, S., Zhang, H., Dannenberg, R.B.: Automatic analysis and influence of hierarchical structure on melody, rhythm and harmony in popular music. arXiv preprint arXiv:2010.07518 (2020)
5. Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-XL: attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860 (2019)