1. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., et al.: TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 (2016)
2. Agrawal, A., Amos, B., Barratt, S., Boyd, S., Diamond, S., Kolter, J.Z.: Differentiable convex optimization layers. In: Wallach, H., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 32. Curran Associates Inc., Red Hook (2019)
3. Agrawal, A., Barratt, S., Boyd, S., Busseti, E., Moursi, W.M.: Differentiating through a cone program. arXiv preprint arXiv:1904.09043 (2019)
4. Amos, B., Kolter, J.Z.: OptNet: differentiable optimization as a layer in neural networks. In: International Conference on Machine Learning, PMLR, pp. 136–145 (2017)
5. Bengio, Y.: Using a financial training criterion rather than a prediction criterion. Int. J. Neural Syst. 8(4), 433–443 (1997)