Abstract
Time-dependent flow fields are typically generated by a computational fluid dynamics method, which is an extremely time-consuming process. However, the latent relationship between consecutive flow fields is governed by the Navier–Stokes equations and can be described by an operator. We therefore train a deep operator network (DeepONet) to learn the temporal evolution between flow snapshots. Once properly trained, given a few consecutive snapshots as input, the network can generate the next snapshot accurately and quickly. Using the output as a new input, the network iterates this process, generating a series of successive snapshots with little wall time. Specifically, we consider two-dimensional flow around a circular cylinder at Reynolds number 1000 and prepare a set of high-fidelity data using a high-order spectral/hp element method as ground truth. Although the flow fields are periodic, there are many small-scale features in the wake flow that are difficult to generate accurately. Furthermore, any discrepancy between the prediction and the ground truth in the first few snapshots can easily accumulate during the iterative process, which eventually amplifies the overall deviation. Therefore, we propose two alternative techniques to improve the training of DeepONet. The first enhances the feature extraction of the network by harnessing the “multi-head non-local block.” The second refines the network parameters by leveraging a local smooth optimization technique. Both techniques prove highly effective in reducing the cumulative errors, and our results outperform those of the dynamic mode decomposition method.
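To make the iterative prediction strategy concrete, the following is a minimal sketch of a branch–trunk DeepONet and an autoregressive rollout loop in PyTorch. It is illustrative only: the class names, layer widths, latent dimension, and the `rollout` helper are assumptions, not the authors' implementation, and the paper's multi-head non-local block and local smooth optimization are not shown.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal branch-trunk DeepONet: the branch net encodes a few consecutive
    snapshots sampled at sensor points, the trunk net encodes query coordinates,
    and their inner product gives the predicted field value at each query point."""
    def __init__(self, n_sensors, n_in_snaps, coord_dim=2, width=128, p=64):
        super().__init__()
        branch_in = n_sensors * n_in_snaps            # flattened input snapshots
        self.branch = nn.Sequential(
            nn.Linear(branch_in, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, p),
        )
        self.trunk = nn.Sequential(
            nn.Linear(coord_dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, p),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, snapshots, coords):
        # snapshots: (batch, n_in_snaps * n_sensors); coords: (n_points, coord_dim)
        b = self.branch(snapshots)                    # (batch, p)
        t = self.trunk(coords)                        # (n_points, p)
        return b @ t.T + self.bias                    # (batch, n_points)

@torch.no_grad()
def rollout(model, init_snaps, coords, n_steps):
    """Autoregressive rollout: predict one snapshot, append it to the input
    window, drop the oldest snapshot, and repeat. The query coordinates must
    coincide with the sensor locations so predictions can be fed back as input."""
    window = list(init_snaps)                         # each entry: (n_sensors,) tensor
    preds = []
    for _ in range(n_steps):
        inp = torch.cat(window).unsqueeze(0)          # (1, n_in_snaps * n_sensors)
        next_snap = model(inp, coords).squeeze(0)     # (n_sensors,)
        preds.append(next_snap)
        window = window[1:] + [next_snap]             # slide the input window
    return torch.stack(preds)
```

This sketch also makes the error-accumulation issue visible: each predicted snapshot re-enters the input window, so any deviation from the ground truth is recycled and can grow over many rollout steps, which is the motivation for the two training refinements described in the abstract.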