A non-linear non-intrusive reduced order model of fluid flow by Auto-Encoder and self-attention deep learning methods
dc.contributor.author | Fu, Rui | |
dc.contributor.author | Xiao, Dunhui | |
dc.contributor.author | Navon, I. M. | |
dc.contributor.author | Fang, F. | |
dc.contributor.author | Yang, Liang | |
dc.contributor.author | Wang, Chengyuan | |
dc.contributor.author | Cheng, Sibo | |
dc.date.accessioned | 2023-04-14T15:13:54Z | |
dc.date.available | 2023-04-14T15:13:54Z | |
dc.date.issued | 2023-04-03 | |
dc.description.abstract | This paper presents a new nonlinear non-intrusive reduced-order model (NL-NIROM) that outperforms the traditional proper orthogonal decomposition (POD)-based reduced-order model (ROM). This improvement is achieved through the use of auto-encoder (AE) and self-attention based deep learning methods. The novelty of this work is that it uses a stacked auto-encoder (SAE) network to project the original high-dimensional dynamical system onto a low-dimensional nonlinear subspace and predicts the fluid dynamics using a self-attention based deep learning method. This paper introduces a new model reduction neural network architecture for fluid flow problems, as well as a linear non-intrusive reduced-order model (L-NIROM) based on POD and the self-attention mechanism. In the NL-NIROM, the SAE network compresses high-dimensional physical information into several much smaller representations in a reduced latent space. These representations are expressed by a number of codes in the middle layer of the SAE neural network. Those codes at different time levels are then trained to construct a set of hyper-surfaces using self-attention based deep learning methods. The inputs of the self-attention based network are the codes at previous time levels, and the outputs are the codes at the current time level. The codes at the current time level are then projected back to the original full space by the decoder layers of the SAE network. The capability of the new model, NL-NIROM, is demonstrated through two test cases: flow past a cylinder and a lock exchange. The results show that the NL-NIROM is more accurate than the popular model reduction method, namely the POD-based L-NIROM. | en_UK |
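The pipeline described in the abstract (encode flow snapshots into latent codes, predict the code at the next time level with self-attention over past codes, then decode back to the full space) can be sketched in plain NumPy. The layer sizes, the single attention head, the random untrained weights, and all function names below are illustrative assumptions for exposition only, not the paper's actual SAE or self-attention architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's):
# full-space size, latent code size, number of past time levels used.
n_full, n_latent, n_hist = 64, 4, 8

# Random linear encoder/decoder standing in for the trained SAE layers.
W_enc = rng.standard_normal((n_latent, n_full)) * 0.1
W_dec = rng.standard_normal((n_full, n_latent)) * 0.1

def encode(u):
    """Compress a full-space state vector into a latent code."""
    return np.tanh(W_enc @ u)

def decode(z):
    """Project a latent code back to the original full space."""
    return W_dec @ z

# Single-head scaled dot-product self-attention over the past codes.
W_q = rng.standard_normal((n_latent, n_latent)) * 0.1
W_k = rng.standard_normal((n_latent, n_latent)) * 0.1
W_v = rng.standard_normal((n_latent, n_latent)) * 0.1

def self_attention(Z):
    """Z: (n_hist, n_latent) array of codes at previous time levels."""
    Q, K, V = Z @ W_q.T, Z @ W_k.T, Z @ W_v.T
    scores = Q @ K.T / np.sqrt(n_latent)
    # Row-wise softmax over the history axis.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

# One prediction step: encode a history of snapshots, attend over it,
# take the last row as the code at the current time level, and decode.
history = np.stack([encode(rng.standard_normal(n_full)) for _ in range(n_hist)])
z_next = self_attention(history)[-1]
u_next = decode(z_next)

print(u_next.shape)  # (64,)
```

In the actual model the encoder, decoder, and attention weights would be trained on snapshot data; the sketch only fixes the data flow between the three stages.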
dc.identifier.citation | Fu R, Xiao D, Navon IM, et al. (2023) A non-linear non-intrusive reduced order model of fluid flow by Auto-Encoder and self-attention deep learning methods. International Journal for Numerical Methods in Engineering, Volume 124, Issue 13, July 2023, pp. 3087-3111 | en_UK |
dc.identifier.issn | 0029-5981 | |
dc.identifier.uri | https://doi.org/10.1002/nme.7240 | |
dc.identifier.uri | https://dspace.lib.cranfield.ac.uk/handle/1826/19484 | |
dc.language.iso | en | en_UK |
dc.publisher | Wiley | en_UK |
dc.rights | Attribution-NonCommercial 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ | * |
dc.subject | auto-encoder | en_UK |
dc.subject | deep learning | en_UK |
dc.subject | non-intrusive reduced order model | en_UK |
dc.subject | self-attention | en_UK |
dc.title | A non-linear non-intrusive reduced order model of fluid flow by Auto-Encoder and self-attention deep learning methods | en_UK |
dc.type | Article | en_UK |
Files
Original bundle
- Name: Auto-Encoder_and_self-attention_deep_learning_methods-2023.pdf
- Size: 9.03 MB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 1.63 KB
- Description: Item-specific license agreed upon to submission