Nonlinear finite element analysis (FEA) has long been a core methodology in numerical analysis. With the development of data-driven computing, material models defined by various machine learning approaches have been proposed. Combining such data-driven material models with implicit FEA drastically increases the computational cost, so maximizing the efficiency of the programme and vectorizing the overall process for hardware-accelerated parallel processing becomes essential. This research proposes a Python-based nonlinear FEA programme with a Gaussian Process Regression (GPR) material model accelerated by GPU computation. A GPU handles 32-bit single-precision floating-point arithmetic and simple matrix operations quickly and efficiently compared with CPU computation. However, 64-bit double-precision floating-point operations halve the computational performance, and the limited dedicated GPU memory restricts applicability. Therefore, adjusting the floating-point precision, together with a sparse matrix format using a block-sparse kernel, is proposed. The GPU-accelerated double-precision analysis with a vectorized code structure showed much faster computational performance than the non-accelerated, sequential double-precision code while returning the same analysis results. Furthermore, a mixed-precision implementation with the vectorized code structure improved the computing performance even further, although its results showed some error compared with the double-precision analysis.
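As a minimal sketch (not the authors' code) of the mixed-precision idea described above, the following Python example uses CuPy to solve a sparse linear system with single-precision inner solves and double-precision residual accumulation (iterative refinement). The matrix K64, load vector f64, solver choice, and tolerances are hypothetical placeholders for illustration only.

```python
import cupy as cp
import cupyx.scipy.sparse as cusp
import cupyx.scipy.sparse.linalg as cusplinalg

def mixed_precision_solve(K64, f64, n_refine=5, tol=1e-10):
    """Iterative refinement: float32 inner solves on the GPU,
    residual and solution accumulated in float64."""
    K32 = K64.astype(cp.float32)        # single-precision copy for fast GPU solves
    u = cp.zeros_like(f64)              # solution kept in double precision
    for _ in range(n_refine):
        r64 = f64 - K64 @ u             # residual evaluated in double precision
        if cp.linalg.norm(r64) < tol * cp.linalg.norm(f64):
            break
        # correction solved in single precision (fast path on the GPU)
        d32, _ = cusplinalg.cg(K32, r64.astype(cp.float32))
        u += d32.astype(cp.float64)     # correction promoted back to double
    return u

# Hypothetical usage with a small random symmetric positive-definite sparse system
n = 1000
A = cusp.random(n, n, density=0.01, format="csr", dtype=cp.float64)
K64 = A @ A.T + n * cusp.identity(n, dtype=cp.float64, format="csr")
f64 = cp.ones(n, dtype=cp.float64)
u = mixed_precision_solve(K64, f64)
```

In this pattern the bulk of the floating-point work runs at single precision, while the double-precision residual updates limit the error that would otherwise accumulate, which is one common way to trade accuracy against GPU throughput.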