
EXASTEEL Description

Recent poster summarizing some main aspects of the EXASTEEL project.

Motivation

The macroscopic behavior of advanced high-strength steel is governed by the complex interactions of its individual constituents on the microscale. Simulations of challenging structural problems in multiphase steels, such as the deep drawing of automotive parts, require accurate predictive material models that incorporate the evolution of the material's microstructure during the thermomechanical process.

 

Description and Recent Results

For an overview of recent results, we refer to several posters presented in recent years:

EXASTEEL-II Poster

Poster NIC Symposium 2016

Poster I: Materials Chain 2016

Poster II: Materials Chain 2016

Poster NIC Symposium 2018

Poster I: CoSaS 2018

Poster II: CoSaS 2018

Poster ISC 2018

 

Radical Scale Bridging: FE² Framework

The FE² method, cf., e.g., [1], is a direct multiscale method and provides a suitable numerical tool for radical scale bridging. The macroscopic deformation problem is solved without explicitly resolving the microstructure of the steel material; instead, the constitutive evaluations at the macroscopic Gauss points are replaced by microscopic boundary value problems on a Representative Volume Element (RVE). The macroscopic and the many microscopic problems are coupled through the macroscopic deformation and averaging of the microscopic response.
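As a minimal illustration of this coupling, the following Python sketch (all names and data are hypothetical, not part of the project software) shows the averaging step at a single macroscopic Gauss point: the macroscopic stress is the volume average of the microscopic stresses over the RVE.

    import numpy as np

    def macro_stress_from_rve(micro_stresses, quad_volumes):
        """Volume average of the microscopic stresses over one RVE.

        micro_stresses : (n_qp, 3, 3) stress tensors at the RVE quadrature points
        quad_volumes   : (n_qp,) quadrature weights times Jacobian determinants
        """
        # P_bar = 1/V * sum_qp P_qp * dV_qp
        return np.einsum("q,qij->ij", quad_volumes, micro_stresses) / quad_volumes.sum()

    # Toy usage: two quadrature points of equal volume with different stress states.
    P = np.stack([np.eye(3), 2.0 * np.eye(3)])
    w = np.array([0.5, 0.5])
    print(macro_stress_from_rve(P, w))  # prints 1.5 * identity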

 

Mechanical Modeling at the Microscale

At the microscale, Representative Volume Elements (RVEs), see [2], are considered, and the material behavior of the individual constituents is described by a finite plasticity model. In order to incorporate initial hardening distributions, the phase transformation of the original austenitic inclusions to martensite needs to be modeled accurately; thus, a crystallographically motivated model is planned to be developed along the lines of [3].

To arrive at a suitable microscopic model, an approach for the homogenization of different lattice orientations has to be developed.
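A minimal sketch of such an orientation average is the following, assuming a simple Voigt-type (arithmetic) average of the lattice-frame elastic stiffness over randomly sampled rotations; the cubic constants are illustrative placeholders, not calibrated martensite data, and a full model would average the inelastic response as well.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_rotation():
        # Haar-distributed rotation matrix via QR of a Gaussian matrix.
        Q, R = np.linalg.qr(rng.normal(size=(3, 3)))
        Q *= np.sign(np.diag(R))   # sign fix for a uniform distribution
        if np.linalg.det(Q) < 0:
            Q[:, 0] *= -1.0        # enforce det = +1 (proper rotation)
        return Q

    def cubic_stiffness(c11, c12, c44):
        # Fourth-order elasticity tensor of a cubic crystal in its lattice frame.
        C = np.zeros((3, 3, 3, 3))
        for i in range(3):
            for j in range(3):
                C[i, i, j, j] += c12
                C[i, j, i, j] += c44
                C[i, j, j, i] += c44
            C[i, i, i, i] += c11 - c12 - 2.0 * c44
        return C

    def rotate(C, R):
        # C'_{ijkl} = R_{ia} R_{jb} R_{kc} R_{ld} C_{abcd}
        return np.einsum("ia,jb,kc,ld,abcd->ijkl", R, R, R, R, C)

    C0 = cubic_stiffness(c11=230.0, c12=135.0, c44=117.0)  # GPa, illustrative only
    C_avg = sum(rotate(C0, random_rotation()) for _ in range(500)) / 500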

 

Parallel Application Software

Parallelization on several levels will be used to accomplish the scale bridging. The level with the highest granularity is the parallel solution of the many highly nonlinear RVE problems. The remaining orders of magnitude will be bridged by ultra-scalable solvers. The legacy application software FEAP will be used for the finite element technology. A strong collaboration of all PIs is essential to create the right infrastructure for all subsequent steps. The new ultra-scalable solvers for nonlinear problems will profit from an earlier DFG project on parallel nonlinear structural mechanics. The solvers will be based on FETI (Finite Element Tearing and Interconnecting) approaches [4], thus reducing communication compared to other domain decomposition (DD) methods.
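The following Python sketch illustrates only the coarsest of these levels; solve_rve is a hypothetical stand-in for a full nonlinear RVE solve with FEAP. Since the RVE problems attached to the macroscopic Gauss points are independent of each other, they can be distributed across workers with essentially no communication.

    from multiprocessing import Pool

    import numpy as np

    def solve_rve(macro_deformation_gradient):
        # Hypothetical placeholder: a real implementation would run a nonlinear
        # finite element solve on the RVE and return the averaged stress.
        F = macro_deformation_gradient
        return 0.5 * (F + F.T) - np.eye(3)  # toy "response" for illustration

    if __name__ == "__main__":
        # One deformation gradient per macroscopic Gauss point.
        gauss_point_data = [(1.0 + 0.01 * i) * np.eye(3) for i in range(8)]
        with Pool(processes=4) as pool:
            macro_stresses = pool.map(solve_rve, gauss_point_data)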

 

Solvers - Nonlinear, nonoverlapping DD

Increased local work reduces communication and the need for synchronization and thus also increases latency tolerance. It also facilitates the implementation of fault tolerance strategies. A successful overlapping nonlinear DD approach is known as ASPIN (Additive Schwarz Preconditioned Inexact Newton) [6]. We, however, concentrate on nonoverlapping DD because of its potentially smaller communication costs. We have developed nonlinear FETI-DP and BDDC methods which have successfully scaled to hundreds of thousands of cores.
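To illustrate why nonoverlapping methods need little communication, the following sketch performs classical substructuring for a linear 1D Poisson model problem with two subdomains: all interior unknowns are eliminated by purely local solves, and only the single interface unknown couples the subdomains. This is just the linear building block; the nonlinear FETI-DP and BDDC variants apply related splittings inside Newton's method.

    import numpy as np

    n = 5                                  # interior unknowns per subdomain
    h = 1.0 / (2 * n + 2)                  # uniform mesh on (0, 1), zero Dirichlet BCs

    def laplacian(m):
        return (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2

    A_ii = laplacian(n)                    # identical interior block on both subdomains
    f = np.ones(2 * n + 1)                 # load at all unknowns (interface is index n)

    # Coupling of each subdomain's interface-adjacent node to the interface node.
    e = np.zeros(n); e[-1] = -1.0 / h**2   # subdomain 1 couples via its last node
    g = np.zeros(n); g[0] = -1.0 / h**2    # subdomain 2 couples via its first node

    # Purely local solves (no communication):
    u1_f = np.linalg.solve(A_ii, f[:n])
    u2_f = np.linalg.solve(A_ii, f[n + 1:])

    # Interface Schur complement (a scalar here): S = A_GG - sum_k A_Gk A_kk^{-1} A_kG
    S = 2.0 / h**2 - e @ np.linalg.solve(A_ii, e) - g @ np.linalg.solve(A_ii, g)
    u_gamma = (f[n] - e @ u1_f - g @ u2_f) / S

    # Back substitution, again purely local:
    u1 = u1_f - np.linalg.solve(A_ii, e) * u_gamma
    u2 = u2_f - np.linalg.solve(A_ii, g) * u_gamma

    # Cross-check against the undecomposed global solve:
    u = np.linalg.solve(laplacian(2 * n + 1), f)
    assert np.allclose(np.concatenate([u1, [u_gamma], u2]), u)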

 

Performance Engineering, Profiling and Optimization

We implement a structured performance engineering approach. A diagnostic performance model enforces a better understanding of the properties of both the code and the hardware. Performance measurements will be carried out using LIKWID [7]. Early insights into performance-limiting factors will enable algorithmic and software redesign. Strategies for fault tolerance will also be studied.
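As a minimal example of such a diagnostic model, the following roofline-style estimate bounds the attainable performance of a kernel by its arithmetic intensity; the machine numbers are assumed values for illustration, not LIKWID measurements.

    # Roofline model: performance <= min(peak flops, intensity * bandwidth).
    PEAK_GFLOPS = 500.0     # assumed double-precision peak per node [GFlop/s]
    BANDWIDTH_GBS = 100.0   # assumed sustained memory bandwidth [GB/s]

    def roofline(intensity_flop_per_byte):
        return min(PEAK_GFLOPS, intensity_flop_per_byte * BANDWIDTH_GBS)

    # Sparse matrix-vector products sit far left on the roofline
    # (roughly 0.1-0.25 flop/byte), hence they are memory-bandwidth bound.
    for ai in (0.1, 0.25, 1.0, 10.0):
        print(f"AI = {ai:5.2f} flop/byte -> <= {roofline(ai):6.1f} GFlop/s")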

 


References

  • [1] Schröder, J. [2000], Habilitation thesis.
  • [2] Schröder, J.; Balzani, D.; Brands, D. [2010], A. App. Math.
  • [3] Turteltaub, S.; Suiker, A.S.J. [2006], Int. J. Solids Struct.
  • [4] Klawonn, A.; Rheinbach, O. [2010], ZAMM.
  • [5] Baker, A.; Falgout, R.; Kolev, T.; Meier-Yang, U. [2012], in Springer.
  • [6] Cai, X.; Keyes, D.E. [2002], SIAM J. Sci. Comput.
  • [7] Treibig, J.; Hager, G.; Wellein, G. [2010], PSTI 2010.