
Machine learning brings material modeling to a new era


July 10, 2023

(Nanowerk News) The arrangement of electrons in matter, known as electronic structure, plays an important role both in fundamental research and in applied fields such as drug design and energy storage. However, the lack of simulation techniques that offer both high fidelity and scalability across different length and time scales has long been a barrier to progress in these fields.

Researchers from the Center for Advanced Systems Understanding (CASUS) at Helmholtz-Zentrum Dresden-Rossendorf (HZDR) in Görlitz, Germany, and Sandia National Laboratories in Albuquerque, New Mexico, USA, have now pioneered a machine learning-based simulation method that replaces traditional electronic structure simulation techniques. Their Materials Learning Algorithms (MALA) software stack gives access to previously unattainable length scales.

The research has been published in npj Computational Materials (“Predicting electronic structures at any length scale with machine learning”).

Snapshot from a deep learning simulation of more than 10,000 beryllium atoms. The distribution of electrons in this material is visualized as a cloud of red (delocalized electrons) and blue (electrons located close to the atomic nuclei) dots. This simulation is not feasible using conventional DFT calculations; thanks to MALA, it was completed in about five minutes using only 150 central processing units. Graphical filters were used to increase the clarity of the visualization; the white areas at the edges are also an artifact of the filter. The schematic in the background alludes to the deep learning method used. (Image: HZDR/CASUS)

Electrons are elementary particles of fundamental importance. Their quantum mechanical interactions with each other and with atomic nuclei give rise to many of the phenomena observed in chemistry and materials science. Understanding and controlling the electronic structure of matter provides insight into the reactivity of molecules, the structure and energy transport within planets, and the failure mechanisms of materials.

Scientific challenges are increasingly being addressed through computational modeling and simulation, leveraging the power of high-performance computing. However, a significant obstacle to achieving realistic simulations with quantum precision is the lack of predictive modeling techniques that combine high accuracy with scalability across multiple length and time scales. Classical atomistic simulation methods can handle large and complex systems, but their omission of the quantum electronic structure limits their applicability.

In contrast, first-principles methods, which do not rely on assumptions such as empirical modeling and parameter fitting, provide high fidelity but are computationally demanding. For example, density functional theory (DFT), a widely used first-principles method, exhibits cubic scaling with system size, restricting its predictive capability to small system sizes.
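To make the scaling argument concrete, the following minimal Python sketch compares how a cubic-scaling method and a roughly linear-scaling one grow in cost with the number of atoms. The reference system size and the atom counts are illustrative assumptions, not figures from the study.

```python
# Illustrative only: how cubic scaling in system size limits conventional DFT.
# The reference size and atom counts below are assumptions for the sake of
# example, not numbers from the MALA publication.

def relative_cost(n_atoms, n_ref=1_000, exponent=3):
    """Cost relative to a reference system, assuming cost ~ N**exponent."""
    return (n_atoms / n_ref) ** exponent

for n in (1_000, 10_000, 100_000):
    # Cubic scaling (typical of standard DFT) vs. roughly linear scaling
    # (the regime a local, grid-point-based model aims for).
    cubic = relative_cost(n, exponent=3)
    linear = relative_cost(n, exponent=1)
    print(f"{n:>7} atoms: cubic ~{cubic:>9,.0f}x reference cost, "
          f"linear ~{linear:>5,.0f}x")
```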

Hybrid approach based on deep learning

The research team is now presenting a new simulation method called the Materials Learning Algorithms (MALA) software stack. In computer science, a software stack is a collection of algorithms and software components combined to create a software application that solves a specific problem. Lenz Fiedler, Ph.D. student and lead developer of MALA at CASUS, explains, “MALA integrates machine learning with a physics-based approach to predict the electronic structure of materials. MALA uses a hybrid approach, employing an established machine learning method called deep learning to accurately predict local quantities, complemented by physics algorithms for computing global quantities of interest.”
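The following Python sketch illustrates, in highly simplified form, the hybrid workflow Fiedler describes: a learned model predicts a local quantity at every grid point, and a physics-style post-processing step contracts those local predictions into global observables. The function names, the local-density-of-states-like quantity, and the toy data are assumptions for illustration, not the actual MALA code or API.

```python
# Hypothetical sketch of the hybrid workflow: a network predicts a local
# quantity per grid point (here an LDOS-like vector), and a physics step
# turns those local predictions into global observables. Names are
# illustrative, not the MALA API.

import numpy as np

def predict_local_quantity(descriptors, model):
    """Deep-learning step: map per-grid-point descriptors to a local quantity."""
    return np.array([model(d) for d in descriptors])

def integrate_global_quantities(local_predictions, energy_grid, weights):
    """Physics step: contract local predictions into global quantities,
    e.g. an electronic-density-like and a band-energy-like term."""
    density = local_predictions @ weights                      # sum over energies
    band_energy = np.sum(local_predictions * energy_grid * weights)
    return density, band_energy

# Toy usage with random data standing in for real descriptors and a model.
rng = np.random.default_rng(0)
n_grid_points, n_energies = 1_000, 50
descriptors = rng.normal(size=(n_grid_points, 10))
model = lambda d: np.abs(rng.normal(size=n_energies))   # stand-in for a network
energy_grid = np.linspace(-10.0, 10.0, n_energies)
weights = np.full(n_energies, 20.0 / n_energies)         # uniform integration weights

local = predict_local_quantity(descriptors, model)
density, band_energy = integrate_global_quantities(local, energy_grid, weights)
print(density.shape, float(band_energy))
```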

The MALA software stack takes the arrangement of atoms in space as input and generates fingerprints known as bispectrum components, which encode the spatial arrangement of atoms around Cartesian grid points. The machine learning models in MALA are trained to predict the electronic structure based on this atomic neighborhood. A significant advantage of MALA is that its machine learning models are independent of system size, allowing them to be trained on data from small systems and deployed at any scale.
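A brief sketch of why size independence follows from this per-grid-point design: a model fitted on descriptors from a small cell can be evaluated on arbitrarily many grid points from a larger cell, since the number of grid points never enters the model itself. The linear model and synthetic data below are placeholders, not MALA's descriptors or networks.

```python
# Minimal sketch of size transferability: fit on a small cell, evaluate on a
# large one. Ordinary least squares stands in for a deep neural network, and
# the random data stand in for bispectrum descriptors.

import numpy as np

rng = np.random.default_rng(1)
n_features = 8

# "Training" data: descriptors from a small simulation cell.
X_small = rng.normal(size=(500, n_features))      # 500 grid points
y_small = X_small @ rng.normal(size=n_features)   # synthetic local targets

# Fit the per-grid-point model.
coeffs, *_ = np.linalg.lstsq(X_small, y_small, rcond=None)

# "Deployment": descriptors from a much larger cell, evaluated point by point.
X_large = rng.normal(size=(200_000, n_features))
y_large_pred = X_large @ coeffs
print(y_large_pred.shape)   # (200000,)
```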

In their publication, the research team showcases the effectiveness of this strategy. They achieve accelerations of more than 1,000 times for smaller system sizes, consisting of several thousand atoms, compared to conventional algorithms. The team further demonstrated MALA’s ability to accurately perform electronic structure calculations at large scale, involving more than 100,000 atoms. Notably, this was accomplished with modest computational effort, exposing the limitations of conventional DFT codes.

Attila Cangi, Acting Head of the Department of Matter under Extreme Conditions at CASUS, explains: “As system size increases and more atoms are involved, DFT calculations become impractical, while MALA’s speed advantage continues to grow. MALA’s key breakthrough lies in its ability to operate on local atomic environments, enabling accurate numerical predictions that are minimally affected by system size. This achievement opens up computational possibilities that were once thought unattainable.”

A push for applied research is expected

Cangi aims to push the boundaries of electronic structure computation by leveraging machine learning: “We anticipate that MALA will trigger a transformation in electronic structure computation, as we now have a method to simulate much larger systems at unprecedented speed. In the future, researchers will be able to address a variety of societal challenges based on significantly improved baselines, including developing new vaccines and new materials for energy storage, conducting large-scale simulations of semiconductor devices, studying material defects, and exploring chemical reactions to convert the atmospheric greenhouse gas carbon dioxide into climate-friendly minerals.”

In addition, the MALA approach is well suited to high-performance computing (HPC). As system size increases, MALA enables independent processing of the grid points it uses, effectively exploiting HPC resources, particularly graphics processing units. Siva Rajamanickam, staff scientist and parallel computing expert at Sandia National Laboratories, explains, “The MALA algorithm for electronic structure calculations maps well to modern HPC systems with distributed accelerators. The ability to decompose the work and execute different grid points in parallel across different accelerators makes MALA an ideal match for scalable machine learning on HPC resources, delivering unparalleled speed and efficiency in electronic structure calculations.”
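The decomposition Rajamanickam describes can be sketched as follows: because each grid point depends only on its local descriptor, the points can be split into independent batches and evaluated in parallel. In this hedged illustration a Python process pool stands in for the distributed accelerators of an HPC system; it is not MALA's actual parallelization code, and all names are hypothetical.

```python
# Sketch of grid-point decomposition: split the grid points into independent
# batches and evaluate them in parallel. A process pool stands in for
# distributed GPUs on an HPC system.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def evaluate_batch(descriptor_batch):
    """Stand-in for running the trained model on one batch of grid points
    (e.g. on one accelerator)."""
    return np.tanh(descriptor_batch).sum(axis=1)

def parallel_inference(descriptors, n_workers=4):
    batches = np.array_split(descriptors, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(evaluate_batch, batches))
    return np.concatenate(results)

if __name__ == "__main__":
    descriptors = np.random.default_rng(2).normal(size=(100_000, 8))
    predictions = parallel_inference(descriptors)
    print(predictions.shape)   # (100000,)
```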

In addition to the development partners HZDR and Sandia National Laboratories, MALA is already employed by institutions and companies such as the Georgia Institute of Technology, North Carolina A&T State University, SambaNova Systems Inc., and Nvidia Corp.




