
How computers and artificial intelligence have evolved together
Co-design, that is, designing software and hardware simultaneously, is one way to meet the computing power demands of today’s artificial intelligence applications. The compiler, which translates instructions from one representation to another, is a key piece of the puzzle. A group of researchers at the Chinese Academy of Sciences surveyed existing compiler technologies for deep learning co-design and proposed their own framework, Buddy Compiler.
The group’s review paper was published June 19 in Intelligent Computing, a Science Partner Journal.
While other reviews have summarized optimization techniques, hardware architectures, co-design approaches, and compilation techniques, none has discussed deep learning systems from the perspective of compilation technology for co-design. The researchers take this angle because they believe that “compilation technology can bring more opportunities to co-design and thus be able to achieve better performance and power requirements of deep learning systems.”
This review covers five topics:
- A history of deep learning and co-design
- Deep learning and co-design now
- Compilation technology for deep learning co-design
- Current issues and future trends
- Buddy Compiler
A history of deep learning and co-design
Since the 1950s, neural networks have gone through many ups and downs, leading to today’s explosive growth of deep learning applications and research. Co-design began in the 1990s and has since been adopted in a wide variety of fields, evolving from manual work to computer-assisted design and finally to complex processes involving modeling, simulation, optimization, synthesis and testing. Since 2020, a neural network architecture called the transformer has had great success: ChatGPT, for example, is a chatbot built on a “generative pre-trained transformer.” Today’s AI applications such as ChatGPT are hitting performance barriers that demand more hardware-software co-design.
Deep learning and co-design now
Deep learning breakthroughs stem from the use of many layers and a large number of parameters, which significantly increases the computational demands of training and inference. As a result, it is difficult to achieve reasonable execution times through software-level optimization alone. To address this, both industry and academia have turned to domain-specific hardware, aiming to reach the required performance through a collaborative effort between hardware and software known as hardware-software co-design. Recently, a comprehensive ecosystem has emerged, consisting of deep learning frameworks, high-performance libraries, domain-specific compilers, programming models, hardware toolflows, and co-design techniques. Together, these components increase the efficiency and effectiveness of deep learning systems.
Compilation technology for deep learning co-design
There are two popular ecosystems for building deep learning compilers: the tensor virtual machine, known as TVM, and the multi-level intermediate representation, known as MLIR. The two take different strategies, with TVM serving as an end-to-end deep learning compiler and MLIR acting as compiler infrastructure. Meanwhile, hardware architectures tailored for deep learning workloads fall into two main types: streaming architectures and compute engine architectures. The hardware design toolflows associated with these architectures also embrace new compilation techniques to drive progress and innovation. The combination of deep learning compilers and hardware compilation techniques presents new opportunities for deep learning co-design.
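To give a flavor of the TVM strategy, here is a minimal sketch, written for this summary rather than taken from the review, of how such a compiler separates *what* to compute from *how* to compute it. It assumes a TVM release where the tensor expression (`te`) API and `te.create_schedule` are available; the names `A`, `B`, and `C` are illustrative.

```python
# Minimal sketch (not from the review): a deep learning compiler such as TVM
# describes a computation once, then applies a hardware-specific "schedule".
import tvm
from tvm import te

n = te.var("n")                       # symbolic vector length
A = te.placeholder((n,), name="A")    # declare the inputs
B = te.placeholder((n,), name="B")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")  # the computation

s = te.create_schedule(C.op)          # start from the default schedule
# The same computation can be re-scheduled for different hardware,
# e.g. split the loop and vectorize the inner part for a SIMD CPU:
outer, inner = s[C].split(C.op.axis[0], factor=8)
s[C].vectorize(inner)

f = tvm.build(s, [A, B, C], target="llvm")  # compile to machine code
```

MLIR takes the complementary, infrastructure route: rather than a fixed end-to-end flow, it provides reusable dialects and progressive lowering passes from which a domain-specific compiler, such as the authors’ Buddy Compiler, can be assembled.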
Current issues and future trends
With performance requirements growing faster than processor development can keep up, effective co-design is critical. The problem is that there is no single way to do co-design: no unified framework or abstraction exists, stacking multiple layers of abstraction costs efficiency, and adapting compilers to a specific domain is difficult. Unifying ecosystems are forming, but the underlying causes of fragmentation remain. The solution, the authors argue, will be an extensible, modular, unifying framework.
Buddy Compiler
Contributors to the Buddy Compiler project are “committed to building a scalable and flexible hardware and software co-design ecosystem.” The ecosystem’s modules include a compiler framework, a compiler platform as a service, a benchmark framework, a domain-specific architecture framework, and co-design modules. The last two modules are still in progress.
The authors foresee the ongoing development of a compilation ecosystem that will help unify the work being done in the fast-growing and somewhat fragmented field of deep learning.
The authors of the review are Hongbin Zhang, Mingjie Xing, Yanjun Wu, and Chen Zhao of the Institute of Software, Chinese Academy of Sciences.
Journal
Intelligent Computing
DOI
10.34133/icomputing.0040
Research methods
Literature Review
Research Subjects
Not applicable
Article title
Compiler Technologies in Deep Learning Co-Design: A Survey
Article Publication Date
19-Jun-2023
COI statement
The authors declare that they have no competing interests.