Title | Multi-Processor System-on-Chip 2
---|---
Author | Liliana Andrade
Genre | Foreign computer literature
ISBN | 9781119818403
Foreword
Ahmed JERRAYA
Cyber Physical Systems Programs, CEATech, Grenoble, France
Multi-core and multi-processor SoC (MPSoC) concepts started in the late 1990s, mainly to mitigate the complexity of application-specific integrated circuits (ASICs) and to bring some flexibility. The integration of instruction-set processors into ASIC design aimed both to structure the architecture and to allow for programmability. In a second phase, the concept was adopted for general-purpose CPUs and GPUs. Among the pioneers of MPSoC design, we can list the MPA architecture from ST, which used eight specific cores to implement MPEG-4 in 1998. Ten years later, this evolved into MPPA, Kalray’s general-purpose MPSoC architecture. Another pioneer is the Emotion Engine from Sony, which used five cores (two DSPs and three RISC cores) to build the application processor for the PlayStation 2 (PS2). This also evolved and later converged into the Cell architecture (developed jointly by Sony, IBM and Toshiba) in 2005. In 2000, Lucent announced Daytona (a quad SPARC V8), and in 2001, Philips designed the famous Viper architecture, which combined a MIPS core and a DSP (TriMedia). In 2004, TI introduced the OMAP architecture, which combined an ARM core and a DSP. The use of MPSoC to build application-specific architectures continues, and almost every SoC produced today is a multi- (or many-) core architecture.

An important evolution took place in 2005 with the ARM MPCore, the first general-purpose quad core. It was followed by several commercial general-purpose multi-cores, including the Intel Core Duo, the AMD Opteron, Sun’s Niagara SPARC and the Cell processor (eight synergistic cores plus a PowerPC core, connected by a ring network).
MPSoC started a new computing era, but brought a twofold challenge: building multi-core HW that SW designers can use easily, and building distributed SW that fully exploits the HW capabilities. To deal with these challenges, the design communities in academia and industry began a series of conferences and workshops to rethink classical distributed computing. The study of new methods, models and tools for these new distributed HW and SW architectures generated new concepts, such as the interconnect architectures called networks-on-chip (NoCs).

The MPSoC Forum, created in 2001, was the first interdisciplinary forum to bring together leading thinkers from the different fields involved in designing multi-core and multi-processor SoCs. Over the last 20 years, MPSoC has been a unique opportunity for me to meet many of the world’s top researchers and to communicate with them in person, in addition to enjoying the high-quality conference programs. The confluence of academic and industrial perspectives, and of hardware and software, makes MPSoC not “yet another conference”. I have learned how emerging SW and HW design technologies and architectures can benefit from advanced semiconductor manufacturing technologies to build energy-efficient multi-core architectures that serve advanced computing (image, vision and cloud) and distributed networked systems. This book, in two volumes (Architectures and Applications), was published to celebrate the 20th anniversary of MPSoC with outstanding contributions from previous MPSoC events.
This second volume covers applications of MPSoC, complementing the first volume on architectures, which covers the key components of MPSoC: processors, memory, interconnect and interfaces.
Acknowledgments
Liliana ANDRADE and Frédéric ROUSSEAU
Université Grenoble Alpes, CNRS, Grenoble INP, TIMA, 38000 Grenoble, France
The editors are indebted to the MPSoC community, who made this book possible. First of all, they acknowledge the societies that supported this project. EDAA and IEEE/CAS partially funded the organization of the first two events. Since its creation, IEEE/CEDA has sponsored the event. Industrial sponsors have played a vital role in keeping MPSoC alive for the last 20 years; special thanks to Synopsys, Arteris, ARM, XILINX and Socionext. The event was created by a nucleus of several people who now form the steering committee (Ahmed Jerraya, Hannu Tenhunen, Marilyn Wolf, Masaharu Imai and Hiroto Yasuura). A larger group has, for the last 20 years, been working to form the community (Nicolas Ventroux, Jishen Zhao, Tsuyoshi Isshiki, Frédéric Rousseau, Anca Molnos, Gabriela Nicolescu, Hiroyuki Tomiyama, Masaaki Kondo, Hiroki Matsutani, Tohru Ishihara, Pierre-Emmanuel Gaillardon, Yoshinori Takeuchi, Tom Becnel, Frédéric Pétrot, Yuan Xie, Koji Inoue, Hideki Takase and Raphaël David). The editors would like to acknowledge the outstanding contribution of the MPSoC speakers, especially those who contributed chapters to this book. Finally, the editors would like to thank the people who participated in the careful reading of this book (Breytner Fernandez and Bruno Ferres).
1
From Challenges to Hardware Requirements for Wireless Communications Reaching 6G
Stefan A. DAMJANCEVIC1, Emil MATUS1, Dmitry UTYANSKY2, Pieter VAN DER WOLF3 and Gerhard P. FETTWEIS1, 4
1 Vodafone Chair Mobile Communications Systems, TU Dresden, Germany
2 Synopsys, Saint Petersburg, Russia
3 Solutions Group, Synopsys, Inc., Eindhoven, The Netherlands
4 Barkhausen Institut, TU Dresden, Germany
Over the past few decades, we have seen rapid innovation in wireless communications. In particular, the IEEE 802.11 and 3GPP standardization organizations have driven data rates into the Gb/s range, enabling modern life at home, at work and on the road. Today’s societies have become dependent on this important infrastructure. This development rests on an infrastructure of electronic circuits, driven, at heart, by very advanced multi-processor system-on-chip engines.
First, we want to deliver a vision of what is to be expected from 6G, the next innovation wave of cellular technology. Cellular 1G was about delivering analogue voice, and digital 2G fixed its shortcomings. The intention of 3G was to deliver data, but only 4G made data proficiently available at the level the services required. With 5G, we see the advent of the Tactile Internet, i.e. connecting remote controls not as point-to-point solutions but via a network. Will 6G be just a “fix” of the issues left unsolved? We believe 6G will deliver truly more than this, and will also require many more sophisticated signal processing tasks.
Second, we want to analyze the computational tasks of 5G-and-beyond baseband processing, as well as the requirements placed on them. It becomes clear that such heterogeneous computation cannot be mapped efficiently onto a homogeneous processor array; instead, we need to find the right architecture for the right task. The reader is introduced to an extremely varied set of requirements, as the services differ dramatically in terms of latency, data rate and reliability.
Third, we want to give a perspective and a sense of the scale required from hardware for the previously determined corner workloads. Workloads alone are not sufficient to deduce adequate hardware (HW) requirements; to bridge the gap, we need to know how a 6G candidate waveform modulation translates workloads into HW requirements. We therefore implement an example beyond-5G algorithm, generalized frequency division multiplexing (GFDM), for those workloads on a prototype software-programmable single-instruction, multiple-data (SIMD) wide-vector processing machine.
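To make the modulation kernel concrete, below is a minimal, non-optimized Python/NumPy sketch of the standard GFDM modulation equation, x[n] = Σ_k Σ_m d[k,m] · g[(n − mK) mod N] · e^{j2πkn/K} with N = KM. It is purely illustrative: the parameter values and the prototype pulse are our assumptions, not the vectorized SIMD implementation evaluated in this chapter.

```python
import numpy as np

def gfdm_modulate(d, g):
    """Direct (textbook-form) GFDM modulator, not vectorized.

    d : (K, M) complex data symbols (K subcarriers x M subsymbols)
    g : length-N prototype pulse shared by all subcarriers, N = K*M
    Returns one GFDM block of N time-domain samples.
    """
    K, M = d.shape
    N = K * M
    n = np.arange(N)
    x = np.zeros(N, dtype=complex)
    for k in range(K):
        subcarrier = np.exp(2j * np.pi * k * n / K)  # frequency shift to subcarrier k
        for m in range(M):
            pulse = np.roll(g, m * K)                # circular time shift: g[(n - m*K) mod N]
            x += d[k, m] * pulse * subcarrier
    return x

# Illustrative example: K = 8 subcarriers, M = 5 subsymbols, QPSK data
K, M = 8, 5
rng = np.random.default_rng(0)
d = (rng.choice([-1, 1], (K, M)) + 1j * rng.choice([-1, 1], (K, M))) / np.sqrt(2)
g = np.roll(np.hanning(K * M), -(K * M) // 2)        # assumed prototype filter
x = gfdm_modulate(d, g)
print(x.shape)                                       # (40,) = one block of N = K*M samples
```

The nested loops expose the structure that matters for hardware: each subsymbol contributes an N-sample element-wise multiply–accumulate, exactly the kind of data parallelism a wide-vector SIMD machine can exploit.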
Finally, the performance–cost analysis shows a need for