Description |
1 online resource (416 p.). |
Series |
Issn Ser. |
|
Note |
Description based upon print version of record. |
Contents |
Intro -- Hardware Accelerator Systems for Artificial Intelligence and Machine Learning -- Copyright -- Contents -- Contributors -- Preface -- Chapter One: Introduction to hardware accelerator systems for artificial intelligence and machine learning -- 1. Introduction to artificial intelligence and machine learning in hardware acceleration -- 2. Deep learning and neural network acceleration -- 2.1. The neural processing unit -- 2.2. RENO: A reconfigurable NoC accelerator -- 3. HW accelerators for artificial neural networks and machine learning -- 3.1. CNN accelerator architecture |
|
3.2. SVM accelerator architecture -- 3.3. DNN based hardware acceleration -- 3.3.1. Eyeriss -- 4. SW framework for deep neural networks -- 5. Comparison of FPGA, CPU and GPU -- 5.1. Performance metrics -- 6. Conclusion and future scope -- References -- Chapter Two: Hardware accelerator systems for embedded systems -- 1. Introduction -- 2. Neural network computing in embedded systems -- 2.1. Driving neural network computing into embedded systems -- 2.2. Considerations for choosing embedded processing solutions -- 3. Hardware acceleration in embedded systems -- 3.1. Hardware acceleration options |
|
3.2. Commercial options for neural network acceleration -- 4. Software frameworks for neural networks -- Acknowledgments -- References -- Chapter Three: Hardware accelerator systems for artificial intelligence and machine learning -- 1. Introduction -- 2. Background -- 2.1. Overview of convolutional neural networks -- 2.2. Quantization of weights and activations -- 2.2.1. Performance of neural networks using quantized weights and activations based on arithmetic binary shift operations -- 2.3. Computational elements of hardware accelerators in deep neural networks |
|
3. Hardware inference accelerators for deep neural networks -- 3.1. Architectures of hardware accelerators -- 3.2. Eyeriss: hardware accelerator using a spatial architecture -- 3.3. UNPU and Bit Fusion: hardware accelerators using shift-based multipliers -- 3.4. Digital neuron: a multiplier-less massive parallel processor -- 3.5. Power saving strategies for hardware accelerators -- 4. Hardware inference accelerators using digital neurons -- 4.1. System architecture -- 4.2. Implementation and experimental results -- 5. Summary -- Acknowledgments -- Key terminology and definitions -- References |
|
Chapter Four: Generic quantum hardware accelerators for conventional systems -- 1. Introduction -- 2. Principles of computation -- 3. Need and foundation for quantum hardware accelerator design -- 3.1. Algorithms -- 3.2. Programming paradigm and languages -- 3.3. Compiler and runtime requirements -- 3.4. Quantum instruction set architecture (Q-ISA) -- 3.5. Quantum microarchitecture -- 4. A generic quantum hardware accelerator (GQHA) -- 4.1. Deciphering HQAP as a GQHA -- 4.2. Deconstructing GQHA -- 5. Industrially available quantum hardware accelerators -- 5.1. IBM Quantum project |
Subject |
Machine learning.
|
|
Apprentissage automatique.
|
Added Author |
Deka, Ganesh Chandra.
|
Other Form: |
Print version: Kim, Shiho. Hardware Accelerator Systems for Artificial Intelligence and Machine Learning. San Diego : Elsevier Science & Technology, c2021. 9780128231234 |
ISBN |
9780128231241 |
|
0128231246 |
|