Enroll in CSCE 990-002: Hardware Acceleration for Machine Learning

We've added a new course to our fall semester schedule: CSCE 990-002: Hardware Acceleration for Machine Learning.

CSCE 990-002: Hardware Acceleration for Machine Learning
Seminar, Lecture
Instructor: Arman Roohi
Days: MWF
Time: 1:30–2:20 PM
Location: BURN 121

Course Description:
Machine learning (ML) is now widely used in advanced artificial intelligence (AI) applications. Breakthroughs in computational capability have made it possible to run complex ML algorithms in a relatively short time, enabling real-time human-machine interaction such as face detection for video surveillance, advanced driver-assistance systems (ADAS), and image recognition for early cancer detection. Across these applications, high detection accuracy requires complex ML computation, which comes at the cost of high computational complexity and, in turn, demanding requirements on the hardware platform. Today most applications run on general-purpose compute engines, especially graphics processing units (GPUs). However, recent work from both industry and academia shows a trend toward application-specific integrated circuits (ASICs) for ML, especially for deep neural networks (DNNs). This course gives an overview of hardware accelerator design, the various types of ML acceleration, and the techniques used to improve the hardware efficiency of ML computation, with particular attention to non-von Neumann architectures built on post-CMOS technologies such as spintronics and memristors.
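
To put "high computational complexity" in perspective, the short sketch below counts the multiply-accumulate (MAC) operations in a single convolutional layer. The layer dimensions are hypothetical, chosen only for illustration and not taken from the course material:

    # Back-of-the-envelope MAC count for one convolutional layer.
    # The dimensions below are hypothetical, picked only to show
    # why DNN inference is compute-heavy.
    def conv_macs(out_h, out_w, out_ch, k_h, k_w, in_ch):
        """Multiply-accumulate operations for one conv layer."""
        return out_h * out_w * out_ch * k_h * k_w * in_ch

    # Example: a 3x3 convolution with 256 input and 256 output
    # channels on a 56x56 feature map.
    macs = conv_macs(56, 56, 256, 3, 3, 256)
    print(f"{macs / 1e9:.2f} GMACs")  # ~1.85 billion MACs in one layer

A modern network stacks dozens of such layers, which is why general-purpose processors struggle to deliver real-time inference within tight power and latency budgets.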

Course Objectives:
• Hardware + ML: accelerating the compute-heavy deep neural network (DNN) models of machine learning
• Foundations of ML and DL algorithms
• Compute and memory behavior of DL workloads
• Pros and cons of different compute platforms (CPUs/GPUs)
• Custom HW accelerators
  – Minimizing computation, data movement, and memory overhead
• Co-design of ML algorithms and accelerators (see the sketches after this list)
  – E.g., model compression/retraining for fixed-point arithmetic
  – E.g., memory access strategies to reduce data movement
• Cross-layer perspective: algorithmic, architectural, and circuit-level