Performance Optimization on Ascend, Biren, and Cambricon Training Course
Ascend, Biren, and Cambricon are leading AI hardware platforms in China, each offering unique optimization and profiling tools for production-scale AI workloads.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI infrastructure and performance engineers who wish to optimize model inference and training workflows across multiple Chinese AI chip platforms.
By the end of this training, participants will be able to:
- Benchmark models on Ascend, Biren, and Cambricon platforms.
- Identify system bottlenecks and memory/compute inefficiencies.
- Apply graph-level, kernel-level, and operator-level optimizations.
- Tune deployment pipelines to improve throughput and latency.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of profiling and optimization tools on each platform.
- Guided exercises focused on practical tuning scenarios.
Course Customization Options
- To request a customized training for this course based on your performance environment or model type, please contact us to arrange.
Course Outline
Performance Concepts and Metrics
- Latency, throughput, power usage, resource utilization (illustrated by the sketch after this list)
- System-level vs model-level bottlenecks
- Profiling for inference vs training
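As a concrete reference for these metrics, the sketch below times a generic inference callable and derives average latency, p95 latency, and throughput from wall-clock measurements. It is deliberately framework-agnostic: `run_inference` and the batch size are hypothetical placeholders, and real benchmarks on Ascend, Biren, or Cambricon hardware should also use the vendor profilers covered below for device-side timings.

```python
import time
import statistics

def benchmark(run_inference, batch_size: int, warmup: int = 10, iters: int = 100):
    """Measure average/p95 latency (ms) and throughput (samples/s) of an inference callable.

    `run_inference` is a hypothetical zero-argument function that executes one batch
    on the target device; any device synchronization must happen inside it so that
    host-side wall-clock timing is meaningful.
    """
    for _ in range(warmup):  # warm-up runs are excluded from the statistics
        run_inference()

    latencies = []
    for _ in range(iters):
        start = time.perf_counter()
        run_inference()
        latencies.append(time.perf_counter() - start)

    avg_s = statistics.mean(latencies)
    p95_s = sorted(latencies)[max(int(0.95 * len(latencies)) - 1, 0)]
    return {
        "avg_latency_ms": avg_s * 1000,
        "p95_latency_ms": p95_s * 1000,
        "throughput_samples_per_s": batch_size / avg_s,
    }
```

Host-side timing like this is a useful first baseline; the per-kernel and per-operator views discussed in the following modules come from the platform profilers.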
Profiling on Huawei Ascend
- Using the CANN Profiler and MindInsight
- Kernel and operator diagnostics
- Offload patterns and memory mapping
Profiling on Biren GPUs
- Performance inspection tools in the Biren SDK
- Kernel fusion, memory alignment, and execution queues
- Power- and thermal-aware profiling
Profiling on Cambricon MLUs
- BANGPy and Neuware performance tools
- Kernel-level visibility and log interpretation
- Integrating MLU profilers with deployment frameworks
Graph-Level and Model-Level Optimization
- Graph pruning and quantization strategies
- Operator fusion and computation graph restructuring
- Input size standardization and batch tuning (see the batching sketch after this list)
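To make the input-standardization and batch-tuning items above more tangible, here is a minimal, framework-neutral sketch that pads variable-length inputs to a fixed shape and groups them into fixed-size batches. Fixed shapes generally help ahead-of-time graph compilers avoid recompilation, though the exact benefit is platform-specific; the shapes, batch size, and helper names below are illustrative assumptions.

```python
import numpy as np

def pad_to_fixed_length(seq: np.ndarray, target_len: int, pad_value: float = 0.0) -> np.ndarray:
    """Pad or truncate a 1-D sequence to `target_len` so every input shares one shape."""
    out = np.full((target_len,), pad_value, dtype=seq.dtype)
    n = min(len(seq), target_len)
    out[:n] = seq[:n]
    return out

def make_batches(samples, target_len: int = 128, batch_size: int = 16):
    """Group padded samples into fixed-size batches; the final partial batch is padded
    with zero rows so the compiled graph always sees the same batch dimension."""
    padded = [pad_to_fixed_length(np.asarray(s, dtype=np.float32), target_len) for s in samples]
    batches = []
    for i in range(0, len(padded), batch_size):
        chunk = padded[i:i + batch_size]
        while len(chunk) < batch_size:  # pad the last batch to a full batch
            chunk.append(np.zeros(target_len, dtype=np.float32))
        batches.append(np.stack(chunk))
    return batches
```

In practice the target length and batch size are themselves tuning knobs, traded off against padding overhead and device memory.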
Memory and Kernel Optimization
- Optimizing memory layout and reuse
- Efficient buffer management across chipsets
- Per-platform kernel-level tuning techniques
Cross-Platform Best Practices
- Performance portability: abstraction strategies
- Building shared tuning pipelines for multi-chipset environments
- Example: tuning an object detection model across Ascend, Biren, and MLU (a backend-abstraction sketch follows below)
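The portability example above can be sketched as a thin backend-abstraction layer: the application codes against one small interface, and each chipset gets its own adapter. All class and method names below are hypothetical illustrations rather than vendor APIs; real adapters would wrap CANN/AscendCL, the Biren SDK, or the Neuware runtime respectively.

```python
from abc import ABC, abstractmethod

class InferenceBackend(ABC):
    """Hypothetical minimal interface that hides chipset-specific runtimes."""

    @abstractmethod
    def load_model(self, model_path: str) -> None: ...

    @abstractmethod
    def infer(self, batch): ...

class AscendBackend(InferenceBackend):
    def load_model(self, model_path: str) -> None:
        self.model_path = model_path  # a real adapter would call AscendCL / CANN here

    def infer(self, batch):
        raise NotImplementedError("wrap AscendCL inference here")

class CambriconBackend(InferenceBackend):
    def load_model(self, model_path: str) -> None:
        self.model_path = model_path  # a real adapter would use the Neuware runtime

    def infer(self, batch):
        raise NotImplementedError("wrap Neuware inference here")

def get_backend(name: str) -> InferenceBackend:
    """Select an adapter by name so tuning pipelines stay chipset-agnostic."""
    backends = {"ascend": AscendBackend, "cambricon": CambriconBackend}
    return backends[name]()
```

A Biren adapter would follow the same pattern; the shared tuning pipeline then only ever talks to `InferenceBackend`.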
Summary and Next Steps
Requirements
- Experience working with AI model training or deployment pipelines
- Understanding of GPU/MLU computing principles and model optimization
- Familiarity with performance profiling tools and their metrics
Audience
- Performance engineers
- Machine learning infrastructure teams
- AI system architects
Open Training Courses require 5+ participants.
Upcoming Courses (minimum 5 participants)
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend is a family of AI processors designed for high-performance inference and training.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI engineers and data scientists who wish to develop and optimize neural network models using Huawei's Ascend platform and the CANN toolkit.
By the end of this training, participants will be able to:
- Set up and configure the CANN development environment.
- Develop AI applications using MindSpore and CloudMatrix workflows.
- Optimize performance on Ascend NPUs using custom operators and tiling.
- Deploy models to edge or cloud environments.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Huawei Ascend and the CANN toolkit in sample applications.
- Guided exercises focused on model building, training, and deployment.
Course Customization Options
- To request a customized training for this course based on your infrastructure or datasets, please contact us to arrange.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) is Huawei's AI compute stack for deploying and optimizing AI models on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI developers and engineers who wish to deploy trained AI models efficiently to Huawei Ascend hardware using the CANN toolkit and tools such as MindSpore, TensorFlow, or PyTorch.
By the end of this training, participants will be able to:
- Understand the CANN architecture and its role in the AI deployment pipeline.
- Convert and adapt models from popular frameworks to Ascend-compatible formats.
- Use tools like ATC, OM model conversion, and MindSpore for edge and cloud inference (a conversion sketch appears after this list).
- Diagnose deployment issues and optimize performance on Ascend hardware.
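As a rough illustration of the ATC/OM conversion step mentioned above, the sketch below shells out to the `atc` command-line tool to convert an ONNX model into an offline OM model. The flag names follow Huawei's documented ATC options, but the framework code, SoC version, and file paths are assumptions that should be verified against your installed CANN release.

```python
import subprocess

def convert_onnx_to_om(onnx_path: str, output_prefix: str, soc_version: str = "Ascend310") -> None:
    """Invoke the ATC tool (shipped with the CANN toolkit) to produce an .om model.

    Assumptions: `atc` is on PATH, framework code 5 selects ONNX input, and
    `soc_version` matches the target Ascend device; check all of these against
    the ATC documentation for your CANN version.
    """
    cmd = [
        "atc",
        f"--model={onnx_path}",          # source model file
        "--framework=5",                 # assumed framework code for ONNX
        f"--output={output_prefix}",     # output prefix; ATC appends .om
        f"--soc_version={soc_version}",  # target Ascend SoC
    ]
    subprocess.run(cmd, check=True)

# Hypothetical usage:
# convert_onnx_to_om("resnet50.onnx", "resnet50_ascend")
```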
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab work using CANN tools and Ascend simulators or devices.
- Practical deployment scenarios based on real-world AI models.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix is Huawei's unified AI development and deployment platform designed to support scalable, production-grade inference pipelines.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level AI professionals who wish to deploy and monitor AI models using the CloudMatrix platform with CANN and MindSpore integration.
By the end of this training, participants will be able to:
- Use CloudMatrix for model provisioning, deployment, and serving.
- Convert and optimize models for Ascend chips.
- Set up pipelines for real-time and batch inference tasks.
- Monitor deployments and tune performance in production settings.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of CloudMatrix with real deployment scenarios.
- Guided exercises focused on conversion, optimization, and scaling.
Course Customization Options
- To request a customized training based on your AI infrastructure or cloud environment, please contact us to arrange.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs designed for AI and HPC workloads with support for large-scale training and inference.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level developers who wish to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Understand Biren GPU architecture and memory hierarchy.
- Set up the development environment and use Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Apply performance tuning and debugging techniques.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request a customized training for this course based on your application stack or integration needs, please contact us to arrange.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are dedicated AI chips optimized for inference and training in edge and data center scenarios.
This instructor-led, live training (online or onsite) is aimed at intermediate-level developers who wish to build and deploy AI models using the BANGPy framework and the Neuware SDK on Cambricon MLU hardware.
By the end of this training, participants will be able to:
- Set up and configure the BANGPy and Neuware development environments.
- Develop and optimize Python- and C++-based models for Cambricon MLUs.
- Deploy models to edge and data center devices running the Neuware runtime.
- Integrate ML workflows with MLU-specific acceleration features.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of BANGPy and Neuware for development and deployment.
- Guided exercises focused on optimization, integration, and testing.
Course Customization Options
- To request a customized training based on your Cambricon device model or use case, please contact us to arrange.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) is Huawei's AI computing toolkit used to compile, optimize, and deploy AI models on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at beginner-level AI developers who wish to understand how CANN fits into the model lifecycle from training to deployment, and how it works with frameworks like MindSpore, TensorFlow, and PyTorch.
By the end of this training, participants will be able to:
- Understand the purpose and architecture of the CANN toolkit.
- Set up a development environment with CANN and MindSpore.
- Convert and deploy a simple AI model to Ascend hardware.
- Gain foundational knowledge for future CANN optimization or integration projects.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs with simple model deployment.
- Step-by-step walkthrough of the CANN toolchain and integration points.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit enables powerful AI inference on edge devices such as the Ascend 310. CANN provides essential tools for compiling, optimizing, and deploying models where compute and memory are constrained.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI developers and integrators who wish to deploy and optimize models on Ascend edge devices using the CANN toolchain.
By the end of this training, participants will be able to:
- Prepare and convert AI models for Ascend 310 using CANN tools.
- Build lightweight inference pipelines using MindSpore Lite and AscendCL.
- Optimize model performance for limited compute and memory environments.
- Deploy and monitor AI applications in real-world edge use cases.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab work with edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei's AI stack, from the low-level CANN SDK to the high-level MindSpore framework, offers a tightly integrated AI development and deployment environment optimized for Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level technical professionals who wish to understand how the CANN and MindSpore components work together to support AI lifecycle management and infrastructure decisions.
By the end of this training, participants will be able to:
- Understand the layered architecture of Huawei’s AI compute stack.
- Identify how CANN supports model optimization and hardware-level deployment.
- Evaluate the MindSpore framework and toolchain in relation to industry alternatives.
- Position Huawei's AI stack within enterprise or cloud/on-prem environments.
Format of the Course
- Interactive lecture and discussion.
- Live system demos and case-based walkthroughs.
- Optional guided labs on model flow from MindSpore to CANN.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Optimizing Neural Network Performance with CANN SDK
14 Hours
CANN SDK (Compute Architecture for Neural Networks) is Huawei's AI compute foundation that allows developers to fine-tune and optimize the performance of deployed neural networks on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI developers and system engineers who wish to optimize inference performance using CANN’s advanced toolset, including the Graph Engine, TIK, and custom operator development.
By the end of this training, participants will be able to:
- Understand CANN's runtime architecture and performance lifecycle.
- Use profiling tools and Graph Engine for performance analysis and optimization.
- Create and optimize custom operators using TIK and TVM.
- Resolve memory bottlenecks and improve model throughput.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs with real-time profiling and operator tuning.
- Optimization exercises using edge-case deployment examples.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) provides powerful deployment and optimization tools for real-time AI applications in computer vision and NLP, especially on Huawei Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI practitioners who wish to build, deploy, and optimize vision and language models using the CANN SDK for production use cases.
By the end of this training, participants will be able to:
- Deploy and optimize CV and NLP models using CANN and AscendCL.
- Use CANN tools to convert models and integrate them into live pipelines.
- Optimize inference performance for tasks like detection, classification, and sentiment analysis.
- Build real-time CV/NLP pipelines for edge or cloud-based deployment scenarios.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab with model deployment and performance profiling.
- Live pipeline design using real CV and NLP use cases.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM enable advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at advanced-level system developers who wish to build, deploy, and tune custom operators for AI models using CANN’s TIK programming model and TVM compiler integration.
By the end of this training, participants will be able to:
- Write and test custom AI operators using the TIK DSL for Ascend processors.
- Integrate custom ops into the CANN runtime and execution graph.
- Use TVM for operator scheduling, auto-tuning, and benchmarking (a minimal TVM sketch follows after this list).
- Debug and optimize instruction-level performance for custom computation patterns.
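To give a flavor of the TVM side of the operator workflow above, here is a minimal tensor-expression example that defines, schedules, compiles, and verifies an element-wise addition operator for a generic CPU target. It uses TVM's classic `te` API (available in older TVM releases); lowering the same operator to Ascend hardware via TIK or a vendor backend is beyond this sketch.

```python
import numpy as np
import tvm
from tvm import te

# Define the computation: C[i] = A[i] + B[i] for a fixed-size vector.
n = 1024
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Create a default schedule and compile for a generic CPU (LLVM) target.
s = te.create_schedule(C.op)
mod = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# Run the compiled operator and check the result against NumPy.
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
mod(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)
```

Auto-tuning would replace the default schedule with one found by AutoTVM or the auto-scheduler, then benchmark the tuned kernel the same way.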
Format of the Course
- Interactive lecture and demonstration.
- Hands-on coding of operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Chinese GPU architectures such as Huawei Ascend, Biren, and Cambricon MLUs offer CUDA alternatives tailored to local AI and HPC markets.
This instructor-led, live training (online or onsite) is aimed at advanced-level GPU programmers and infrastructure specialists who wish to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
By the end of this training, participants will be able to:
- Evaluate the compatibility of existing CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance and identify optimization points across platforms.
- Address practical challenges in cross-architecture support and deployment.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs on code porting and performance comparison.
- Guided exercises focused on multi-GPU adaptation strategies.
Course Customization Options
- To request a customized training for this course based on your platform or CUDA project, please contact us to arrange.