Introduction to the Intel® Advanced Matrix Extensions (Intel® AMX) for AI acceleration

A Step Ahead. If you like Intel® AVX-512 (announced by Intel in July 2013), you will like Intel® AMX. This new feature expands the use of CPUs for AI workloads by adding dedicated hardware: a set of two-dimensional registers called tiles and a tile matrix multiply unit (TMUL) that performs efficient matrix multiplication on those tiles. Intel® AMX supports the INT8 and BF16 data types, which accelerate deep learning training and inference, while AVX-512 instructions continue to support the FP32 and FP64 data types used in classical machine learning workloads.
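To make the tile/TMUL model concrete, the sketch below multiplies two small INT8 matrices with the AMX intrinsics declared in <immintrin.h> (_tile_loadconfig, _tile_loadd, _tile_dpbssd, _tile_stored). It is a minimal illustration rather than official Intel sample code: the 64-byte tile configuration follows the documented palette 1 layout, the Linux arch_prctl constants for requesting AMX state match Intel's published examples, and the B matrix is assumed to already be in the packed (VNNI-style) layout that the TMUL instructions expect. Compile with a recent GCC or Clang, e.g. gcc -O2 -march=sapphirerapids amx_sketch.c.

#define _GNU_SOURCE
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ARCH_REQ_XCOMP_PERM 0x1023   /* request dynamically enabled XSAVE state */
#define XFEATURE_XTILEDATA  18       /* AMX tile data state component */

/* 64-byte tile configuration; palette 1 exposes 8 tiles (tmm0..tmm7),
   each up to 16 rows of 64 bytes. */
typedef struct {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];   /* bytes per row for each tile */
    uint8_t  rows[16];    /* number of rows for each tile */
} tile_config_t;

int main(void) {
    /* Ask the Linux kernel for permission to use AMX tile state. */
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
        fprintf(stderr, "AMX not available on this system\n");
        return 1;
    }

    /* C (tmm0, int32, 16x16) += A (tmm1, int8, 16x64) * B (tmm2, int8, packed). */
    tile_config_t cfg = {0};
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 16 * sizeof(int32_t);  /* C tile */
    cfg.rows[1] = 16; cfg.colsb[1] = 64;                    /* A tile */
    cfg.rows[2] = 16; cfg.colsb[2] = 64;                    /* B tile (pre-packed) */
    _tile_loadconfig(&cfg);

    static int8_t  A[16][64], B[16][64];
    static int32_t C[16][16];
    memset(A, 1, sizeof A);   /* simple test data: all ones */
    memset(B, 2, sizeof B);   /* all twos */

    _tile_zero(0);                          /* clear the accumulator tile */
    _tile_loadd(1, A, 64);                  /* load A, row stride = 64 bytes */
    _tile_loadd(2, B, 64);                  /* load B, assumed already VNNI-packed */
    _tile_dpbssd(0, 1, 2);                  /* signed INT8 dot products -> INT32 */
    _tile_stored(0, C, 16 * sizeof(int32_t));
    _tile_releaseconfig();

    printf("C[0][0] = %d\n", C[0][0]);      /* expect 64 * (1 * 2) = 128 */
    return 0;
}

A BF16 version follows the same pattern, with _tile_dpbf16ps accumulating into FP32 tiles instead of _tile_dpbssd accumulating into INT32.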

Watch this video to get an introduction to the Intel® Advanced Matrix Extensions (Intel® AMX).
