Machine Learning

Neural networks are inspired by biological systems, in particular the human brain. Through the combination of powerful computing resources and novel neuron architectures, neural networks have achieved state-of-the-art results in domains such as computer vision and machine translation. FPGAs are a natural choice for implementing neural networks because they can map an algorithm's computing, logic, and memory requirements onto a single device, and they can deliver faster performance than competing implementations because the user can hard-code operations into the hardware. Software developers can use the OpenCL™ C-level programming standard to target FPGAs as accelerators to standard CPUs without having to deal with hardware-level design.
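As an illustration of that C-level programming model, here is a minimal, hypothetical OpenCL C kernel (a vector add, not code from any Intel demo). A developer writes only this device-side function; the OpenCL compiler for FPGAs turns it into a hardware pipeline, and the host CPU launches it like any other accelerator call. This fragment is device-side code and is not runnable on its own without an OpenCL host program.

```
/* Hypothetical OpenCL C kernel: element-wise vector addition.
 * Each work-item computes one output element. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c)
{
    int i = get_global_id(0);  /* index of this work-item */
    c[i] = a[i] + b[i];
}
```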

More on Efficient Implementation of Neural Network Systems Built on FPGAs

CNN Implementation Using an FPGA and OpenCL™

This is a power-efficient machine learning demo of the AlexNet convolutional neural network (CNN) topology on Intel® FPGAs.

  • Classifies 50,000 validation set images at >500 images/second at ~35 W
  • Quantifies a confidence level via 1,000 outputs for each classified image
  • Performs hardened 32-bit floating-point computation
  • Developed with OpenCL™


These HPC applications greatly benefit from machine learning implementations on an FPGA:

  • Intelligent vision 
  • Scientific simulations
  • Life science and medical data analysis
  • Financial services
  • Oil and gas

For more details on hardware and software application packages for Machine Learning, go to the Machine Learning page.

For more information on how you can use FPGAs to accelerate your machine learning application, contact your local sales representative. 

Computer and Storage Reference Links


Industrial Equipment Applications for Intel® FPGAs



Intel® FPGAs and Programmable Devices





OpenCL™ and the OpenCL™ logo are trademarks of Apple Inc., used with permission from Khronos.