Intel has expanded their AI dev toolkit right before MWC, strategic or dumb?



Ahead of Mobile World Congress 2022, Intel has released a new version of the Intel Distribution of OpenVINO toolkit, including major upgrades to accelerate AI inferencing performance. Since Intel launched OpenVINO in 2018, hundreds of thousands of developers have used it to accelerate AI inferencing, beginning at the edge and extending into both enterprise and client deployments.

This latest release includes new features built on extensive feedback from developers in the field: a broader selection of supported deep learning models, more device portability options, and higher inferencing performance with fewer code changes. Built on the foundation of oneAPI, the Intel Distribution of OpenVINO toolkit is a suite of tools for high-performance deep learning, targeted at delivering faster, more accurate real-world results deployed into production from the edge to the cloud. New features in this release make it easier for developers to adopt, maintain, optimize, and deploy code across an expanded range of deep learning models.

The latest version of the Intel Distribution of OpenVINO toolkit features an updated, cleaner API that requires fewer code changes when transitioning from a different framework, which lowers the barrier for developers migrating existing pipelines. The company has also added support for natural language processing models, covering use cases such as text-to-speech and voice recognition. On the performance side, the new AUTO device mode self-discovers available system inferencing capacity based on model requirements, so applications no longer need to specify their compute environment in advance. In addition, Intel has added support for the hybrid architecture of 12th Gen Intel Core CPUs to enhance high-performance inferencing on both the CPU and the integrated GPU.
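To make the AUTO device mode concrete, here is a minimal sketch of a single inference with the updated OpenVINO Python API. The model path and input shape are placeholder assumptions for illustration, not part of Intel's announcement; the point is that the device string "AUTO" replaces a hard-coded target like "CPU" or "GPU".

```python
# Minimal sketch: one synchronous inference with the OpenVINO 2022.1 Python API.
# "model.xml" and the 1x3x224x224 input shape are hypothetical placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # load a model in OpenVINO IR format

# "AUTO" lets the runtime discover available devices (CPU, integrated GPU, ...)
# and pick a target based on the model's requirements.
compiled_model = core.compile_model(model, device_name="AUTO")

# Run one inference on dummy data and read back the first output tensor.
infer_request = compiled_model.create_infer_request()
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_request.infer({0: input_data})
output = infer_request.get_output_tensor(0).data
print(output.shape)
```

Because device selection happens when the model is compiled, a script like this can run unchanged on a CPU-only machine or on one with an integrated GPU.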
