The 2023.2 release of the Intel® oneAPI tools is now available, bringing developers the freedom of multiarchitecture software development in Python, simplified migration from CUDA to open-standard SYCL, and improved performance on the latest GPU and CPU hardware.
5 Benefits of This Release
(Plus a word to users of Intel® Parallel Studio XE.)
- Simplified CUDA-to-SYCL Migration to Target More Architectures – And more domains, too. Developers can now extend popular applications (deep learning, cryptography, scientific simulation, imaging, and more) to multi-vendor CPUs, GPUs, and other accelerators. The new release also supports the latest version of CUDA, additional CUDA APIs, and FP64 for broader migration coverage; a minimal sketch of what migrated SYCL code can look like follows this list.
- Faster & More Accurate AI Inferencing – Added support for NaN values during inference streamlines pre-processing and boosts prediction accuracy for models trained on incomplete data.
- Accelerated AI-based Image Enhancement on GPUs – Intel® Open Image Denoise ray-tracing library now supports GPUs from Intel and other vendors, providing hardware choice for fast, high-fidelity, AI-based image enhancements.
- Faster Python for AI & HPC – This release introduces the beta version of Data Parallel Extensions for Python, extending numerical Python capabilities to GPUs for NumPy and CuPy functions, including Numba compiler support.
- Streamlined Method to Write Efficient Parallel Code – The Intel® Fortran Compiler extends support for DO CONCURRENT reductions, a powerful feature that lets the compiler execute loops in parallel, significantly improving performance while making it easier to write efficient, correct parallel code.
- For Intel® Parallel Studio XE Users – Upgrade to the latest release and get all the tool performance you already know and rely on, plus more, including the ability to target multiple architectures and hardware acceleration engines with almost no code changes. With the sunsetting of Parallel Studio XE, now is the time to upgrade to the Intel oneAPI Toolkits.
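For developers new to SYCL, here is a minimal, hand-written sketch of the kind of code a simple CUDA kernel launch maps to after migration. It is not output of the Intel® DPC++ Compatibility Tool; the queue setup, array size, and vector-add kernel are illustrative assumptions using standard SYCL 2020 unified shared memory.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    constexpr size_t n = 1024;

    // The queue targets whatever device the runtime selects (CPU, GPU, or other accelerator).
    sycl::queue q;

    // Unified shared memory is reachable from both host and device code.
    float *a = sycl::malloc_shared<float>(n, q);
    float *b = sycl::malloc_shared<float>(n, q);
    float *c = sycl::malloc_shared<float>(n, q);
    for (size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Offload the element-wise add; roughly the SYCL counterpart of a simple CUDA kernel launch.
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        c[i] = a[i] + b[i];
    }).wait();

    std::cout << "c[0] = " << c[0] << "\n";  // expected: 3

    sycl::free(a, q);
    sycl::free(b, q);
    sycl::free(c, q);
}
```

Because SYCL is an open standard, the same source can be built for CPUs and GPUs from multiple vendors, which is the portability argument behind the migration tooling above.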
5 Tool Highlights
- Intel® oneAPI DPC++/C++ Compiler (based on well-proven LLVM technology) sets the immediate command lists feature as the default, benefiting developers looking to offload computation to the Intel® Data Center GPU Max Series.
- Intel® DPC++ Compatibility Tool (based on the open source SYCLomatic project) adds support for CUDA 12.1 and more function calls, streamlines migration of CUDA to SYCL across numerous domains, and adds FP64 awareness to migrated code to ensure portability across Intel GPUs with and without FP64 hardware support.
- Intel® oneAPI Data Analytics Library (oneDAL) Model Builder feature adds support for missing (NaN) values during inference; this streamlines pre-processing and boosts algorithm prediction accuracy on CPUs and GPUs for models trained on incomplete data.
- Intel® oneAPI Math Kernel Library (oneMKL) (still the fastest and most-used math library for Intel-based systems†) drastically reduces kernel launch time on Intel Data Center GPU Max and Flex Series processors and introduces the LINPACK benchmark for GPU.
- Intel® oneAPI Threading Building Blocks (oneTBB) algorithms and Flow Graph nodes can now accept new types of user-provided callables, resulting in a more powerful and flexible parallel-programming environment (see the sketch below).
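To make the oneTBB item concrete, below is a minimal, hand-written parallel_for sketch. It shows the familiar lambda-style callable rather than the new callable types added in this release, and the data size and loop body are placeholders.

```cpp
#include <oneapi/tbb/parallel_for.h>
#include <oneapi/tbb/blocked_range.h>
#include <cstddef>
#include <vector>

int main() {
    std::vector<float> data(1 << 20, 1.0f);

    // parallel_for splits the blocked_range across worker threads and invokes
    // the user-provided callable (here a lambda) on each sub-range.
    oneapi::tbb::parallel_for(
        oneapi::tbb::blocked_range<std::size_t>(0, data.size()),
        [&](const oneapi::tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= 2.0f;
        });
}
```

Per the release note above, algorithms and Flow Graph nodes now accept additional callable types, so the same pattern should compose with more kinds of user code without wrapper boilerplate.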
To learn more, visit: https://aditech.in/news/now-available-2023-2-release-of-intel-oneapi-tools-2/