Developer Zone
Topics & Technologies
Featured Software Tools
Intel® Distribution of OpenVINO™ Toolkit
Run AI inference, optimize models, and deploy across multiple platforms.
Intel® oneAPI Toolkits
Heterogeneous architecture enables one programming model for all platforms.
Intel® Graphics Performance Analyzers
Identify and troubleshoot performance issues in games using system, trace, and frame analyzers.
Intel® Quartus® Prime Design Software
Design for Intel® FPGAs, SoCs, and complex programmable logic devices (CPLDs) from design entry and synthesis to optimization, verification, and simulation.
Get Your Software & Development Products
Try, buy, or download directly from Intel and popular repositories.
Documentation
Get started with these key categories. Explore the complete library.
Explore Our Design Services
Intel® Solutions Marketplace
Engineering services offered include FPGA (RTL) design, FPGA board design, and system architecture design.
Workshop: Bridge Optimized AI Models from Intel® Tiber™ AI Cloud to AI PC
Discover the most effective techniques for creating apps—from simple to complex—for AI PCs, including large language models (LLMs). This workshop gives you a solid foundation in Intel® Tiber™ AI Cloud, using Intel® Gaudi® AI accelerators and the Intel® Distribution of OpenVINO™ toolkit to deploy AI models on AI PCs. The session also covers SynapseAI software for Intel Gaudi processors using Python* and PyTorch*.
March 27, 2025, 9:00 a.m.–12:00 p.m. Pacific Daylight Time (PDT)
KubeCon + CloudNativeCon Europe
The Cloud Native Computing Foundation* flagship conference gathers adopters and technologists from leading open source and cloud-native communities. KubeCon + CloudNativeCon is the premier vendor-neutral cloud-native event, bringing together the industry's most respected experts and the key maintainers behind the most popular projects in the cloud-native ecosystem.
April 1–4, 2025; London, UK
Webinar: Use Local AI for Efficient LLM Inference
Build a large language model (LLM) application using the power of AI PC processing, tapping the native capabilities of Intel® Core™ Ultra processors for running AI locally. The session shows how to develop a Python* back end with a browser extension that compactly summarizes web page content. The exercise showcases the Intel® hardware and software that make it possible to run LLMs locally.
April 16, 2025, 9:00 a.m. Pacific Daylight Time (PDT)
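The back-end-plus-extension architecture described in the webinar abstract can be sketched as a small local HTTP service. This is a minimal, self-contained illustration, not the webinar's actual code: the endpoint name, payload shape, and port are assumptions, and the local LLM call is replaced by a stub (`summarize`) that simply keeps the first sentences, so the skeleton runs without any Intel-specific dependencies.

```python
# Hypothetical sketch of a local summarization back end for a browser
# extension. In the real webinar, summarize() would invoke an LLM running
# locally on an Intel Core Ultra AI PC; here it is a plain-Python stub.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def summarize(text: str, max_sentences: int = 2) -> str:
    """Stand-in for a local LLM call: keep the first few sentences."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."


class SummaryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The extension POSTs {"text": "<page content>"} to this server
        # and renders the returned {"summary": ...} in a popup.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"summary": summarize(payload.get("text", ""))})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())


# To serve the extension locally (assumed port):
#   HTTPServer(("127.0.0.1", 8000), SummaryHandler).serve_forever()
```

Keeping the model on a local HTTP endpoint means the extension never sends page content off the machine, which is the main privacy argument for on-device inference.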