Job Description


We are seeking a software engineer to drive the implementation and performance optimization of generative AI workloads on Intel GPUs as part of the OpenVINO GPU team.

This role focuses on building high-performance, hardware-aware software that enables efficient execution of AI models on current and future Intel GPU architectures. You will work across multiple layers of the stack (AI models, runtime systems, and GPU hardware) and take ownership of complex performance problems that require deep technical insight and careful trade-off analysis.

You will work on state-of-the-art AI models that push the limits of GPU performance. Your work directly impacts real-world AI performance experienced by developers and customers.


About OpenVINO

OpenVINO (https://github.com/openvinotoolkit/openvino) is a performance-focused AI inference runtime designed to efficiently execute deep learning models across Intel architectures.

The GPU plugin is a core component of OpenVINO that bridges high-level AI models and low-level GPU execution, covering areas such as graph transformation, kernel dispatch, memory management, and hardware-specific optimizations.

The codebase is performance-critical, largely written in modern C++, and requires a strong understanding of system-level software design, debugging, and optimization.


What You Will Do


Required Qualifications

Preferred Qualifications


Work Model