AI Engineer
Description
Location: Munich/Hamburg or Eindhoven

Brief:
We are looking for an AI Engineer passionate about Generative AI and Agentic AI systems, someone who thrives on optimizing models for efficient on-device deployment. You will work on large language models (LLMs), large multimodal models (LMMs), and Vision-Language-Action (VLA) models, ensuring they run reliably and efficiently on our NPU-based platforms.

Responsibilities:
- Optimize LLMs and multimodal models for on-device deployment: investigate, develop, and apply advanced quantization (8-bit, 4-bit, mixed precision), pruning, and distillation techniques to derive optimized models for NXP NPU targets.
- Accelerate inference performance: investigate, develop, and implement system optimizations such as speculative decoding and other efficient decoding algorithms tailored to edge environments.
- Engineer agentic AI capabilities toward tiny agents: investigate methodologies for improving the performance of small language models to enable tiny agents at the edge, while ensuring they follow safety principles.
- Work with inference engines and deployment frameworks: deploy optimized models using Ollama, llama.cpp, ONNX Runtime, and TFLite for efficient NPU inference.
- Benchmark LLMs and agentic systems: design benchmarking pipelines for assessing the performance of Generative and Agentic AI systems on-device.

Requirements:
- MSc, PhD, or EngD in a technical discipline such as Computer Science, or equivalent.
- 5+ years of experience in software/AI engineering with deep exposure to LLMs, VLMs, and systems performance.
- Experience with LLM quantization techniques (e.g., SmoothQuant, SpinQuant, QuaRot), pruning methods (e.g., Wanda, SparseGPT), and other system optimizations such as speculative decoding.
- Track record of working with AI frameworks (PyTorch, TensorFlow, etc.) required.
- Experience with Agentic AI technologies and familiarity with existing frameworks (e.g., LangChain, Google ADK, SmolAgents).
- Understanding of AI toolchains, deployment, portability, and inference engines (CUDA, TensorRT, TFLite, ONNX, Ollama, etc.) preferred.
- Affinity and experience with embedded systems and NPU accelerators required.
- Broad experience with GNU/Linux operating systems, embedded systems, development boards, and processors, along with solid software competencies, required.
- Familiarity with setting up and maintaining related ML-Ops development environments (MLflow, ClearML, etc.) required.
- Knowledge of build systems (Yocto, OpenEmbedded, etc.) beneficial; experience with cross-compilation toolchains for ARM preferred.
- Solid programming experience in C, C++, Python, and Bash on Linux systems required.
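To make the quantization responsibility concrete, here is a hypothetical minimal sketch of symmetric per-tensor 8-bit weight quantization, the basic idea underlying the techniques named above. Production work would rely on methods such as SmoothQuant or SpinQuant and NPU-specific toolchains, not this toy code; all function names here are illustrative.

```python
# Hypothetical illustration (not an NXP toolchain): symmetric per-tensor
# 8-bit weight quantization, the core idea behind the quantization work
# described in the Responsibilities section.

def quantize_int8(weights):
    """Map float weights to int8 codes with one symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]  # integer codes in [-127, 127]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

weights = [0.42, -1.27, 0.003, 0.9]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)

assert all(-127 <= c <= 127 for c in codes)
# Rounding error is bounded by half a quantization step.
assert max(abs(a - b) for a, b in zip(weights, restored)) <= scale / 2 + 1e-12
```

A symmetric scale keeps 0.0 exactly representable as code 0, which matters when quantization is combined with pruning, since pruned (zeroed) weights stay exactly zero.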