
Senior ML Solutions Architect - Token Factory

Description

Why work at Nebius

Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work

Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1,400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

The role

We are seeking an experienced Senior ML Solutions Architect to support customers leveraging Nebius Token Factory's serverless inference platform for open-source LLMs across multiple modalities. In this role, you will collaborate with clients to design and implement customized LLM-based solutions, architect scalable AI applications using our served models, and work with our backend team to improve the platform to match clients' needs. You're welcome to work remotely from Europe.
Your responsibilities will include:

- Design and implement LLM-based solutions using Nebius Token Factory's inference services to drive business value and support customer goals
- Build production-ready applications leveraging our serverless LLM APIs, including multimodal models (text, vision, audio) and domain-specific models
- Provide technical expertise in prompt engineering, RAG architectures, model selection, and inference optimization
- Collaborate with product and engineering teams to surface customer feedback and shape the platform roadmap
- Guide customers in scaling from POC to production with a focus on performance, reliability, and cost efficiency

We expect you to have:

- 5+ years of experience in ML/AI systems, with at least 2 years focused on LLMs and generative AI
- Deep knowledge of the LLM ecosystem, including model architectures and fine-tuning approaches
- Hands-on experience with:
  - Prompt engineering and LLM pipeline development, including evaluation
  - Agentic frameworks such as LangChain, LangSmith, smolagents, or equivalent
  - Vector databases and RAG implementation patterns
  - Deploying LLM-powered applications using APIs from OpenAI, Anthropic, or open-source models
- Strong Python programming skills
- Excellent communication skills, with the ability to clearly explain technical concepts to diverse audiences

It would be an added bonus if you have:

- Experience with inference frameworks and libraries (e.g., vLLM, SGLang, TensorRT-LLM, Transformers)
- Familiarity with inference optimization techniques such as quantization, batching, caching, and routing
- Experience working with multimodal AI models (e.g., vision-language, speech)
- Proficiency with DevOps tools (Docker, Kubernetes)
- Contributions to open-source ML/AI projects

Preferred technical stack:

- Programming languages: Python
- ML frameworks and libraries: vLLM, SGLang, TensorRT-LLM, Transformers, OpenAI/Anthropic SDKs
- Frameworks for agentic pipelines: LangChain / LangSmith / smolagents / equivalent
- API and web frameworks: FastAPI, Flask
- MLOps and DevOps tools: Kubernetes (K8s), Docker, Git
- Cloud platforms: AWS (SageMaker, Bedrock), GCP (Vertex AI), Azure (Azure ML)

What we offer

- Competitive salary and comprehensive benefits package
- Opportunities for professional growth within Nebius
- Flexible working arrangements
- A dynamic and collaborative work environment that values initiative and innovation

We're growing and expanding our products every day. If you're up to the challenge and are excited about AI and ML as much as we are, join us!
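For a sense of the day-to-day integration work this role involves: serverless inference platforms of this kind commonly expose an OpenAI-compatible chat-completions API. The sketch below builds such a request in plain Python; the base URL, model name, and API key are placeholders for illustration, not actual Nebius Token Factory values.

```python
# Minimal sketch: constructing an OpenAI-compatible chat-completions
# request. The endpoint and model below are placeholders, not real
# platform values.
import json
import urllib.request

BASE_URL = "https://api.example-inference.cloud/v1"  # placeholder endpoint


def build_chat_request(model: str, prompt: str,
                       max_tokens: int = 256) -> urllib.request.Request:
    """Build (but do not send) a chat-completions HTTP request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer $API_KEY",  # substitute a real key
        },
        method="POST",
    )


# Sending the request with urllib.request.urlopen(req) would return the
# completion; here we only inspect the constructed request.
req = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello!")
print(req.full_url)
```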

Skills

FastAPI, Azure, AWS, Docker, ML, OpenAI, API, Git, AI, LLM, GCP, DevOps, Python, Machine Learning, Flask, Kubernetes
