Data Engineer – Azure & Databricks (Performance & CMP)
Description
As a Data Engineer in the team, you will be responsible for building and maintaining the foundational data infrastructure that powers two Performance Evaluation products. This includes designing and implementing data pipelines that connect, collect, organize, and version data from multiple operational systems. Additionally, the Data Engineer will develop performant, scalable data models and ingestion frameworks required for consolidating the diverse datasets into a unified platform. This includes integrating data sources such as IKEA Common Sales, cost-to-serve, Pulse CX, IKEA concept reviews, brand metrics, and country & CMP range information to ensure high quality, timeliness, and consistency across the full dataset. The work will address current challenges of fragmented, inconsistent, and manually managed information by creating resilient data pipelines and automated validation checks that increase data accuracy, reduce reporting time, and provide a reliable source for CMP and performance insights.
These capabilities will directly support faster decision-making and a more seamless omnichannel experience globally.

As a Data Engineer you will play a key role in developing the CMP data foundation. We expect candidates to be:

• Comfortable working in the early phases of solution design and contributing throughout the full data-engineering lifecycle, including ideation, high-level and low-level architecture, requirements specification, functional and technical design, estimation, sprint planning, development, testing, documentation, deployment, and operational follow-up.
• Strong in their understanding of data engineering guidelines, release processes, and quality expectations, including data validation, performance optimization, monitoring, and troubleshooting in production environments.
• Passionate about building clean, scalable, resilient, and cost-efficient data solutions using cloud-native architectures and modern data platforms, with a strong focus on maintainability and reusability.
• Experienced in designing and implementing end-to-end data pipelines (batch and streaming), including ingestion, transformation, and serving layers, using best practices in data modeling and lakehouse architecture.
• Hands-on experts with Databricks, including Apache Spark, Delta Lake, job orchestration, performance tuning, and environment management.
• Strong in their knowledge of Microsoft Azure, particularly services commonly used in data platforms such as Azure Data Lake Storage.
• Solidly experienced with DevOps practices, including source control (e.g., Git), CI/CD pipelines, automated testing, environment promotion, and infrastructure-as-code for data platforms.
• Strong communicators, with the ability to clearly explain data architectures, pipelines, and trade-offs to non-technical stakeholders.

You will work with:

• Databricks (Apache Spark, Delta Lake, notebooks, job orchestration, performance optimization)
• Microsoft Azure (e.g., Azure Data Lake Storage)
• DevOps & CI/CD (version control, automated testing, deployment pipelines, infrastructure as code)

Start date: ASAP
End date: 2026-08-31, with extension possibility
Workload: 100%
Number of hours for the work period (estimated): 850 h
Work place: Klipporna, Malmö
Remote work: Yes, but needs to be onsite 3 days/week at the office