
Senior Data Engineer

Project-Based

Description

Posted 1 hour ago · Confidential company

Join our core platform engineering team, which develops our AI-powered automotive data management platform. We are building the next generation of data-driven products for the automotive industry, focusing on cybersecurity (XDR) and vehicle quality. Our products monitor and secure millions of vehicles worldwide and help automakers leverage connected-vehicle data to deliver cyber resilience, safety, and customer satisfaction, and to increase brand loyalty.

Our Data Engineering & Data Science Group leads the development of our Iceberg-based data platform, including the data lake, query engine, and ML-Ops tools, which serves as a solid, AI-ready foundation for all our products. At the core of our engineering team, you will build and operate scalable, production-grade, customer-facing data and ML platform components, with a focus on reliability and performance.

Technological background and focus: Iceberg, Trino, Prefect, GitHub Actions, Kubernetes, JupyterHub, MLflow, dbt

This role is full-time and based in Herzliya, Israel.

Responsibilities
- Design, build, and maintain scalable data pipelines that ingest and transform batch data on our data lake, enabling analytics and ML with strong data quality, governance, observability, and CI/CD.
- Build and expand our foundational data infrastructure, including our data lake, analytics engine, and batch-processing frameworks.
- Create robust infrastructure for automated pipelines that ingest and process data into our analytical platforms, leveraging open-source, cloud-agnostic frameworks and toolsets.
- Develop and maintain our data lake layouts and architectures for efficient data access and advanced analytics.
- Develop and manage orchestration tools, governance tools, data discovery tools, and more.
- Work with other members of the engineering group, including data scientists, data architects, and data analysts, to provide solutions using a use-case-based approach that drives the construction of technical data flows.

Requirements
- BSc/BA in Computer Science, Engineering, or a related field
- At least 6 years of experience designing and building data pipelines, analytical tools, and data lakes
- Experience with the data engineering tech stack: ETL and orchestration tools (e.g., Airflow, Argo, Prefect) and distributed data processing tools (e.g., Spark, Kafka, Presto)
- Experience with Python is a must
- Experience working in a containerized environment (e.g., Kubernetes)
- Experience working with open-source products
- End-to-end ownership mindset with a proactive, production-first approach
- Development experience using a general-purpose programming language (Java, Scala, Kotlin, Go, etc.) - an advantage

This position is open to all candidates.

Skills

dbt, Apache Spark, Kubernetes, Cybersecurity, Airflow, Scala, GitHub Actions, Data Engineering, Java, CI/CD, Go, Machine Learning, ETL, Kafka, Kotlin, Python
