Location: Ramat Gan

Jobs » Data » Senior DataOps Engineer

Project-Based

Description

Posted 22 hours ago · Confidential company

We are looking for a DataOps Engineer to own the infrastructure that powers our large-scale data processing platform. This is a platform-facing role sitting at the intersection of data engineering and infrastructure: you'll be the person who makes Spark run reliably and efficiently on Kubernetes, so that data engineers can build with confidence. You understand data workloads deeply enough to make smart infrastructure decisions, and you have the production instincts to keep complex systems healthy at scale. If you get excited about shaving minutes off Spark job runtimes, right-sizing cluster autoscalers, and building the internal tooling that makes a data platform feel effortless, this role is for you.

Responsibilities:

- Design, deploy, and operate the Kubernetes-based infrastructure that runs Apache Spark and large-scale data processing workloads
- Own the reliability, performance, and cost-efficiency of the data platform, including SLAs, autoscaling, resource quotas, and workload isolation
- Manage Spark-on-K8s configurations, Airflow infrastructure, and Databricks integration; tune for throughput, latency, and cost
- Build and maintain CI/CD pipelines and infrastructure-as-code for data platform components
- Develop observability tooling (metrics, logging, alerting, and data quality dashboards) to proactively surface issues across the pipeline stack
- Collaborate closely with Data Engineers to understand workload patterns and translate them into infrastructure decisions
- Manage cloud storage (GCS/S3), Delta Lake, and Unity Catalog infrastructure
- Drive platform improvements end-to-end: from design through deployment and ongoing ownership

Requirements:

- 5+ years of experience in a production infrastructure, SRE, or DevOps role
- Strong Kubernetes experience, including autoscaling, resource management, and the broader K8s ecosystem
- 2+ years with infrastructure-as-code tools (Terraform, Pulumi, or similar)
- Proficiency in at least one general-purpose language; Python or Go preferred
- Experience with workflow orchestration tools, particularly Apache Airflow
- Solid understanding of cloud infrastructure; GCP preferred (GCS, GKE, IAM)
- Strong observability skills: metrics pipelines, structured logging, alerting frameworks

Other Requirements:

- Hands-on experience running data processing workloads (Apache Spark, Flink, or similar) in production
- Familiarity with Delta Lake, Parquet, and columnar storage formats
- Experience with data quality frameworks and pipeline lineage tooling
- Knowledge of query optimization, partition strategies, and Spark performance tuning
- Experience managing queues and databases (Kafka, PostgreSQL, Redis, or similar)

This position is open to all candidates.

Skills

Go, PostgreSQL, Apache, GCP, Kafka, Data Engineering, Databricks, Kubernetes, DevOps, CI/CD, Python, Redis, Terraform, Airflow, Pulumi, Apache Spark, IAM
