Venon.io
Remote

Senior Data + Infrastructure Engineer (ClickHouse, Kafka, dbt)

Description

We're a small, fast-moving team building a B2B SaaS product in the e-commerce analytics space. We're looking for a skilled Data & Infrastructure Engineer to help us scale our data pipelines, optimize our analytical engine, and maintain a rock-solid production environment.

About the role

You'll work closely with our technical team to ensure our data flows seamlessly from ingestion to insight. This isn't a role where you'll be handed a perfectly architected blueprint. We need someone who can take ownership of our infrastructure, make smart trade-offs between performance and cost, and communicate clearly when architecting complex systems.

Our stack is built to handle high-volume analytics at scale. We rely on ClickHouse for our heavy lifting, Kafka for real-time data streaming, and Kubernetes for orchestration. We use dbt to keep our transformations sane and manageable.

What you'll be doing

- Scaling the Backbone: Managing and optimizing our ClickHouse clusters and Kafka pipelines to handle growing data volumes.
- Infrastructure as Code: Maintaining and evolving our Kubernetes environment to ensure high availability and performance.
- Data Modeling: Building and maintaining robust data models using dbt to support our analytics product.
- Performance Tuning: Identifying bottlenecks in data ingestion and query performance, and implementing long-term fixes.
- Reliability: Participating in technical discussions, code reviews, and debugging production infrastructure issues when they arise.
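To give a concrete flavor of the ClickHouse/Kafka work described above, here is a minimal sketch of a common ingestion pattern: a Kafka engine table feeding a MergeTree table through a materialized view. All names (`events_queue`, `events`, the broker address, and the topic) are illustrative assumptions, not details of our actual setup.

```sql
-- Hypothetical sketch: Kafka engine table -> materialized view -> MergeTree.
-- Broker, topic, and table names are placeholders.
CREATE TABLE events_queue
(
    event_time DateTime,
    shop_id    UInt64,
    event_type LowCardinality(String),
    payload    String
)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'kafka:9092',
         kafka_topic_list  = 'shop-events',
         kafka_group_name  = 'clickhouse-ingest',
         kafka_format      = 'JSONEachRow';

CREATE TABLE events
(
    event_time DateTime,
    shop_id    UInt64,
    event_type LowCardinality(String),
    payload    String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_time)          -- monthly partitions for cheap TTL/drops
ORDER BY (shop_id, event_time);            -- sort key chosen for per-shop queries

-- The materialized view continuously moves consumed rows into storage.
CREATE MATERIALIZED VIEW events_mv TO events AS
SELECT * FROM events_queue;
```

The `ORDER BY` and `PARTITION BY` choices are exactly the kind of trade-off this role owns: they determine both query performance and merge/compaction cost as volumes grow.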

What we're looking for

- 3+ years of professional experience in Data Engineering, DevOps, or Site Reliability Engineering.
- Expertise in ClickHouse: You know how to optimize MergeTree engines and write efficient analytical queries.
- Streaming Mastery: Hands-on experience deploying and managing Kafka clusters.
- Orchestration Skills: Strong experience working with Kubernetes (K8s) in a production environment.
- SQL & dbt: A deep understanding of SQL and experience using dbt for data transformation workflows.
- Independence: Ability to manage your own time and take infrastructure projects from concept to completion.
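As an illustration of the dbt transformation work mentioned above, a model in this kind of stack might look like the sketch below. The source name, columns, and file path (`models/daily_shop_revenue.sql`) are hypothetical, not taken from our project.

```sql
-- Hypothetical dbt model (models/daily_shop_revenue.sql) using
-- ClickHouse-dialect SQL; all names are illustrative.
{{ config(materialized='table') }}

SELECT
    shop_id,
    toDate(event_time) AS order_date,
    count()            AS orders,
    sum(order_value)   AS revenue
FROM {{ source('raw', 'orders') }}
GROUP BY
    shop_id,
    order_date
```

Keeping aggregations like this in dbt rather than ad-hoc queries is what "keeping our transformations sane and manageable" means in practice: versioned, tested, and documented models.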

Bonus points

- Hetzner Cloud: Experience managing bare metal or cloud instances specifically within the Hetzner ecosystem.
- Infrastructure as Code: Experience with Terraform, Pulumi, or Ansible.
- Industry Context: Background in e-commerce, Shopify, or marketing analytics.
- Backend Knowledge: Familiarity with Node.js/TypeScript to help bridge the gap between infra and the app layer.

The Process

The interview process consists of a 30-minute technical interview and, in the second round, a 1-hour coding/system design task XXXX XXXX data architecture.

Skills

Kafka, Ansible, TypeScript, Data Engineering, dbt, Kubernetes, System Design, DevOps, Node.js, SQL, Pulumi, Terraform, Backbone
