Proxify

Time zone: CET (+/- 3 hours)

Senior Data Engineer (Python)

Project-Based

Description

About us:

Talent has no borders. Proxify's mission is to connect top developers around the world with the opportunities they deserve. So, it doesn't matter where you are; we are here to help you fast-track your independent career in the right direction. 🙂

Since our launch, Proxify's developers have successfully worked with 1200+ happy clients to build their products and growth features. 5000+ talented developers trust Proxify and its network to fulfill their dreams and career objectives.

Proxify is shaped by a global network of supportive, talented developers interested in remote full-time jobs. Our Glassdoor (4.5/5) and Trustpilot (4.8/5) ratings reflect the trust developers place in us and our commitment to our members' success.

The Role:

We are looking for a Senior Data Engineer to architect and scale the data foundations for one of our high-growth client products. The ideal candidate is a Python expert who treats data infrastructure as software, building CI/CD, unit testing, and observability into every layer of the modern data stack. You are a perfect fit if you are growth-oriented, love what you do, and enjoy working on new ideas to develop exciting products.

What we’re looking for:

  • 5+ years of experience building complex data processing applications using Python (Pandas, PySpark, or Dask).
  • Advanced SQL skills for complex transformations, window functions, and query optimization in cloud warehouses.
  • Deep experience with dbt (data build tool) for managing the T in ELT, including documentation and testing.
  • Proven experience with Apache Airflow, Prefect, or Dagster for managing complex dependency graphs.
  • Hands-on experience with Snowflake, BigQuery, or AWS Redshift.
  • Strong understanding of Dimensional Modeling (Star/Snowflake schema) and Data Vault 2.0.
  • Experience with Git, Docker, and implementing CI/CD for data pipelines.

Nice-to-Have:

  • Experience building Real-time Pipelines using Kafka or Flink.
  • Familiarity with Data Contracts and Data Quality frameworks (Great Expectations, Monte Carlo).
  • Knowledge of Vector Databases (Pinecone, Milvus) for AI/LLM applications.
  • Infrastructure as Code (Terraform) experience.

Responsibilities:

  • Build and maintain scalable, automated ELT/ETL pipelines that provide a "single source of truth" for the organization.
  • Implement rigorous automated testing and monitoring to ensure data integrity and reliability.
  • Optimize warehouse storage and compute costs while reducing pipeline latency.
  • Partner with Data Scientists and Product Managers to translate business requirements into technical data models.
  • Promote a "DataOps" culture within the team, conducting code reviews and sharing best practices.

What we offer:

Get paid, not played

No more unreliable clients. Enjoy on-time monthly payments with flexible withdrawal options.

Predictable project hours

Enjoy a harmonious work-life balance with consistent 8-hour working days with clients.

Flex days, so you can recharge

Enjoy up to 24 flex days off per year without losing pay, for full-time positions found through Proxify.

Career-accelerating positions at cutting-edge companies

Discover exclusive long-term remote positions at the world's most exciting companies.

Hand-picked opportunities, just for you

Skip the typical recruitment roadblocks and biases with personally matched positions.

One seamless process, multiple opportunities

A one-time contracting process for endless opportunities, with no extra assessments.

Compensation

Enjoy the same pay every month with positions landed through Proxify.

Skills

Apache, Vault, Snowflake, Prefect, CI/CD, SQL, Kafka, BigQuery, LLM, Redshift, Terraform, Data Engineering, AWS, ETL, Git, Airflow, dbt, Flink, Pandas, Python, AI, Unit Testing, Docker
