Software Engineer (Data Platforms)
Description
Posted 19 hours ago | Confidential company

We're looking for a Software Engineer (Data Platforms) to join the Users & Integrations team within our company's Intelligence Group. This role is built for an experienced engineer who thrives on solving complex backend challenges and scaling data pipelines. You will take ownership of crucial user data integrations and architect the sophisticated matching logic that powers our platform, from data ingestion and transformation to delivery. You will work extensively with large-scale data pipelines, translate complex algorithms into high-performance production code, and tackle massive scalability challenges to enhance the data experience for our customers.

Where does this role fit in our vision?
Every role at our company is designed with a clear purpose. Data is at the heart of everything we do. The Intelligence Group is responsible for shaping the experience of hundreds of thousands of users who rely on our data daily. The Users Team is the engine behind our data connectivity, handling massive-scale user data integrations and engineering complex entity-matching logic. By translating millions of data signals and advanced algorithms into high-performance pipelines, we ensure users receive highly accurate, tailored data - optimizing their overall experience while driving the core KPIs of the Intelligence Group.

What will you be responsible for?
- Designing, building, and maintaining robust, scalable ETL/ELT data pipelines and integration solutions within our Databricks-based environment.
- Implementing and optimizing algorithms for data processing and entity resolution, with a strong emphasis on delivering high-quality, high-throughput data.
- Deploying data infrastructure leveraging technologies like Spark, Kafka, and Airflow to tackle complex data challenges and enhance business operations.
- Designing innovative data solutions that support millions of data points at high performance and massive scale.

Requirements - what we look for:
- 3+ years of software engineering experience building scalable backend systems.
- Experience scaling big data pipelines, complex data integrations, and robust data infrastructure.
- Expertise in big data technologies, including Spark (or Databricks), Kafka (or other real-time streaming tools), and workflow orchestrators like Airflow.
- Experience using GenAI tools for software development (such as Cursor, Claude Code, or Codex).
- A strong builder mindset, with experience turning ideas into working solutions.
- Algorithmic experience, including developing and optimizing machine learning models and implementing advanced data algorithms.
- Experience working with cloud ecosystems, preferably AWS (S3, Glue, EMR, Redshift, Athena) or comparable environments (Azure/GCP).
- Expertise in extracting, ingesting, and transforming large datasets efficiently.
- A passion for sharing knowledge, fostering a supportive engineering culture, and engaging in collaborative problem-solving with your peers.

Bonus points:
- Hands-on experience with vector databases and embedding techniques, with a focus on search, recommendations, and personalization.

This position is open to all candidates.
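To make the entity-resolution responsibility above concrete, here is a minimal sketch of the idea in plain Python. Every detail (record fields, the normalization rule, the blocking key) is hypothetical and not taken from the posting; a production pipeline would apply this kind of logic at scale in Spark rather than in-memory:

```python
# Minimal entity-resolution sketch (hypothetical fields and rules):
# group user records that likely refer to the same real-world entity
# by "blocking" on a normalized key, then clustering within a block.
import re
from collections import defaultdict

def blocking_key(name: str, email: str) -> str:
    """Build a blocking key: letters-only lowercase name + email domain."""
    name_key = re.sub(r"[^a-z]", "", name.lower())
    domain = email.lower().split("@")[-1]
    return f"{name_key}:{domain}"

def resolve(records: list[dict]) -> list[list[dict]]:
    """Cluster records that share the same blocking key."""
    blocks: dict[str, list[dict]] = defaultdict(list)
    for rec in records:
        blocks[blocking_key(rec["name"], rec["email"])].append(rec)
    return list(blocks.values())

records = [
    {"name": "Dana Levi",  "email": "dana@acme.com"},
    {"name": "dana  levi", "email": "d.levi@acme.com"},
    {"name": "Noa Bar",    "email": "noa@other.io"},
]
clusters = resolve(records)  # two clusters: the Dana records merge
```

Real entity matching adds pairwise similarity scoring within each block; blocking exists so that scoring never has to compare every record against every other.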