LOOYAS
Mannouba, La Manouba, Tunisia

Data Engineer

Project-Based

Description

We are seeking a skilled Data Engineer to design, build, and optimize data pipelines and architectures supporting analytics, AI, and enterprise applications. The ideal candidate has deep technical expertise in data modeling, ETL/ELT processes, and cloud data platforms, with a strong understanding of how to ensure data reliability, scalability, and security.

Key Responsibilities

• Design and develop scalable data pipelines to collect, transform, and load structured and unstructured data from various sources.
• Architect and maintain modern data infrastructure (data lakes, data warehouses, streaming systems) on cloud platforms such as AWS, Azure, or Google Cloud.
• Collaborate with data scientists, analysts, cloud engineers, and application teams to ensure data availability and integrity for analytical and operational use cases.
• Implement and optimize ETL/ELT workflows with tools such as Apache Airflow, Spark, Kafka, or dbt.
• Develop data models and schemas to support BI dashboards, AI/ML models, and business reporting.
• Ensure data governance, quality, lineage, and compliance with security and regulatory requirements.
• Monitor and troubleshoot data system performance, ensuring minimal downtime and high reliability.
• Automate data ingestion, transformation, and validation tasks to improve efficiency.

Qualifications

• Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
• 3+ years of experience as a Data Engineer, ETL Developer, or in a related data engineering role.
• Proficiency in SQL, Python, and at least one cloud data platform (e.g., BigQuery, Snowflake, Redshift, Synapse).
• Experience with data orchestration tools (e.g., Airflow, Prefect, Dagster) and CI/CD pipelines.
• Strong understanding of data modeling (star/snowflake schemas) and database optimization techniques.
• Familiarity with data security, encryption, and data standards (PDPL, ISO 27001, NDMO in KSA).
• Knowledge of API integrations, streaming data (Kafka, Kinesis), and NoSQL systems is a plus.

Preferred Skills

• Experience with containerization (Docker, Kubernetes).
• Familiarity with AI/ML data preparation workflows.
• Exposure to data cataloging and metadata management tools.
• Understanding of DevOps and CI/CD best practices for data pipelines.

Skills

GCP, Data Engineering, Kafka, AWS, Encryption, ETL, AI, DevOps, ML, GDPR, Prefect, CI/CD, Apache Spark, Python, Snowflake, BigQuery, SQL, Kubernetes, Airflow, dbt, Azure, Redshift, Compliance, Machine Learning, Docker, API, Security
