CGI
Bangalore, KA

Senior Databricks Developer

Description

Job Title: Senior Databricks Developer
Position: SSE / LA
Experience: 5 to 10 years
Category: Software Development
Job Location: Bangalore / Chennai
Position ID: J1125-2184
Work Type: Hybrid
Employment Type: Full Time / Permanent
Qualification: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

Role Summary

We are seeking a Senior Databricks Developer with strong data engineering skills and deep AWS cloud expertise to lead the design, build, and deployment of data products on the Databricks Lakehouse platform, ensuring scalability, governance, security, and business value delivery. The role requires mastery of Databricks features, Delta Lake, AWS ecosystem integration, and modern data engineering best practices.

Primary Responsibilities

• Design, build, and manage data products and event-driven architectures using Databricks Lakehouse principles.
• Design end-to-end data pipelines using PySpark and Databricks SQL.
• Define Data Product blueprints covering domain boundaries, ownership, SLAs, documentation, and quality rules.
• Implement modern ingestion frameworks using Auto Loader, Delta Live Tables, and Workflows (see the sketch after this list).
• Develop multi-zone medallion architecture (Bronze/Silver/Gold) using Delta Lake.
• Lead Databricks-on-AWS integration, including S3 access, IAM roles, and VPC networking.
• Implement Unity Catalog governance frameworks, fine-grained permissions, and lineage tracking.
• Drive automation and DevOps practices using Git, Repos, Databricks CLI/SDK, and CI/CD pipelines.
• Build and optimize Spark workloads for performance and cost control (Photon/Serverless tuning).
• Lead performance reviews, code quality checks, and data engineering guidance for teams.
• Operationalize machine learning and advanced analytics with MLflow, Feature Store, and Model Serving.
• Monitor and enhance reliability using job monitoring, observability dashboards, and alerts.
• Hands-on development of Power BI dashboards, DAX measures, data models, and DirectQuery/Import modes.
• Evaluate new Databricks features and fold innovation into the technical roadmap.
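
For context, the following minimal sketch shows the kind of Auto Loader ingestion and medallion promotion this role involves. It assumes a Databricks notebook where spark is predefined; the bucket paths, the main.bronze/main.silver namespace, and the order_id column are hypothetical placeholders, not specifics of this posting.

    # A minimal Bronze/Silver sketch, assuming a Databricks notebook where
    # `spark` is predefined. Bucket paths, the main.bronze/main.silver
    # namespace, and the order_id column are hypothetical placeholders.
    from pyspark.sql import functions as F

    # Bronze: incrementally ingest raw JSON landed in S3 with Auto Loader.
    bronze_stream = (
        spark.readStream
            .format("cloudFiles")  # Auto Loader source
            .option("cloudFiles.format", "json")
            .option("cloudFiles.schemaLocation", "s3://example-bucket/_schemas/orders")
            .load("s3://example-bucket/raw/orders/")
            .withColumn("_ingested_at", F.current_timestamp())
    )

    bronze_query = (
        bronze_stream.writeStream
            .format("delta")
            .option("checkpointLocation", "s3://example-bucket/_checkpoints/orders_bronze")
            .trigger(availableNow=True)  # drain the backlog, then stop
            .toTable("main.bronze.orders")
    )
    bronze_query.awaitTermination()

    # Silver: cleanse and deduplicate Bronze into a curated Delta table.
    (
        spark.read.table("main.bronze.orders")
            .filter(F.col("order_id").isNotNull())
            .dropDuplicates(["order_id"])
            .write.mode("overwrite")
            .saveAsTable("main.silver.orders")
    )

The availableNow trigger drains whatever files have landed and then stops, so thanks to checkpointing the same job can be scheduled from Workflows as an idempotent incremental batch.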

Technical Skills Required

• Strong programming experience in PySpark, SQL, and Python for production-grade data pipelines.
• Deep mastery of Databricks, Delta Lake, Unity Catalog, Workflows, and SQL Warehouses.
• Expert knowledge of AWS services: S3, Glue, Lambda, EMR, Step Functions, CloudWatch, and VPC networking.
• Experience building data products with versioning, discoverability, contracts, metadata, and lineage.
• Good understanding of Infrastructure-as-Code: Terraform (workspaces, clusters, UC policies, jobs, service principals).
• Strong foundation in data modeling, schema design, and Lakehouse and Data Mesh principles.
• Familiarity with governance and security frameworks (encryption, tokenization, row/column-level controls), as illustrated in the sketch after this list.
• Experience with enterprise integrations (Power BI).
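
As one illustration of the governance items above, here is a minimal Unity Catalog sketch run from a notebook where spark is predefined: a table-level grant plus a dynamic view for column masking. The catalog/schema, table, columns, and group names (analysts, pii_readers) are hypothetical placeholders.

    # A minimal Unity Catalog governance sketch, assuming the current
    # principal owns the objects; the catalog/schema, table, columns, and
    # group names (`analysts`, `pii_readers`) are hypothetical placeholders.

    # Table-level grant: read access on a curated table for an analyst group.
    spark.sql("GRANT SELECT ON TABLE main.silver.orders TO `analysts`")

    # Column-level control via a dynamic view that masks a sensitive column
    # for anyone outside the pii_readers account group.
    spark.sql("""
        CREATE OR REPLACE VIEW main.silver.orders_masked AS
        SELECT
            order_id,
            CASE WHEN is_account_group_member('pii_readers')
                 THEN customer_email
                 ELSE '***redacted***'
            END AS customer_email,
            amount
        FROM main.silver.orders
    """)
    spark.sql("GRANT SELECT ON VIEW main.silver.orders_masked TO `analysts`")

Dynamic views are one documented route to column-level control in Unity Catalog; attaching row filters and column masks directly to tables is another.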

Qualifications

• 5–10 years of professional data engineering experience.
• Minimum of 3 years of hands-on Databricks experience at production scale.
• Proven experience delivering Data Products on cloud Lakehouse platforms.

Soft Skills

• Strong ownership mindset with Data Product thinking.
• Excellent communication and the ability to translate architecture for business stakeholders.
• Ability to lead engineering teams and influence architecture decisions.
• Continuous innovation mindset with a focus on automation, performance, and reusability.

CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodation for people with disabilities in accordance with provincial legislation. Please let us know if you require reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.


Skills

Data Analysis, Python, Databricks, AWS, Security, SQL, Apache Spark, Power BI, DevOps, Encryption, Git, Data Engineering, Terraform, Machine Learning, IAM, CI/CD