Sutherland Global Services Private Limited

Data Platform Architect - Databricks & Google Cloud

Sutherland Global Services Private Limited
Hyderabad
Not disclosed
Work from Office
Full Time
Min. 8 years


Company Description

About Sutherland

Artificial Intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they're our core expertise. We work with iconic brands worldwide, bringing them a unique value proposition through market-leading technology and business process excellence.

We’ve created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless “as a service” model.

For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes, and enduring relationships.

Sutherland
Unlocking digital performance. Delivering measurable results.

 

Job Description

We are looking for a hands-on Databricks & GCP Data Platform Architect who will design and personally implement scalable Lakehouse solutions on Google Cloud Platform (GCP).

This role requires deep technical involvement, including building pipelines, configuring Databricks, and troubleshooting production issues, in addition to architecture ownership.

Key Responsibilities

1. Architecture & Hands-on Implementation

  • Design end-to-end Databricks Lakehouse architecture on GCP
  • Hands-on implementation of:
    • Databricks workspaces, clusters, jobs, and workflows
    • Delta Lake–based Bronze / Silver / Gold data layers
    • Batch and streaming pipelines using Spark and Databricks
  • Create reference implementations and reusable frameworks for teams
  • Actively participate in coding, reviews, and production deployments
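The Bronze / Silver / Gold (medallion) layering above can be sketched as follows. This is a plain-Python stand-in using dicts in place of Spark DataFrames and Delta tables, purely to illustrate what each layer does; a real Databricks implementation would read and write Delta tables (e.g. `spark.read` / `df.write.format("delta")`), and the field names here are made up.

```python
# Illustrative Bronze/Silver/Gold flow. Plain Python stands in for
# Spark/Delta; record fields (order_id, customer, amount) are hypothetical.

def to_bronze(raw_records):
    """Bronze: land raw records as-is, adding only ingestion metadata."""
    return [{**r, "_ingested": True} for r in raw_records]

def to_silver(bronze):
    """Silver: clean and deduplicate - drop rows missing the key, keep the last record per id."""
    deduped = {}
    for r in bronze:
        if r.get("order_id") is not None:
            deduped[r["order_id"]] = r
    return list(deduped.values())

def to_gold(silver):
    """Gold: business-level aggregate - total amount per customer."""
    totals = {}
    for r in silver:
        totals[r["customer"]] = totals.get(r["customer"], 0) + r["amount"]
    return totals

raw = [
    {"order_id": 1, "customer": "acme", "amount": 100},
    {"order_id": 1, "customer": "acme", "amount": 120},  # late correction
    {"order_id": None, "customer": "bad", "amount": 5},  # malformed row
    {"order_id": 2, "customer": "globex", "amount": 50},
]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'acme': 120, 'globex': 50}
```

The same shape (raw landing, cleansing/deduplication, curated aggregates) is what the Delta Lake layers deliver, with each layer persisted as its own set of tables.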

2. Data Engineering (Hands-on)

  • Build and optimize Spark jobs and Databricks notebooks
  • Implement ingestion pipelines from:
    • Databases and enterprise applications
    • Streaming sources (Pub/Sub, Kafka)
    • External and SaaS systems
  • Perform performance tuning and cost optimization
  • Troubleshoot pipeline failures and production issues directly
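One recurring concern when ingesting from at-least-once sources such as Pub/Sub or Kafka is that the same message may be delivered twice, so the sink must be idempotent. The plain-Python sketch below illustrates the idea by deduplicating on a message id; in Spark Structured Streaming this would typically be handled with `dropDuplicates` plus checkpointing, and the message shape here is assumed for illustration.

```python
# Plain-Python sketch of an idempotent sink for at-least-once delivery.
# Message fields ("id", "payload") are hypothetical.

class IdempotentSink:
    def __init__(self):
        self.seen = set()   # message ids already written
        self.rows = []      # accepted payloads

    def write(self, message):
        if message["id"] in self.seen:
            return False  # duplicate delivery, skip
        self.seen.add(message["id"])
        self.rows.append(message["payload"])
        return True

sink = IdempotentSink()
for msg in [{"id": "m1", "payload": "a"},
            {"id": "m2", "payload": "b"},
            {"id": "m1", "payload": "a"}]:  # m1 redelivered by the broker
    sink.write(msg)
print(sink.rows)  # ['a', 'b']
```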

3. Security, Governance & Compliance

  • Implement (not just define) governance using Unity Catalog
  • Configure access control integrated with GCP IAM
  • Set up secure networking (VPC, private endpoints)
  • Enable audit logging, lineage, and data classification
  • Work closely with security teams to operationalize standards
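As a sketch of what "implementing governance" can look like in practice, the helper below renders Unity Catalog-style `GRANT` statements for a table; the table and group names are invented, and in Databricks the rendered SQL would be executed via `spark.sql(...)` or the SQL editor by an appropriately privileged admin.

```python
# Hypothetical helper that renders Unity Catalog-style GRANT statements.
# Table and principal names are made up for illustration.

def render_grants(table, grants):
    """Build one GRANT statement per (privilege, principal) pair."""
    return [
        f"GRANT {privilege} ON TABLE {table} TO `{principal}`"
        for privilege, principal in grants
    ]

stmts = render_grants(
    "main.sales.orders",
    [("SELECT", "data_analysts"), ("MODIFY", "data_engineers")],
)
for s in stmts:
    print(s)
```

Keeping grants in code like this (rather than applied ad hoc) makes the access model reviewable and repeatable across workspaces.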

4. DevOps, Automation & Operations (Hands-on)

  • Build CI/CD pipelines for Databricks notebooks, jobs, and configs
  • Implement Infrastructure as Code using Terraform
  • Set up monitoring, alerting, and operational dashboards
  • Participate in production support, root-cause analysis, and fixes
  • Drive hands-on cost optimization initiatives
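A small example of the kind of CI/CD guardrail the bullets above imply: validating a Databricks job configuration before deployment. The field names loosely follow the Jobs API JSON, but the schema and the cost-policy limits here are illustrative assumptions, not the official contract.

```python
# Sketch of a pre-deployment check for a Databricks job config dict.
# Field names and limits are assumptions for illustration only.

MAX_WORKERS = 8  # assumed cost-control policy

def validate_job_config(cfg):
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    if not cfg.get("name"):
        problems.append("job must have a name")
    cluster = cfg.get("new_cluster", {})
    if cluster.get("num_workers", 0) > MAX_WORKERS:
        problems.append(f"num_workers exceeds cost policy ({MAX_WORKERS})")
    if not cluster.get("autotermination_minutes"):
        problems.append("cluster must set autotermination_minutes")
    return problems

cfg = {"name": "nightly_etl",
       "new_cluster": {"num_workers": 16, "autotermination_minutes": 0}}
print(validate_job_config(cfg))
```

Running a check like this in the CI pipeline catches cost and hygiene problems before a misconfigured job ever reaches production.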

5. Stakeholder Collaboration

  • Translate business requirements into implemented solutions
  • Guide and mentor data engineers through code-level support
  • Conduct architecture and code reviews
  • Act as a technical owner from design through production

Required Skills & Experience

Must Have

  • Strong hands-on experience with Databricks (Apache Spark)
  • Proven experience building and deploying Lakehouse architectures
  • Hands-on experience with GCP, including:
    • Google Cloud Storage (GCS)
    • BigQuery
    • Pub/Sub
    • IAM & VPC basics
  • Experience implementing batch and streaming pipelines
  • Strong troubleshooting and production support skills

Good to Have

  • Unity Catalog, Delta Live Tables
  • CI/CD, Git, Terraform
  • MLflow, Vertex AI exposure
  • Multi-cloud Databricks experience (Azure / AWS)

Qualifications

  • 8–12 years of experience in data engineering / data platforms
  • 3+ years in a hands-on architect or senior technical lead role

Additional Information

All your information will be kept confidential according to EEO guidelines.

Experience Level

Senior Level

Job role

Work location: Hyderabad, TS, India
Department: Data Science & Analytics
Role / Category: Data Science & Machine Learning
Employment type: Full Time
Shift: Day Shift

Job requirements

Experience: Min. 8 years

About company

Name: Sutherland Global Services Private Limited
Job posted by Sutherland Global Services Private Limited


The salary for this role is not disclosed; the offer will depend on your skills, experience, and performance in the interview.

The candidate should have completed the required education, and candidates with 8 to 12 years of experience are eligible to apply for this job. You can apply for more jobs in Hyderabad to get hired quickly.

The candidate should have sound communication skills for this job.

Both Male and Female candidates can apply for this job.

No, it is not a work-from-home job and cannot be done online. You can explore and apply for other work-from-home jobs in Hyderabad on apna.

No work-related deposit needs to be made during your employment with the company.

Go to the apna app and apply for this job. Click on the apply button and call HR directly to schedule your interview.

The last date to apply is not specified. For more details, download the apna app and find Full Time jobs in Hyderabad. Through apna, you can find jobs in 64 cities across India. Join NOW!