Assistant Manager - GCP and Databricks Data Engineering

KPMG India Services LLP

Bengaluru/Bangalore

Not disclosed

Work from Office

Full Time

Min. 7 years

Job Details

Job Description

Hiring: GCP and Databricks Assistant Manager

  • Bachelor's or higher degree in Computer Science or a related discipline, or equivalent (minimum 5 years of work experience).
  • At least 7 years of consulting or client service delivery experience with Databricks and GCP.
  • Proven experience designing, building, and operationalizing large-scale enterprise data solutions using at least four of the following GCP services: Dataflow, Dataproc, Pub/Sub, BigQuery, Cloud Functions, Composer, and GCS.
  • Proficient hands-on programming experience in Spark/Scala (Python/Java).
  • Proficient in building production-level ETL/ELT data pipelines, from data ingestion to consumption.
  • Data engineering knowledge, including data lakes, data warehouses (Redshift/Hive/Snowflake), integration, and migration.
  • Excellent communicator (written and verbal, formal and informal).
  • Experience using software version control tools (Git, Bitbucket, CodeCommit).
  • Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
  • Ability to multitask under pressure and work independently with minimal supervision.
  • Must be a team player who enjoys working in a cooperative and collaborative team environment.
  • Proficient in Databricks concepts such as volumes, notebooks, clusters, and workflows.

A GCP and Databricks Data Engineer is responsible for designing, building, and maintaining an organization's data infrastructure using GCP and Databricks cloud services. This includes creating data pipelines, integrating data from various sources, and implementing data security and privacy measures. The GCP and Databricks Data Engineer is also responsible for monitoring and troubleshooting data flows and optimizing data storage and processing for performance and cost efficiency.

Preferred Skills

  • Proven experience designing, building, and operationalizing large-scale enterprise data solutions using at least four of the following GCP services: Dataflow, Dataproc, Pub/Sub, BigQuery, Cloud Functions, Composer, and GCS.
  • Proficient hands-on programming experience in Spark/Scala (Python/Java).
  • Proficient in building production-level ETL/ELT data pipelines, from data ingestion to consumption.
  • Data engineering knowledge, including data lakes, data warehouses (Redshift/Hive/Snowflake), integration, and migration.
  • Excellent communicator (written and verbal, formal and informal).
  • Experience using software version control tools (Git, Bitbucket, CodeCommit).
  • Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
  • Ability to multitask under pressure and work independently with minimal supervision.
  • Must be a team player who enjoys working in a cooperative and collaborative team environment.
  • Proficient in Databricks concepts such as volumes, notebooks, clusters, and workflows.

Experience Level

Mid Level

Work location

Bangalore, Karnataka, India

Department

Data Science & Analytics

Role / Category

DBA / Data warehousing

Employment type

Full Time

Shift

Day Shift

Job requirements

Experience

Min. 7 years

About company

Name

KPMG India Services LLP

Job posted by KPMG India Services LLP
