Data Engineer - Microsoft Fabric & Azure

KPMG India Services LLP

Bengaluru/Bangalore

Not disclosed

Work from Office

Full Time

Min. 4 years

Job Details

Job Description

Consultant - Fabric Data Engineer

Role: Fabric Data Engineer 

Responsibilities

  • Data Pipeline & ETL Development: Design and optimize ETL/ELT processes using Python/PySpark and SQL to efficiently ingest, transform, and integrate data from various sources into Azure data platforms (a minimal PySpark sketch follows this list).
  • Data Architecture & Integration: Leverage Azure Data Factory, Databricks, and Synapse Analytics to establish a medallion architecture in ADLS, transforming raw data into refined, actionable formats.
  • Power BI Integration: Collaborate with Power BI developers and data analysts to ensure seamless integration of user-facing elements.
  • Full Software Development Lifecycle: Participate in all phases of the SDLC, including requirement analysis, planning, development, testing, and deployment.
  • Cross-Functional Collaboration: Work closely with cross-functional teams, including data scientists, data analysts, and product managers, to solve technical and business challenges.
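
For illustration only: a minimal PySpark sketch of the Bronze/Silver flow described above. The ADLS paths, storage account (examplelake), and column names (order_id, order_date, amount) are placeholders rather than details of this role, and the snippet assumes a Spark environment with Delta Lake available (e.g. Databricks, Synapse, or Fabric).

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical ADLS Gen2 locations; placeholders only, not project-specific.
    RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/sales/"
    BRONZE_PATH = "abfss://bronze@examplelake.dfs.core.windows.net/sales/"
    SILVER_PATH = "abfss://silver@examplelake.dfs.core.windows.net/sales/"

    spark = SparkSession.builder.appName("sales-etl").getOrCreate()

    # Bronze: land the source files as-is, adding basic lineage columns.
    raw_df = spark.read.option("header", True).csv(RAW_PATH)
    (raw_df
     .withColumn("ingested_at", F.current_timestamp())
     .withColumn("source_file", F.input_file_name())
     .write.format("delta").mode("append").save(BRONZE_PATH))

    # Silver: clean and conform the Bronze data (typing, deduplication, null filtering).
    bronze_df = spark.read.format("delta").load(BRONZE_PATH)
    silver_df = (bronze_df
                 .dropDuplicates(["order_id"])
                 .withColumn("order_date", F.to_date("order_date"))
                 .filter(F.col("amount").isNotNull()))
    silver_df.write.format("delta").mode("overwrite").save(SILVER_PATH)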

Education: Bachelor's degree (BE/B.Tech) in Computer Science, Engineering, Data Science, or a related field. Highly preferable: Master's degree in Data Analytics, Data Science, or Statistics, or any relevant certification(s).

Microsoft Certified: Fabric Analytics Engineer Associate (DP-600)

Good to have: PL-300 – Power BI Data Analyst Associate

Work Experience: 

  • 4-6+ years of total experience in data analytics, with at least 2 years in Azure cloud data services such as Data Factory, Databricks, Synapse, and PySpark.
  • Data Engineering Fundamentals:
    1. Proficiency in SQL for querying data and for creating views, stored procedures, and similar objects.
    2. Proven experience designing and implementing ETL/ELT pipelines for data ingestion, transformation, and integration across platforms.
    3. Strong knowledge of handling large datasets and performing data analytics using Spark, preferably PySpark.
  • Cloud Technologies/Azure:
    1. Hands-on experience with Azure Data Factory (ADF) for orchestrating data workflows, Azure Databricks for advanced analytics, and Azure Synapse Analytics for scalable data warehousing solutions.
    2. Experience implementing a medallion architecture (Bronze, Silver, Gold layers) using Delta Lake tables in Azure Data Lake Storage (ADLS); see the Gold-layer sketch after this list.
    3. Familiarity with Git and version control for collaborative and efficient project management.
    4. Experience with Microsoft Fabric for seamless data ingestion, processing, and report building within a single unified platform.
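
A companion sketch for the Gold layer, again with placeholder paths, columns (region, amount), and table names: it aggregates the hypothetical Silver sales data into a reporting-friendly Delta table and registers it so it can be queried with SQL or picked up by Power BI.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("gold-aggregates").getOrCreate()

    # Hypothetical Silver/Gold locations in the same ADLS account as the earlier sketch.
    SILVER_PATH = "abfss://silver@examplelake.dfs.core.windows.net/sales/"
    GOLD_PATH = "abfss://gold@examplelake.dfs.core.windows.net/sales_daily/"

    # Gold: business-level aggregates that Power BI or Synapse can consume directly.
    silver_df = spark.read.format("delta").load(SILVER_PATH)
    gold_df = (silver_df
               .groupBy("order_date", "region")
               .agg(F.sum("amount").alias("total_amount"),
                    F.countDistinct("order_id").alias("order_count")))
    gold_df.write.format("delta").mode("overwrite").save(GOLD_PATH)

    # Register the Gold output as a table so it can be queried with plain SQL.
    spark.sql("CREATE DATABASE IF NOT EXISTS gold")
    spark.sql(f"CREATE TABLE IF NOT EXISTS gold.sales_daily USING DELTA LOCATION '{GOLD_PATH}'")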

Good to have:

  • Expertise in Power BI, including Power Query, Data Modeling, and Visualization.

#KGS

Experience Level

Senior Level

Job role

Work location

Bangalore, Karnataka, India

Department

Data Science & Analytics

Role / Category

DBA / Data warehousing

Employment type

Full Time

Shift

Day Shift

Job requirements

Experience

Min. 4 years

About company

Name

KPMG India Services LLP

Job posted by KPMG India Services LLP
