Senior DevOps Engineer
PricewaterhouseCoopers Pvt Ltd (PwC)
Job Description
IN_Senior Associate_Pyspark_Data & Analytics_Advisory_PAN India
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals.

In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description (JD): Azure Data Engineering role focused on Azure Databricks and PySpark (must have), with a strong emphasis on Python, SQL, Data Lake, and Data Warehouse concepts.
Job Summary:
We are looking for a skilled and experienced Data Engineer with 5-8 years of experience in building scalable data solutions on the Microsoft Azure ecosystem. The ideal candidate must have strong hands-on experience with Microsoft Fabric and Azure Databricks, along with strong PySpark, Python, and SQL expertise. Familiarity with Data Lake and Data Warehouse concepts and end-to-end data pipelines is essential.
Responsibilities:
Requirement gathering and analysis
Experience with databases such as Azure Synapse, SQL DB, and Snowflake
Design and implement data pipelines using Microsoft Fabric & Databricks
Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
Implement data security and governance measures
Monitor and optimize data pipelines for performance and efficiency
Troubleshoot and resolve data engineering issues
Provide optimized solutions for data engineering problems
Ability to work with a variety of sources such as relational databases, APIs, file systems, real-time streams, CDC, etc.
Strong knowledge of Databricks and Delta tables
Required Skills:
5–8 years of experience in Data Engineering or related roles.
Hands-on experience in Microsoft Fabric
Hands-on experience in Azure Databricks
Proficiency in PySpark for data processing and scripting.
Strong command over Python & SQL – writing complex queries, performance tuning, etc.
Experience working with Azure Data Lake Storage and Data Warehouse concepts (e.g., dimensional modeling, star/snowflake schemas).
Hands-on experience in performance tuning and optimization on Databricks and Microsoft Fabric.
Ensure alignment with overall system architecture and data flow.
Understanding of CI/CD practices in a data engineering context.
Excellent problem-solving and communication skills.
Exposure to BI tools like Power BI, Tableau, or Looker.
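As an illustration of the "complex queries" expectation above, here is a small window-function example. It uses Python's stdlib sqlite3 purely so it is self-contained and runs anywhere; the role itself targets engines such as Databricks SQL, Synapse, or Snowflake, and the table, columns, and index are hypothetical.

```python
import sqlite3

# Self-contained demo of a windowed running total plus a basic tuning step.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (region TEXT, day TEXT, amount REAL);
INSERT INTO sales VALUES
  ('APAC', '2024-01-01', 100), ('APAC', '2024-01-02', 150),
  ('EMEA', '2024-01-01', 90),  ('EMEA', '2024-01-02', 60);
-- An index matching the partition/order columns is a typical tuning step.
CREATE INDEX idx_sales_region_day ON sales(region, day);
""")

# Window function: running total per region, ordered by day.
query = """
SELECT region, day, amount,
       SUM(amount) OVER (PARTITION BY region ORDER BY day) AS running_total
FROM sales
ORDER BY region, day;
"""
rows = con.execute(query).fetchall()
for r in rows:
    print(r)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern carries over directly to Spark SQL and Synapse.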
Good to Have:
Experience with Azure DevOps.
Knowledge of Scala or other distributed processing frameworks.
Familiarity with data security and compliance in the cloud.
Experience in leading a development team.
Mandatory skill sets:
PySpark / Data Engineer
Preferred skill sets:
PySpark / Data Engineer
Years of experience required:
3–8 Years
Education qualification:
B.E.(B.Tech)/M.E/M.Tech
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills
PySpark
Optional Skills
Accepting Feedback, Active Listening, Analytical Thinking, Applied Macroeconomics, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Geopolitical Forecasting {+ 24 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date: March 11, 2026
Experience Level: Senior Level