Back End Data Engineer - AI and Cloud Platforms

Adani Enterprises Ltd

Ahmedabad

Not disclosed

Work from Office

Full Time

Min. 1 year

Job Details

Job Description

Back End Engineer - AI Labs

About Business:

Adani Group is a diversified Indian organisation comprising 10 publicly traded companies. It has built a world-class logistics and utility infrastructure portfolio with a pan-India presence. The Group is headquartered in Ahmedabad, in the state of Gujarat, India. Over the years, it has positioned itself as the market leader in its logistics and energy businesses, focusing on large-scale infrastructure development in India with O&M practices benchmarked to global standards. With four IG-rated businesses, it is the only Investment Grade infrastructure issuer in India.

Job Purpose: The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and infrastructure to support AI applications. The role involves implementing efficient data integration, processing, and transformation solutions using Python, PySpark, and cloud-based data engineering tools (Azure, GCP). The engineer will work closely with AI, ML, and DevOps teams to enable seamless data flow for AI/ML model training, deployment, and operations (MLOps), ensuring optimized data architecture, security, and compliance.

Data Engineer - AI Labs

Data Pipeline Development & Optimization:

Ensure efficient data processing by designing, implementing, and optimizing ETL/ELT data pipelines for AI and machine learning workloads.

Enhance data flow and transformation using Python, PySpark, and cloud-based data engineering tools (Azure Data Factory, Google Dataflow, Databricks).

Improve data ingestion and integration by leveraging Kafka, Pub/Sub, and other messaging queues for real-time and batch processing.

Ensure scalability and performance by implementing distributed computing frameworks and optimizing data storage architectures.
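For illustration, the extract–transform–load shape that the pipeline responsibilities above describe can be sketched in plain Python. This is a minimal, hypothetical sketch: the field names and records are invented, and a production pipeline for this role would run on PySpark with Azure Data Factory, Google Dataflow, or Databricks rather than in-memory lists.

```python
from typing import Iterable

def extract(source: Iterable[dict]) -> list[dict]:
    """Extract: read raw records from a source (here, an in-memory list)."""
    return list(source)

def transform(records: list[dict]) -> list[dict]:
    """Transform: drop incomplete rows and normalise field types."""
    cleaned = []
    for r in records:
        if r.get("user_id") is None or r.get("amount") is None:
            continue  # skip rows that would break downstream ML training
        cleaned.append({"user_id": int(r["user_id"]), "amount": float(r["amount"])})
    return cleaned

def load(records: list[dict], sink: list) -> int:
    """Load: write cleaned records to a sink; return the row count loaded."""
    sink.extend(records)
    return len(records)

# Hypothetical raw feed with one malformed row.
raw = [{"user_id": "1", "amount": "9.5"}, {"user_id": None, "amount": "3"}]
warehouse: list[dict] = []
loaded = load(transform(extract(raw)), warehouse)
```

The same three-stage decomposition carries over to distributed frameworks, where each stage becomes a DataFrame operation instead of a Python loop.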

Cloud Data Engineering & AI Model Enablement:

Improve AI data readiness by designing data lakes, data warehouses, and real-time streaming architectures on Azure and GCP.

Optimize AI model performance by structuring, cleaning, and transforming data to meet ML model training and inferencing needs.

Ensure data accessibility by implementing data governance, security policies, and access controls for AI teams.

Reduce AI model training time by optimizing big data storage and processing strategies.
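As a small example of the "structuring, cleaning, and transforming data to meet ML model training needs" duty above, a common preparation step is feature scaling. The sketch below is stdlib-only and illustrative; real pipelines would typically use a library scaler over columnar data.

```python
def min_max_scale(values: list[float]) -> list[float]:
    """Rescale numeric features to [0, 1] so no single feature
    dominates model training purely by its units."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant column: nothing to scale
    return [(v - lo) / (hi - lo) for v in values]
```

Usage: `min_max_scale([0, 5, 10])` yields `[0.0, 0.5, 1.0]`.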

MLOps & AI Model Deployment Support:

Enable AI model lifecycle automation by implementing CI/CD pipelines for ML model deployment using MLOps best practices.

Ensure seamless AI model serving by integrating Docker, Kubernetes, and cloud-based AI services.

Improve AI/ML data versioning by using MLflow, DVC, or similar tools for data tracking and experiment logging.

Enhance AI observability by setting up real-time monitoring, logging, and alerting for AI/ML data pipelines.
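The data-versioning idea behind tools like DVC, mentioned above, can be sketched without any external library: fingerprint a dataset by hashing its canonical form, so an experiment log records exactly which data version a training run saw. The run-log fields below are hypothetical, not an MLflow API.

```python
import hashlib
import json

def dataset_fingerprint(records: list[dict]) -> str:
    """Hash a dataset's canonical JSON form so identical content
    always yields the same version id, regardless of key order."""
    canonical = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

# Hypothetical experiment-log entry tying a run to its data version.
run_log = {
    "run_id": "run-001",
    "data_version": dataset_fingerprint([{"x": 1, "y": 2}]),
    "params": {"lr": 0.01},
}
```

Because the JSON serialisation is canonicalised with `sort_keys=True`, two pipelines that produce the same records get the same fingerprint, which is the property that makes experiment comparisons reproducible.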

Data Security, Compliance & Governance:

Ensure compliance with data privacy regulations (e.g., GDPR, HIPAA) by implementing data encryption, masking, and anonymization techniques.

Strengthen data security by enforcing role-based access control (RBAC) and identity & access management (IAM) policies.

Ensure data integrity by implementing data validation, schema enforcement, and audit logging mechanisms.
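Two of the governance duties above, pseudonymisation and schema enforcement, can be illustrated with a short stdlib-only sketch. The field names are hypothetical; a production system would layer this on top of platform IAM/RBAC rather than replace it.

```python
import hashlib

def mask_email(email: str) -> str:
    """Pseudonymise an email: hash the local part (PII) but keep the
    domain, which is often still useful for aggregate analytics."""
    local, _, domain = email.partition("@")
    digest = hashlib.sha256(local.encode()).hexdigest()[:10]
    return f"{digest}@{domain}"

def validate_schema(record: dict, required: dict) -> bool:
    """Schema enforcement: every required field must be present
    and carry the expected type before the record is accepted."""
    return all(isinstance(record.get(k), t) for k, t in required.items())
```

For example, `mask_email("alice@example.com")` keeps the `@example.com` suffix while replacing the identifying local part with a hash prefix, and `validate_schema({"user_id": 1}, {"user_id": int})` returns `True`.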

Cross-functional Collaboration & Continuous Improvement:

Collaborate with AI, DevOps, and business teams to align data infrastructure with AI and analytics needs.

Drive innovation in data engineering by evaluating and adopting emerging cloud, AI, and big data technologies.

Optimize data engineering efficiency by identifying and implementing best practices for automation, cost reduction, and performance tuning.

Key Stakeholders - Internal

AI & Data Science Teams

DevOps & Cloud Teams

Business Intelligence & Analytics Teams

IT Security & Compliance Teams

Key Stakeholders - External

Cloud & Data Service Providers

Third-party AI Model Vendors

Regulatory Bodies & Compliance Authorities

 

Educational Qualification:

Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Technology, or related fields.

Certification:

Microsoft Azure Data Engineer Associate, Google Professional Data Engineer, or AWS Certified Data Analytics – Specialty.

Big Data & Apache Spark Certification (Cloudera, Databricks, Coursera, Udemy).

Certified Kubernetes Administrator (CKA) for data pipeline orchestration.

Work Experience (Range of years):

1-10 years of experience in data engineering, cloud data platforms, and AI/ML data management.

Expertise in data pipeline development, ETL/ELT processes, and cloud-based big data solutions.

Hands-on experience with Python, PySpark, SQL, and cloud-native data services.

Experience with AI/ML deployment, MLOps, and real-time data streaming architectures.

Experience Level

Senior Level

Job role

Work location

Ahmedabad, Gujarat, India

Department

Software Engineering

Role / Category

DBA / Data warehousing

Employment type

Full Time

Shift

Day Shift


About company

Name

Adani Enterprises Ltd
