Senior AWS Data Engineer
JP Morgan Services India Pvt Ltd, Hyderabad
Salary: Not disclosed
Job Description
Software Engineer III - AWS Data Engineer (AI, Data Lake, Snowflake, Python, Spark, Copilot & Claude)
Job Summary
We are seeking a highly skilled AWS Data Engineer with expertise in AI, Python, Spark, and modern AI productivity tools such as Copilot and Claude. The ideal candidate will design, build, and optimize scalable data pipelines and architectures on AWS, leveraging Data Lake, Snowflake, and distributed processing technologies. You will collaborate with data scientists, analysts, and business stakeholders to deliver robust, AI-driven data solutions, using AI assistants to enhance productivity and code quality.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes on AWS (S3, Glue, Lambda, Redshift, etc.) using Python and Spark.
- Architect and implement Data Lake solutions, ensuring efficient data ingestion, storage, and retrieval.
- Integrate and manage Snowflake environments for data warehousing and analytics.
- Develop and optimize distributed data processing workflows using Apache Spark (PySpark).
- Collaborate with AI/ML teams to enable data-driven models and solutions, supporting feature engineering and model deployment.
- Leverage AI coding assistants such as Copilot and Claude to accelerate development, improve code quality, and automate repetitive tasks.
- Optimize data workflows for performance, reliability, and cost efficiency.
- Ensure data quality, governance, and security across all platforms.
- Automate data processing tasks using Python, Spark, and AWS-native tools.
- Monitor, troubleshoot, and resolve issues in data pipelines and infrastructure.
- Document technical solutions and provide knowledge transfer to team members.
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
- 8+ years of experience in data engineering, with hands-on AWS experience.
- Strong proficiency in AWS services: S3, Glue, Lambda, Redshift, IAM, etc.
- Experience with Data Lake architecture and implementation.
- Expertise in Snowflake data warehousing, including schema design, performance tuning, and security.
- Advanced programming skills in Python and SQL.
- Hands-on experience with Apache Spark (preferably PySpark) for large-scale data processing.
- Familiarity with AI/ML concepts and workflows; experience supporting data science teams.
- Experience using AI coding assistants such as GitHub Copilot and Claude to enhance productivity and code quality.
- Knowledge of data governance, security, and compliance best practices.
- Excellent problem-solving and communication skills.
Preferred Skills
- Experience with workflow/orchestration tools such as Apache Airflow.
- Exposure to DevOps practices and CI/CD pipelines.
- AWS certification (e.g., AWS Certified Data Analytics, Solutions Architect).
- Experience with real-time data processing and streaming (e.g., Kinesis, Kafka).
- Familiarity with BI tools (e.g., Tableau, Power BI).
Job role
Work location: Hyderabad, Telangana, India
Department: Software Engineering
Role / Category: DBA / Data warehousing
Employment type: Full Time
Shift: Day Shift
Job requirements
Experience: Min. 8 years
About company
Name: JP Morgan Services India Pvt Ltd
Job posted by JP Morgan Services India Pvt Ltd