Product Manager - Data Publishing and Processing

JP Morgan Services India Pvt Ltd

Hyderabad

Not disclosed

Work from Office

Full Time

Min. 8 years

Job Details

Job Description

Product Manager - Data Publishing & Processing

You’re passionate about making data easy to publish, trustworthy to consume, and fast to power AI-driven decisions. Join our Data Publishing & Processing (DPP) organization to lead products that simplify and standardize the end‑to‑end publishing experience across ingestion, transformation, quality, orchestration, and governed delivery. You will help deliver a unified, intelligent platform that reduces time‑to‑publish, lowers cost‑to‑serve, enforces quality checks, and captures lineage.

 

As a Product Manager on the Data Publishing and Processing team, you will define and execute the strategy for a modern data publishing and processing framework that serves thousands of applications and diverse personas (data engineers, data owners, analysts, and AI agents). You will lead the product lifecycle from discovery to launch and continuous improvement, balancing performance, cost, security, and compliance. Partnering closely with engineering, architecture, governance, and lines of business, you will deliver reusable blueprints, zero‑touch onboarding, and observability that make publishing fast, safe, and self‑service across our managed lake and consumption platforms.

 

Job Responsibilities

  • Develop and own the product strategy and vision for a unified data publishing and processing platform that supports both batch and streaming workloads, and integrates with enterprise catalogs, entitlements, and observability.
  • Lead discovery and customer research to translate publisher needs into a prioritized roadmap (e.g., onboarding, ingestion, processing frameworks, and platform adapters).
  • Define and deliver reusable solutions for common publishing use cases (e.g., SoR‑to‑lake, streaming, transform‑to‑refined, externalization to consumption platforms).
  • Drive platform capabilities across onboarding and self‑service; orchestration; processing (PySpark/SQL/streaming frameworks; serverless where appropriate); data quality and readiness (DQ/PII checks, contract validation, versioning); catalog and entitlement integration (registration, lineage, access policies); and observability and SLOs (time‑to‑publish, error rates, cost guardrails).
  • Own and maintain the product backlog; write clear requirements and acceptance criteria; ensure value delivery via iterative releases and measurement.
  • Establish and track success metrics (e.g., time‑to‑publish, publish error rate, adoption/NPS, cost per dataset, streaming adoption) and course‑correct based on data.
  • Partner with platform engineering, governance, SRE/L2, and vendor teams to ensure reliability (multi‑region failover where needed), security, compliance, and cost efficiency at scale.
  • Champion a contracts‑first, API‑first approach that prepares the platform for autonomous and AI‑assisted publishing workflows.

 

Required Qualifications, Capabilities, and Skills

  • 8+ years of product management or adjacent experience in data platforms/analytics/cloud in highly matrixed environments, with a track record of shipping platform capabilities at scale.
  • Advanced knowledge of the product development lifecycle, discovery methods, and data‑informed prioritization; demonstrated ability to lead end‑to‑end (strategy → ideation → requirements → launch → iteration).
  • Hands‑on experience with ETL/ELT tooling and patterns (e.g., Informatica, Ab Initio, dbt, Spark/PySpark/Glue, SQL, Databricks) for batch and streaming workloads.
  • Working knowledge of modern data lake/lakehouse stacks and table formats (e.g., Iceberg/Delta/Hudi), and how they enable consumption via platforms such as Snowflake. 
  • Familiarity with orchestration platforms (e.g., Airflow/MWAA), CI/CD, and observability practices for data pipelines (quality, lineage, SLAs/SLOs).
  • Practical exposure to serverless and cloud‑native data processing on AWS/Azure/GCP (e.g., Glue, EMR, Databricks, EKS, Lambda), and cost/performance optimization in cloud.
  • Experience integrating with data catalogs/metadata systems and entitlement services; comfort with contracts‑as‑code, schema/versioning, and readiness checks.
  • Strong communication and stakeholder management skills; ability to translate customer needs into platform capabilities and measurable outcomes.
  • Analytical skills to instrument, monitor, and improve platform health and business KPIs (time‑to‑publish, error rates, cost efficiency, adoption/NPS).

 

Preferred Qualifications, Capabilities, and Skills

  • Master’s degree in Computer Science, Engineering, Information Systems, or Business.
  • Experience building large‑scale enterprise data platforms in financial services or similarly regulated environments, including familiarity with privacy, security, and audit requirements.
  • Knowledge of data contracts, semantic/technical lineage, and policy enforcement (entitlements, tag/column/row‑level access).
  • Familiarity with API‑first product design, adapter frameworks, and capabilities for self‑service data engineering.

 

Job role

Work location

Hyderabad, 33435 - JPMorgan Chase & Co Towers, H, Magma, Unit-1, Phase-IV, Sy No. 83/1, Plot No 2, Ground Floor to 2nd Floor and 5th Floor to 16th Floor, Basement 1 & 2, Hyderabad, Telangana, India

Department

Product Management

Role / Category

Product Management - Technology

Employment type

Full Time

Shift

Day Shift

Job requirements

Experience

Min. 8 years

About company

Name

JP Morgan Services India Pvt Ltd

Job posted by JP Morgan Services India Pvt Ltd
