Eli Lilly and Company India Pvt Ltd

Cybersecurity AI Platform Engineer

Eli Lilly and Company India Pvt Ltd
Bengaluru/Bangalore
Not disclosed
Work from OfficeWork from Office
Full TimeFull Time
Min. 8 yearsMin. 8 years

Job Description

Cybersecurity AI Platform Engineer

At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We’re looking for people who are determined to make life better for people around the world.

Role Overview

The Cybersecurity AI Platform Engineer is responsible for developing and deploying AI-powered use cases across Eli Lilly's cybersecurity platforms. The engineer will own the full delivery lifecycle — from identifying and scoping use cases through design, build, test, and production deployment. Core responsibilities include developing Agentic AI automation pipelines, RAG workflows, that address real security challenges such as platform operations automation, SOC automation etc. The role requires close collaboration with security operations, data privacy, compliance, and platform engineering teams to ensure solutions are secure, explainable, and regulatory critical to the pharmaceutical industry.

Key Responsibilities

Agentic AI Use Case Discovery & Architecture

  • Partner with cybersecurity teams to identify, scope, and prioritise agentic AI use cases — including autonomous threat investigation, adaptive response orchestration, multi-agent SOC workflows, and self-directed anomaly triage
  • Architect agentic systems end-to-end: define agent roles, tool inventories, memory strategies, orchestration patterns (single-agent, multi-agent, hierarchical), and inter-agent communication protocols
  • Establish threat models specific to agentic AI covering prompt injection, unintended action loops, tool misuse, escalation of autonomy, and lateral movement risks introduced by agent-to-agent trust
  • Define agentic security acceptance criteria — including sandboxing requirements, permission scoping, HITL trigger conditions, and kill-switch mechanisms — before any agent progresses to development
  • Maintain a versioned agentic use case registry with design blueprints, threat models, tool manifests, and reusable agent patterns for cross-team adoption

Agentic AI Development & Engineering

  • Build, configure, and orchestrate AI agents using frameworks such as LangGraph, AutoGen, or CrewAI to execute multi-step cybersecurity workflows autonomously and reliably
  • Design and implement agent tool layers for security platforrm tools — with least-privilege access controls and strict input/output contracts
  • Develop RAG pipelines that power agent knowledge retrieval: validate retrieval sources, apply document-level injection shields, and enforce context boundaries to prevent data exfiltration through agent outputs
  • Implement runtime enforcement engines that intercept, validate, and sanitize agent inputs, tool calls, and outputs based on configurable security policies — blocking unsafe actions before execution
  • Apply CI/CD, Infrastructure-as-Code, and version control to all agent configurations, tool definitions, prompt templates, and orchestration logic to ensure full reproducibility and auditability

Testing, Red-Teaming & Safety Validation

  • Design and execute agentic-specific test plans covering multi-step reasoning accuracy, tool call correctness, loop detection, boundary enforcement, and failure mode handling across diverse scenario types
  • Conduct systematic red-team exercises targeting agentic failure modes: prompt injection via tool outputs, privilege escalation through chained actions, goal hijacking, and unintended side effects in production environments
  • Validate that human-in-the-loop checkpoints, sandboxing controls, permission gates, and emergency kill-switch mechanisms engage correctly under adversarial and edge-case conditions
  • Convert red-team findings into automated regression tests that run on every agent, prompt, or tool update — ensuring safety properties are preserved through continuous change
  • Benchmark production-candidate agents against security policy compliance, action explainability, latency SLAs, and cost efficiency before sign-off for deployment

Production Deployment & Lifecycle Management

  • Deploy agentic AI systems to enterprise cloud environments (AWS, Azure, GCP) with structured action logging, decision tracing, cost monitoring, and real-time alerting on anomalous agent behaviour
  • Implement agent health monitoring covering task completion rates, tool failure patterns, reasoning drift, and policy enforcement effectiveness — with automated alerts and rollback triggers
  • Manage the full agentic lifecycle: version-controlled agent releases, controlled rollouts, A/B evaluation of agent variants, scheduled re-validation against updated threat landscapes, and deprecation of obsolete agents
  • Integrate agents with upstream/downstream security platforms — SIEM, SOAR, EDR, identity, and ticketing systems — through governed API layers that enforce authentication, rate limits, and action audit trails
  • Provide Level 3 engineering support for agentic incidents including runaway action loops, unexpected tool invocations, and agent-induced security events — with structured post-incident reviews

Governance, Collaboration & Knowledge Sharing

  • Define and maintain agentic AI governance standards covering action logging requirements, human oversight triggers, permissible tool scopes, and escalation procedures for high-risk autonomous decisions
  • Collaborate with data privacy, legal, compliance, and quality assurance teams to ensure agentic systems meet regulatory obligations around auditability, explainability, and high-risk AI classifications
  • Create and maintain comprehensive documentation: agent architecture diagrams, tool manifests, decision trace examples, red-team reports, runbooks, and post-deployment model cards
  • Mentor junior engineers and security operations personnel on safe agentic design patterns, tool authoring best practices, and responsible AI principles in cybersecurity contexts
  • Engage with vendors, open-source communities, and technology partners to evaluate emerging agentic frameworks, influence platform roadmaps, and bring best practices back into Lilly's engineering standards

Qualifications

Required

  • 8+ years of software or platform engineering experience, with at least 2 years focused on AI/ML or LLM application development
  • Demonstrated experience delivering AI use cases end-to-end: from design through testing to production deployment
  • Working knowledge of cybersecurity domains including SIEM, EDR, network security, threat intelligence, or identity platforms
  • Proficiency in Python for ML model development, LLM orchestration (e.g. LangChain, LlamaIndex), and API integration
  • Strong understanding of LLM-specific threat models: prompt injection (direct and indirect), hallucination, jailbreaks, data poisoning, and model misuse
  • Familiarity with OWASP Top 10 for LLM Applications and MITRE ATLAS adversarial AI threat framework
  • Experience building or operating CI/CD pipelines for ML/LLM systems including automated testing and deployment gates
  • Solid understanding of cloud security across AWS, Azure, and GCP environments
  • Strong scripting capabilities in Python, PowerShell, or Bash; proficient in RESTful APIs and system integration patterns
  • Excellent written and verbal communication skills with the ability to translate AI concepts for both technical and business audiences
  • Bachelor’s degree in Computer Science, Cybersecurity, Information Systems, or related technical field, or equivalent practical experience

Preferred

  • Experience with AI security reviews, adversarial red-teaming, or AI governance frameworks in regulated industries
  • Exposure to agentic AI frameworks (AutoGen, CrewAI, LangGraph) and associated safety mechanisms such as HITL, sandboxing, and tool-call guardrails
  • Familiarity with AI regulatory expectations including high-risk AI classifications, auditability requirements, and conformity assessments
  • Security certifications: GSEC, GCIH, GCIA, CISSP, or vendor-specific AI/security platform certifications
  • Experience with containerization (Docker, Kubernetes) and cloud-native architectures for ML workloads
  • Project management exposure or Agile/Scrum experience within cross-functional AI delivery teams

Eli Lilly is an equal opportunity employer and is committed to creating a diverse and inclusive workplace.

Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (https://careers.lilly.com/us/en/workplace-accommodation) for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response.

Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability or any other legally protected status.

#WeAreLilly

Experience Level

Senior Level

Job role

Work location
Work locationIN: Lilly Bengaluru, India
Department
DepartmentIT & Information Security
Role / Category
Role / CategoryIT Security
Employment type
Employment typeFull Time
Shift
ShiftDay Shift

Job requirements

Experience
ExperienceMin. 8 years

About company

Name
NameEli Lilly and Company India Pvt Ltd
Job posted by Eli Lilly and Company India Pvt Ltd

Similar jobs you can apply for

Hardware & Network Engineer
Getsetfix Technology

Laptop Repair Technician

Getsetfix Technology
Bellandur, Bengaluru/Bangalore
₹20,000 - ₹36,000*
Field Job
Full Time
Min. 6 months
No English Required

Network Technician

Black Cats Hr Consulting Private Limited
Bedarahalli, Bengaluru/Bangalore
₹25,000 - ₹33,000
Field Job
Full Time
Min. 1 year
Basic English

Senior Technical Engineer

M/s Pranag Datalinks
Bengaluru/Bangalore
₹20,000 - ₹29,000*
Field Job
Full Time
Min. 2 years
Good (Intermediate / Advanced) English

Field Installation Engineer

Airte
Bengaluru/Bangalore
₹17,200 - ₹27,000
Field Job
Full Time
Any experience
No English Required
Ciel Hr

Technical Engineer

Ciel Hr
White Field, Bengaluru/Bangalore
₹23,000 - ₹25,000
Work from Office
Full Time
Freshers only
Good (Intermediate / Advanced) English

Technical Associate

Pragathi It Solutions
Peenya, Bengaluru/Bangalore
₹18,000 - ₹22,000
Work from Office
Full Time
Night Shift
Freshers only
Basic English
Cybersecurity AI Platform Engineer in Eli Lilly and Company India Pvt Ltd | apna.co