Posted:
4/7/2026, 5:06:50 PM
Location(s):
Bengaluru, Karnataka, India
Experience Level(s):
Expert or higher ⋅ Senior
Field(s):
Data & Analytics
Workplace Type:
Hybrid
Job Title: Principal, Data Engineer
The Purpose of this Role
We are seeking a Principal Data Engineer with 10+ years of experience to architect and scale modern data platforms, lead cloud data engineering initiatives, and deliver robust, secure, and high-performance data solutions. We use data and analytics to personalize customer experiences and develop solutions that help our customers live the lives they want. As part of our digital transformation, we have made significant investments in cloud data lake platforms, and we are looking for a hands-on data engineer to help design and develop our next-generation Cloud Data Lake and Analytics Platform for Workplace Solutions.
The Expertise You Have
10+ years in Data Engineering / Big Data / Platform Engineering with end‑to‑end delivery of large enterprise programs.
Multi‑tenant database and data modeling expertise—designing tenant‑aware schemas and delivering efficient, robust database code and designs for websites and products.
Advanced SQL across Oracle, Azure SQL Managed Instance (SQL MI), MySQL, and Snowflake; strong performance tuning and query optimization.
Hands‑on with cloud data platforms: Azure (preferred), including big data services and integrations.
Deep experience building ETL/ELT pipelines with Informatica, SnapLogic, Azure Functions, and Python, targeting Snowflake and other cloud data warehouses (a minimal Python sketch follows this list).
Proven track record designing, developing, and deploying batch data processing pipelines and feature generation platforms using Azure, Python, and SQL.
Strong DataOps/CI/CD for data and code: GitHub‑based workflows, automated database releases, and source control best practices across the SDLC.
Experience with event streaming platforms: Azure Event Hubs, Kafka, or Amazon Kinesis (see the consumer sketch after this list).
Data architecture design & documentation using Confluence and draw.io; metadata management and lineage.
Production excellence in financial services / benefits administration—triaging incidents, root cause analysis (logs/reports), and minimizing business impact.
Experience leading or mentoring teams on business‑critical data platform solutions; strong stakeholder communication.
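To ground the pipeline expectations above, here is a minimal batch ETL sketch in Python, assuming the snowflake-connector-python package with its pandas extras; the connection parameters, file path, and table name are placeholders, not details of this role's actual stack.

```python
"""Minimal batch ETL sketch: extract a CSV, transform with pandas,
load into Snowflake. Illustrative only; credentials and names below
are placeholders."""
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

def run_daily_load(csv_path: str) -> None:
    # Extract: read the day's raw file.
    df = pd.read_csv(csv_path)

    # Transform: drop rows missing the key and stamp the load date.
    df = df.dropna(subset=["ACCOUNT_ID"])
    df["LOAD_DATE"] = pd.Timestamp.now(tz="UTC").date()

    # Load: append into a Snowflake staging table.
    conn = snowflake.connector.connect(
        account="<account>", user="<user>", password="<password>",
        warehouse="<warehouse>", database="<database>", schema="STAGING",
    )
    try:
        write_pandas(conn, df, table_name="DAILY_ACCOUNTS",
                     auto_create_table=True)
    finally:
        conn.close()
```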
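And a minimal streaming counterpart, assuming the kafka-python client; the topic, brokers, and message shape are hypothetical, and an Azure Event Hubs consumer would follow the same receive-loop pattern via the azure-eventhub SDK.

```python
"""Minimal streaming-consumer sketch using kafka-python.
Topic, brokers, and message fields are placeholders."""
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "account-events",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # placeholder brokers
    group_id="data-platform-loader",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real pipeline would land this in a lake zone or staging table;
    # here we only acknowledge receipt.
    print(f"offset={message.offset} account_id={event.get('account_id')}")
```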
The Skills That Are Key to This Role
Translate complex business and analytical needs into scalable, resilient data architectures and pipelines.
Design and implement high‑performance distributed pipelines (batch & streaming) with strong SLAs, monitoring, and cost efficiency.
Build lakehouse and warehouse solutions (e.g., Delta/Iceberg/Hudi + Snowflake) with robust semantic/data models.
End‑to‑end ownership: requirements → architecture → build → deploy → monitor → optimize.
ETL/ELT excellence with Informatica, SnapLogic, Azure Data Factory, Azure Functions, Python, and Databricks/Spark.
Azure platform proficiency, including:
Azure Data Factory, Azure Databricks, AKS, Azure Service Bus (ASB), API Management, Storage Accounts, Event Hubs/Event Grid, Azure Cache for Redis.
Strong RDBMS/NoSQL experience: Oracle, SQL Server/SQL MI, MySQL, Postgres, Cosmos DB, MongoDB.
DataOps/DevOps: CI/CD pipelines (GitHub Actions/Azure DevOps), unit/integration tests, automated data quality, and infrastructure‑as‑code (Bicep/ARM/Terraform).
Data quality & observability (e.g., Great Expectations/Deequ/Monte Carlo), schema evolution, partitioning, and performance tuning (an illustrative quality-gate sketch follows this list).
Build tools and frameworks that turn pipelines into actionable insights for key business KPIs; partner with BI/analytics teams.
Strong Agile delivery—driving epics, stories, and tasks using Agile/Scrum and Jira; excellent written and verbal communication.
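A hand-rolled illustration of the automated data-quality gates referenced in this list; frameworks such as Great Expectations or Deequ formalize the same checks declaratively. Column names and the freshness threshold are hypothetical.

```python
"""Illustrative data-quality gate for a pipeline step.
Column names and thresholds are hypothetical."""
import pandas as pd

def validate_accounts(df: pd.DataFrame) -> list[str]:
    failures = []
    # Completeness and uniqueness of the primary key.
    if df["account_id"].isna().any():
        failures.append("account_id contains nulls")
    if df["account_id"].duplicated().any():
        failures.append("account_id contains duplicates")
    # Validity: balances must be non-negative.
    if (df["balance"] < 0).any():
        failures.append("balance has negative values")
    # Freshness: the latest load_date should be under a day old.
    age = pd.Timestamp.now() - pd.to_datetime(df["load_date"]).max()
    if age.days > 1:
        failures.append("data is stale (> 1 day old)")
    return failures

if __name__ == "__main__":
    sample = pd.DataFrame({
        "account_id": [1, 2, 2],
        "balance": [100.0, -5.0, 20.0],
        "load_date": [pd.Timestamp.now().normalize()] * 3,
    })
    for failure in validate_accounts(sample):
        print("FAILED:", failure)  # flags the duplicate id and negative balance
```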
Good to Have Skills for This Role
dbt for modular SQL development, testing, and documentation.
Mainframe/enterprise integrations (e.g., Control‑M, DB2, CICS) and hybrid/cloud connectivity patterns.
Advanced Spark optimization and distributed compute internals (see the join-optimization sketch after this list).
Containerization/Kubernetes for data workloads and platform services.
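As one concrete instance of the Spark tuning mentioned above, a broadcast-join sketch in PySpark: hinting Spark to broadcast a small dimension table replaces a shuffle-heavy sort-merge join with a map-side hash join. The input paths and join key are placeholders.

```python
"""Broadcast-join sketch in PySpark; paths and key are placeholders."""
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-opt-sketch").getOrCreate()

facts = spark.read.parquet("/data/facts")  # large fact table
dims = spark.read.parquet("/data/dims")    # small dimension table

# Broadcasting the small side avoids shuffling the large table,
# often a major win when `dims` fits in executor memory.
joined = facts.join(broadcast(dims), on="dim_id", how="left")
joined.write.mode("overwrite").parquet("/data/joined")

spark.stop()
```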
The Value You Deliver
Establish and scale a multi‑tenant, secure, and cost‑efficient data platform that powers analytics, ML, and real‑time applications.
Convert leadership vision into reference architectures, patterns, and platform roadmaps; influence data strategy across domains.
Standardize data modeling, metadata, and pipeline frameworks; raise engineering quality via code reviews, design forums, and mentorship.
Optimize compute/storage spend, improve pipeline reliability/throughput, and reduce manual toil through automation.
Build robust batch and streaming foundations with clear SLAs, lineage, observability, and self‑service enablement.
Ensure operational excellence—monitor, triage, conduct RCA, and continuously harden systems to protect business outcomes.
Deliver actionable, trusted data that accelerates decision‑making and KPI visibility across business units and products.
The Expertise We’re Looking For
10+ years of professional experience with multiple successful enterprise data platform deliveries.
Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or equivalent experience.
How Your Work Impacts the Organization
You will strengthen Workplace Investments (WI) by modernizing data platforms and enabling reliable, timely, and secure data for analytics, customer experiences, and operational excellence. Your work underpins retirement solutions, employer services, investor centers, and advisory capabilities—improving both business outcomes and customer trust.
Location & Shift
Location: Bangalore
Shift: 11:00 AM – 08:00 PM
Website: https://www.fidelity.com/
Headquarter Location: Boston, Massachusetts, United States
Employee Count: 10001+
Year Founded: 1946
IPO Status: Private
Last Funding Type: Secondary Market
Industries: Asset Management ⋅ Finance ⋅ Financial Services ⋅ Retirement ⋅ Wealth Management