Principal Data Engineer

Posted:
1/27/2026, 2:08:20 PM

Location(s):
Texas, United States ⋅ Westlake, Texas, United States

Experience Level(s):
Expert or higher ⋅ Senior

Field(s):
Data & Analytics

Workplace Type:
On-site

Job Description:

Position Description:

Produces scalable, resilient, cloud-based system designs using multiple methodologies, including data warehousing, data visualization, and data integration. Simplifies Online Transaction Processing (OLTP) using relational database technologies (Oracle SQL and PL/SQL) and Snowflake. Constructs and compares models using data modeling tools and data ingestion tool sets, including Apache NiFi and Kafka. Sets up reliable infrastructure for data-related tasks, particularly streaming analytics with Kafka. Writes SQL queries in Oracle/Snowflake and optimizes their performance for large datasets. Develops Extract, Transform, Load (ETL) and Extract, Load, Transform (ELT) pipelines to move data to and from the Snowflake data store using Python, Amazon Web Services (AWS), and Snowflake. Works closely with business units and architects to gather requirements and to plan, design, develop, and deploy on-premises and cloud-based applications. Designs, develops, and modifies software systems, using scientific analysis and mathematical models to predict and measure the outcomes and consequences of design.

Primary Responsibilities:

  • Determines system performance standards.
  • Assists in establishing and maintaining industry standards in systems and security.
  • Monitors functioning of equipment to ensure system operates in conformance with specifications.
  • Crafts and implements operational data stores and data lakes in a production environment.
  • Analyzes information to determine, recommend, and plan installation of a new system or modification of an existing system.
  • Confers with systems analysts and other software engineers/developers to design systems and obtain information on project limitations and capabilities, performance requirements, and interfaces.
  • Develops and maintains software system testing and validation procedures, programming, and documentation.
  • Analyzes business requirements and delineates possible roadmaps and milestone plans to achieve the desired strategic initiatives.
  • Collaborates with various business units to fulfill their needs and those of their customers; provides technical support such as application and framework development and data management solution design and implementation.
  • Develops ingestion and transformation frameworks to establish data pipelines that collect logs and surface metadata for the consumption layer.
  • Provides exploratory analysis and framework development, product development enhancements, and platform and infrastructure solutions and support.
  • Delivers actionable insights to business units through data convergence and lowers the cost of innovation.

Education and Experience:

Bachelor’s degree in Computer Science, Engineering, Information Technology, Information Systems, Information Science, or a closely related field (or foreign education equivalent) and five (5) years of experience as a Principal Data Engineer (or closely related occupation) supporting analytical platforms in a scalable, high-volume, cloud-based environment by building data pipelines and developing data movement, reporting, and web applications using AWS Cloud Platforms, Python, and Confluent Kafka.

Or, alternatively, Master’s degree in Computer Science, Engineering, Information Technology, Information Systems, Information Science, or a closely related field (or foreign education equivalent) and three (3) years of experience as a Principal Data Engineer (or closely related occupation) supporting analytical platforms in a scalable, high-volume, cloud-based environment by building data pipelines and developing data movement, reporting, and web applications using AWS Cloud Platforms, Python, and Confluent Kafka.

Skills and Knowledge:

Candidate must also possess:

  • Demonstrated Expertise (“DE”) designing, building, and deploying Artificial Intelligence (AI) and Machine Learning (ML) model engineering pipelines, using AWS Kubernetes, Jenkins Core, Stash, Artifactory, ModelOp, and Docker, to support multiple legal, risk, and compliance models.
  • DE implementing data components (databases, tables, views, types, stored procedures, functions, roles, and queries) for legal, compliance, risk, and security applications on relational databases (DB2 and Oracle), using SQL and PL/SQL queries.
  • DE designing and developing event-based system integration frameworks for legal, compliance, risk, and security applications, using AWS Lambda, Confluent Kafka, SQS, Control-M, and Spring Schedulers.

Category:

Information Technology

Most roles at Fidelity are Hybrid, requiring associates to work onsite every other week (all business days, M-F) in a Fidelity office. This does not apply to Remote or fully Onsite roles.

Please be advised that Fidelity’s business is governed by the provisions of the Securities Exchange Act of 1934, the Investment Advisers Act of 1940, the Investment Company Act of 1940, ERISA, numerous state laws governing securities, investment and retirement-related financial activities and the rules and regulations of numerous self-regulatory organizations, including FINRA, among others. Those laws and regulations may restrict Fidelity from hiring and/or associating with individuals with certain Criminal Histories.

Fidelity Investments

Website: https://www.fidelity.com/

Headquarter Location: Boston, Massachusetts, United States

Employee Count: 10001+

Year Founded: 1946

IPO Status: Private

Last Funding Type: Secondary Market

Industries: Asset Management ⋅ Finance ⋅ Financial Services ⋅ Retirement ⋅ Wealth Management