Senior AWS Data Engineer

Posted:
1/13/2026, 9:49:05 PM

Location(s):
Bengaluru, Karnataka, India

Experience Level(s):
Senior

Field(s):
Data & Analytics

Job Description: Senior AWS Data Engineer

Skill Set: AWS, Big Data Engineering, ETL/ELT, Python, Spark, Data Pipelines, CI/CD, DataOps

Overview

We are seeking a highly skilled Senior AWS Data Engineer to support data platform modernization and analytical initiatives. The ideal candidate will have strong hands-on expertise in building scalable data pipelines on AWS, a solid understanding of data engineering best practices, and the ability to work in an Agile, product-focused environment.

Key Responsibilities

  • Design, develop, and maintain scalable ETL/ELT data pipelines on AWS

  • Build and optimize data ingestion frameworks using AWS Glue, Lambda, Step Functions, EMR, Kinesis, or equivalent services

  • Develop high-quality, reusable data components using Python, PySpark, or Spark

  • Work closely with architects and product teams to design data models and data flow solutions

  • Implement data quality checks, validation rules, and automated monitoring frameworks

  • Optimize data pipelines for performance, scalability, and cost efficiency

  • Support CI/CD and DataOps practices to streamline deployment and delivery

  • Collaborate with cross-functional teams to ensure seamless integration across data platforms

  • Perform root cause analysis for data issues, production defects, and pipeline failures

  • Follow best practices for security, encryption, logging, and governance in an enterprise environment

  • Document data pipelines, workflows, and mapping specifications clearly and thoroughly

Required Skills & Experience

  • 7+ years of experience in data engineering or big data development

  • Strong hands-on experience with AWS data services, ideally including:

    • AWS Glue

    • AWS Lambda

    • AWS S3

    • AWS EMR

    • AWS Step Functions

    • AWS Redshift / Aurora / DynamoDB

  • Proficiency in Python, PySpark, and distributed data processing

  • Experience with Spark, DataFrames, and performance tuning

  • Strong understanding of ETL/ELT design patterns and data modeling

  • Experience with data orchestration tools (Airflow, Step Functions, or similar)

  • Familiarity with CI/CD tools (GitHub Actions, Jenkins, CodePipeline)

  • Knowledge of data testing, data validation, and quality frameworks

  • Strong problem-solving skills and ability to work in complex enterprise environments

  • Excellent communication, documentation, and stakeholder management skills

SYNECHRON’S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity, and an environment that is respectful to all. We strongly believe that, as a global company, a diverse workforce helps us build stronger, more successful businesses. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and abilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.


All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.
