Senior Data Engineer

Posted:
2/11/2026, 4:55:04 PM

Location(s):
Karnataka, India

Experience Level(s):
Senior

Field(s):
Data & Analytics


Job Description

About the role:

As a Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda’s dashboards and reporting, providing insights to stakeholders throughout the business.

In this role, you will be part of the Digital Insights and Analytics team. This team drives business insights by adopting and implementing data engineering best practices to analyse and interpret the organization's data and draw conclusions about information and trends. You will work closely with the Tech Delivery Lead and the Data Engineer team located in India and the US, and this role will align to the Data & Analytics chapter of the ICC.

This position will be part of the PDT Business Intelligence pod and will report to the Data Engineering Lead.

How you will contribute:

  • Develop and maintain scalable data pipelines in line with ETL principles, and build out new integrations using AWS/Azure native technologies, to support continuing increases in data sources, volume, and complexity.

  • Define data requirements, gather and mine data, and validate the efficiency of data tools in the big data environment.

  • Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.

  • Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.

  • Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.

  • Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.

  • Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.

  • Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions.

  • Leverage AI/ML and Generative AI capabilities within data engineering workflows to enhance data quality, automate pipeline optimization, enable intelligent data discovery, and support advanced analytics use cases.

Minimum Requirements/Qualifications:

  • Bachelor's degree in Engineering, Computer Science, Data Science, or a related field

  • 5-7 years of experience in software development, data science, data engineering, ETL, and analytics reporting development

  • Experience in building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines.

  • Experience in designing and developing ETL pipelines

  • Proven track record of designing and implementing complex data solutions

  • Demonstrated understanding and experience using:

    • Data engineering programming languages (e.g., Python, SQL)

    • Distributed Data Framework (e.g., Spark)

    • Cloud platform services (AWS/ Azure preferred)

    • Relational Databases

    • DevOps and continuous integration

    • AWS services such as Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services

    • Azure services such as ADF and ADLS

    • Knowledge of data lakes and data warehouses

    • Databricks/Delta Lakehouse architecture

    • Code management platforms such as GitHub or GitLab

  • Understanding of database architecture, data modelling concepts, and administration.

  • Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines.

  • Ability to apply continuous integration and delivery principles to automate the deployment of code changes to higher environments, improving code quality, test coverage, and the automation of resilient test cases.

  • Proficiency in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architectures and pipelines that fit business goals.

  • Strong organizational skills, with the ability to work on multiple projects simultaneously and operate as a leading member of globally distributed teams to deliver high-quality services and solutions.

  • Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners

  • Strong problem solving and troubleshooting skills

  • Ability to work in a fast-paced environment and adapt to changing business priorities

Preferred requirements:

  • Master's degree in Engineering with a specialization in Computer Science, Data Science, or a related field

  • Demonstrated understanding and experience using:

    • Knowledge of CDK

    • Experience with the IICS data integration tool

    • Job orchestration tools such as Tidal, Airflow, or similar

    • Knowledge of NoSQL databases

  • Proficiency in leveraging Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous.

  • Databricks Certified Data Engineer Associate

  • AWS/Azure Certified Data Engineer

Locations

IND - Bengaluru

Worker Type

Employee

Worker Sub-Type

Regular

Time Type

Full time