Data Engineer III

Posted:
9/25/2024, 8:05:09 AM

Location(s):
Beaverton, Oregon, United States

Experience Level(s):
Mid Level ⋅ Senior

Field(s):
Data & Analytics

WHO YOU’LL WORK WITH

You will collaborate with the Engineering Manager, the Product Manager, other engineering team members, and a variety of dedicated Nike teammates. You will join a highly motivated team that is a driving force in building data and analytics solutions for the Consumer Product and Innovation organization in Nike Technology.

WHO WE ARE LOOKING FOR

We are seeking a highly skilled Senior Data Engineer to join our data engineering team in Nike’s Consumer Product and Innovation (CP&I) organization. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and analytics solutions. As a Senior Data Engineer, you will play a key role in ensuring that our data products are robust and capable of supporting our Advanced Analytics and Business Intelligence initiatives. You will report to the Engineering Director and be part of a team that is building a cross-capability data foundation, defining and implementing data products that deliver analytics solutions and drive business growth for Nike.

WHAT YOU BRING

  • Bachelor’s degree in computer science, engineering, or a related field, or equivalent experience.
  • Proven experience (5+ years) as a Data Engineer, with a focus on Python, PySpark, and SQL.
  • Strong expertise in Apache Spark and distributed computing frameworks, with hands-on experience optimizing Spark jobs for performance and scalability.
  • Proficiency in SQL, with the ability to write complex queries and perform data transformations.
  • Experience with the Databricks Lakehouse Platform, the Medallion architecture, and Delta Lake (a minimal example follows this list).
  • Experience working with AWS, including data-related services such as S3 and RDS.
  • Experience with data modeling, ETL/ELT processes, and data warehousing concepts.
  • Experience with CI/CD pipelines, version control (Git), and DevOps practices in a data engineering context.
  • Excellent problem-solving skills and the ability to design solutions for complex data challenges.
  • Ability to communicate effectively with team members and business stakeholders, both verbally and in written form.
  • Familiarity with real-time data processing frameworks such as Apache Kafka.
  • Knowledge of Generative AI and Machine Learning pipelines and experience integrating them into production environments.
  • Certification in Databricks (e.g., Databricks Certified Data Engineer, Databricks Certified Developer for Apache Spark).
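
The Databricks, Medallion architecture, and Delta Lake experience called out above centers on progressively refining raw ("bronze") data into cleaned ("silver") and curated ("gold") layers. Below is a minimal sketch of one such bronze-to-silver refinement step in PySpark on Databricks; the table names, columns, and filters are hypothetical placeholders rather than details from this posting.

```python
# Minimal sketch of a Medallion-style bronze -> silver refinement step using
# PySpark and Delta Lake on Databricks. All table and column names below
# (bronze.product_events, event_id, event_ts, ...) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # supplied automatically in a Databricks notebook

# Bronze layer: raw ingested records, assumed to already exist as a Delta table.
bronze = spark.read.table("bronze.product_events")

# Silver layer: typed, filtered, and deduplicated records.
silver = (
    bronze
    .filter(F.col("event_id").isNotNull())               # basic quality filter
    .withColumn("event_ts", F.to_timestamp("event_ts"))  # enforce a timestamp type
    .withColumn("event_date", F.to_date("event_ts"))     # derive a partition column
    .dropDuplicates(["event_id"])                        # drop replayed events
)

# Publish the refined table back to Delta, partitioned for downstream queries.
(
    silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("silver.product_events")
)
```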

WHAT YOU’LL WORK ON

  • Design, build, and maintain robust ETL/ELT data pipelines, reusable components, frameworks, and libraries to process data from a variety of sources, ensuring data quality and consistency (a minimal validation sketch follows this list).
  • Collaborate with data engineers, analysts, product managers, and business stakeholders to understand data requirements, translate them into technical specifications, and deliver data solutions that drive decision-making.
  • Participate in code reviews, provide feedback, and contribute to continuous improvement of the team's coding practices.
  • Identify and resolve data management issues to improve data quality.
  • Monitor and troubleshoot data pipelines, ensuring high availability and performance.
  • Implement CI/CD pipelines to automate deployment and testing of data engineering workflows.
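
To make the data quality and monitoring responsibilities above concrete, here is a minimal sketch of a validation gate that such a pipeline might run before publishing a table. The table name, key columns, and threshold are illustrative assumptions, not requirements stated in the posting.

```python
# Minimal sketch of a data quality gate run at the end of an ETL/ELT job.
# Table and column names (silver.product_events, event_id, event_ts) are
# hypothetical; the checks and threshold are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

def validate_table(table_name: str, max_null_fraction: float = 0.01) -> None:
    """Raise an error (failing the pipeline run) if basic quality checks do not pass."""
    df = spark.read.table(table_name)

    total = df.count()
    if total == 0:
        raise ValueError(f"{table_name} is empty; refusing to publish")

    # Check 1: the primary key must be unique.
    if df.select("event_id").distinct().count() != total:
        raise ValueError(f"{table_name} contains duplicate event_id values")

    # Check 2: critical columns must be (almost) fully populated.
    nulls = df.filter(F.col("event_ts").isNull()).count()
    if nulls / total > max_null_fraction:
        raise ValueError(f"{table_name} has too many null event_ts values")

validate_table("silver.product_events")
```

In a CI/CD setup, the same checks can run against a small fixture dataset in automated tests before a change is deployed.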