Job Summary
We are seeking a highly skilled Senior Data Engineer to join the data engineering team in Nike’s Consumer Product and Innovation (CP&I) organization. In this role, you will design, build, and maintain scalable data pipelines and analytics solutions, playing a key role in ensuring that our data products are robust and capable of supporting our Advanced Analytics and Business Intelligence initiatives. You will report to the Engineering Director and be part of a team that is building a cross-capability data foundation, defining and implementing data products that deliver analytics solutions to drive business growth for Nike.
Key Responsibilities
- Design, build, and maintain robust ETL/ELT data pipelines, reusable components, frameworks, and libraries to process data from a variety of sources, ensuring data quality and consistency.
- Collaborate with data engineers, analysts, product managers, and business stakeholders to understand data requirements, translate them into technical specifications, and deliver data solutions that drive decision-making.
- Participate in code reviews, provide feedback, and contribute to continuous improvement of the team's coding practices.
- Identify and resolve data management issues to improve data quality.
- Monitor and troubleshoot data pipelines, ensuring high availability and performance.
- Implement CI/CD pipelines to automate deployment and testing of data engineering workflows.
Required Qualifications
- Bachelor’s degree, or a combination of relevant education, training, and experience.
- 5+ years of experience in data engineering, including experience with data technology platforms such as Snowflake and the Databricks Lakehouse, and cloud technologies such as AWS, Azure, or GCP.
- Proven proficiency in SQL, Python, and PySpark.
- Solid understanding of Apache Spark and distributed computing frameworks, with hands-on experience optimizing Spark jobs for performance and scalability.
- Solid understanding of modern data and platform architectures, including medallion architecture, Delta Lake, egress/ingress and ETL/ELT methodologies.
- Strong data profiling, analysis and data modeling skills.
- Excellent problem-solving skills and the ability to design solutions for complex data challenges.
- Proficient in DevOps practices in a data engineering context, including CI/CD, automated testing, security administration, and workflow orchestration to ensure streamlined and efficient data product development and deployment processes.
- Proficient in Agile methodologies and best practices, including Scrum and Kanban.
- Excellent communication and interpersonal skills, with ability to mentor junior team members while collaborating with both business and technical audiences.
- Ability to break down technical concepts into simplified “business speak” for different audiences.
- Comfortable working in a fast-paced, highly matrixed organizational environment, with globally distributed and diverse teams.
Preferred Qualifications
- Experience with real-time data processing frameworks such as Apache Kafka.
- Knowledge of Generative AI and Machine Learning pipelines and integrating them into production environments.
- Certification in Databricks (e.g., Databricks Certified Data Engineer, Databricks Certified Developer for Apache Spark, etc.).
We are committed to fostering a diverse and inclusive environment for all employees and job applicants. We offer a number of accommodations to complete our interview process, including screen readers, sign language interpreters, accessible single-location in-person interviews, closed captioning, and other reasonable modifications as needed. If you discover, as you navigate our application process, that you need assistance or an accommodation due to a disability, please complete the Candidate Accommodation Request Form.