Job Posting Title:
Senior Data Engineer
Req ID:
10079783
Job Description:
Department/Group Overview:
On any given day at Disney Entertainment & ESPN Technology, we’re reimagining ways to create magical viewing experiences for the world’s most beloved stories while also transforming Disney’s media business for the future. Whether that’s evolving our streaming and digital products in new and immersive ways, powering worldwide advertising and distribution to maximize flexibility and efficiency, or delivering Disney’s unmatched entertainment and sports content, every day is a moment to make a difference to partners and to hundreds of millions of people around the world.
A few reasons why we think you’d love working for Disney Entertainment & ESPN Technology:
- Building the future of Disney’s media business: DE&E Technologists are designing and building the infrastructure that will power Disney’s media, advertising, and distribution businesses for years to come.
- Reach & Scale: The products and platforms this group builds and operates delight millions of consumers every minute of every day – from Disney+ and Hulu, to ABC News and Entertainment, to ESPN and ESPN+, and much more.
- Innovation: We develop and execute groundbreaking products and techniques that shape industry norms and enhance how audiences experience sports, entertainment & news.
Job Summary:
The Data Platforms Team, a segment under the Disney Entertainment & ESPN Technology (DEET) organization, is looking for a Sr. Data Engineer to join our Product Performance and Instrumentation Team. Product Performance data is crucial to powering the consumer-facing systems for all the brands and digital experiences owned and operated by the DEET portfolio. Product Performance data covers all aspects of the product life cycle, from conversion and onboarding funnels, to content merchandising performance data marts, to data that measures the quality of experience across our portfolio.
Our Product Performance and Instrumentation team is seeking a highly motivated Sr. Data Engineer with a strong technical background who is passionate about designing and building systems to process data at scale and provide foundational business value that unlocks capabilities across software and data disciplines. Our tech stack includes AWS, Databricks, Snowflake, Airflow, and Spark, and our languages include Scala, Python, SQL, and Java.
Responsibilities and Duties of the Role:
- Contribute to the architecture, design, and growth of our data products and data pipelines in Scala and Python/Spark while maintaining uptime SLAs.
- Design and develop scalable solutions, building ETL pipelines in Big Data environments (cloud, on-prem, hybrid).
- Implement the Lakehouse architecture, working with key partners to shift towards a Lakehouse-centric data platform.
- Work across our tech stack, which includes AWS, Snowflake, Spark, Databricks, Delta Lake, and Airflow, with Python and Scala as our primary languages.
- Collaborate with Data Product Managers, Data Architects, and Data Engineers to design, implement, and deliver successful data solutions.
- Maintain detailed documentation of your work and changes to support data quality and data governance.
- Ensure high operational efficiency and quality of your solutions to meet SLAs and support our commitment to our customers (Data Science and Data Analytics teams).
- Be an active participant and advocate of agile/scrum practice to ensure health and process improvements for your team.
- Be a problem solver; when presented with new challenges, you are expected to research and network to find solutions.
- Seek out answers to business problems and look for opportunities to automate processes and optimize cost.
- Engage with and understand our customers, forming relationships that allow us to understand and prioritize both innovative new offerings and incremental platform improvements.
Required Education, Experience/Skills/Training:
- At least 5 years of data engineering experience developing large data pipelines.
- Strong algorithmic problem-solving expertise.
- Strong SQL skills and ability to create queries to extract data and build performant datasets.
- Hands-on experience with distributed systems such as Spark and PySpark to query and process data.
- Strong fundamental Scala and/or Python programming skills.
- Strong hands-on experience with cloud technologies such as AWS (S3, EMR, EC2).
- Experience with at least one major MPP or cloud database technology (preferably Snowflake, Redshift, or BigQuery).
- Solid experience with data integration toolsets (e.g., Airflow) and with writing and maintaining data pipelines in Databricks.
- Experience with data modeling techniques and data warehousing standard methodologies and practices.
- Familiarity with Scrum and Agile methodologies.
- Demonstrated excellent interpersonal and communication skills, including the ability to partner with others and build consensus in a cross-functional team toward a desired outcome.
Required Education
- Bachelor’s degree in Computer Science, Information Systems or related field, or equivalent work experience
The hiring range for this position in Seattle is $142,600.00 - $191,100.00 per year. The base pay actually offered will take into account internal equity and may vary depending on the candidate’s geographic region, job-related knowledge, skills, and experience, among other factors. A bonus and/or long-term incentive units may be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits, dependent on the level and position offered.
Job Posting Segment:
Product & Data Engineering
Job Posting Primary Business:
PDE - Data Platform Engineering
Primary Job Posting Category:
Data Engineering
Employment Type:
Full time
Primary City, State, Region, Postal Code:
Seattle, WA, USA
Alternate City, State, Region, Postal Code:
Date Posted:
2024-02-15