Data Engineer - FanDuel

Posted:
5/26/2024, 5:00:00 PM

Location(s):
Cluj-Napoca, Romania

Experience Level(s):
Junior ⋅ Mid Level ⋅ Senior

Field(s):
Data & Analytics

Workplace Type:
Hybrid

Data Engineer Mid 1

We are looking for a Data Engineer to join our Data Tribe.
This role follows a hybrid approach to working, allowing you to combine working from home with time in our modern offices. You and your manager will agree on the pattern that works best for you both. We will kit you out to work from home, but working as a team is what makes us great, and spending quality time together is essential for keeping us mission-aligned.
Why we need you: 
Our competitive edge is enabled by the ability to leverage accurate and timely data for informed decision-making. We are seeking a Data Engineer to advance FanDuel’s data capabilities and contribute to the further growth and maturity of our Data Team.
As a Data Engineer, you will be a valued member of the team, maintaining exceptional standards and serving the needs of data consumers across the company. You will provide second-tier engineering support to keep our data infrastructure healthy while contributing to the delivery of reliable, timely, and consumable data for all company data consumers. The success of this team will elevate the capabilities of our Data Tribe, maximise the value of our data assets, and empower employees to innovate.
As well as your day-to-day responsibilities, you will be expected to take part in enhancing the core data pipelines, building out enhanced data validation, implementing a system for visualising engineering metrics, and both shaping and implementing infrastructure strategy.
You will have the opportunity to work in an agile environment where you will team up with colleagues having experience in cloud data warehousing, data analytics, data engineering, machine learning and more.
The successful candidate should have strong technical and problem-solving skills, a positive, results-driven attitude, and the communication skills to interact with both technical and non-technical people.
Key responsibilities include:
●    Proactively monitor internal channels and system dashboards for reported data or technology issues, escalating tier 2 or tier 3 tickets appropriately to ensure timely communication and resolution of production incidents within SLAs.
●    Provide on-call technical assistance to analysts and other data consumers, responding to inquiries relating to query optimization, reporting anomalies or complex data pipeline issues.
●    Proactively investigate and resolve data quality issues in response to system alerts or notifications from stakeholders.
●    Implement batch and real-time data pipelines into the data warehouse or data lake, using data transformation technologies.
●    Create and maintain pipelines and infrastructure documentation, including monitoring and troubleshooting steps and a knowledge base of solutions to recurring issues.
●    Drive operational best practices and shape workflows for routine consumer support, operations monitoring, incident management, and problem management.
●    Proactively identify workflow automation opportunities, optimise data delivery pipelines, and re-design infrastructure for greater scalability.
●    Assist in ensuring compliance of data services and systems with internal/external audits and control practices.
Who are we looking for:
We are looking for someone who has a passion for data, enjoys problem solving, and can identify process improvements. You must be a team player who is interested in learning all aspects of the Data Tribe's operations to provide top-notch support for all FanDuel data consumers as well as internal Data staff.
Preferred skills:
●    Strong problem-solving skills, with the ability to get to the root causes of issues and implement effective solutions.
●    A customer-oriented mindset, with a focus on delivering high-quality support services and attention to detail.
●    Intermediate SQL knowledge.
●    Experience programming in languages such as Python, Java or R.
●    Experience with dimensional data modelling.
●    Experience with batch data ingestion.
●    Understanding of data warehousing and ETL/ELT processing.
●    Experience with cloud databases, such as AWS Redshift or Databricks.
●    Exceptional communication and interpersonal skills.
●    Exposure to orchestration and monitoring tools, such as Airflow, Datadog or Databand.
●    Familiarity with data visualisation tools, such as Tableau or Looker.
Desired skills:
●    Experience with data streaming patterns.
●    Experience with ETL/ELT tools, such as dbt.
●    Exposure to database design & performance analysis.
●    Exposure to Data Quality Metrics & Unit Testing patterns.
●    Exposure to Continuous Integration / Continuous Delivery tools, such as Buildkite or GitHub Actions.

On-call: serving on-call is required one weekend per month.

What you can expect:

  • 25 days of annual leave
  • ShareSave scheme and "Flexible Benefits" of your choice
  • Private health insurance (includes dental insurance and health assessments)
  • Excellent development opportunities, including thousands of online courses through Udemy
  • Working from home options

We thank all applicants for their interest; however, only suitable candidates will be contacted for an interview. By submitting your application online, you agree that your details will be used to progress your application for employment. If your application is successful, your details will be used to administer your personnel record. If your application is unsuccessful, we will retain your details for no longer than two years in order to consider you for prospective roles within our company.