Data Engineer

Posted:
9/5/2024, 9:31:35 AM

Location(s):
Toronto, Ontario, Canada ⋅ Ontario, Canada

Experience Level(s):
Junior ⋅ Mid Level ⋅ Senior

Field(s):
Data & Analytics

Workplace Type:
Hybrid

Job Description:

Here at Rakuten Kobo Inc., we offer a casual start-up working environment and a group of friendly and talented individuals. Our employees rank us highly for our commitment to work/life balance. We realize that for our people to be innovative, creative, and passionate, they need to feel valued and supported. We believe in rewarding all our employees with competitive salaries, performance-based annual bonuses, stock options, and training opportunities.  

If you’re looking for a company that inspires passion as well as personal and professional growth, join Kobo and come help us on our mission of making reading lives better.

About the Data Engineering team

The Data Engineering team at Kobo is responsible for extracting raw data from a variety of sources into the Kobo ecosystem, transforming it, and providing it to stakeholders for further use. We use a modern tech stack to efficiently handle data sources of different kinds, work closely with other Kobo teams as well as external parties, and focus on building robust, scalable solutions for the large datasets we work with. We continually strive to unlock more value from our data sources and to enable other Kobo teams to harness their full potential in decision making and product building.

Responsibilities:

  • Create & document efficient data pipelines (ETL/ELT) that move raw data from a variety of sources, through transformation, to cleaned datasets in cloud services.
  • Write and optimize complex transformation operations on large data sets.
  • Communicate with business stakeholders, then transform datasets and map them to business-friendly datasets for consumption.
  • Work with other teams (Finance, Marketing, Customer Support, etc.) to gather requirements, enable them to access curated datasets, and empower them to understand and contribute to our data processes.
  • Design and implement data retrieval, storage, and transformation systems at scale.
  • Understand and implement data lineage, data governance, and data quality practices.
  • Create tooling to help with day-to-day tasks.
  • Exhibit ownership over data quality from end-to-end.
  • Introduce new tools and technologies to teammates through research and proofs of concept (POCs).
     

The Ideal Candidate:

  • One who takes ownership and accountability for their work and the needs of their team.
  • A generalist willing to jump on any problem; no level of work is beneath them.
  • A problem solver at heart.
  • Skilled in failing fast, iterating, and improving.
  • A believer in automation and a reducer of toil.
  • Self-motivated to continually learn, improve, and be better.
  • One who enjoys sharing knowledge & mentoring other team members.
  • Highly adaptable to change, fun, and supportive.
  • Motivated, creative, and organized, with attention to detail.
     

Must Haves:

  • Advanced experience (~5 years) with Python and SQL.
  • Experience with dbt or Dataform.
  • Experience with RabbitMQ and Kubernetes.
  • Experience working in hybrid environments (GCP/Azure/AWS and on-prem), specifically with storage, database, and distributed processing services.
  • Experience working with REST APIs and Python data tools (Pandas/SQLAlchemy).
  • Experience building ETL and data pipelines with an orchestration tool (Airflow/Dagster/Prefect).
  • Ability to read and understand existing code/scripts/logic.
  • Experience with an IaC tool (Terraform).
  • Comfortable using a terminal (SSH/PowerShell/Bash) and with code versioning and branching in Git.
  • Experience managing a project backlog and working cross-functionally with multiple stakeholders.
  • Ability to work effectively on a self-organizing team with minimal supervision.
  • Initiative in communicating with co-workers, asking questions and learning.
  • Excellent oral and written communication skills.
     

Nice to Haves:

  • Experience with CI/CD tools (Jenkins/Github/Gitlab).
  • Experience with stream-processing systems (Storm/Flume/Kafka).
  • Experience with schema design and dimensional data modeling.
  • Experience working in an Agile environment.
     

The Perks: 

  • Flexible hours and working environment  
  • 4 extended summer long weekends 
  • Full benefits starting from your first day  
  • Paid Volunteer days, unlimited sick days, and 3% RRSP matching  
  • Monthly commuting allowance for hybrid employees  
  • Flexible health spending account  
  • Training budget + Udemy account  
  • Free Kobo device + free weekly e-book or audiobook  
  • Weekly Kobo Tech University sessions  
  • Maternity/paternity leave top-up  
  • 90 Day Work from Anywhere program  
  • Daily lunch credit when in-office and in-office snacks  
  • Dog friendly office 

About Rakuten Kobo Inc.  
Owned by Tokyo-based Rakuten and headquartered in Toronto, Rakuten Kobo Inc. is one of the most advanced global e-commerce companies, with the world’s most innovative eReading services offering more than 6 million eBooks and audiobooks to 30 million+ customers in 190 countries. Kobo delivers the best digital reading experience through creative innovation, award-winning eReaders, and top-ranking mobile apps. Kobo is a part of the Rakuten group of companies.  

Rakuten Kobo Inc. is an equal opportunity employer. Accessibility accommodations for candidates with disabilities participating in the selection process are available on request. Any information received related to accommodation needs of applicants will be addressed confidentially.  

Rakuten Kobo would like to thank all applicants for their interest in this role; however, only qualified candidates will be shortlisted. 

#RKI

Kobo

Website: https://www.kobo.com/

Headquarter Location: Toronto, Ontario, Canada

Employee Count: 251-500

Year Founded: 2009

IPO Status: Private

Last Funding Type: Series C

Industries: E-Commerce ⋅ Electrical Distribution ⋅ Electronics ⋅ Internet ⋅ News ⋅ Retail ⋅ Software