As a Data Engineer, you will enable the business by providing data, reporting, and analytics solutions, developing and supporting data ingestion and transformation in the Azure Enterprise Data & Analytics Platform (EDAP).
What will you be doing?
- Designing, developing, and maintaining scalable data pipelines using Databricks, ensuring robust data processing workflows for cleaning, transforming, and aggregating data.
- Implementing stringent measures to ensure data quality, governance, and compliance within the data lake, leveraging Databricks lifecycle management to maintain predictive models.
- Deploying and managing data solutions using Azure cloud platforms, optimising data and platform performance for maximum efficiency.
- Working closely with various business units, data analysts, and data scientists to understand and meet diverse data needs, facilitating the integration and deployment of AI and ML models.
- Demonstrating the ability to anticipate and balance the needs of multiple stakeholders, building partnerships and working collaboratively to achieve shared objectives.
- Developing new and innovative strategies to enhance organisational success, handling complex and sometimes contradictory information to solve problems effectively, and focusing on building member relationships while delivering customer-centric solutions.
What are we looking for in you?
- Exposure to a modern data engineering toolset, preferably Databricks, with an interest in developing your expertise in building and maintaining data pipelines, operational processes, and predictive models. A solid grounding in data architecture, data modelling, and data lake concepts, along with a Databricks certification, will be highly regarded.
- Understanding and practical use of Unity Catalog and Delta Tables, particularly for managing secure data access and optimising performance within Databricks environments.
- Familiarity with modern data and AI technologies such as Synapse, MLOps, LLM frameworks, or Generative AI. Willingness to learn new tools and work effectively in cross-functional teams is essential.
- Strong foundation in SQL, Python, and PySpark, with experience applying these skills to BAU and project activities.
- Familiarity with various cloud platforms, such as GCP, Azure, and AWS, is preferred. Experience with Azure services, including Data Lake, Blob Storage, and SQL-based solutions, is also desirable.
- Experience using GitHub or similar for version control, with some exposure to CI/CD pipelines, ETL processes, or real-time data integration.
- Understanding of Data Quality and Data Governance practices, and willingness to expand capability in these areas.
- Experience with business intelligence and reporting tools, preferably Power BI, as well as exposure to APIs and tools such as Postman for testing and integration.
- A proactive and collaborative mindset, with strong problem-solving skills and a keen eye for detail.
What can we offer you?
- Working in an environment that embraces a continuous improvement culture, using experimentation to support learning.
- Time dedicated to your personal development, with access to training, development, and certification programs to ensure you can upskill.
- Being part of empowered, cross-functional agile delivery teams that work with the business to build systems that solve problems.
- Complete flexibility to choose between working from home or the office, with flexible hours for a better work-life balance.
- Building digital systems that will shape the future for the RAC and a better WA.
- RAC staff benefits, including free Roadside Assistance, a 25% discount on your insurance, a Social Club, and access to Fitness Passport for cheaper gym memberships.
About RAC
As an Equal Opportunity Employer, RAC values inclusivity and promotes a workplace that actively seeks to welcome contributions from all people. If you need assistance or adjustments to fully participate in the application process, please contact [email protected].
#BetterJourneysRAC #LI-JC1