Tech Lead – Big Data Engineering
Country: Spain
Role: Tech Lead – Big Data Engineering
Responsibilities:
- Lead the development and delivery of scalable, high-performance Big Data software components, leveraging technologies such as Spark, Hive, Scala, Airflow, Kibana, and Databricks.
- Coordinate day-to-day engineering activities within the Big Data team, ensuring timely and high-quality delivery of data pipelines, ingestion frameworks, data platform procedures and analytical solutions.
- Collaborate closely with cross-functional teams, including Data Ingestion squads, Centers of Excellence (CoE), the Chief Data Office (CDO), the Chief Technology Office (CTO), platform support, and production teams, to translate business requirements into working technical solutions and measure the potential impact of each change.
- Implement best practices for coding, testing, CI/CD, and documentation within the Big Data environment, also leveraging the AI tools provided by the company.
- Drive operational excellence by monitoring system performance, troubleshooting issues, and optimizing data processing workflows while mitigating potential risks.
- Ensure alignment of technical solutions with business strategy, especially in the CIB Banking Domain, through clear and data-driven communication with stakeholders and senior management.
- Mentor and guide developers, fostering a culture of ownership, innovation, and continuous improvement within the team.
Key Personal Traits:
- Strong analytical and problem-solving mindset, capable of understanding end-to-end Big Data ecosystems and identifying optimization opportunities within complex data pipelines.
- Hands-on leader with the ability to roll up sleeves, perform deep dives into code, data, and systems, and lead by example.
- Excellent communication and stakeholder management skills, with the ability to translate technical details into business value and to connect openly with other teams to understand the product.
- Solution-oriented and pragmatic, highly focused on continuous improvement and measurable delivery outcomes.
- Collaborative mindset, thriving in cross-functional and multicultural environments by zooming out on each case to identify all stakeholders and dependencies.
Qualifications and Experience:
- 5+ years of experience in data engineering and data warehousing, including experience in financial institutions and involvement across the entire data lifecycle.
- Proven experience with Big Data technologies (Hadoop, Spark, Hive, HDFS) and data orchestration tools (Airflow, Control-M), deployed across multiple environments in line with CI/CD standards.
- Strong programming skills in Scala, Java, and Python, and proficiency in SQL for data processing and analytics.
- Hands-on experience with data visualization tools (Kibana, Power BI) and data governance frameworks.
- Solid understanding of data management, metadata, lineage, and data quality principles.
- Experience leading technical teams and coordinating project delivery in agile or hybrid setups.
- Knowledge of modern data trends (streaming, cloud data platforms, ML pipelines) and the ability to apply them in semi-legacy, hybrid environments, with hands-on experience in Azure, AWS, and platforms such as Databricks.
- Fluent in English; Spanish or Portuguese is a plus.