Posted:
3/19/2025, 1:43:53 PM
Experience Level(s):
Senior
Field(s):
Data & Analytics
Important Information:
Years of Experience: 5+ years
Job Mode: Full-time
Work Mode: Remote
We are looking for an experienced Data Engineer with strong expertise in stream processing technologies, particularly Apache Flink and Kafka Connect, to join our Data Engineering team. In this role, you will be responsible for designing, implementing, and maintaining scalable data pipelines that process both real-time and batch data, enabling critical business insights and data-driven decision-making.
Responsibilities:
Design, develop, and optimize data pipelines using Apache Flink for stream and batch processing.
Implement and maintain Kafka Connect connectors for seamless data integration.
Build and maintain data infrastructure on Google Cloud Platform (GCP).
Design and optimize BigQuery tables, views, and stored procedures.
Collaborate with product managers and analysts to understand data requirements.
Ensure data quality, reliability, and proper governance.
Troubleshoot and resolve data pipeline issues.
Document data flows, transformations, and architecture.
Requirements:
Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field.
3+ years of experience as a Data Engineer.
Strong experience with Apache Flink, including knowledge of the DataStream and Table APIs.
Experience with Kafka and Kafka Connect for data integration.
Proficiency in containerization (Docker) and orchestration (Kubernetes).
Experience with cloud platforms such as AWS or GCP.
Solid SQL skills, particularly with the BigQuery SQL dialect.
Experience with data modeling and schema design.
Proficient in at least one programming language (Java or Python).
Familiarity with version control systems (Git) and CI/CD pipelines.
Preferred Qualifications:
Master’s degree in Computer Science or a related field.
Hands-on experience managing and configuring Kafka infrastructure.
Experience with other stream processing frameworks (Spark Streaming, Kafka Streams).
Knowledge of data governance and security best practices.
Proficiency with Google Cloud Platform (GCP) services, including Google Cloud Storage, Dataflow, Pub/Sub, and BigQuery.
Experience with Apache Airflow.
Familiarity with BI tools such as Tableau or Power BI.
Understanding of monitoring tools like Prometheus and Grafana.
Google Cloud Professional Data Engineer certification.
Technical Stack:
Stream Processing: Apache Flink, Kafka Streams, Spark Streaming.
Data Integration: Kafka, Kafka Connect.
Cloud Platforms: GCP, AWS.
Storage & Data Warehousing: BigQuery, Google Cloud Storage.
Orchestration & Workflow Management: Apache Airflow, Kubernetes.
Programming Languages: Java, Python.
Infrastructure & DevOps: Docker, Git, CI/CD, Terraform.
Soft Skills:
Strong problem-solving skills and analytical thinking.
Excellent communication skills, both verbal and written.
Ability to work collaboratively in cross-functional teams.
Self-motivated with the ability to work independently.
Attention to detail and commitment to delivering high-quality work.
About Encora:
Encora is the preferred digital engineering and modernization partner of some of the world’s leading enterprises and digital-native companies. With over 9,000 experts in 47+ offices and innovation labs worldwide, Encora’s technology practices include Product Engineering & Development, Cloud Services, Quality Engineering, DevSecOps, Data & Analytics, Digital Experience, Cybersecurity, and AI & LLM Engineering.
At Encora, we hire professionals based solely on their skills and qualifications, and do not discriminate based on age, disability, religion, gender, sexual orientation, socioeconomic status, or nationality.
Website: https://encora.com/
Headquarter Location: Scottsdale, Arizona, United States
Employee Count: 10001+
Year Founded: 2003
IPO Status: Private
Last Funding Type: Private Equity
Industries: Big Data ⋅ Cloud Computing ⋅ Software