Integration developer – Integration Competence Centre (ICC)

Posted:
4/27/2026, 4:10:12 PM

Location(s):
Žilina, Region of Žilina, Slovakia

Experience Level(s):
Mid Level ⋅ Senior

Field(s):
DevOps & Infrastructure ⋅ Software Engineering

About ICA & ICC

At ICA Gruppen, integration and platform capabilities are a core enabler for business development across retail, banking, pharmacy, and logistics.

The Integration Competence Centre (ICC) provides shared integration and platform services that enable product teams to build solutions in a secure, scalable, and standardized way.

About the Role: Integration Developer

As a Platform Engineer within ICC, you are responsible for building, operating, and continuously improving integration and platform capabilities that are used by multiple development teams.

The role focuses on integration platform stability, automation, self-service, and end-to-end operational responsibility (you build it – you run it). This position is centered on Kafka cluster administration and the automation of related management tasks, ensuring reliable operation and scalability of the Kafka infrastructure. While the position is not focused on daily application development, you are expected to participate in customer projects when needed.

Responsibilities

  • Architect and deliver Strimzi Kafka-related services with a strong focus on scalability, such as multi-tenant Kafka clusters, dynamic topic provisioning, and automated partition management to support fluctuating workloads and future growth.

  • Implement Kafka stream processing services for real-time data analytics, ensuring that resources scale automatically based on throughput and latency requirements.

  • Develop self-service APIs and CLI tools for developers to provision Kafka topics and schemas, enabling rapid onboarding and efficient scaling without manual intervention.

  • Establish managed Kafka Connectors for seamless integration between legacy systems, cloud platforms, and third-party data sources, supporting scalable data pipelines.

  • Set up advanced monitoring and alerting solutions for Kafka clusters using Prometheus, Grafana, and Splunk to proactively manage performance bottlenecks and ensure high availability.

  • Design and test high-availability (HA) and disaster recovery (DR) strategies for Kafka, including cross-region replication (MirrorMaker 2) and automated failover, to maintain platform resilience as usage scales.

  • Ensure stability, performance, scalability, and cost efficiency.

  • Implement and manage security controls (mTLS, SASL/SCRAM, RBAC) to secure data in transit and at rest and to enforce access control.

  • Enforce data governance standards through the management of Schema Registries (Avro, Protobuf, JSON Schema).

  • Manage secrets and sensitive configurations using tools like Azure Key Vault.

  • Perform regular cluster maintenance, such as version upgrades, security patching, and partition rebalancing, with zero downtime.

  • Conduct capacity planning by analyzing throughput trends to proactively scale broker storage and compute.

  • Architect reusable integration patterns (request-response vs. event-driven) to guide microservice communication.

  • Develop and maintain custom Kafka Connectors (Source/Sink) for seamless data movement between legacy systems and the cloud.

  • Provide Tier 3 support for complex integration issues, such as consumer lag, rebalance loops, or network bottlenecks.

  • Perform Root Cause Analysis (RCA) for platform outages and implement automated guardrails to prevent recurrence.

  • Build and maintain CI/CD pipelines and automation.

  • Establish observability (logging, metrics, alerts).

  • Enable development teams through standards, templates, and documentation.

  • Handle incident, problem, and change management at platform level.

  • Work with Security, Architecture, and development teams at ICA.

Required Competence

Platform & Cloud

• 4–6 years of experience with Apache Kafka, Strimzi, the Kafka Connect API, and KRaft

• 4–6 years of experience working with Java and Spring Boot Framework

• Experience with PostgreSQL

• Strong experience with Azure Kubernetes Service (AKS)

• Linux experience (including WSL)

• GitHub Actions and Ansible Tower

• Knowledge of Splunk, Fluent Bit and Helm

DevOps & Automation

• CI/CD using GitHub Actions and/or Jenkins

• Infrastructure as Code (Terraform or equivalent)

Observability

• Splunk, Grafana, Prometheus

• Fluent Bit or OpenTelemetry

Streaming & Messaging

• Apache Kafka at platform level (clusters, performance, security)

Nice to Have

• Experience with API platforms and/or MQTT/IoT

• Experience with Azure, AWS, or Google Cloud

• Cloud cost optimization experience

• Experience working in an integration department in large enterprise environments

Personal Attributes

• Strong problem-solving skills

• Humble attitude

• Comfortable with operational responsibility

• Fluent in English (Swedish is an advantage)

What happens next?

We welcome your application as soon as possible, as we review applications on an ongoing basis.

The last day to apply is 2026-05-12.

For questions regarding the position, contact Anna Nordell, the recruiting manager, at [email protected].

 
ICA aims to reflect our customers and society, and therefore strives to hire people with diverse backgrounds, skills, knowledge, and experience. We value good working conditions and aim for completely smoke-free workplaces. If required by the role, you will need to undergo assessments, background checks, and drug testing prior to employment. 

Read more about what it’s like to work at ICA.