Sr. Data Engineer (100% Remote, 1099 Contract)

Posted:
6/24/2024, 5:00:00 PM

Location(s):
Baltimore, Maryland, United States ⋅ Maryland, United States

Experience Level(s):
Senior

Field(s):
Data & Analytics

POSITION SUMMARY

1099 Contract

- Bachelor’s degree
- 5+ years’ experience in software engineering
- Experience developing ETL processing flows with MapReduce technologies like Spark and Hadoop (a brief illustrative sketch follows this list)
- Experience developing with ingestion and clustering frameworks such as Kafka, Zookeeper, YARN
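
As a rough illustration of the ETL flows referenced above, here is a minimal batch sketch in Scala with Apache Spark; the bucket paths, column names, and schema are hypothetical placeholders, not part of this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-event-etl")
      .getOrCreate()

    // Read raw CSV events (path and columns are placeholders).
    val raw = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/raw/events/")

    // Keep well-formed rows and derive a partition column.
    val cleaned = raw
      .filter(col("event_type").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))

    // Aggregate and write a curated, partitioned Parquet dataset.
    cleaned
      .groupBy("event_date", "event_type")
      .count()
      .write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/daily_event_counts/")

    spark.stop()
  }
}
```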

EXPERIENCE AND QUALIFICATIONS

- Hands-on development experience with distributed/scalable systems and high-volume transaction applications, and an understanding of big data processing.

- Excellent analytical and problem-solving skills.

- Energetic, motivated self-starter who is eager to excel, with excellent interpersonal skills.

- Expert at balancing the drive for the right architecture against the realities of having customers and the need to ship software.

Have experience in many of the following areas:

- Programming in Scala, Java or another object-oriented language.

- Using big data technologies like Apache Kafka and Spark, preferably with Structured Streaming and Delta Lake (see the streaming sketch after this list).

- Using tools like Databricks notebooks to rapidly prototype solutions.
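
To make the stack above concrete, the following is a minimal, hypothetical sketch of a Structured Streaming job in Scala that reads a Kafka topic and appends it to a Delta Lake table; the broker address, topic name, and paths are placeholders, and it assumes the Kafka source and Delta Lake libraries are on the classpath (as they are in a Databricks notebook).

```scala
import org.apache.spark.sql.SparkSession

object KafkaToDeltaSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-delta-sketch")
      .getOrCreate()

    // Subscribe to a Kafka topic as a streaming DataFrame (broker/topic are placeholders).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING) AS key",
                  "CAST(value AS STRING) AS value",
                  "timestamp")

    // Append to a Delta table; the checkpoint gives the sink exactly-once semantics.
    val query = events.writeStream
      .format("delta")
      .option("checkpointLocation", "/mnt/checkpoints/events")
      .outputMode("append")
      .start("/mnt/delta/events")

    query.awaitTermination()
  }
}
```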

Preferred Knowledge and Skills:

- Hands-on working experience with cloud infrastructure like AWS. Able to scale code and deploy applications in the public cloud using technologies like AWS Lambda, Docker, and Kubernetes.

- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O

- Experience working with healthcare specific data exchange formats including HL7 and FHIR.

- Experience with building stream-processing systems, using solutions such as Storm or Spark Streaming

- Experience with various messaging systems, such as Kafka or RabbitMQ (a minimal producer sketch follows this list)

- Working knowledge of Databricks, Team Foundation Server, TeamCity, Octopus Deploy, and Datadog
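
As a small illustration of the messaging experience mentioned above, here is a hypothetical fire-and-forget Kafka producer in Scala using the standard kafka-clients API; the broker address, topic, key, and payload are placeholders.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object ProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker:9092") // placeholder broker address
    props.put("key.serializer", classOf[StringSerializer].getName)
    props.put("value.serializer", classOf[StringSerializer].getName)

    val producer = new KafkaProducer[String, String](props)
    try {
      // Send a single message to a placeholder topic.
      producer.send(new ProducerRecord[String, String]("events", "record-123", """{"status":"created"}"""))
      producer.flush()
    } finally {
      producer.close()
    }
  }
}
```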

Responsibilities:

- Translate business requirements into effective technology solutions

- Help lead the design, architecture and development of the Omnicell Data Platform

- Aid in design and code reviews

- Resolve defects/bugs during QA testing, pre-production, production, and post-release patches

- Analyze and improve efficiency, scalability, and stability of various system resources once deployed

- Assist in providing technical leadership to agile teams, both onshore and offshore: mentor junior engineers and new team members, and apply technical expertise to challenging programming and design problems

- Help define the technology roadmap that will support the product development roadmap

- Continue to improve code quality by tracking, reducing and avoiding technical debt

COMPANY INFORMATION

Catalyte

Website: https://catalyte.io/

Headquarter Location: Baltimore, Maryland, United States

Employee Count: 251-500

Year Founded: 2000

IPO Status: Private

Last Funding Type: Venture - Series Unknown

Industries: Artificial Intelligence (AI) ⋅ Enterprise Software ⋅ Human Resources