At Owens & Minor, we are a critical part of the healthcare process. As a Fortune 500 company with 350+ facilities across the US and 22,000 teammates in over 90 countries, we provide integrated technologies, products and services across the full continuum of care. Customers—and their patients—are at the heart of what we do.
Our mission is to empower our customers to advance healthcare, and our success starts with our teammates.
We are seeking a highly skilled Data Scientist with a strong background in machine learning, data engineering, and model optimization. The ideal candidate should be proficient in Python, PySpark, and SQL; experienced in time series forecasting, feature engineering, and model performance evaluation; and capable of working on large-scale data integration projects across various domains.
A large part of this role involves building machine learning models that not only meet but exceed user expectations, driving measurable value for the business. The candidate will need to have a strong grasp of data model optimization, feature engineering, and model evaluation metrics to ensure high-performance solutions.
This role also requires experience with cloud platforms, ETL tools, and data transformation processes, as well as with both structured and unstructured data. While not required, familiarity with object-oriented programming languages (C#, Java, JavaScript) is a plus. Strong communication skills are essential for collaborating with cross-functional teams and presenting findings effectively.
Key Responsibilities:
- Develop and optimize machine learning models with a focus on time series forecasting and predictive analytics.
- Perform feature engineering and data model optimization to enhance model accuracy and efficiency.
- Continuously evaluate model performance using metrics such as MAPE, RMSE, and R², and adjust modeling strategies accordingly.
- Build and implement data pipelines using PySpark, SQL, and cloud-based solutions for seamless data integration.
- Work on large-scale data integration projects, leveraging tools such as Boomi, SnapLogic, SSIS, or Palantir to extract, transform, and load data.
- Utilize Palantir Foundry, Google Cloud, AutoAI, and Google Colab for data modeling, processing, and automation.
- Design and maintain data warehouse solutions to support advanced analytics and business intelligence.
- Perform complex data transformations using SQL queries and data objects to support AI/ML-driven initiatives.
- Collaborate closely with business stakeholders to ensure models align with user expectations and business objectives.
- Deploy, monitor, and continuously improve machine learning models in production environments.
- Communicate technical findings and insights effectively to both technical and non-technical audiences.
Required Skills & Qualifications:
- Proficiency in Python, PySpark, and SQL for data analysis, feature engineering, and model development.
- Expertise in time series forecasting models, including ARIMA, Prophet, LSTMs, and ML-based approaches.
- Strong experience in data model optimization, feature engineering, and performance evaluation.
- Deep understanding of ML model evaluation metrics and best practices for improving model accuracy.
- Hands-on experience in data engineering, working on data pipelines, ETL, and data transformation projects.
- Experience using Boomi, SnapLogic, SSIS, or Palantir for data integration.
- Proficiency in cloud computing, particularly Google Cloud (BigQuery, Vertex AI, Cloud Functions, etc.).
- Experience with Palantir Foundry for data processing, analysis, and visualization.
- Ability to optimize and query large-scale datasets using data lakes and relational databases.
- Familiarity with AutoAI for automated model selection and hyperparameter tuning.
- Experience with Google Colab for collaborative machine learning development.
- Excellent problem-solving and communication skills, with the ability to convey complex concepts to business stakeholders.
Preferred Qualifications:
- Experience with MLOps for continuous deployment, monitoring, and retraining of ML models.
- Knowledge of business intelligence and reporting tools for data visualization.
- Background in supply chain, logistics, or operational forecasting.
- Experience in both batch and real-time data processing architectures.
- Ability to optimize SQL queries and data transformations for performance improvements.
- Familiarity with object-oriented programming languages such as C#, Java, or JavaScript.
If you feel this opportunity could be the next step in your career, we encourage you to apply. This position will accept applications on an ongoing basis.
Owens & Minor is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, national origin, sex, sexual orientation, genetic information, religion, disability, age, status as a veteran, or any other status prohibited by applicable national, federal, state or local law.