Sr Analyst I Data Engineering

Posted:
12/5/2024, 4:42:02 PM

Location(s):
Karnataka, India

Experience Level(s):
Senior

Field(s):
Data & Analytics

Job Description:

Job Description: Data Engineer with Azure (Databricks, PySpark) and ETL Experience

Position Overview:

Seeking a highly skilled and motivated Data Engineer with a strong background in Azure data engineering (including Databricks) and ETL processes; experience with IBM DataStage or another ETL tool is nice to have. Must be proficient in data warehousing and business intelligence (DW/BI) concepts and have an understanding of data warehouse, data lake, and lakehouse architectures. This role requires hands-on expertise in scripting, SQL, PySpark, and modern cloud data engineering tools, particularly within the Azure ecosystem.

Responsibilities:

    Data Engineering & ETL Development:
        Design, develop, and maintain robust ETL processes using Databricks; experience with DataStage is a plus.
        Extract, transform, and load data from various sources into data warehouses, data lakes, and lakehouses.
        Optimize and enhance ETL processes for improved performance and scalability.
    Data Warehousing & Business Intelligence:
        Implement and maintain data warehousing solutions.
        Ensure data integrity, accuracy, and consistency across the data warehouse and data lake.
    Cloud Data Solutions:
        Utilize Azure Storage and Azure Databricks to build and maintain scalable data solutions.
        Develop and manage data workflows in Azure using PySpark and other relevant tools.
    Database Management & SQL Development:
        Design and optimize SQL queries for data extraction and transformation.
        Manage and maintain database systems ensuring high availability and performance.
    Scripting & Automation:
        Write efficient Unix shell scripts for process automation and data manipulation.
        Develop Python scripts for data processing and analysis tasks.
        Automate routine data engineering tasks to improve productivity and reliability.
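To make the ETL responsibilities above concrete, here is a minimal, illustrative sketch (not part of the role description). It uses Python's built-in sqlite3 as a self-contained stand-in for the source and target; in practice this logic would run as a PySpark job on Azure Databricks reading from Azure Storage. All table and column names are hypothetical.

```python
import sqlite3

# Hypothetical example: extract raw orders, transform (data-quality filter
# plus per-customer aggregation), and load into a warehouse-style summary
# table. sqlite3 keeps the sketch self-contained; a real pipeline would use
# PySpark DataFrames on Databricks.

def run_etl(conn: sqlite3.Connection) -> None:
    cur = conn.cursor()
    # Extract: read from the raw landing table (stand-in for a lake source).
    rows = cur.execute("SELECT customer_id, amount FROM raw_orders").fetchall()
    # Transform: drop bad records and aggregate per customer.
    totals = {}
    for customer_id, amount in rows:
        if customer_id is None or amount is None or amount < 0:
            continue  # basic data-quality filter
        totals[customer_id] = totals.get(customer_id, 0) + amount
    # Load: idempotent write into the warehouse summary table.
    cur.execute("DROP TABLE IF EXISTS customer_totals")
    cur.execute("CREATE TABLE customer_totals (customer_id TEXT, total REAL)")
    cur.executemany("INSERT INTO customer_totals VALUES (?, ?)", totals.items())
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (customer_id TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO raw_orders VALUES (?, ?)",
        [("a", 10.0), ("a", 5.0), ("b", 7.5), (None, 3.0), ("b", -1.0)],
    )
    run_etl(conn)
    print(sorted(conn.execute("SELECT * FROM customer_totals").fetchall()))
    # prints [('a', 15.0), ('b', 7.5)]
```

The drop-and-recreate load keeps the step idempotent, so the job can be rerun safely after a failure, which is the same design goal one would pursue with overwrite semantics in Databricks.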

Skills:

    Technical Proficiency:
        Extensive experience with data warehousing and data lake architectures.
        Proficient in DW BI concepts and their practical applications.
        Strong knowledge of databases and SQL, with a focus on performance optimization.
        Hands-on experience in Unix and Python scripting.

    Azure & Databricks:
        In-depth knowledge of Azure Storage, containers, and DBFS (Databricks File System).
        Proficient in PySpark / SparkSQL for large-scale data processing.
        Experience with Azure Databricks and its ecosystem for data engineering.
    ETL Expertise:
        Nice to have: proven experience with any ETL tool, preferably IBM DataStage.
        Willingness and capability to learn IBM DataStage.
    Soft Skills:
        Be a team player; exhibit flexibility and openness.
        Strong analytical and problem-solving skills.
        Excellent communication and collaboration abilities.
        Ability to work in a fast-paced, dynamic environment.
        Self-motivated with a keen eye for detail and continuous improvement.
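As an illustration of the SQL performance-optimization skill listed above (a hedged sketch, not part of the posting): adding an index on a filter column lets the engine replace a full table scan with an index search. The example uses Python's built-in sqlite3 with hypothetical table and column names; the same principle applies to warehouse engines such as Spark SQL on Databricks.

```python
import sqlite3

# Hypothetical example: inspect the query plan for a filtered lookup
# before and after adding an index. Exact plan wording varies by
# SQLite version, but the scan-vs-index distinction is stable.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_id INTEGER, country TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i, "IN" if i % 2 else "GB") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE country = ?"

# Without an index, SQLite must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, ("IN",)).fetchall()

# An index on the filter column enables an index search instead.
conn.execute("CREATE INDEX idx_events_country ON events (country)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, ("IN",)).fetchall()

print(plan_before[0][-1])  # e.g. "SCAN events"
print(plan_after[0][-1])   # e.g. "SEARCH events USING COVERING INDEX idx_events_country (country=?)"
```

Reading query plans this way (or via EXPLAIN in other engines) is a routine first step when optimizing extraction and transformation queries.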

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of the illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for money or payments from applicants at any point in the recruitment process, nor does it ask job seekers to purchase IT or other equipment on its behalf. More information on employment scams is available here.
