Data Engineer III

Posted:
8/29/2024, 8:34:13 AM

Location(s):
Sunnyvale, California, United States

Experience Level(s):
Junior ⋅ Mid Level ⋅ Senior

Field(s):
Data & Analytics

What you'll do...

Position: Data Engineer III

Job Location: 860 W. California Avenue, Sunnyvale, CA 94086

Duties:

- Build streaming pipelines using technologies such as Spark Streaming or Kafka.
- Develop data pipelines and processing layers using programming languages such as Scala and Python.
- Apply SQL and NoSQL database expertise to work with databases such as Cassandra, BigQuery, and Cosmos DB.
- Implement the workflow management tool Airflow to optimize data processing.
- Develop enterprise data warehouses using technologies such as BigQuery.
- Run Spark and Hadoop workloads on platforms such as Dataproc to enhance data processing capabilities.
- Utilize Big Data technologies such as Spark, PySpark, Hive, and SQL to architect and design scalable, low-latency, and fault-tolerant data processing pipelines.
- Implement data governance practices, including data quality, metadata management, and security measures.
- Optimize complex queries across large datasets for efficient data processing.
- Leverage Kafka for data streaming and processing.
- Collaborate with cross-functional teams, such as Data Scientists, Data Analysts, and Business Intelligence experts, to deliver high-quality and efficient data solutions.
- Utilize data visualization tools such as Looker and Tableau to present insights and reports effectively.
- Utilize GCP and Azure cloud platforms to enhance data processing capabilities.

Minimum education and experience required: Master's degree or the equivalent in Computer Science, Engineering (any), Information Technology, or related field. Position does not require specific years of experience but requires listed skills.

Skills required:

- Experience with Big Data technologies including Spark, PySpark, and SQL.
- Experience with NoSQL databases including Cassandra, BigQuery, and Cosmos DB.
- Experience building streaming pipelines using Spark Streaming or Kafka.
- Experience with Azure cloud platforms, Databricks, and GCP.
- Experience with the workflow management tool Airflow.
- Experience running Spark and Hadoop workloads using Azure Data Factory.
- Experience with Java, Scala, or Python to write data pipelines and data processing layers.
- Experience with IDEs, including Eclipse, IntelliJ, and Visual Studio Code.
- Experience with data visualization tools including Looker and Tableau.

Employer will accept any amount of professional experience with the required skills.

Salary Range: $138,911/year to $234,000/year. Additional compensation includes annual or quarterly performance incentives. Additional compensation for certain positions may also include Regional Pay Zone (RPZ) pay (based on location) and stock equity incentives.

Benefits: At Walmart, we offer competitive pay as well as performance-based incentive awards and other great benefits for a happier mind, body, and wallet. Health benefits include medical, vision and dental coverage. Financial benefits include 401(k), stock purchase and company-paid life insurance. Paid time off benefits include PTO (including sick leave), parental leave, family care leave, bereavement, jury duty and voting. Other benefits include short-term and long-term disability, education assistance with 100% company paid college degrees, company discounts, military service pay, adoption expense reimbursement, and more.

Eligibility requirements apply to some benefits and may depend on your job classification and length of employment. Benefits are subject to change and may be subject to specific plan or program terms. For information about benefits and eligibility, see One.Walmart.com.

Walmart is an Equal Opportunity Employer.
