Posted:
5/5/2026, 8:36:22 PM
Location(s):
Telangana, India ⋅ Tamil Nadu, India ⋅ Hyderabad, Telangana, India ⋅ Bengaluru, Karnataka, India ⋅ Chennai, Tamil Nadu, India ⋅ Karnataka, India
Experience Level(s):
Senior
Field(s):
AI & Machine Learning ⋅ Data & Analytics ⋅ Software Engineering
Workplace Type:
Hybrid
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What you will be doing…
The VCP Far Edge Automation team builds and maintains the automation that manages the full lifecycle of Verizon's virtualized RAN (vRAN) infrastructure — powering the nation's largest, most reliable 5G network. Alongside our DevOps engineers, our AI engineers are responsible for building the intelligent layer of our automation platform: Gen AI agents, LLM-powered pipelines, and context-aware tooling that reduces manual effort, accelerates remediation, and makes our automation smarter over time.
As a Senior AI Engineer, you'll design, build, and operationalize production-grade Generative AI agents and applications that integrate directly with our automation workflows and network infrastructure. You'll work closely with the team to take technical direction and deliver reliable, low-latency AI solutions that the broader team can depend on in production.
Designing, developing, and deploying production-grade Gen AI agents and applications, with a focus on reliability, low latency, and real-world operability.
Building and maintaining LangGraph agents and custom Python orchestration logic to power GenAI pipelines — enabling low-latency inference, context-aware decision-making, and multi-step agentic workflows.
Integrating AI agents with internal data sources, Postgres databases, and REST API endpoints to give agents the context they need to act intelligently.
Designing and optimizing data ingestion and preprocessing pipelines in Python to support LLM inference and grounding workflows (RAG, tool use, structured outputs).
Collaborating with DevOps engineers to ensure AI agents and pipelines are deployable, observable, and maintainable within existing CI/CD and infrastructure frameworks.
Instrumenting and monitoring AI agent performance — tracking latency, reliability, failure rates, and accuracy — and owning improvements to those metrics.
Maintaining clear documentation: agent architecture designs, integration specs, prompt strategies, and operational runbooks.
Staying current with the rapidly evolving LLM and agent tooling ecosystem and bringing relevant advances back to the team.
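To make the "multi-step agentic workflows" and "tool/function calling" responsibilities concrete, here is a minimal stdlib-only Python sketch of the pattern: an agent layer receives structured tool calls (as an LLM would emit them) and dispatches to registered functions. The tool names (`fetch_logs`, `restart_pod`) and payloads are purely illustrative, not part of any Verizon system, and a production version would use an orchestration framework such as LangGraph rather than this hand-rolled loop.

```python
import json

# Hypothetical remediation tools; names and behavior are illustrative only.
def fetch_logs(target: str) -> str:
    return f"logs for {target}: OK"

def restart_pod(target: str) -> str:
    return f"pod {target} restarted"

# Registry mapping tool names to callables, as in function-calling APIs.
TOOLS = {"fetch_logs": fetch_logs, "restart_pod": restart_pod}

def run_agent(tool_calls: list) -> list:
    """Execute a sequence of structured tool calls and collect results."""
    results = []
    for call in tool_calls:
        fn = TOOLS[call["name"]]          # resolve tool by name
        results.append(fn(**call["arguments"]))  # invoke with parsed args
    return results

# Simulated LLM output: a JSON list of tool calls with structured arguments.
calls = json.loads(
    '[{"name": "fetch_logs", "arguments": {"target": "du-42"}},'
    ' {"name": "restart_pod", "arguments": {"target": "du-42"}}]'
)
print(run_agent(calls))
```

The key design point is that the model never executes anything directly: it emits structured output, and the orchestration layer validates and dispatches it.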
What we are looking for…
You'll need to have:
Bachelor's degree or four or more years of hands-on work experience.
Six or more years of relevant experience.
Strong Python experience — clean, production-grade, testable code.
Deep, hands-on experience with the Python ecosystem for AI/ML and data workflows (LangChain, LangGraph, LlamaIndex, or similar orchestration frameworks).
Demonstrated experience building and deploying LLM-powered agents or applications in a production environment.
Strong understanding of LLM concepts: prompt engineering, RAG, tool/function calling, context windows, structured outputs, and agent memory patterns.
Experience integrating AI systems with relational databases (Postgres or equivalent) and REST APIs.
Solid understanding of software engineering fundamentals: version control (Git), code review, testing, and documentation practices.
Ability to work US Central Time business hours (8:00 AM to 5:00 PM CT), which corresponds to roughly 6:30 PM to 3:30 AM Indian Standard Time.
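The "RAG" and "grounding workflows" expectations above boil down to a simple pipeline: split documents into chunks, retrieve the chunks most relevant to a query, and ground the LLM prompt in that context. Below is a deliberately simplified stdlib sketch using keyword overlap as the relevance score; real pipelines would use embeddings and a vector store instead.

```python
def chunk(text: str, size: int = 40) -> list:
    """Split text into fixed-size word chunks for indexing."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query: str, chunks: list, k: int = 1) -> list:
    """Rank chunks by keyword overlap with the query; embeddings would
    replace this scoring function in a production RAG pipeline."""
    q = set(query.lower().split())
    score = lambda c: len(q & set(c.lower().split()))
    return sorted(chunks, key=score, reverse=True)[:k]

def build_prompt(query: str, context: str) -> str:
    """Ground the model: restrict the answer to retrieved context."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = chunk("vRAN nodes report telemetry. Alarms trigger automated remediation runbooks.", size=4)
top = retrieve("which runbooks handle alarms", docs)
print(build_prompt("which runbooks handle alarms", top[0]))
```

Structured outputs and tool use then build on the same grounding step: the retrieved context constrains what the model is allowed to assert.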
Even better if you have:
Hands-on experience with AI agent frameworks and developer tools — such as Claude Code, OpenAI Assistants, or similar agentic platforms — including building custom tooling on top of them.
Experience with MLOps practices: model versioning, pipeline monitoring, experiment tracking, and production observability for AI systems.
Familiarity with DevOps tooling — Ansible, Jenkins, GitLab CI — and comfort working alongside infrastructure automation engineers.
Linux server experience and Shell scripting skills for deploying and debugging AI applications in server environments.
Experience with containerization (Docker, Kubernetes) for deploying AI workloads.
Exposure to telecommunications, network operations, or infrastructure automation use cases — experience applying AI to ops problems like anomaly detection, log analysis, or failure prediction.
Familiarity with vector databases (pgvector, Pinecone, Weaviate, or similar) for semantic search and RAG pipelines.
Experience with streaming or event-driven architectures (Kafka, Redis) for real-time AI agent integrations.
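The vector-database familiarity mentioned above rests on one core operation: ranking stored embeddings by similarity to a query embedding, typically cosine similarity. A minimal stdlib sketch of that operation (the three-dimensional vectors are toy stand-ins for real embedding vectors):

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "index": in practice these would be high-dimensional embeddings
# stored in pgvector, Pinecone, Weaviate, or similar.
index = {"doc_a": [1.0, 0.0, 0.0], "doc_b": [0.7, 0.7, 0.0]}
query = [1.0, 0.1, 0.0]
best = max(index, key=lambda k: cosine(query, index[k]))
print(best)
```

Vector databases implement exactly this ranking at scale, with approximate nearest-neighbor indexes replacing the exhaustive `max` scan.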
If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above.
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristic.
Website: https://www.verizon.com/
Headquarter Location: Basking Ridge, New Jersey, United States