DevOps Engineer
Build and deploy AI agents including prompt design, workflow configuration, integrations, telephony setup, and evaluation frameworks. Act as the primary technical partner for customers by leading demos, communicating progress, gathering feedback, and guiding solutions from concept to production. Configure and connect systems via APIs, handling authentication, data mapping, error handling, and integrations with CRMs, knowledge bases, and other enterprise tools. Set up telephony integration including SIP/CCaaS/PSTN routing, metadata passing, fallback configurations, and troubleshooting call quality. Write and refine prompts for LLM-driven agents, monitor performance, conduct iterative testing, and ensure agents meet automation and containment targets. Translate customer requirements into actionable solutions and work consultatively to resolve challenges related to security, connectivity, or knowledge ingestion. Collaborate with product and engineering teams to address platform gaps, resolve technical issues, and lead client implementations independently.
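The data-mapping and error-handling work this role describes usually boils down to translating an agent's payload into a target system's schema. A minimal sketch, assuming hypothetical field names (this is not a real CRM API):

```python
# Minimal sketch: mapping an AI agent's call summary into a hypothetical
# CRM contact record, with basic validation. All field names are
# illustrative assumptions, not any vendor's actual schema.

CRM_FIELD_MAP = {
    "caller_name": "contact_full_name",
    "caller_phone": "contact_phone",
    "intent": "last_interaction_topic",
}

def map_to_crm(agent_payload: dict) -> dict:
    """Translate an agent payload into the CRM's field names,
    raising a clear error when a required field is absent."""
    record = {}
    for src, dst in CRM_FIELD_MAP.items():
        if src not in agent_payload:
            raise KeyError(f"missing required field: {src}")
        record[dst] = agent_payload[src]
    return record

payload = {"caller_name": "Ada", "caller_phone": "+15550100", "intent": "billing"}
print(map_to_crm(payload))
```

In practice the mapping table would come from configuration per customer, with authentication and retries layered around the actual API call.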
DevOps Engineer (Argentina)
Debug and fix issues in the platform and ship pull requests with fixes. Build internal tools and copilots powered by generative AI to improve the team's effectiveness. Rapidly prototype proof-of-concept solutions for customer use cases. Collaborate across Engineering, Product, and Solutions teams to unblock customers and push the boundaries of AI adoption.
Senior Platform/DevOps Engineer (Kubernetes-Linux)
Translate business requirements into specifications for AI/ML models. Prepare data to train and evaluate AI/ML/DL models. Build AI/ML/DL models by applying state-of-the-art algorithms, especially transformers, leveraging existing algorithms from academic or industrial research where applicable. Test, evaluate, and benchmark AI/ML/DL models, and publish the models, datasets, and evaluations. Deploy models to production by containerizing them. Work with customers and internal employees to refine model quality. Establish continuous learning pipelines for models using online or transfer learning. Build and deploy containerized applications in cloud or on-premises environments.
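The "test, evaluate, and benchmark" duty above typically starts with a small evaluation harness. A hedged sketch, computing accuracy and per-class precision over illustrative labels:

```python
# Tiny evaluation harness sketch for a classification model:
# computes overall accuracy and per-class precision.
# The labels and predictions below are made-up illustrative data.
from collections import Counter

def evaluate(y_true, y_pred):
    assert len(y_true) == len(y_pred), "mismatched lengths"
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    pred_counts = Counter(y_pred)                     # how often each label was predicted
    hit_counts = Counter(p for t, p in zip(y_true, y_pred) if t == p)
    precision = {label: hit_counts[label] / n for label, n in pred_counts.items()}
    return accuracy, precision

acc, prec = evaluate(["a", "b", "a", "a"], ["a", "b", "b", "a"])
print(acc)   # 0.75
print(prec)  # {'a': 1.0, 'b': 0.5}
```

A real benchmark would add recall, confidence thresholds, and a held-out dataset, but the shape of the loop is the same.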
Senior Infrastructure Engineer
As a Senior Infrastructure Engineer at Bland, responsibilities include contributing to the design of scalable architecture by building distributed systems on Kubernetes that handle high-volume, real-time voice processing under strict latency and reliability requirements; building and supporting machine learning infrastructure, including training pipelines and real-time inference serving across multiple regions; maintaining robust integrations with enterprise telephony systems, SIP trunks, and VoIP infrastructure; identifying and resolving architectural flaws; ensuring platform reliability through monitoring, alerting, and incident response systems that maintain enterprise-grade uptime; anticipating and solving scaling challenges driven by exponential call volume growth; and implementing security best practices and compliance requirements for enterprise customers in regulated industries.
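Strict latency requirements are usually expressed as percentile targets. A minimal sketch of checking a tail-latency SLO (the 500 ms threshold and the sample values are illustrative assumptions):

```python
# Sketch: verifying a hypothetical "p99 under 500 ms" latency SLO
# from a sample of request latencies, using nearest-rank percentiles.

def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty list of numbers."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[k]

latencies_ms = [120, 180, 95, 440, 210, 650, 130, 170, 160, 150]
p99 = percentile(latencies_ms, 99)
print(p99, "within SLO" if p99 <= 500 else "SLO breach")  # 650 SLO breach
```

Production systems compute these over streaming histograms rather than raw samples, but the percentile check is the same.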
Lead DevOps Engineer
Lead the design, building, deployment, and optimization of enterprise-grade AI agents including voice, chat, and AI copilots. Own the full lifecycle of AI agent development including prompt engineering, workflow creation, API integration, telephony setup, and evaluation frameworks. Engage with clients through weekly demos, progress updates, and feedback gathering, acting as the primary technical contact for deployed solutions. Configure system integrations involving APIs, data mapping, authentication, and connectivity to CRMs, databases, and knowledge systems. Set up telephony routing (SIP/CCaaS/PSTN), manage metadata, configure fallbacks, and troubleshoot call quality issues. Monitor agent performance and iteratively refine prompts to meet automation and containment goals. Work strategically to translate customer requirements into technical solutions, addressing challenges related to security, connectivity, and knowledge ingestion. Collaborate with product and engineering teams to support deep technical fixes and platform development while independently leading client delivery and support.
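Containment, the metric named above, is conventionally the share of conversations resolved without escalating to a human. A minimal sketch (the outcome labels are illustrative assumptions):

```python
# Sketch: computing the containment rate an agent is measured against.
# "resolved" / "escalated" / "abandoned" are hypothetical outcome labels;
# abandoned calls are excluded from the denominator here by assumption.

def containment_rate(outcomes):
    handled = [o for o in outcomes if o in {"resolved", "escalated"}]
    if not handled:
        return 0.0
    return sum(o == "resolved" for o in handled) / len(handled)

calls = ["resolved", "resolved", "escalated", "resolved", "abandoned"]
print(containment_rate(calls))  # 0.75
```

Whether abandoned calls count against containment is a definition each deployment must agree on with the customer up front.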
DevOps Engineer, Infrastructure & Security
The role involves taking full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies. Responsibilities include overseeing the end-to-end health of the platform to ensure seamless integration between the AI core and all full-stack components, from APIs to UI, maintaining a responsive and production-ready environment. The role also requires building automated systems to monitor model performance and data drift across geographically dispersed environments; managing the technical lifecycle within diverse regulatory frameworks; and leading the response to production issues in mission-critical environments, ensuring rapid resolution and preventing recurrence. Additionally, the role requires translating deep technical performance metrics into clear insights for senior international government officials and partnering with Engineering and ML teams to ensure lessons learned in the field influence the technical architecture and decisions of future use cases.
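Automated data-drift monitoring of the kind described above is often built on the Population Stability Index (PSI). A hedged sketch; the bin fractions are made-up data, and the 0.2 alert threshold is a common convention assumed here for illustration:

```python
# Sketch: Population Stability Index (PSI) between a baseline and a
# live feature distribution, both pre-binned into fractions.
# PSI > 0.2 is treated as a drift alert (a conventional threshold).
import math

def psi(baseline_fracs, live_fracs, eps=1e-6):
    """PSI over pre-binned distributions (lists of bin fractions)."""
    total = 0.0
    for b, l in zip(baseline_fracs, live_fracs):
        b, l = max(b, eps), max(l, eps)   # avoid log(0) on empty bins
        total += (l - b) * math.log(l / b)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]
live = [0.10, 0.20, 0.30, 0.40]
score = psi(baseline, live)
print(f"PSI={score:.3f}", "drift alert" if score > 0.2 else "stable")
```

Across geographically dispersed environments, this check would run per region so that drift in one deployment does not hide behind stable aggregates.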
Senior Engineering Manager, MLOps
Lead the team responsible for the infrastructure supporting the AI/ML stack, focusing on scalability and efficiency of the Machine Learning Operations platform. Develop and execute the long-term vision and roadmap for the MLOps team to support ML development and deployment across business units, balancing short-term tactical deliveries with long-term architectural transformation. Manage and mentor a team of 6-7+ engineers, allocating resources strategically to support existing services and execute key strategic initiatives. Collaborate cross-functionally with leaders in machine learning, data science, product engineering, and infrastructure to identify pain points, remove bottlenecks, and facilitate new solution deployment. Architect compute and storage pipelines for ML engineers to manage large datasets and artifacts efficiently. Modernize the AI product inference stack for significant growth in global deployments. Work with Site Reliability Engineering to establish comprehensive system observability metrics. Conduct assessments for technology refresh and benchmark proprietary tools against commercial and open-source alternatives to meet future needs.
Staff DevOps Engineer
The Staff DevOps Engineer will design and architect secure, scalable cloud and edge infrastructure for deploying AI workloads across multi-cloud and hybrid environments. They will build and maintain production-grade Infrastructure as Code using tools like Terraform, Ansible, or Pulumi, managing over 100 resources with GitOps workflows and automated validation. The role includes designing and operating production Kubernetes clusters optimized for AI/ML workloads with GPU support, implementing container security, multi-tenancy, and resource optimization. They will implement secure CI/CD pipelines with integrated security controls and automated deployment workflows for containerized AI models. The engineer will lead MLOps infrastructure initiatives including model deployment pipelines, versioning, feature stores, experiment tracking, and monitoring for model performance and drift. Responsibilities also include designing comprehensive observability and monitoring solutions using tools like Prometheus, Grafana, ELK, or Datadog with distributed tracing, application performance monitoring, and real-time alerting. They will implement security best practices such as least-privilege access, encryption at rest and in transit, network segmentation, and automated compliance validation. The engineer will lead incident response and reliability initiatives, participate in the on-call rotation, conduct post-mortems, and drive continuous improvement in system reliability. They will architect disaster recovery and business continuity strategies with automated backup, failover, and recovery processes, and develop reusable infrastructure modules and templates to accelerate environment provisioning and standardize deployment patterns. They will mentor mid-level and senior engineers on cloud architecture, DevOps best practices, and platform reliability through design reviews and technical guidance, and drive technical documentation and knowledge sharing including runbooks, architecture decision records, and infrastructure standards.
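The "GitOps workflows and automated validation" named above amount to continuously diffing declared infrastructure against what is actually deployed. A minimal sketch with hypothetical resource records:

```python
# Illustrative sketch of the validation step in a GitOps workflow:
# diffing declared infrastructure resources against the deployed state.
# Resource names and attributes here are hypothetical examples.

def diff_resources(declared: dict, actual: dict):
    """Return resources missing, unexpected, or changed relative to the
    declared (source-of-truth) state."""
    missing = sorted(set(declared) - set(actual))
    unexpected = sorted(set(actual) - set(declared))
    changed = sorted(
        name for name in set(declared) & set(actual)
        if declared[name] != actual[name]
    )
    return {"missing": missing, "unexpected": unexpected, "changed": changed}

declared = {"gpu-pool": {"nodes": 4}, "registry": {"tier": "premium"}}
actual = {"gpu-pool": {"nodes": 3}, "debug-vm": {"nodes": 1}}
print(diff_resources(declared, actual))
```

Tools like Terraform perform this plan/apply diff natively; a sketch like this is only useful for custom resources those tools do not model.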
Site Reliability Engineer, Inference Infrastructure
As a Site Reliability Engineer on the Model Serving team, you will build self-service systems that automate managing, deploying, and operating services, including custom Kubernetes operators supporting language model deployments. You will automate environment observability and resilience, enabling all developers to troubleshoot and resolve problems, and take steps to ensure defined SLOs are met, including participating in an on-call rotation. Additionally, you will build strong relationships with internal developers and influence the Infrastructure team’s roadmap based on their feedback, as well as develop the team through knowledge sharing and an active review process.
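Ensuring defined SLOs are met, as described above, is usually tracked through an error budget. A hedged sketch; the 99.9% target and 30-day window are common conventions assumed for illustration:

```python
# Sketch: SLO error-budget arithmetic. Given a target availability and a
# window, how much downtime budget remains. Numbers are illustrative.

def error_budget_minutes(slo: float, window_minutes: int, downtime_minutes: float) -> float:
    """Remaining error budget in minutes (negative means the SLO is breached)."""
    allowed = (1.0 - slo) * window_minutes
    return allowed - downtime_minutes

# 99.9% over a 30-day window allows about 43.2 minutes of downtime.
remaining = error_budget_minutes(0.999, 30 * 24 * 60, downtime_minutes=12.0)
print(round(remaining, 1))  # 31.2
```

Teams commonly gate risky changes on the remaining budget: when it approaches zero, reliability work takes priority over feature rollouts.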
DevOps Engineer
The DevSecOps / Platform Engineer will design, implement, and operate secure, cloud-native infrastructure powering core data and application platforms for a defense-focused company. They will develop CI/CD pipelines, automate deployments, uphold security practices, and collaborate across teams to ensure reliability, scalability, and compliance for government users.