Docker AI Jobs

Discover the latest remote and onsite Docker AI roles at top AI companies that are actively hiring. Updated hourly.

Check out 252 new Docker AI job opportunities posted on AI Chopping Block.

Senior Engineer, System-Level Design Verification

New
Top rated
Tenstorrent
Full-time

Lead and contribute to cross-functional efforts solving complex physical design challenges across IPs, projects, and advanced technology nodes. Develop and enhance RTL-to-GDS methodologies, including floorplanning, synthesis, placement and routing (P&R), static timing analysis (STA), signoff, and assembly. Architect and deploy AI/ML-driven solutions in production flows to improve engineering efficiency, turnaround time, and quality of results (QoR). Optimize EDA tools and custom CAD flows using data-driven and ML-based techniques, working closely with verification, extraction, timing, design for test (DFT), and EDA vendors.

$100,000 – $500,000 per year (USD)

Santa Clara or Austin or Fort Collins, United States
Maybe global
Hybrid
Python
PyTorch
TensorFlow
MLOps
Docker

Director of Customer Engineering

New
Top rated
Tenstorrent
Full-time

Lead and contribute to cross-functional efforts solving complex physical design challenges across IPs, projects, and advanced technology nodes. Develop and enhance RTL-to-GDS methodologies, including floorplanning, synthesis, P&R, STA, signoff, and assembly. Architect and deploy AI/ML-driven solutions in production flows to improve engineering efficiency, turnaround time, and QoR. Optimize EDA tools and custom CAD flows using data-driven and ML-based techniques, in close collaboration with verification, extraction, timing, DFT, and EDA vendors.

$100,000 – $500,000 per year (USD)

Santa Clara or Austin or Fort Collins, United States
Maybe global
Hybrid
Python
PyTorch
TensorFlow
MLOps
Docker

Research Engineer, Data Infrastructure

New
Top rated
Mistral AI
Full-time

The role involves building and operating the next generation of data infrastructure at Mistral AI as a core contributor to the design and scaling of massive compute fleets and storage systems for high performance and scalability. Responsibilities include architecting and maintaining multi-cluster orchestration layers that optimize workload placement across diverse hardware and regions; designing future-proof storage systems that anticipate exabyte-scale growth; contributing to the internal training platform to support model training and fine-tuning across Kubernetes and SLURM environments; implementing and managing metadata and lineage systems that provide visibility and traceability of data and model pipelines; and managing cloud-native deployments using modern workflows to ensure scalability and operational excellence. The role also includes full lifecycle ownership, from migrating away from legacy orchestrators to implementing production-grade pipelines and participating in on-call rotations for critical training jobs.

Undisclosed

Palo Alto, United States
Maybe global
Onsite
Python
Kubernetes
Data Pipelines
MLOps
Docker

Member of Technical Staff (Applied AI Engineer)

New
Top rated
Videcode
Full-time

The role involves working on custom memory systems that grow and scale with platform usage, developing a cutting-edge custom agent, managing bare-metal infrastructure for scalability, concurrency, and high reliability, optimizing cost and output across multiple models, and evaluating large language model (LLM) performance across a wide range of tasks.

Undisclosed

New York City, United States
Maybe global
Onsite
Python
PyTorch
TensorFlow
Model Evaluation
MLOps

Applied AI Inference Engineer

New
Top rated
Baseten
Full-time

Develop and maintain software systems and product features using one or more general-purpose programming languages in a production-level environment, with a preference for Python given its relevance to ML projects. Drive customer impact by designing, implementing, and deploying Baseten solutions end-to-end, working with customers’ engineering teams at every stage of the customer journey, including sales, implementation, and expansion. Deliver with velocity by turning vague objectives into clear specs and well-defined proofs of concept (PoCs) to rapidly ship well-tested services and outcomes for customers. Optimize and enhance AI/ML projects, contributing to continuous improvement of the technical stack, including developing features and product requirement documents (PRDs) with other engineering and product organizations. Own products and customer projects end-to-end, functioning as an engineer, project manager, and product manager with a focus on user empathy, project specification, and execution. Navigate ambiguity and exercise good judgment on the tradeoffs and tools needed to solve problems while avoiding unnecessary complexity. Demonstrate pride, ownership, and accountability for your work.

$165,000 – $330,000 per year (USD)

San Francisco, United States
Maybe global
Remote
Python
MLOps
Docker
Kubernetes
AWS

AI Solutions Engineer

New
Top rated
Baseten
Full-time

As an AI Solutions Engineer at Baseten, you will partner directly with customers to architect, build, and deploy high-scale production AI applications on Baseten’s platform, owning the journey from initial exploration to production deployment and translating ambiguous business goals into reliable, observable services with clear quality, latency, and cost outcomes. The role requires developing and maintaining software systems and product features using general-purpose programming languages, preferably Python, in a production-level environment. It involves designing, implementing, and deploying Baseten solutions end-to-end by collaborating with customers' engineering teams throughout the sales, implementation, and expansion stages. You are expected to turn vague objectives into clear specifications and well-defined proofs of concept to rapidly ship well-tested services and outcomes. You will also optimize and enhance AI/ML projects, contribute to continuous improvement of the technical stack, develop features and product requirement documents with other engineering and product teams, and own products and customer projects end-to-end as both an engineer and product manager, with a focus on user empathy, project specification, and execution. The role requires navigating ambiguity, exercising good judgment on tradeoffs and tools, and demonstrating pride, ownership, and accountability for your work, while expecting the same from teammates.

$165,000 – $330,000 per year (USD)

San Francisco, United States
Maybe global
Remote
Python
Docker
MLOps
Model Evaluation
AWS

Staff Engineer, CPU Core Verification

New
Top rated
Tenstorrent
Full-time

Lead and contribute to cross-functional efforts solving complex physical design challenges across IPs, projects, and advanced technology nodes. Develop and enhance RTL-to-GDS methodologies, including floorplanning, synthesis, P&R, STA, signoff, and assembly. Architect and deploy AI/ML-driven solutions in production flows to improve engineering efficiency, turnaround time, and QoR. Optimize EDA tools and custom CAD flows using data-driven and ML-based techniques, in close collaboration with verification, extraction, timing, DFT, and EDA vendors.

$100,000 – $500,000 per year (USD)

Austin or Santa Clara or Fort Collins, United States
Maybe global
Hybrid
Python
PyTorch
TensorFlow
MLflow
Docker

AI Productivity Engineer

New
Top rated
Aircall
Full-time

The AI Productivity Engineer will take clear ownership of rapid AI adoption across the engineering organization by building AI-powered tools and systems that improve engineering productivity: reducing friction, automating repetitive tasks, and embedding intelligence into workflows. Responsibilities include identifying high-friction areas in engineering workflows; designing and building production-grade AI-powered developer tooling for coding, testing, PR reviews, and debugging; building contextual AI assistants using internal data and tools; exploring, prototyping, and productionizing AI solutions; automating workflows across platforms such as GitLab, Jira, CI/CD, Slack, and observability tools; designing and operating internal AI services and orchestration layers; owning solutions end-to-end from discovery to iteration; working hands-on with engineering teams to remove friction and enable tool adoption; and measuring success through adoption, impact, and tangible time saved for engineers. The role explicitly excludes building AI features for customer-facing products, speculative AI research without clear outcomes, acting as general internal support, and owning generic ML infrastructure unrelated to developer productivity.

Undisclosed

London, United Kingdom
Maybe global
Onsite
Python
OpenAI API
Prompt Engineering
MLOps
Docker

Defense / Edge Tech Lead

New
Top rated
Deepgram
Full-time

As the Defense / Edge Tech Lead, you will own the technical direction for deploying Deepgram's speech-to-text (STT) and text-to-speech (TTS) models to edge and embedded environments. Your responsibilities include leading the technical strategy for edge deployment, defining the architecture for on-device, on-premises, and air-gapped inference across diverse hardware targets. You will optimize models for edge and embedded platforms through quantization, pruning, distillation, and runtime optimization to meet latency, memory, and power constraints. You will partner with hardware vendors like Qualcomm and Motorola for SDK integration, performance benchmarking, and joint go-to-market efforts. Supporting defense customer requirements through AWS NatSec partnerships by translating mission requirements into engineering deliverables is also part of your role. You will design and build edge runtime infrastructure such as model packaging, deployment pipelines, OTA update mechanisms, and telemetry for devices in low or no connectivity environments. Deployments must be hardened for security-sensitive environments with features like secure boot chains, encrypted model storage, tamper detection, and audit logging. You will benchmark and validate performance across hardware platforms, establishing test suites for latency, accuracy, power consumption, and resource utilization. Collaboration with Research and Engine teams to influence model architectures toward edge-friendly designs is expected. Furthermore, you provide technical leadership to cross-functional teams on defense and edge projects, set engineering standards, review designs, and mentor engineers on systems and optimization practices.

$185,000 – $245,000 per year (USD)

United States
Maybe global
Remote
C++
Model Evaluation
MLOps
TensorFlow
Docker

SDET II

New
Top rated
Netomi
Full-time

Testing AI-based conversational products; monitoring and improving the quality assurance process to ensure agreed-upon standards and procedures are followed; providing a high level of data quality awareness across multiple teams; evaluating and identifying where model accuracy enhancements are required; and preparing detailed testing feedback to help the team improve AI models.

Undisclosed

Toronto, Canada
Maybe global
Onsite
Python
Java
CI/CD
Docker
Kubernetes

Want to see more AI Engineer jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Need help with something? Here are our most frequently asked questions.


[{"question":"What are Docker AI jobs?","answer":"Docker AI jobs involve developing, deploying, and maintaining AI applications using containerization technology. These positions focus on creating reproducible AI workflows, packaging machine learning models with dependencies, and ensuring consistent execution across environments. Professionals in these roles typically work on MLOps pipelines, containerized AI applications, and implement solutions that seamlessly transition from development to production."},{"question":"What roles commonly require Docker skills?","answer":"Machine Learning Engineers, Data Scientists, AI Developers, and DevOps Engineers working on AI systems commonly require containerization skills. These professionals use containers to package models, ensure reproducibility, and streamline deployment pipelines. Full-stack developers building AI-powered applications and MLOps specialists implementing continuous integration workflows also frequently need proficiency with containerized environments and deployment strategies."},{"question":"What skills are typically required alongside Docker?","answer":"Alongside containerization expertise, employers typically seek proficiency in AI frameworks like TensorFlow, PyTorch, and Hugging Face. Familiarity with Docker Compose for multi-container applications, version control systems, and CI/CD pipelines is essential. Additional valuable skills include YAML configuration, cloud deployment knowledge, GPU acceleration techniques, and experience with MLOps practices that facilitate model development, testing, and production deployment."},{"question":"What experience level do Docker AI jobs usually require?","answer":"AI positions requiring containerization skills typically seek mid-level professionals with 2-4 years of practical experience. Entry-level roles may accept candidates with demonstrated proficiency in basic container commands, Dockerfile creation, and image management. Senior positions often demand extensive experience integrating containers into production ML pipelines, optimizing container resources, and implementing advanced deployment strategies across cloud and edge environments."},{"question":"What is the salary range for Docker AI jobs?","answer":"Compensation for AI professionals with containerization expertise varies based on location, experience level, industry, and additional technical skills. Junior roles typically start at competitive market rates, while senior positions command premium salaries. The most lucrative opportunities combine deep learning expertise, container orchestration experience, and cloud platform knowledge. Specialized industries like finance or healthcare often offer higher compensation for these in-demand skill combinations."},{"question":"Are Docker AI jobs in demand?","answer":"Containerization skills remain highly sought after in AI development, with strong demand driven by organizations implementing MLOps practices and scalable AI deployment strategies. Recent partnerships like Anaconda-Docker and trends in serverless AI containers have intensified hiring needs. The emergence of specialized tools like Docker Model Runner, Docker Offload, and Docker AI Catalog reflects the growing importance of containerized workflows in modern AI development and deployment practices."},{"question":"What is the difference between Docker and Kubernetes in AI roles?","answer":"In AI roles, containerization focuses on packaging individual applications with dependencies for consistent execution, while Kubernetes orchestrates multiple containers at scale. ML engineers might use Docker to create reproducible model environments but implement Kubernetes to manage production deployments across clusters. While containerization handles the model packaging, Kubernetes addresses the scalability, load balancing, and automated recovery needed for production AI systems serving multiple users simultaneously."}]
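The model-packaging workflow the FAQ describes can be sketched as a minimal Dockerfile. This is an illustrative assumption, not taken from any listing above: the file names (`requirements.txt`, `model.pt`, `serve.py`) are hypothetical placeholders for a project's own dependency list, model artifact, and inference server.

```dockerfile
# Minimal sketch: package an ML inference service with its dependencies.
FROM python:3.11-slim

WORKDIR /app

# Pin dependencies first so Docker's layer cache skips reinstalling
# them when only the application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the model artifact and serving code (hypothetical file names).
COPY model.pt serve.py ./

# Start the inference server when the container runs.
CMD ["python", "serve.py"]
```

Built with `docker build -t my-model .` and started with `docker run -p 8000:8000 my-model`, the same image runs identically on a laptop and in production, which is the reproducibility benefit these roles emphasize.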