Full Stack Chat Demo

This Compose setup lets you easily launch the full AI Chat Application stack, including the backend API, frontend interface, and model server. Ideal for real-time AI interactions, it runs fully containerized with Docker Compose.

Features

  • Full Stack Setup: Backend, frontend, and model services included.
  • Easy Model Configuration: Change models with a single environment variable.
  • GPU and CPU Support: Switch between CPU and GPU modes by choosing the appropriate tag.
  • Live AI Chat: Real-time AI response integration.

Prerequisites

  • Docker: Ensure Docker is installed.
  • Docker Compose: Use the latest Docker Desktop or Docker Compose CLI for best compatibility.
  • Tag Selection:
    • CPU Mode: If you do not have a GPU, use the latest tag for CPU compatibility.
    • GPU Mode: If you have an NVIDIA GPU, use the gpu-latest tag for GPU support.
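
To confirm the prerequisites from a terminal:

    docker --version           # Docker Engine is installed
    docker compose version     # Compose v2 is available
    nvidia-smi                 # GPU mode only: the NVIDIA driver is working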

Note: This application requires at least 8 GB of memory available to the Docker VM and performs best with 16 GB or more. Systems with 32 GB or more will handle intensive AI tasks more smoothly.
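
One way to check how much memory the Docker VM actually has is Docker's own info output (MemTotal is reported in bytes):

    docker info --format '{{.MemTotal}}'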

Quick Start

1. Download the Compose File

Use the "View Compose" button on this page to download the file, saving it as compose.yaml (recommended naming convention).

2. Run the Application

CPU Mode:

  1. Copy the compose file from the latest tag with the View Compose button and save it as compose.yaml.

  2. Run the following command from the same directory as your compose.yaml file:

    docker compose up
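
Once the stack is up, the chat UI is served on port 3000 (see Included Services below). To keep your terminal free, you can also run detached and follow logs separately:

    docker compose up -d     # start the stack in the background
    docker compose logs -f   # stream logs from all services; Ctrl+C to stop following

Then open http://localhost:3000 in a browser.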
    

GPU Mode:

To enable GPU support:

  1. Ensure the NVIDIA driver is installed (on Linux, the NVIDIA Container Toolkit is also required so containers can access the GPU).

  2. Copy the compose file from the gpu-latest tag with the View Compose button and save it as compose.yaml.

  3. Run the following command from the same directory as your compose.yaml file:

    docker compose up
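
GPU access in Compose files is normally granted through a device reservation on the model service. The gpu-latest file should already handle this; the snippet below is only a sketch of the standard Compose syntax, and the gpu-latest image tag on the model service is an assumption, not confirmed from the published file:

    services:
      ollama:
        image: ai/chat-demo-model:gpu-latest   # assumed tag for the GPU variant
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: all
                  capabilities: [gpu]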
    
3. Run with a Custom Model

Specify a different model by setting the MODEL variable:

MODEL=llama3.2:latest docker compose up

Note: Always include the model tag, even when using latest.
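
Setting the variable inline works for one-off runs. For a persistent choice, Docker Compose also reads a .env file from the directory you run it in, so the same override can live next to compose.yaml:

    # .env — loaded automatically by docker compose
    MODEL=llama3.2:latest

With that file in place, a plain docker compose up picks up the override.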

Included Services

Backend API
  • Image: ai/chat-demo-backend:latest
  • Ports: 8000:8000
  • Environment:
    • MODEL_HOST=http://ollama:11434
  • Purpose: Relays requests to the model server and streams responses.
Frontend Interface
  • Image: ai/chat-demo-frontend:latest
  • Ports: 3000:3000
  • Environment:
    • PORT=3000
    • HOST=0.0.0.0
  • Purpose: Provides the chat UI for interacting with the AI.
Model Server (Ollama)
  • Image: ai/chat-demo-model:latest
  • Ports: 11434:11434
  • Environment:
    • MODEL=${MODEL:-mistral:latest}
  • Volume:
    • ollama_data: Stores model data.
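
Taken together, the three services map onto a Compose file roughly like the following. This is a reconstruction from the descriptions above, not the published file: the depends_on ordering and the /root/.ollama mount path (Ollama's default model directory) are assumptions, so prefer the actual file from the View Compose button.

    services:
      backend:
        image: ai/chat-demo-backend:latest
        ports:
          - "8000:8000"
        environment:
          - MODEL_HOST=http://ollama:11434
        depends_on:
          - ollama
      frontend:
        image: ai/chat-demo-frontend:latest
        ports:
          - "3000:3000"
        environment:
          - PORT=3000
          - HOST=0.0.0.0
        depends_on:
          - backend
      ollama:
        image: ai/chat-demo-model:latest
        ports:
          - "11434:11434"
        environment:
          - MODEL=${MODEL:-mistral:latest}
        volumes:
          - ollama_data:/root/.ollama   # assumed mount point (Ollama's default model directory)

    volumes:
      ollama_data: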

GPU Requirement: Ensure your system has a compatible NVIDIA GPU if using GPU mode.
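
Once the stack is running, you can exercise the model server directly using Ollama's documented REST API; the port mapping above publishes it on localhost:

    curl http://localhost:11434/api/generate -d '{
      "model": "mistral:latest",
      "prompt": "Reply with one short sentence.",
      "stream": false
    }'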

Storage

The Compose file includes a volume:

  • ollama_data: Persists downloaded model data, so models are not re-downloaded each time the stack starts.
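
Model downloads can take significant disk space, so it is worth knowing how to inspect and reclaim it with standard Docker commands:

    docker volume ls        # the volume appears with a project prefix, e.g. <project>_ollama_data
    docker compose down     # stop and remove containers; the volume (and models) are kept
    docker compose down -v  # also remove named volumes; models re-download on the next run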