ai/chat-demo-frontend
Responsive frontend for an AI-driven chat application, powered by Remix and Vite.
This Docker image provides a responsive frontend for an AI-driven chat application, powered by Remix and Vite. It connects to a FastAPI backend and offers a modern, interactive chat interface with real-time streaming responses.
This image is a component of the full AI Chat Application Demo (ai/chat-demo). More information about how to run the whole demo can be found on the ai/chat-demo image page.
Environment variables:

- PORT: Port the frontend will run on (default is 3000).
- HOST: Host for the application (default is "0.0.0.0").

Pull the Frontend Image
docker pull ai/chat-demo-frontend:latest
Ensure Backend and Model Server are Running
Start the model server, followed by the backend:
# Run the model server
docker run -e MODEL=mistral:latest -p 11434:11434 ai/chat-demo-model:latest
# Run the backend server
docker run -e MODEL_HOST=http://ollama:11434 -p 8000:8000 ai/chat-demo-backend:latest
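The backend command above points MODEL_HOST at http://ollama:11434, which implies the model container is reachable under the hostname ollama. A minimal sketch of wiring that up with a user-defined Docker network (the network name chat-demo-net and the container names here are assumptions, not mandated by the images):

```shell
# Create a shared network so containers can resolve each other by name
docker network create chat-demo-net

# Run the model server under the hostname the backend expects (assumed name: ollama)
docker run -d --name ollama --network chat-demo-net \
  -e MODEL=mistral:latest -p 11434:11434 ai/chat-demo-model:latest

# On the same network, the backend can now resolve http://ollama:11434
docker run -d --name backend --network chat-demo-net \
  -e MODEL_HOST=http://ollama:11434 -p 8000:8000 ai/chat-demo-backend:latest
```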
Run the Frontend Container
Start the frontend container to serve the chat interface on your specified port:
docker run -p 3000:3000 ai/chat-demo-frontend:latest
The frontend app will be accessible at http://localhost:3000.
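If port 3000 is already in use, the PORT and HOST variables can be overridden at run time. A sketch, where host port 8080 is an arbitrary choice:

```shell
# Serve the frontend on port 8080 instead of the default 3000;
# the published host port must match the container's PORT value
docker run -e PORT=8080 -e HOST=0.0.0.0 -p 8080:8080 ai/chat-demo-frontend:latest
```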
- PORT: Sets the port for the frontend server (default is 3000).
- HOST: Sets the host address (default is "0.0.0.0").

The frontend provides a chat interface that sends user messages to the backend API and displays the assistant's responses in real time.
The frontend uses the Chat component for real-time functionality, streaming responses from the backend's /api/v1/chat/stream endpoint.
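For debugging, the streaming endpoint can also be exercised against the backend directly, bypassing the frontend. A sketch using curl — the JSON payload shape here is an assumption, not documented above:

```shell
# Stream a response from the backend's chat endpoint.
# -N disables output buffering so chunks print as they arrive.
# The request body shape ({"message": ...}) is assumed; check the backend API.
curl -N -X POST http://localhost:8000/api/v1/chat/stream \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello"}'
```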