
eisai/fooocus

By eisai

Updated about 1 month ago

Packs lllyasviel/Fooocus into a container for Windows


Only Windows build 20348 or newer is supported (Windows 11, Server 2022)
Neither Hyper-V nor the CUDA toolkit is required
Windows images are large; expect the pull to take 5+ minutes

Preparation

Create the following layout (models and outputs are folders; docker-compose.yaml is the file defined in the next section):

fooocus
├───models
├───outputs
└───docker-compose.yaml
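From a shell, the two folders can be created in one line (in PowerShell, `mkdir fooocus\models, fooocus\outputs` does the same and also creates the parent folder automatically); the names match the volume mappings in the compose file below:

```shell
# Create the fooocus project folder with its models and outputs subfolders
mkdir -p fooocus/models fooocus/outputs
```

docker-compose.yaml itself is a file placed directly inside fooocus/.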

Docker Compose

version: "3.8"

networks:
  fooocus:

services:
  fooocus:
    container_name: fooocus
    image: eisai/fooocus
    isolation: process
    networks:
      - fooocus
    cpu_count: 8
    ports:
      - "7865:7865"
    environment:
      - MODEL=default     # Choose from default, anime, realistic
      - ARGS=--always-download-new-model     # See below
    volumes:
      - '.\models:C:\Fooocus\models'
      - '.\outputs:C:\Fooocus\outputs'
    devices:
      - class/5B45201D-F2F2-4F3B-85BB-30FF1F953599     # GPU passthrough; this device class GUID is fixed
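The ARGS value appears to be forwarded straight to launch.py, so multiple flags can be combined in one string. A hypothetical variation, switching the preset and UI theme (flag names are taken from the argument list below):

```yaml
    environment:
      - MODEL=anime                              # preset: default, anime, or realistic
      - ARGS=--theme dark --disable-image-log    # dark UI; don't write images/logs to disk
```

After saving docker-compose.yaml, running `docker compose up -d` from the fooocus folder starts the container, and the UI is served on the mapped port, http://localhost:7865.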

Additional Arguments

The ARGS environment variable is passed to launch.py, so any of the flags below can be used:

usage: launch.py [-h] [--listen [IP]] [--port PORT] [--disable-header-check [ORIGIN]]
                 [--web-upload-size WEB_UPLOAD_SIZE] [--external-working-path PATH [PATH ...]]
                 [--output-path OUTPUT_PATH] [--temp-path TEMP_PATH] [--cache-path CACHE_PATH] [--in-browser]
                 [--disable-in-browser] [--gpu-device-id DEVICE_ID]
                 [--async-cuda-allocation | --disable-async-cuda-allocation] [--disable-attention-upcast]
                 [--all-in-fp32 | --all-in-fp16]
                 [--unet-in-bf16 | --unet-in-fp16 | --unet-in-fp8-e4m3fn | --unet-in-fp8-e5m2]
                 [--vae-in-fp16 | --vae-in-fp32 | --vae-in-bf16] [--vae-in-cpu]
                 [--clip-in-fp8-e4m3fn | --clip-in-fp8-e5m2 | --clip-in-fp16 | --clip-in-fp32]
                 [--directml [DIRECTML_DEVICE]] [--disable-ipex-hijack] [--preview-option [none,auto,fast,taesd]]
                 [--attention-split | --attention-quad | --attention-pytorch] [--disable-xformers]
                 [--always-gpu | --always-high-vram | --always-normal-vram | --always-low-vram | --always-no-vram | --always-cpu [CPU_NUM_THREADS]]
                 [--always-offload-from-vram] [--pytorch-deterministic] [--disable-server-log] [--debug-mode]
                 [--is-windows-embedded-python] [--disable-server-info] [--multi-user] [--share] [--preset PRESET]
                 [--disable-preset-selection] [--language LANGUAGE] [--disable-offload-from-vram] [--theme THEME]
                 [--disable-image-log] [--disable-analytics] [--disable-metadata] [--disable-preset-download]
                 [--always-download-new-model]

options:
  -h, --help            show this help message and exit
  --listen [IP]
  --port PORT
  --disable-header-check [ORIGIN]
  --web-upload-size WEB_UPLOAD_SIZE
  --external-working-path PATH [PATH ...]
  --output-path OUTPUT_PATH
  --temp-path TEMP_PATH
  --cache-path CACHE_PATH
  --in-browser
  --disable-in-browser
  --gpu-device-id DEVICE_ID
  --async-cuda-allocation
  --disable-async-cuda-allocation
  --disable-attention-upcast
  --all-in-fp32
  --all-in-fp16
  --unet-in-bf16
  --unet-in-fp16
  --unet-in-fp8-e4m3fn
  --unet-in-fp8-e5m2
  --vae-in-fp16
  --vae-in-fp32
  --vae-in-bf16
  --vae-in-cpu
  --clip-in-fp8-e4m3fn
  --clip-in-fp8-e5m2
  --clip-in-fp16
  --clip-in-fp32
  --directml [DIRECTML_DEVICE]
  --disable-ipex-hijack
  --preview-option [none,auto,fast,taesd]
  --attention-split
  --attention-quad
  --attention-pytorch
  --disable-xformers
  --always-gpu
  --always-high-vram
  --always-normal-vram
  --always-low-vram
  --always-no-vram
  --always-cpu [CPU_NUM_THREADS]
  --always-offload-from-vram
  --pytorch-deterministic
  --disable-server-log
  --debug-mode
  --is-windows-embedded-python
  --disable-server-info
  --multi-user
  --share               Set whether to share on Gradio.
  --preset PRESET       Apply specified UI preset.
  --disable-preset-selection
                        Disables preset selection in Gradio.
  --language LANGUAGE   Translate UI using json files in [language] folder. For example, [--language example] will use
                        [language/example.json] for translation.
  --disable-offload-from-vram
                        Force loading models to vram when the unload can be avoided. Some Mac users may need this.
  --theme THEME         launches the UI with light or dark theme
  --disable-image-log   Prevent writing images and logs to hard drive.
  --disable-analytics   Disables analytics for Gradio.
  --disable-metadata    Disables saving metadata to images.
  --disable-preset-download
                        Disables downloading models for presets
  --always-download-new-model
                        Always download newer models

Reference

GitHub Repo
https://github.com/lllyasviel/Fooocus

This image
https://github.com/Eisaichen/fooocus-win-docker

Install an ultralight docker engine on Windows
https://eisaichen.com/?p=76

Docker Pull Command

docker pull eisai/fooocus