botmodels/.env.example
Rodrigo Rodriguez (Pragmatismo) 5a43dc81c7 Rewrite BotModels as FastAPI multimodal AI service
Replace Azure Functions architecture with a modern FastAPI-based REST
API providing image, video, speech, and vision capabilities for General
Bots.

Key changes:
- Add FastAPI app with versioned API endpoints and OpenAPI docs
- Implement services for Stable Diffusion, Zeroscope, TTS/Whisper, BLIP2
- Add pydantic schemas for request/response validation
- Configure structured logging with structlog
- Support lazy model loading and GPU acceleration
- Update dependencies from Azure/TensorFlow stack to PyTorch/diffusers
2025-11-30 07:52:56 -03:00
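The lazy model loading mentioned in the commit can be sketched as below; the class and names here are illustrative and not taken from the actual service code. The idea is that each heavy pipeline (Stable Diffusion, Zeroscope, etc.) is wrapped so it is only built on first request, keeping startup fast and memory low.

```python
from threading import Lock

class LazyModel:
    """Defer loading a heavy model until first use (illustrative sketch)."""

    def __init__(self, loader):
        self._loader = loader   # callable that builds the real pipeline
        self._model = None
        self._lock = Lock()     # guard against concurrent first loads

    def get(self):
        # Double-checked locking: only the first caller pays the load cost;
        # every later call returns the cached instance immediately.
        if self._model is None:
            with self._lock:
                if self._model is None:
                    self._model = self._loader()
        return self._model

# Hypothetical usage: the pipeline is constructed only when the image
# endpoint is first hit, not at application startup.
image_model = LazyModel(lambda: "stable-diffusion pipeline placeholder")
```

A real loader would call something like `diffusers.StableDiffusionPipeline.from_pretrained(...)` and move the pipeline to the configured device; the string placeholder above just keeps the sketch self-contained.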


# Server Configuration
ENV=development
HOST=0.0.0.0
PORT=8085
LOG_LEVEL=INFO

# Security - IMPORTANT: change this in production!
API_KEY=change-me-in-production

# Model Paths
# These can be local paths or model identifiers for HuggingFace Hub
IMAGE_MODEL_PATH=./models/stable-diffusion-v1-5
VIDEO_MODEL_PATH=./models/zeroscope-v2
SPEECH_MODEL_PATH=./models/tts
VISION_MODEL_PATH=./models/blip2
WHISPER_MODEL_PATH=./models/whisper

# Device Configuration
# Options: cuda, cpu, mps (for Apple Silicon)
DEVICE=cuda

# Image Generation Defaults
IMAGE_STEPS=4
IMAGE_WIDTH=512
IMAGE_HEIGHT=512
IMAGE_GPU_LAYERS=20
IMAGE_BATCH_SIZE=1

# Video Generation Defaults
VIDEO_FRAMES=24
VIDEO_FPS=8
# Zeroscope v2 576w generates landscape video at 576x320
VIDEO_WIDTH=576
VIDEO_HEIGHT=320
VIDEO_GPU_LAYERS=15
VIDEO_BATCH_SIZE=1

# Storage
OUTPUT_DIR=./outputs
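A service can read these variables with a small stdlib-only settings loader. The sketch below mirrors a subset of the file above with the same defaults; the field names and `load_settings` helper are hypothetical, not the project's actual config code (the commit mentions pydantic, which would offer the same behavior via `BaseSettings`).

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    """Subset of the .env values above, with matching defaults (sketch)."""
    host: str = "0.0.0.0"
    port: int = 8085
    device: str = "cuda"
    image_width: int = 512
    image_height: int = 512
    output_dir: str = "./outputs"

def load_settings() -> Settings:
    # Environment variables (e.g. loaded from .env) override the defaults.
    env = os.environ
    return Settings(
        host=env.get("HOST", "0.0.0.0"),
        port=int(env.get("PORT", "8085")),
        device=env.get("DEVICE", "cuda"),
        image_width=int(env.get("IMAGE_WIDTH", "512")),
        image_height=int(env.get("IMAGE_HEIGHT", "512")),
        output_dir=env.get("OUTPUT_DIR", "./outputs"),
    )
```

In practice the file would be copied to `.env` (e.g. `cp .env.example .env`) and loaded into the environment before the app starts; the frozen dataclass keeps the settings immutable after load.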