Getting Started

This guide will help you get MediaMoth up and running using Docker Compose.

Prerequisites

Before you begin, ensure you have:

  • Docker (version 20.10 or later)
  • Docker Compose (version 2.0 or later)
  • At least 4GB of available RAM for all services
  • Port 3000 available on the host for the web client (Kafka, PostgreSQL, Redis, and Elasticsearch communicate on the internal Docker network and publish no host ports in this configuration)
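If you script your setup, the checks above can be automated; `check_prereqs` below is an illustrative helper, not part of MediaMoth, and assumes the `docker` CLI is on your PATH:

```bash
# Sketch: fail fast if Docker or Compose is missing.
check_prereqs() {
  docker --version >/dev/null 2>&1 \
    || { echo "Docker not found" >&2; return 1; }
  # Accept either the Compose v2 plugin or the standalone binary.
  docker compose version >/dev/null 2>&1 \
    || docker-compose --version >/dev/null 2>&1 \
    || { echo "Docker Compose not found" >&2; return 1; }
  echo "prerequisites OK"
}
# Usage: check_prereqs
```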

Quick Start with Docker Compose

1. Download the Docker Compose Configuration

Create a new directory for your MediaMoth installation and download the docker-compose file:

bash
mkdir mediamoth
cd mediamoth
curl -o docker-compose.yaml https://raw.githubusercontent.com/mediamoth/mediamoth/main/docker-compose.example.yaml

Or create a docker-compose.yaml file with the following content:

yaml
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "nc", "-vz", "localhost", "9092"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    environment:
      - KAFKA_NODE_ID=1
      - KAFKA_PROCESS_ROLES=broker,controller
      - KAFKA_CONTROLLER_QUORUM_VOTERS=1@kafka:9093
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1
      - KAFKA_TRANSACTION_STATE_LOG_MIN_ISR=1
      - KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0
      - KAFKA_NUM_PARTITIONS=3
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
      - CLUSTER_ID=MkU3OEVBNTcwNTJENDM2Qk
    networks:
      - mediamoth-network
    volumes:
      - kafka_data:/var/lib/kafka/data

  postgres-db:
    image: mediamoth/postgres-db:latest
    container_name: postgres-db
    restart: unless-stopped
    environment:
      POSTGRES_USER: mediamoth
      POSTGRES_PASSWORD: mediamoth
      POSTGRES_DB: mediamoth

      POSTGRES_DATABASE_VIDEO_SERVICE: video-service
      POSTGRES_SCHEMAS_VIDEO_SERVICE: public
      POSTGRES_USER_VIDEO_SERVICE: video_user:video_pass

      POSTGRES_DATABASE_MEDIA_SERVICE: media-service
      POSTGRES_SCHEMAS_MEDIA_SERVICE: public
      POSTGRES_USER_MEDIA_SERVICE: media_user:media_pass

      POSTGRES_DATABASE_JOB_SERVICE: job-service
      POSTGRES_SCHEMAS_JOB_SERVICE: public
      POSTGRES_USER_JOB_SERVICE: job_user:job_pass

      POSTGRES_DATABASE_WORKFLOW_SERVICE: workflow-service
      POSTGRES_SCHEMAS_WORKFLOW_SERVICE: public
      POSTGRES_USER_WORKFLOW_SERVICE: workflow_user:workflow_pass
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U mediamoth"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    networks:
      - mediamoth-network

  redis:
    image: redis:7-alpine
    container_name: redis
    restart: unless-stopped
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
    networks:
      - mediamoth-network

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.3
    container_name: elasticsearch
    restart: unless-stopped
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - xpack.security.enrollment.enabled=false
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s
    networks:
      - mediamoth-network

  media-service:
    image: mediamoth/media-service:latest
    container_name: media-service
    restart: unless-stopped
    command: ["dual"]
    depends_on:
      postgres-db:
        condition: service_healthy
      kafka:
        condition: service_healthy
    networks:
      - mediamoth-network

  workflow-service:
    image: mediamoth/workflow-service:latest
    container_name: workflow-service
    restart: unless-stopped
    command: ["dual"]
    depends_on:
      postgres-db:
        condition: service_healthy
      kafka:
        condition: service_healthy
      redis:
        condition: service_healthy
      search-service:
        condition: service_started
    networks:
      - mediamoth-network

  job-service:
    image: mediamoth/job-service:latest
    container_name: job-service
    restart: unless-stopped
    command: ["dual"]
    depends_on:
      postgres-db:
        condition: service_healthy
      kafka:
        condition: service_healthy
      media-service:
        condition: service_started
      workflow-service:
        condition: service_started
      video-service:
        condition: service_started
    networks:
      - mediamoth-network

  search-service:
    image: mediamoth/search-service:latest
    container_name: search-service
    restart: unless-stopped
    command: ["dual"]
    depends_on:
      kafka:
        condition: service_healthy
      elasticsearch:
        condition: service_healthy
    networks:
      - mediamoth-network

  video-service:
    image: mediamoth/video-service:latest
    container_name: video-service
    restart: unless-stopped
    command: ["/app/main", "dual"]
    depends_on:
      postgres-db:
        condition: service_healthy
      kafka:
        condition: service_healthy
    networks:
      - mediamoth-network

  mediamoth-client:
    image: mediamoth/mediamoth-client:latest
    container_name: mediamoth-client
    restart: unless-stopped
    ports:
      - "3000:3000"
    depends_on:
      - job-service
      - media-service
      - workflow-service
      - video-service
      - search-service
    networks:
      - mediamoth-network
    environment:
      GRPC_JOB_SERVICE_URL: "job-service:50051"
      GRPC_MEDIA_SERVICE_URL: "media-service:50051"
      GRPC_SEARCH_SERVICE_URL: "search-service:50051"
      GRPC_WORKFLOW_SERVICE_URL: "workflow-service:50051"

volumes:
  postgres_data:
  elasticsearch_data:
  kafka_data:
  redis_data:

networks:
  mediamoth-network:
    name: mediamoth-network
    driver: bridge

2. Start MediaMoth

Start all services with a single command:

bash
docker-compose up -d

This will:

  • Download all required Docker images
  • Create and start all MediaMoth services
  • Set up the required databases and message queues
  • Start the web client on port 3000

3. Verify Services are Running

Check that all services are healthy:

bash
docker-compose ps

You should see every service in the "running" state, with services that define healthchecks reported as "healthy". The initial startup may take 1-2 minutes while dependent services wait for Kafka, PostgreSQL, Redis, and Elasticsearch to become healthy.
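If you want to block until a container is actually healthy rather than eyeballing `docker-compose ps`, you can poll its healthcheck with `docker inspect`; `wait_healthy` is an illustrative helper, and the container name `kafka` matches the compose file above:

```bash
# Sketch: poll a container until its healthcheck reports "healthy".
wait_healthy() {
  name="$1"; tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    status="$(docker inspect --format '{{.State.Health.Status}}' "$name" 2>/dev/null)"
    [ "$status" = "healthy" ] && return 0
    i=$((i + 1))
    sleep 2
  done
  return 1
}
# Usage: wait_healthy kafka && echo "kafka is healthy"
```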

To view logs from all services:

bash
docker-compose logs -f

Or view logs from a specific service:

bash
docker-compose logs -f mediamoth-client

4. Access the Web Interface

Once all services are running, open your browser and navigate to:

http://localhost:3000

You should see the MediaMoth web interface where you can create pipelines, submit jobs, and monitor media processing tasks.
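From a script, you can confirm the client answers over HTTP before opening a browser; `web_up` is a hypothetical helper built on `curl`:

```bash
# Sketch: succeed only if the web client responds on the given port.
web_up() { curl -fsS -o /dev/null "http://localhost:${1:-3000}"; }
# Usage: web_up 3000 && echo "client is reachable"
```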

Service Architecture

MediaMoth consists of several microservices:

  • Media Service: Manages media assets and metadata
  • Workflow Service: Orchestrates pipeline execution and job workflows
  • Job Service: Handles job scheduling and execution
  • Video Service: Processes video conversion and transcoding tasks
  • Search Service: Provides search capabilities using Elasticsearch
  • Client: Web interface for interacting with MediaMoth

Stopping MediaMoth

To stop all services:

bash
docker-compose down

To stop and remove all data (databases, volumes):

bash
docker-compose down -v
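Because `down -v` deletes the named volumes irreversibly, you may want to archive them first. `backup_volume` below is an illustrative sketch that streams a volume into a tarball via a throwaway Alpine container:

```bash
# Sketch: archive a named volume (e.g. postgres_data) before `down -v`.
backup_volume() {
  vol="$1"; out="$2"
  docker run --rm -v "$vol":/data -v "$PWD":/backup alpine \
    tar czf "/backup/$out" -C /data .
}
# Usage: backup_volume postgres_data postgres_data.tar.gz
```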

Advanced Configuration

SSH Access for Remote Processing (Optional)

If you need the video service to access remote storage via SSH, add the following to the video-service section and adjust the values:

yaml
environment:
  SSH_KNOWN_HOSTS: "your-remote-host.com ssh-rsa AAAAB3NzaC1yc2E..."
volumes:
  - ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro

Replace the SSH_KNOWN_HOSTS value with your actual known hosts entry, and adjust the SSH key path as needed.
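As an alternative to inlining the entry, a known_hosts file can be mounted read-only. Note the container-side path `/root/.ssh/known_hosts` is an assumption here; adjust it to wherever the service actually reads SSH configuration:

```yaml
volumes:
  - ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro
  - ~/.ssh/known_hosts:/root/.ssh/known_hosts:ro
```

You can populate the host-side file with `ssh-keyscan your-remote-host.com >> ~/.ssh/known_hosts`.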

Troubleshooting

Services won't start

Inspect the service logs for errors such as out-of-memory kills or bind failures on ports that are already in use:

bash
docker-compose logs
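To rule out port collisions specifically, a small helper around `nc` (netcat) can probe the one published port; `port_free` is illustrative, and assumes `nc` is installed:

```bash
# Sketch: succeed if nothing is listening on the given local TCP port.
port_free() {
  ! nc -z localhost "$1" 2>/dev/null
}
# Usage: port_free 3000 && echo "port 3000 is free"
```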

Port conflicts

If port 3000 is already in use, you can change it in the docker-compose.yaml:

yaml
ports:
  - "8080:3000"  # Access on localhost:8080 instead

Released under the MIT License.