Welcome back, intrepid SpaceTimeDB architect! You’ve come a long way, learning how to build powerful, real-time applications, design schemas, write efficient reducers, and handle client synchronization. So far, our focus has largely been on the “development” aspect—getting things working. But what happens when your amazing multiplayer game or collaborative app is ready for the world? That’s where production best practices come in!

This chapter is your guide to transitioning your SpaceTimeDB application from a local development environment to a robust, scalable, and secure production deployment. We’ll cover essential topics like environment configuration, deployment strategies, how to monitor your application in the wild, and crucial security considerations. By the end of this chapter, you’ll have a solid understanding of what it takes to confidently launch and maintain your SpaceTimeDB-powered systems, ensuring they’re ready for prime time.

Ready to make your SpaceTimeDB project shine in production? Let’s dive in!

Core Concepts for Production Readiness

Moving to production means thinking beyond just “does it work?” to “does it work reliably, securely, and efficiently for everyone, all the time?” This requires a shift in mindset and a focus on several key areas.

1. Environment Configuration: Keeping Dev and Prod Separate

One of the first things you’ll encounter is the need for different settings between your development environment and your production environment. For example, you might want verbose logging during development but only critical errors in production. Database connection strings, API keys, and external service URLs will definitely differ.

What is it? Environment configuration refers to managing settings and parameters that change based on the deployment environment (development, staging, production, etc.). Why is it important?

  • Security: Prevents sensitive production credentials from being exposed in development code.
  • Flexibility: Allows you to tune application behavior (e.g., logging verbosity, performance settings) for each environment.
  • Reliability: Ensures your application connects to the correct services and databases.

How it functions: The most common and secure way to handle environment-specific configurations is by using environment variables. Your application reads these variables at runtime, adapting its behavior accordingly.

SpaceTimeDB modules (written in Rust) can access environment variables using standard Rust libraries. For example, std::env::var("MY_SETTING").
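For instance, a module or supporting service might read a setting with a safe fallback. A minimal sketch (`MY_SETTING` and the `"info"` default are illustrative names, not SpaceTimeDB-specific):

```rust
use std::env;

fn main() {
    // Read a setting from the environment, falling back to a safe default
    // when the variable is unset. Never hardcode production values here.
    let log_level = env::var("MY_SETTING").unwrap_or_else(|_| "info".to_string());
    println!("effective setting: {}", log_level);
}
```

In production you would set `MY_SETTING` in your deployment configuration (Docker Compose, Kubernetes, etc.) rather than in code.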

2. Deployment Strategies: Getting Your Code to the Cloud

Once your SpaceTimeDB module and client application are ready, you need a way to package and deploy them.

A. Containerization with Docker

Containerization has become the de facto standard for deploying modern applications. Docker allows you to package your application and all its dependencies into a single, isolated unit called a container.

What is it? Docker packages your application, its dependencies, and its configuration into a portable image. This image can then be run as a container on any system that has Docker installed. Why is it important?

  • Consistency: “Works on my machine” becomes “works everywhere” because the environment is standardized.
  • Isolation: Containers run in isolation from each other and the host system.
  • Portability: Easily move your application between different environments (local, staging, production).

How it functions: You define a Dockerfile that specifies how to build your application’s image. This includes things like the base operating system, installing dependencies, copying your code, and defining the command to run your application.

For SpaceTimeDB, you would containerize your compiled SpaceTimeDB module and client applications separately. The SpaceTimeDB server itself can also be run in a container.

B. Orchestration with Kubernetes (Briefly)

For complex, large-scale deployments, especially those involving multiple services and high availability, Kubernetes is the industry leader for container orchestration. It automates the deployment, scaling, and management of containerized applications. While a deep dive into Kubernetes is beyond this chapter, understand that if your SpaceTimeDB application grows significantly, you’ll likely deploy it to a Kubernetes cluster.

C. Continuous Integration/Continuous Deployment (CI/CD)

CI/CD pipelines automate the process of building, testing, and deploying your application.

What is it?

  • Continuous Integration (CI): Developers frequently merge their code changes into a central repository. Automated builds and tests run to detect integration issues early.
  • Continuous Deployment (CD): After successful CI, changes are automatically deployed to production.

Why is it important?

  • Speed: Faster release cycles.
  • Reliability: Automated tests catch bugs before deployment.
  • Consistency: Standardized deployment process.

How it functions: Tools like GitHub Actions, GitLab CI/CD, Jenkins, or CircleCI monitor your code repository. When changes are pushed, they trigger a pipeline that compiles your SpaceTimeDB module, runs tests, builds Docker images, and pushes them to a container registry, eventually deploying them to your chosen environment.
flowchart TD
    dev[Developer Code] --> git[Git Repository]
    git --> ci_trigger[CI/CD Trigger]
    ci_trigger --> ci_build[CI Build & Test Module]
    ci_build --> ci_docker[Build Docker Image]
    ci_docker --> registry[Push to Container Registry]
    registry --> cd_deploy[CD Deploy to Production]
    cd_deploy --> prod_stdb[Production SpaceTimeDB Server]
    cd_deploy --> prod_app[Production Client App]

Figure 17.1: Simplified CI/CD Pipeline for SpaceTimeDB applications
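A pipeline like the one in Figure 17.1 could be sketched as a GitHub Actions workflow along these lines. This is illustrative only: the registry URL, image name, and step layout are placeholders you would adapt to your own project.

```yaml
# .github/workflows/deploy.yml -- illustrative sketch, adapt names and steps
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build SpaceTimeDB module
        run: cargo build --target wasm32-unknown-unknown --release
        working-directory: my_stdb_module
      - name: Build and push Docker image
        run: |
          docker build -t my-registry.example.com/my-stdb-app:${{ github.sha }} .
          docker push my-registry.example.com/my-stdb-app:${{ github.sha }}
```

A real pipeline would also run tests between the build and push steps, and a separate deploy job would roll the new image out to your environment.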

3. Observability: Seeing What’s Happening in Production

Once your application is live, you need to know if it’s healthy, performing well, and if users are encountering issues. This is where observability comes in, typically through logging, monitoring, and alerting.

A. Logging

What is it? Recording events, errors, and information about your application’s execution. Why is it important? Essential for debugging issues, understanding application behavior, and auditing. How it functions:

  • Structured Logging: Instead of plain text, log messages are formatted (e.g., JSON) with key-value pairs ({"level": "info", "message": "User connected", "user_id": "abc"}). This makes logs easier to search and analyze with log management tools (e.g., ELK Stack, Splunk, Datadog).
  • Log Levels: Use different severities (DEBUG, INFO, WARN, ERROR, CRITICAL) to control the verbosity. In production, you might only log INFO level and above to reduce noise.
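To make the structured-logging idea concrete, here is a minimal sketch in plain Rust of what a structured log line looks like. A real project would use the log or tracing crates with a JSON formatter rather than hand-building strings like this:

```rust
/// Hand-built JSON log line, for illustration only. In practice use a
/// logging framework with a JSON formatter instead of format!().
fn structured_log(level: &str, message: &str, user_id: &str) -> String {
    format!(
        r#"{{"level": "{}", "message": "{}", "user_id": "{}"}}"#,
        level, message, user_id
    )
}

fn main() {
    // Key-value pairs make this line trivially searchable in a log system.
    println!("{}", structured_log("info", "User connected", "abc"));
}
```

Compare this to a free-text line like "user abc connected": the structured form can be filtered by `level` or `user_id` in any log management tool without regex guesswork.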

SpaceTimeDB reducers and client logic should use structured logging. Rust’s log crate combined with a formatter like env_logger or tracing can achieve this.

B. Monitoring

What is it? Collecting and analyzing metrics (numerical data points) about your application and infrastructure. Why is it important? Provides insights into performance, resource utilization, and overall system health. Helps identify trends and potential problems before they become critical. How it functions:

  • Infrastructure Metrics: CPU usage, memory consumption, network I/O, disk space for your SpaceTimeDB server and client hosts.
  • Application Metrics: SpaceTimeDB-specific metrics like:
    • Number of active client connections.
    • Reducer execution times.
    • Number of reducer invocations.
    • Database query latency.
    • Error rates from reducers or client synchronization.
  • Tools: Prometheus, Grafana, Datadog, New Relic.

C. Alerting

What is it? Automatically notifying relevant personnel when specific metrics or log patterns indicate a problem. Why is it important? Allows for proactive response to issues, minimizing downtime and impact on users. How it functions: You define thresholds for your monitored metrics (e.g., “CPU usage > 80% for 5 minutes,” “Error rate > 5%”). When a threshold is breached, an alert is triggered via email, Slack, PagerDuty, etc.
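As a sketch of the logic an alerting rule encodes, consider checking whether a metric stayed above a threshold for every sample in a window. The sample values, the 80% threshold, and the one-sample-per-minute assumption below are all illustrative:

```rust
/// Returns true if every sample in the window breaches the threshold,
/// i.e. "CPU > 80% for the whole window". An empty window never alerts.
fn sustained_breach(samples: &[f64], threshold: f64) -> bool {
    !samples.is_empty() && samples.iter().all(|&s| s > threshold)
}

fn main() {
    // One sample per minute over five minutes.
    let window = [85.0, 91.2, 88.7, 90.1, 86.4];
    if sustained_breach(&window, 80.0) {
        // A real system would notify via email, Slack, PagerDuty, etc.
        println!("ALERT: CPU above 80% for 5 minutes");
    }
}
```

Monitoring systems like Prometheus express exactly this kind of rule declaratively (a threshold plus a `for:` duration), so you rarely write the check by hand; the point is to see what the rule computes.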

4. Security in Production: Protecting Your Application and Data

Security is paramount in production. A single vulnerability can compromise your entire system.

A. Network Security

What is it? Protecting your application’s network access. Why is it important? Prevents unauthorized access and attacks. How it functions:

  • Firewalls: Restrict incoming and outgoing network traffic to only what’s necessary.
  • Virtual Private Clouds (VPCs): Isolate your cloud resources in a private network.
  • TLS/SSL: Encrypt all data in transit (e.g., between clients and the SpaceTimeDB server, between your SpaceTimeDB server and other backend services). SpaceTimeDB clients typically connect over WebSockets, which should always be secured with WSS (WebSocket Secure).

B. Authentication and Authorization

What is it?

  • Authentication: Verifying a user’s identity (“Who are you?”).
  • Authorization: Determining what an authenticated user is allowed to do (“What can you access?”).

Why is it important? Controls access to your SpaceTimeDB data and reducers, preventing unauthorized operations. How it functions:

  • External Identity Providers (IdPs): Integrate with services like Auth0, AWS Cognito, Google Firebase Auth, or OAuth2/OpenID Connect providers. Your client application authenticates users with the IdP, receives a token, and then uses this token to connect to SpaceTimeDB.
  • SpaceTimeDB Permissions: Use SpaceTimeDB’s built-in permission system (if available, check current docs) or implement custom authorization logic within your reducers to check user roles/permissions based on data stored in tables.
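Custom authorization inside a reducer often boils down to a role check against data you store yourself. A minimal sketch of such a check; the `Role` enum and the `can_write` helper are hypothetical, not part of any SpaceTimeDB API:

```rust
#[derive(Clone, Copy, PartialEq)]
enum Role {
    Admin,
    Member,
    Guest,
}

/// Hypothetical check a reducer might run before mutating data:
/// only admins and members may write; guests are read-only.
fn can_write(role: Role) -> bool {
    matches!(role, Role::Admin | Role::Member)
}

fn main() {
    // In a real module, the role would be looked up from a table keyed by
    // the caller's identity (e.g. ctx.sender) before any mutation runs.
    assert!(can_write(Role::Member));
    assert!(!can_write(Role::Guest));
    println!("authorization checks passed");
}
```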

C. Data at Rest Encryption

What is it? Encrypting data when it’s stored on disk. Why is it important? Protects sensitive data even if the underlying storage is compromised. How it functions: Cloud providers typically offer disk encryption by default for their storage services. Ensure your SpaceTimeDB server’s data directories are on encrypted volumes.

D. Secrets Management

What is it? Securely storing and accessing sensitive information like API keys, database credentials, and private keys. Why is it important? Prevents hardcoding secrets in your code, which is a major security risk. How it functions: Use dedicated secrets management services (e.g., AWS Secrets Manager, HashiCorp Vault, Kubernetes Secrets) that encrypt and control access to your secrets. Your application retrieves secrets from these services at runtime.

5. High Availability & Disaster Recovery

Ensuring your application remains available even during failures and can recover from catastrophic events.

A. High Availability (HA)

What is it? Designing your system to operate continuously without interruption for a long time. Why is it important? Minimizes downtime, crucial for real-time applications where users expect constant connectivity. How it functions:

  • Redundancy: Running multiple instances of your SpaceTimeDB server (if SpaceTimeDB supports clustering/replication for HA, which is common for real-time databases).
  • Load Balancing: Distributing client connections across multiple SpaceTimeDB instances.
  • Failover: Automatically switching to a healthy instance if one fails.

B. Disaster Recovery (DR)

What is it? A plan to recover your application and data after a major outage or disaster (e.g., entire data center failure). Why is it important? Ensures business continuity and prevents data loss. How it functions:

  • Backups: Regularly backing up your SpaceTimeDB database (both schema and data) to an offsite location.
  • Recovery Point Objective (RPO): The maximum amount of data you are willing to lose (e.g., 5 minutes of data).
  • Recovery Time Objective (RTO): The maximum amount of time you can tolerate for your application to be down.
  • Testing: Regularly testing your backup and restore procedures to ensure they work.

6. Schema Evolution & Migrations

Your application will evolve, and so will your database schema. How do you handle changes without disrupting live users?

What is it? The process of applying changes to your SpaceTimeDB schema (tables, reducers, views) in a controlled and safe manner in production. Why is it important? Allows your application to evolve while maintaining data integrity and minimizing downtime. How it functions:

  • Backward Compatibility: Design schema changes so older client versions can still interact with the new schema (e.g., adding nullable columns, avoiding renaming columns).
  • Migration Scripts: Use version-controlled scripts to apply schema changes. For SpaceTimeDB, this might involve using the CLI to apply new module versions that define schema changes.
  • Phased Rollouts: Gradually deploy new versions of your application and schema, monitoring for issues.
  • Rollback Plan: Always have a plan to revert to the previous state if a migration causes problems.
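In Rust terms, the “adding nullable columns” guidance usually means adding Option fields, so rows written under the old schema remain representable with None. The `Player` struct and `migrate_v1_row` helper below are illustrative and not tied to SpaceTimeDB’s table macros:

```rust
/// Version 2 of a player row: `nickname` was added as an Option so that
/// rows created under version 1 (which had no nickname) map cleanly to None.
struct Player {
    id: u32,
    score: u32,
    nickname: Option<String>, // new, nullable column
}

/// Old rows simply get None for the new column; no existing data is rewritten.
fn migrate_v1_row(id: u32, score: u32) -> Player {
    Player { id, score, nickname: None }
}

fn main() {
    let p = migrate_v1_row(7, 1200);
    assert!(p.nickname.is_none());
    println!("player {} migrated with score {}", p.id, p.score);
}
```

The same principle applies in reverse: old clients that never read `nickname` keep working against the new schema, which is what makes the change backward-compatible.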

7. Performance Optimization in Production

While you optimize during development, production reveals real-world bottlenecks.

What is it? Continuously identifying and addressing performance issues in your live application. Why is it important? Ensures a smooth, responsive user experience and efficient resource utilization. How it functions:

  • Indexing: Ensure appropriate indexes are defined on your SpaceTimeDB tables to speed up common queries.
  • Efficient Reducers: Profile your reducers to identify slow operations. Optimize data access patterns.
  • Client-Side Optimization: Minimize data fetching, optimize client-side rendering, and manage subscription lifecycles effectively.
  • Load Testing: Simulate high user traffic to identify breaking points before they occur in production.
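A simple way to profile a suspect code path is to time it and log when it exceeds a budget. A sketch using std::time; the 5 ms budget and the helper itself are illustrative, not a SpaceTimeDB facility:

```rust
use std::time::{Duration, Instant};

/// Runs `work`, returning its result and the elapsed time, and logs a
/// warning when the elapsed time exceeds the given budget.
fn timed<T>(budget: Duration, work: impl FnOnce() -> T) -> (T, Duration) {
    let start = Instant::now();
    let result = work();
    let elapsed = start.elapsed();
    if elapsed > budget {
        // Inside a module you would use log::warn! instead of eprintln!.
        eprintln!("slow operation: {:?} (budget {:?})", elapsed, budget);
    }
    (result, elapsed)
}

fn main() {
    let (sum, took) = timed(Duration::from_millis(5), || (1..=1000u64).sum::<u64>());
    println!("sum = {} in {:?}", sum, took);
}
```

Wrapping the body of a reducer this way during an investigation gives you per-invocation latency in your logs, which you can then aggregate in your monitoring stack.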

Step-by-Step Implementation: Dockerizing Your SpaceTimeDB Module and Basic Logging

Let’s put some of these concepts into practice. We’ll take a simple SpaceTimeDB module and containerize it using Docker, then demonstrate how to use environment variables for configuration and implement basic structured logging.

For this exercise, we’ll assume you have a basic SpaceTimeDB module (e.g., from a previous chapter) that defines a simple table and a reducer.

First, let’s create a minimal lib.rs for our SpaceTimeDB module that logs a message when a reducer is called.

1. Prepare Your SpaceTimeDB Module

Create a new directory for your module, let’s call it my_stdb_module. Inside my_stdb_module, create a Cargo.toml and src/lib.rs.

my_stdb_module/Cargo.toml:

[package]
name = "my_stdb_module"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib"]

[dependencies]
# Use the latest stable version of SpacetimeDB as of 2026-03-14
# Check official SpacetimeDB documentation for the most accurate current version
spacetimedb = "2.2.0" # Placeholder, verify latest on spacetimedb.com/docs or GitHub releases
log = "0.4.21" # For logging
env_logger = "0.11.3" # For basic environment-based logger

Note: Please verify the actual latest stable versions for spacetimedb, log, and env_logger on their respective official documentation or crates.io as of your build date.

my_stdb_module/src/lib.rs:

use spacetimedb::{spacetimedb, ReducerContext};

// NOTE: macro/attribute syntax differs between SpaceTimeDB releases; the
// `#[spacetimedb(...)]` style below is one common form. Check the docs for
// the version you are building against.

// Initialize the logger once, when the module is loaded.
#[spacetimedb(init)]
fn init() {
    // try_init() avoids a "logger already initialized" panic in environments
    // (tests, module hosts) that install their own logger first.
    let _ = env_logger::builder()
        .filter_level(log::LevelFilter::Info) // default to INFO level
        .try_init();

    log::info!("SpaceTimeDB module initialized!");
}

// Define a simple table
#[spacetimedb(table)]
pub struct Counter {
    #[primarykey]
    pub id: u32,
    pub value: u32,
}

// Define a reducer that increments the counter and logs
#[spacetimedb(reducer)]
pub fn increment_counter(ctx: ReducerContext, id: u32) {
    // Log the reducer call with structured data
    log::info!(
        target: "my_stdb_module::reducer",
        "Reducer `increment_counter` called by user: {:?} with id: {}",
        ctx.sender,
        id
    );

    // Insert a new row or update the existing one. Calling insert() for an
    // id that already exists would violate the primary key, so branch first.
    match Counter::filter_by_id(&id) {
        Some(mut counter) => {
            counter.value += 1;
            let new_value = counter.value;
            Counter::update_by_id(&id, counter);
            log::info!("Counter with id {} incremented to {}", id, new_value);
        }
        None => {
            Counter::insert(Counter { id, value: 1 });
            log::info!("Counter with id {} created with value 1", id);
        }
    }
}

Explanation:

  1. We include log and env_logger crates.
  2. The init function ensures env_logger is set up when the module starts. We use try_init() to avoid panicking if the logger is already initialized (e.g., in a test environment). We set a default Info level.
  3. The increment_counter reducer now uses log::info! to output messages. We’re using target for better log categorization and including relevant data like ctx.sender and id in the message.

2. Compile Your SpaceTimeDB Module

Navigate to my_stdb_module in your terminal and compile it:

spacetime build

This compiles your module to a .wasm file (the exact output path depends on your CLI version; with a plain cargo build it lands under target/wasm32-unknown-unknown/release/). This is the compiled SpaceTimeDB module that the SpaceTimeDB server will load.

3. Dockerize the SpaceTimeDB Server with Your Module

Now, let’s create a Dockerfile to run the SpaceTimeDB server and load our module. We’ll use a multi-stage build to keep the final image small.

Create a file named Dockerfile in the root of your project (one level up from my_stdb_module):

# Stage 1: Build the SpaceTimeDB module
FROM rust:1.76.0-slim-bookworm AS module-builder

WORKDIR /app
COPY my_stdb_module ./my_stdb_module
RUN cd my_stdb_module && cargo build --target wasm32-unknown-unknown --release

# Stage 2: Create the final SpaceTimeDB server image
# Ideally you would build FROM an official SpaceTimeDB image if Clockwork Labs
# publishes one for your version; check the official docs and registry first.
# Here we install the CLI into a minimal Debian base instead.
FROM debian:bookworm-slim

# Install the SpaceTimeDB CLI. The curl | sh installer is convenient, but for
# production prefer downloading and pinning a specific release binary.
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates && \
    curl -sSf https://install.spacetimedb.com | sh && \
    apt-get remove -y curl && apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*
# Adjust this if the installer places the binary elsewhere on your system.
ENV PATH="/root/.spacetime/bin:${PATH}"

WORKDIR /app

# Copy the compiled module from the builder stage
COPY --from=module-builder /app/my_stdb_module/target/wasm32-unknown-unknown/release/my_stdb_module.wasm ./modules/my_stdb_module.wasm

# Expose the default SpacetimeDB port
EXPOSE 3000

# Command to run the SpaceTimeDB server.
# NOTE: exact subcommands vary by SpaceTimeDB version. A typical flow is to
# start the server (`spacetime start`) and then publish the module to it
# (`spacetime publish`), e.g. from an entrypoint script. Consult the docs.
ENTRYPOINT ["spacetime", "start"]
# Persistent data should live on a mounted volume; we configure the data
# directory via an environment variable in docker-compose.

Explanation:

  1. Multi-stage build:
    • module-builder stage: Uses a Rust image to compile our my_stdb_module into a .wasm file. This keeps the final image lean.
    • Final stage: Uses a minimal debian:bookworm-slim base.
  2. SpaceTimeDB CLI Installation: We use curl | sh to install the CLI, which is a common pattern for CLI tools, but in production you might prefer downloading and pinning a specific binary version for better control and security.
  3. Copy Module: The compiled .wasm module is copied from the module-builder stage into the final image.
  4. Expose Port: SpaceTimeDB typically listens on port 3000.
  5. ENTRYPOINT: This defines the command that runs when the container starts, launching the SpaceTimeDB server with our compiled module available in the image. We’ll manage persistence via Docker volumes and environment variables.

4. Create a docker-compose.yml for Local Testing

To easily run and manage our SpaceTimeDB server, we’ll use docker-compose. This allows us to define services, networks, and volumes.

Create a docker-compose.yml file in the same directory as your Dockerfile:

version: '3.8'

services:
  spacetimedb:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      # Production-like settings
      RUST_LOG: "info,my_stdb_module=debug" # Log INFO by default, but DEBUG for our module
      SPACETIMEDB_DATA_DIR: "/data" # Where SpaceTimeDB will store its persistent data
    volumes:
      - stdb_data:/data # Mount a named volume for persistent data

volumes:
  stdb_data:

Explanation:

  1. spacetimedb service: Defines our SpaceTimeDB server.
  2. build: Tells Docker Compose to build the image using our Dockerfile.
  3. ports: Maps container port 3000 to host port 3000, allowing client connections.
  4. environment:
    • RUST_LOG: This is crucial for controlling logging. We set it to info for general output but debug specifically for our my_stdb_module target. In a true production setup, you might remove debug for your module unless actively troubleshooting.
    • SPACETIMEDB_DATA_DIR: We define a directory inside the container for SpaceTimeDB’s data. This allows us to mount a volume for persistence.
  5. volumes: We mount a named Docker volume (stdb_data) to the /data directory inside the container. This ensures that even if the container is removed, your SpaceTimeDB data persists. This is critical for production!

5. Run Your Dockerized SpaceTimeDB Server

Now, from your project root (where Dockerfile and docker-compose.yml are), run:

docker compose up --build

You should see output indicating the SpaceTimeDB server starting, and then your init function’s log message:

spacetimedb-1  | [2026-03-14T10:00:00Z INFO  my_stdb_module] SpaceTimeDB module initialized!

6. Test with a Client (or CLI)

Open another terminal and use the SpaceTimeDB CLI to invoke your reducer. The exact subcommand syntax varies by CLI version (check spacetime --help); a typical call looks like:

spacetime call my_stdb_module increment_counter 1 # Call the reducer with ID 1

Back in your Docker logs, you should now see the INFO and DEBUG level messages from your reducer:

spacetimedb-1  | [2026-03-14T10:00:05Z INFO  my_stdb_module::reducer] Reducer `increment_counter` called by user: Some(0x...) with id: 1
spacetimedb-1  | [2026-03-14T10:00:05Z INFO  my_stdb_module] Counter with id 1 incremented to 1

Notice how RUST_LOG makes our specific module’s logs more verbose, while keeping the general SpaceTimeDB server logs at INFO level. This is a powerful technique for debugging in production without overwhelming your log systems.

Mini-Challenge: Adding a Health Check Endpoint

A crucial part of production readiness is being able to tell if your application is actually running and healthy. Load balancers and orchestration systems (like Kubernetes) use health checks to determine if they should send traffic to an instance.

Your Challenge: Modify your my_stdb_module to include a simple “health check” mechanism. While SpaceTimeDB itself doesn’t expose HTTP endpoints directly from modules, you can create a simple health_check reducer that just returns true. A client (or an automated script) could then call this reducer to confirm the module is loaded and operational.

Steps:

  1. Add a new reducer called health_check to my_stdb_module/src/lib.rs.
  2. This reducer should take no arguments and simply log an INFO message indicating it was called. A successful call then serves as the health signal.
  3. Recompile your module (spacetime build).
  4. Rebuild and restart your Docker container (docker compose up --build).
  5. Use spacetime call my_stdb_module health_check to verify it works and observe the logs.

Hint: In most SpaceTimeDB versions, reducers do not return values to the caller; whether the call itself succeeds is the signal, and the log line confirms the module is executing. Depending on your version, the ReducerContext parameter may be required even if unused.

// Example of a health_check reducer (attribute syntax varies by version)
#[spacetimedb(reducer)]
pub fn health_check() {
    log::info!(target: "my_stdb_module::health_check", "Health check reducer called.");
}

What to Observe/Learn:

  • How to extend your SpaceTimeDB module with simple diagnostic reducers.
  • How logs provide visibility into the module’s internal state and activity.
  • The concept of a health check, even if simplified for SpaceTimeDB’s reducer-based model.

Common Pitfalls & Troubleshooting

  1. Misconfigured Environment Variables:

    • Pitfall: Your application behaves differently in production than in development, or fails to start, due to incorrect environment variable values (e.g., wrong database URL, missing API key).
    • Troubleshooting:
      • Verify: Double-check the environment variables set in your deployment configuration (Docker Compose, Kubernetes deployment, cloud service settings).
      • Log: Temporarily increase logging verbosity to DEBUG and log the values of critical environment variables at startup (be careful not to log sensitive secrets!).
      • Access: Ensure your application has the necessary permissions to read environment variables in the deployed environment.
  2. Insufficient Logging/Monitoring:

    • Pitfall: Your application crashes or performs poorly in production, but you have no idea why because there aren’t enough logs or metrics.
    • Troubleshooting:
      • Structured Logs: Implement structured logging from the start. It’s much harder to add later.
      • Log Levels: Use appropriate log levels (INFO, WARN, ERROR) for production. Enable DEBUG only when actively troubleshooting.
      • Key Metrics: Identify critical metrics (CPU, memory, network, SpaceTimeDB connections, reducer latency, error rates) and set up monitoring and alerting for them. Don’t wait for an incident to realize you’re blind.
  3. Schema Migration Issues:

    • Pitfall: Deploying a new SpaceTimeDB module with schema changes causes data loss, application errors, or downtime for existing users.
    • Troubleshooting:
      • Test Migrations: Always test your schema migration scripts thoroughly in a staging environment that mirrors production data.
      • Backward Compatibility: Design schema changes to be backward-compatible whenever possible. If not, plan for a clear deprecation and transition strategy.
      • Atomic Deployments: Use deployment strategies that apply schema changes and update application code in an atomic way or with zero downtime (e.g., blue/green deployments).
      • Rollback Plan: Have a clear rollback strategy in case a migration fails. This might involve reverting to a previous module version and restoring a database backup.

Summary

Phew! You’ve navigated the complexities of taking your SpaceTimeDB application to production. This is where the rubber meets the road, and understanding these best practices is key to success.

Here are the key takeaways from this chapter:

  • Environment Separation: Always distinguish between development and production configurations, primarily using environment variables for sensitive data and dynamic settings.
  • Containerization is King: Dockerize your SpaceTimeDB modules and client applications for consistent, portable, and isolated deployments.
  • Automate with CI/CD: Implement Continuous Integration and Deployment pipelines to streamline your release process, ensuring faster, more reliable deployments.
  • Observability is Non-Negotiable: Leverage structured logging, comprehensive monitoring, and proactive alerting to understand your application’s health and performance in real-time.
  • Security First: Implement robust network security, integrate proper authentication and authorization, encrypt data, and manage secrets securely.
  • Plan for the Worst: Design for High Availability to minimize downtime and develop a Disaster Recovery plan, including regular backups and tested restore procedures.
  • Manage Evolution: Handle schema changes gracefully with backward compatibility and well-tested migration strategies.
  • Optimize Continuously: Keep an eye on performance in production, using indexes, efficient reducer design, and client-side optimizations.

You now have a solid foundation for deploying and maintaining SpaceTimeDB applications in a production environment. This knowledge empowers you to build not just functional, but also resilient and scalable real-time systems.

What’s next? With a fully deployed and observable SpaceTimeDB application, you’re ready to explore even more advanced topics, such as deep dives into specific cloud provider integrations, advanced scaling patterns, or perhaps even contributing back to the SpaceTimeDB ecosystem!

References

  1. SpaceTimeDB Official Documentation: https://spacetimedb.com/docs
  2. Rust log crate: https://docs.rs/log/latest/log/
  3. Rust env_logger crate: https://docs.rs/env_logger/latest/env_logger/
  4. Docker Documentation: https://docs.docker.com/
  5. Docker Compose Documentation: https://docs.docker.com/compose/
  6. Continuous Integration and Delivery (CI/CD) Concepts: https://www.redhat.com/en/topics/devops/what-is-ci-cd
  7. The Twelve-Factor App (Environment Variables): https://12factor.net/config
