# Worker Manager

NEW in v22.9

The worker manager and its functionality were introduced in version 22.9.

The details in this section are intended for more advanced usage and are not necessary to get started.

The purpose of the manager is to create consistency and flexibility between development and production environments. Whether you run a single worker or multiple workers, with or without auto-reload, the experience will be the same.

In general it looks like this:

When you run Sanic, the main process instantiates a WorkerManager. That manager is in charge of running one or more WorkerProcess instances. There are generally two kinds of processes:

  • server processes, and
  • non-server processes.

For the sake of ease, the User Guide generally will use the term "worker" or "worker process" to mean a server process, and "Manager" to mean the single worker manager running in your main process.

# How Sanic Server starts processes

Sanic will start processes using the spawn start method. This means that for every process/worker, the global scope of your application will be executed in its own process. The practical impact of this is that if you do not run Sanic with the CLI, you will need to nest the execution code inside an `if __name__ == "__main__":` block to make sure it only runs in the main module.

```python
from sanic import Sanic

app = Sanic("MyApp")

if __name__ == "__main__":
    app.run(port=9999)
```

If you do not, you are likely to see an error message like this:

```
sanic.exceptions.ServerError: Sanic server could not start: [Errno 98] Address already in use.
This may have happened if you are running Sanic in the global scope and not inside of a `if __name__ == "__main__"` block.
See more information: https://sanic.dev/en/guide/deployment/manager.html#how-sanic-server-starts-processes
```

The likely fix for this problem is nesting your Sanic run call inside of the `if __name__ == "__main__":` block. If you continue to receive this message after nesting, or if you see it while using the CLI, then the port you are trying to use is not available on your machine and you must select another port.

# Starting a worker

All worker processes must send an acknowledgement when starting. This happens under the hood, and you as a developer do not need to do anything. However, the Manager will exit with status code 1 if one or more workers do not send that ack message. By default, the Manager will wait five (5) seconds to receive the ack.

If your application crashes after five (5) seconds, the issue is likely that your workers are unable to start in time. You should review the traceback for errors related to your code.

If you know that you will need more than five (5) seconds to start, you can monkeypatch the threshold on the Manager. The threshold does not include anything inside of a listener; it is limited to the execution time of everything in the global scope of your application.

If you run into this issue, it may indicate a need to look deeper into what is causing the slow startup.

```python
from sanic.worker.manager import WorkerManager

WorkerManager.THRESHOLD = 100  # Value is in 0.1s increments, so 100 == 10 seconds
```

# Using shared context between worker processes

Python provides a few methods for exchanging objects, synchronizing, and sharing state between processes. This usually involves objects from the multiprocessing and ctypes modules.

If you are familiar with these objects and how to work with them, you will be happy to know that Sanic provides an API for sharing these objects between your worker processes. If you are not familiar, you are encouraged to read through the Python documentation linked above and try some of the examples before proceeding with implementing shared context.
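
As a quick refresher, here is a minimal stdlib-only sketch (independent of Sanic, with arbitrary names) that uses a Queue to exchange objects and a Value to share state between processes:

```python
from multiprocessing import Process, Queue, Value

def worker(q, counter):
    # Pull one item off the queue and add it to the shared counter
    item = q.get()
    with counter.get_lock():
        counter.value += item

def main():
    q = Queue()
    counter = Value("i", 0)  # a shared 32-bit signed integer
    procs = [Process(target=worker, args=(q, counter)) for _ in range(2)]
    for p in procs:
        p.start()
    for _ in procs:
        q.put(5)
    for p in procs:
        p.join()
    return counter.value

if __name__ == "__main__":
    print(main())  # each worker added 5, so this prints 10
```

These are the kinds of objects that are appropriate to share between worker processes.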

Similar to how application context allows an application to share state across the lifetime of the application with app.ctx, shared context provides the same for the special objects mentioned above. This context is available as app.shared_ctx and should ONLY be used to share objects intended for this purpose.

The shared_ctx will:

  • NOT share regular objects like int, dict, or list
  • NOT share state between Sanic instances running on different machines
  • NOT share state to non-worker processes
  • only share state between workers managed by the same Manager

Attaching an inappropriate object to shared_ctx will likely result in a warning, not an error. You should be careful not to accidentally add an unsafe object to shared_ctx, as it may not work as expected. If you were directed here because of one of those warnings, that is probably what happened.

In order to create a shared object you must create it in the main process and attach it inside of the main_process_start listener.

```python
from multiprocessing import Queue

@app.main_process_start
async def main_process_start(app):
    app.shared_ctx.queue = Queue()
```

Trying to attach to the shared_ctx object outside of this listener may result in a RuntimeError.

After creating the objects in the main_process_start listener and attaching them to the shared_ctx, they will be available in your workers wherever the application instance is available (for example: listeners, middleware, request handlers).

```python
from sanic.response import json

@app.get("/")
async def handler(request):
    # The same queue created in main_process_start, now usable in a worker
    request.app.shared_ctx.queue.put("hello")
    return json({"queued": True})
```

# Access to the multiplexer

The application instance has access to an object for interacting with the Manager and other worker processes. The object is attached as the app.multiplexer property, but it is more easily accessed by its alias: app.m.

For example, you can get access to the current worker state.

```python
from sanic import Request

@app.get("/")
async def print_state(request: Request):
    print(request.app.m.state)

# {'server': True, 'state': 'ACKED', 'pid': 99999, 'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc), 'starts': 2, 'restart_at': datetime.datetime(2022, 10, 1, 0, 0, 12, 861332, tzinfo=datetime.timezone.utc)}
```

The multiplexer can also be used to terminate the Manager or to restart worker processes:

```python
# shutdown the entire application and all processes
app.m.terminate()

# restart the current worker only
app.m.restart()

# restart specific workers only (comma delimited)
app.m.restart("Sanic-Server-4-0,Sanic-Server-7-0")

# restart ALL workers
app.m.restart(all_workers=True)
```

# Worker state

As shown above, the multiplexer can report on the state of the current running worker. However, it also contains the state of ALL processes being run.

```python
from sanic import Request

@app.get("/")
async def print_state(request: Request):
    print(request.app.m.workers)

# {
#     'Sanic-Main': {'pid': 99997},
#     'Sanic-Server-0-0': {
#         'server': True,
#         'state': 'ACKED',
#         'pid': 9999,
#         'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc),
#         'starts': 2,
#         'restart_at': datetime.datetime(2022, 10, 1, 0, 0, 12, 861332, tzinfo=datetime.timezone.utc)
#     },
#     'Sanic-Reloader-0': {
#         'server': False,
#         'state': 'STARTED',
#         'pid': 99998,
#         'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc),
#         'starts': 1
#     }
# }
```

# Built-in non-server processes

As mentioned, the Manager also has the ability to run non-server processes. Sanic comes with two built-in types of non-server processes, and allows for creating custom processes.

The two built-in processes are:

  • the auto-reloader, optionally enabled to watch the file system for changes and trigger a restart, and
  • the inspector, optionally enabled to provide external access to the state of the running instance.

# Inspector

Sanic has the ability to expose the state and the functionality of the multiplexer to the CLI. Currently, this requires the CLI command to be run on the same machine as the running Sanic instance. By default the inspector is disabled.

To enable it, set the config value to True.

```python
app.config.INSPECTOR = True
```

You will now have access to execute any of these CLI commands:

```
--inspect                      Inspect the state of a running instance, human readable
--inspect-raw                  Inspect the state of a running instance, JSON output
--trigger-reload               Trigger worker processes to reload
--trigger-shutdown             Trigger all processes to shutdown
```

This works by exposing a small TCP socket on your machine. You can control the location using configuration values:

```python
app.config.INSPECTOR_HOST = "localhost"
app.config.INSPECTOR_PORT = 6457
```


The inspector host and port should not be exposed outside of your local network. The protocol is not secured.

It is expected that this will be secured in the future. However, it is advised to not enable this in production unless you are confident that you trust access to the running environment.

# Running custom processes

To run a managed custom process on Sanic, you must create a callable. If that process is meant to be long-running, then it should handle a shutdown call by a SIGINT or SIGTERM signal.

The simplest method for doing that in Python is to wrap your loop in a try/except block that catches KeyboardInterrupt.

If you intend to run another application, like a bot, then it is likely that it already has capability to handle this signal and you likely do not need to do anything.

```python
from time import sleep

def my_process(foo):
    try:
        while True:
            # do the long-running work here
            sleep(1)
    except KeyboardInterrupt:
        # SIGINT arrives as KeyboardInterrupt; clean up and exit
        print("done")
```

That callable must be registered in the main_process_ready listener. It is important to note that this is NOT the same location where you should register shared context objects.

```python
@app.main_process_ready
async def ready(app: Sanic, _):
    app.manager.manage("MyProcess", my_process, {"foo": "bar"})
    # app.manager.manage(<name>, <callable>, <kwargs>)
```

# Single process mode

If you would like to opt out of running multiple processes, you can run Sanic in a single process only. In this case, the Manager will not run. You will also not have access to any features that require processes (auto-reload, the inspector, etc).

```python
if __name__ == "__main__":
    app.run(single_process=True)
```

Or, with the CLI:

```sh
sanic path.to.server:app --single-process
```