# Worker Manager
The worker manager and its functionality were introduced in version 22.9.
The details in this section are intended for more advanced usage and are not necessary to get started.
The purpose of the manager is to create consistency and flexibility between development and production environments. Whether you intend to run a single worker, or multiple workers, whether with, or without auto-reload: the experience will be the same.
In general, it looks like this: when you run Sanic, the main process instantiates a `WorkerManager`. That manager is in charge of running one or more `WorkerProcess` instances. There are generally two kinds of processes:

- server processes, and
- non-server processes.

For the sake of ease, the User Guide generally uses the term "worker" or "worker process" to mean a server process, and "Manager" to mean the single worker manager running in your main process.
# How Sanic Server starts processes
Sanic will start processes using the spawn start method. This means that for every process/worker, the global scope of your application will be executed in its own process. The practical impact of this is that if you do not run Sanic with the CLI, you will need to nest the execution code inside an `if __name__ == "__main__":` block to make sure it only runs in the main module.
```python
if __name__ == "__main__":
    app.run()
```
If you do not, you are likely to see an error message like this:

```
sanic.exceptions.ServerError: Sanic server could not start: [Errno 98] Address already in use.

This may have happened if you are running Sanic in the global scope and not inside of a `if __name__ == "__main__"` block.

See more information: https://sanic.dev/en/guide/deployment/manager.html#how-sanic-server-starts-processes
```

The likely fix for this problem is nesting your Sanic run call inside of the `if __name__ == "__main__":` block. If you continue to receive this message after nesting, or if you see it while using the CLI, then it means the port you are trying to use is not available on your machine and you must select another port.
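If you want to confirm whether a port is actually available before choosing another one, a quick standard-library check can tell you. This is a hypothetical helper sketch, not part of Sanic:

```python
import socket

def port_is_free(host: str, port: int) -> bool:
    """Return True if we can bind to host:port, i.e. the port is available."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        try:
            sock.bind((host, port))
        except OSError:
            return False
        return True

# Binding to port 0 asks the OS for any free port, so this always succeeds
print(port_is_free("127.0.0.1", 0))  # True
```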
# Starting a worker
All worker processes must send an acknowledgement when starting. This happens under the hood, and you as a developer do not need to do anything. However, the Manager will exit with a status code `1` if one or more workers do not send that `ack` message, or a worker process throws an exception while trying to start. If no exceptions are encountered, the Manager will wait up to thirty (30) seconds for the acknowledgement.
In the situation when you know that you will need more time to start, you can monkeypatch the Manager. The threshold does not include anything inside of a listener, and is limited to the execution time of everything in the global scope of your application.
If you run into this issue, it may indicate a need to look deeper into what is causing the slow startup.
```python
from sanic.worker.manager import WorkerManager

WorkerManager.THRESHOLD = 100  # Value is in 0.1s
```
See worker ack for more information.
As stated above, Sanic will use spawn to start worker processes. If you would like to change this behavior and are aware of the implications of using different start methods, you can modify it as shown here.
```python
from sanic import Sanic

Sanic.start_method = "fork"
```
# Worker ack
When all of your workers are running in a subprocess, a potential problem is created: deadlock. This can occur when the child processes cease to function, but the main process is unaware that this happened. Therefore, Sanic servers will automatically send an `ack` message (short for acknowledge) to the main process after startup.
In version 22.9, the `ack` timeout was short and limited to `5s`. In version 22.12, the timeout was lengthened to `30s`. If your application is shutting down after thirty seconds then it might be necessary to manually increase this threshold.

The value of `WorkerManager.THRESHOLD` is in `0.1s` increments. Therefore, to set it to one minute, you should set the value to `600`.
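Since the unit is tenths of a second, a tiny hypothetical helper (not part of Sanic) makes the conversion explicit:

```python
def seconds_to_threshold(seconds: float) -> int:
    # WorkerManager.THRESHOLD is expressed in 0.1s increments
    return int(seconds * 10)

print(seconds_to_threshold(60))  # 600
```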
This value should be set as early as possible in your application, and should ideally happen in the global scope. Setting it after the main process has started will not work.
```python
from sanic.worker.manager import WorkerManager

WorkerManager.THRESHOLD = 600
```
# Zero downtime restarts
By default, when restarting workers, Sanic will tear down the existing process first before starting a new one.
If you are intending to use the restart functionality in production then you may be interested in having zero-downtime reloading. This can be accomplished by forcing the reloader to change the order: start a new process, wait for it to `ack`, and then tear down the old process.
From the multiplexer, use the `zero_downtime` argument:

```python
app.m.restart(zero_downtime=True)
```
Added in v22.12
# Using shared context between worker processes
Python provides a few methods for exchanging objects, synchronizing, and sharing state between processes. This usually involves objects from the `multiprocessing` and `ctypes` modules.
If you are familiar with these objects and how to work with them, you will be happy to know that Sanic provides an API for sharing these objects between your worker processes. If you are not familiar, you are encouraged to read through the Python documentation linked above and try some of the examples before proceeding with implementing shared context.
Similar to how application context allows an application to share state across the lifetime of the application with `app.ctx`, shared context provides the same for the special objects mentioned above. This context is available as `app.shared_ctx` and should ONLY be used to share objects intended for this purpose.
The `shared_ctx` will:

- NOT share regular objects like `int`, `dict`, or `list`
- NOT share state between Sanic instances running on different machines
- NOT share state to non-worker processes
- only share state between server workers managed by the same Manager
Attaching an inappropriate object to `shared_ctx` will likely result in a warning, and not an error. You should be careful to not accidentally add an unsafe object to `shared_ctx` as it may not work as expected. If you are directed here because of one of those warnings, you might have accidentally used an unsafe object in `shared_ctx`.
In order to create a shared object, you must create it in the main process and attach it inside of the `main_process_start` listener.
```python
from multiprocessing import Queue

@app.main_process_start
async def main_process_start(app):
    app.shared_ctx.queue = Queue()
```
Trying to attach to the `shared_ctx` object outside of this listener may result in a `RuntimeError`.
After creating the objects in the `main_process_start` listener and attaching them to the `shared_ctx`, they will be available in your workers wherever the application instance is available (for example: listeners, middleware, request handlers).
```python
from multiprocessing import Queue

@app.get("")
async def handler(request):
    request.app.shared_ctx.queue.put(1)
    ...
```
# Access to the multiplexer
The application instance provides an object for interacting with the Manager and other worker processes. The object is attached as the `app.multiplexer` property, but it is more easily accessed by its alias: `app.m`.
For example, you can get access to the current worker state.
```python
@app.on_request
async def print_state(request: Request):
    print(request.app.m.name)
    print(request.app.m.pid)
    print(request.app.m.state)
```

```
Sanic-Server-0-0
99999
{'server': True, 'state': 'ACKED', 'pid': 99999, 'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc), 'starts': 2, 'restart_at': datetime.datetime(2022, 10, 1, 0, 0, 12, 861332, tzinfo=datetime.timezone.utc)}
```
The `multiplexer` also has access to terminate the Manager, or restart worker processes.

```python
# shutdown the entire application and all processes
app.m.terminate()

# restart the current worker only
app.m.restart()

# restart specific workers only (comma delimited)
app.m.restart("Sanic-Server-4-0,Sanic-Server-7-0")

# restart ALL workers
app.m.restart(all_workers=True)  # Available v22.12+
```
# Worker state
As shown above, the `multiplexer` has access to report upon the state of the current running worker. However, it also contains the state for ALL processes running.

```python
@app.on_request
async def print_state(request: Request):
    print(request.app.m.workers)
```
```
{
    'Sanic-Main': {'pid': 99997},
    'Sanic-Server-0-0': {
        'server': True,
        'state': 'ACKED',
        'pid': 9999,
        'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc),
        'starts': 2,
        'restart_at': datetime.datetime(2022, 10, 1, 0, 0, 12, 861332, tzinfo=datetime.timezone.utc)
    },
    'Sanic-Reloader-0': {
        'server': False,
        'state': 'STARTED',
        'pid': 99998,
        'start_at': datetime.datetime(2022, 10, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc),
        'starts': 1
    }
}
```
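As a sketch of how you might consume that mapping, here is plain Python that filters the example payload above down to acked server workers. The dictionary literal mirrors the sample output; it is not fetched from a live application:

```python
from datetime import datetime, timezone

# Example payload mirroring the request.app.m.workers output above
workers = {
    "Sanic-Main": {"pid": 99997},
    "Sanic-Server-0-0": {
        "server": True,
        "state": "ACKED",
        "pid": 9999,
        "start_at": datetime(2022, 10, 1, tzinfo=timezone.utc),
        "starts": 2,
    },
    "Sanic-Reloader-0": {
        "server": False,
        "state": "STARTED",
        "pid": 99998,
        "start_at": datetime(2022, 10, 1, tzinfo=timezone.utc),
        "starts": 1,
    },
}

# Keep only server processes that have acked
acked = [
    name
    for name, info in workers.items()
    if info.get("server") and info.get("state") == "ACKED"
]
print(acked)  # ['Sanic-Server-0-0']
```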
# Built-in non-server processes
As mentioned, the Manager also has the ability to run non-server processes. Sanic comes with two built-in types of non-server processes, and allows for creating custom processes.
The two built-in processes are:

- the auto-reloader, optionally enabled to watch the file system for changes and trigger a restart
- the inspector, optionally enabled to provide external access to the state of the running instance
# Inspector
Sanic has the ability to expose the state and the functionality of the `multiplexer` to the CLI. Currently, this requires the CLI command to be run on the same machine as the running Sanic instance. By default, the inspector is disabled.

To enable it, set the config value to `True`:

```python
app.config.INSPECTOR = True
```
You will now have access to execute any of these CLI commands:
```
sanic inspect reload      Trigger a reload of the server workers
sanic inspect shutdown    Shutdown the application and all processes
sanic inspect scale N     Scale the number of workers to N
sanic inspect <custom>    Run a custom command
```
This works by exposing a small HTTP service on your machine. You can control the location using configuration values:
```python
app.config.INSPECTOR_HOST = "localhost"
app.config.INSPECTOR_PORT = 6457
```
Learn more to find out what is possible with the Inspector.
# Running custom processes
To run a managed custom process on Sanic, you must create a callable. If that process is meant to be long-running, then it should handle a shutdown call from a `SIGINT` or `SIGTERM` signal.

The simplest method for doing that in Python is to wrap your loop in a `try/except` block that catches `KeyboardInterrupt`.
If you intend to run another application, like a bot, then it is likely that it already has capability to handle this signal and you likely do not need to do anything.
```python
from time import sleep

def my_process(foo):
    try:
        while True:
            sleep(1)
    except KeyboardInterrupt:
        print("done")
```
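If your process also needs to react to `SIGTERM` (which `KeyboardInterrupt` does not cover), you can register explicit handlers instead. This is a sketch of one way to do it with the standard `signal` module; `my_long_running_process` is a hypothetical callable, not Sanic API:

```python
import signal
from time import sleep

def my_long_running_process(foo):
    shutting_down = False

    def _shutdown(signum, frame):
        # Flip the flag so the main loop can exit cleanly
        nonlocal shutting_down
        shutting_down = True

    # Handle both signals the Manager may use to stop the process
    signal.signal(signal.SIGTERM, _shutdown)
    signal.signal(signal.SIGINT, _shutdown)

    while not shutting_down:
        sleep(0.1)
    print("done")
```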
That callable must be registered in the `main_process_ready` listener. It is important to note that this is NOT the same location where you should register shared context objects.
```python
@app.main_process_ready
async def ready(app: Sanic, _):
    # app.manager.manage(<name>, <callable>, <kwargs>)
    app.manager.manage("MyProcess", my_process, {"foo": "bar"})
```
# Single process mode
If you would like to opt out of running multiple processes, you can run Sanic in a single process only. In this case, the Manager will not run. You will also not have access to any features that require processes (auto-reload, the inspector, etc.).
```
sanic path.to.server:app --single-process
```

```python
if __name__ == "__main__":
    app.run(single_process=True)
```

```python
if __name__ == "__main__":
    app.prepare(single_process=True)
    Sanic.serve_single()
```