Setting Up the Project
First, let’s set up our Celery instance in a file named tasks.py. This setup configures Celery with RabbitMQ as the message broker and an RPC backend for storing task results:
from celery import Celery
from celery.utils.log import get_task_logger

# Initialize Celery application
app = Celery("tasks", broker='amqp://username:password@localhost', backend='rpc://')

# Create a logger
logger = get_task_logger(__name__)

@app.task
def add(x, y):
    logger.info(f'Starting to add {x} + {y}')
    try:
        result = x + y
        logger.info(f'Task completed with result {result}')
        return result
    except Exception as e:
        logger.error('Error occurred', exc_info=True)
        raise e
In the code above, we define a Celery application named tasks, configured with RabbitMQ as the broker and rpc:// as the result backend. The logger records each operation and any errors raised while a task executes.
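Beyond the constructor arguments, Celery also accepts settings through app.conf. The sketch below is optional and the specific values are illustrative, not requirements for this tutorial:

# Optional: extra configuration via app.conf (illustrative values)
app.conf.update(
    task_serializer='json',    # serialize task payloads as JSON
    result_serializer='json',  # serialize results as JSON
    accept_content=['json'],   # reject payloads in other formats
    result_expires=3600,       # discard stored results after one hour
)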
Invoking Asynchronous Tasks
Next, let’s write a main.py to invoke our asynchronous task and handle the result:
from celery.result import AsyncResult

from tasks import add

# Sending an asynchronous task
result: AsyncResult = add.delay(1, 2)

# Checking if the task is ready and retrieving the result
print(result.ready())  # Prints False if the task is not yet ready
print(result.get(timeout=10))  # Waits for the result up to 10 seconds
Here, add.delay(1, 2) sends an asynchronous task to add the numbers 1 and 2. The AsyncResult object lets us check whether the task has completed and fetch the result once it is available.
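Note that result.get() raises celery.exceptions.TimeoutError if no result arrives within the timeout, so in practice you may want to guard the call. Here is a minimal sketch, reusing the same add task from tasks.py:

from celery.exceptions import TimeoutError

from tasks import add

result = add.delay(1, 2)
try:
    # Block for at most 10 seconds; raises TimeoutError if the
    # worker has not produced a result by then
    value = result.get(timeout=10)
    print(f'Task succeeded with result {value}')
except TimeoutError:
    # The worker may still finish the task later; we only gave up waiting
    print(f'Task {result.id} did not finish within 10 seconds')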
Running the Celery Worker
To execute the tasks, we need to run a Celery worker. Due to compatibility issues on Windows, we use the --pool=solo option:
.venv\Scripts\python.exe -m celery -A tasks worker --loglevel=info -E --pool=solo
The --pool=solo option is crucial for running Celery on Windows: it runs tasks one at a time in the worker process itself, avoiding the issues that arise from the default prefork pool, which is not fully supported on Windows.
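If solo’s one-task-at-a-time behavior is too limiting, thread- and greenlet-based pools are also commonly used on Windows. The commands below are sketches, not part of the original setup: --pool=threads ships with Celery 4.4 and later, while --pool=gevent requires installing gevent separately (pip install gevent):

.venv\Scripts\python.exe -m celery -A tasks worker --loglevel=info --pool=threads --concurrency=4
.venv\Scripts\python.exe -m celery -A tasks worker --loglevel=info --pool=gevent --concurrency=10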