Create Multi-Column Index in SQLAlchemy

Defining Multi-Column Indexes Using ORM Declarative Mapping

from sqlalchemy import create_engine, Column, Integer, String, Index
from sqlalchemy.orm import declarative_base, sessionmaker

engine = create_engine('postgresql://username:password@localhost/mydatabase')
Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employees'
    id = Column(Integer, primary_key=True)
    last_name = Column(String)
    first_name = Column(String)
    department_id = Column(Integer)

    __table_args__ = (
        Index('idx_employees_last_first', 'last_name', 'first_name'),
    )


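When Base.metadata.create_all() runs, the composite index is emitted together with the table. As a quick sanity check, here is a runnable sketch of the same mapping against an in-memory SQLite database (the SQLite URL and the inspect() verification are illustrative additions, not part of the original setup):

```python
from sqlalchemy import create_engine, Column, Integer, String, Index, inspect
from sqlalchemy.orm import declarative_base

engine = create_engine('sqlite:///:memory:')
Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employees'
    id = Column(Integer, primary_key=True)
    last_name = Column(String)
    first_name = Column(String)
    department_id = Column(Integer)

    __table_args__ = (
        Index('idx_employees_last_first', 'last_name', 'first_name'),
    )

# Emits CREATE TABLE and CREATE INDEX together
Base.metadata.create_all(engine)

# Verify the composite index exists and covers both columns, in order
indexes = inspect(engine).get_indexes('employees')
print(indexes[0]['name'], indexes[0]['column_names'])
```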
Creating Indexes After Table Definition

from sqlalchemy import create_engine, MetaData, Table, Index

engine = create_engine('postgresql://username:password@localhost/mydatabase')
metadata = MetaData()

# Reflect the existing table
employees = Table('employees', metadata, autoload_with=engine)

# Define the index and emit CREATE INDEX against the database
index = Index('idx_employees_last_first', employees.c.last_name, employees.c.first_name)
index.create(bind=engine)
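Constructing the Index object only associates it with the table; its create() method is what emits the CREATE INDEX statement. Here is a runnable sketch, substituting an explicitly defined in-memory SQLite table for the reflected PostgreSQL one (the table definition and URL are illustrative):

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, Index, inspect)

engine = create_engine('sqlite:///:memory:')
metadata = MetaData()

# Stand-in for the reflected table; the original uses autoload_with=engine
employees = Table('employees', metadata,
                  Column('id', Integer, primary_key=True),
                  Column('last_name', String),
                  Column('first_name', String))
metadata.create_all(engine)

# Define the index, then emit CREATE INDEX against the database
index = Index('idx_employees_last_first',
              employees.c.last_name, employees.c.first_name)
index.create(bind=engine)

print([ix['name'] for ix in inspect(engine).get_indexes('employees')])
```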


Set Battery Charge Limit in Ubuntu

ls /sys/class/power_supply/
ls /sys/class/power_supply/BAT0
sudo sh -c "echo 60 > /sys/class/power_supply/BAT0/charge_control_end_threshold"
cat /sys/class/power_supply/BAT0/status

Create Battery Charge Threshold Service

sudo nano /etc/systemd/system/battery-charge-end-threshold.service

Add the following unit file contents:

[Unit]
Description=Set Battery Charge Maximum Limit

[Service]
Type=oneshot
ExecStart=/bin/bash -c 'echo 60 > /sys/class/power_supply/BAT0/charge_control_end_threshold'

[Install]
WantedBy=multi-user.target

Then reload systemd and enable the service so the limit is applied on every boot:

sudo systemctl daemon-reload
sudo systemctl enable battery-charge-end-threshold.service
sudo systemctl start battery-charge-end-threshold.service


Variables vs. Type Aliases in Python

In Python, variables can have type annotations to indicate the type of value they are expected to hold. This is particularly useful for static type checkers and for improving code readability. When defining a variable with a type annotation, you explicitly specify the type:

from typing import Type

class A: ...

tp: Type[A] = A

In this example:

  • tp is a variable with a type annotation.
  • Type[A] indicates that tp should hold a type object corresponding to class A.
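Since tp holds the class object itself, it can be called just like the class. A small runnable sketch (the instance check is an illustrative addition):

```python
from typing import Type

class A: ...

tp: Type[A] = A

# tp is an ordinary variable holding the class object, so calling it
# constructs an instance of A
instance = tp()
print(isinstance(instance, A))  # True
```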

Type Aliases

A type alias is a way to give a new name to an existing type. This can make your code more readable, especially when dealing with complex types. Type aliases are defined without an explicit type annotation at the top level of a module:

class A: ...

Alias = A


  • Alias is a type alias for class A.
  • This does not create a new type but simply provides an alternative name for A.

Using type aliases can simplify type annotations and make your code more descriptive.
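For instance, an alias can shorten a verbose annotation that would otherwise be repeated at every call site. A minimal sketch (the ConnectionOptions alias and connect function are hypothetical):

```python
from typing import Union

# Hypothetical alias for a repeated, verbose annotation
ConnectionOptions = dict[str, Union[str, int]]

def connect(options: ConnectionOptions) -> str:
    # The alias reads better than dict[str, Union[str, int]] at every site
    return f"connecting to {options['host']}:{options['port']}"

print(connect({"host": "localhost", "port": 5432}))
```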

Explicit Type Aliases with TypeAlias (PEP 613)

PEP 613 introduced the TypeAlias feature to explicitly define type aliases. This can be especially useful in larger projects or when defining type aliases in class bodies or functions. To use TypeAlias, import it from the typing module (or typing_extensions for Python 3.9 and earlier):

from typing import TypeAlias  # "from typing_extensions" in Python 3.9 and earlier

class A: ...

Alias: TypeAlias = A

Using TypeAlias makes it clear that Alias is intended to be a type alias, not a variable. This explicitness enhances code readability and maintainability.

Understanding ‘use client’ and ‘use server’ in Next.js

'use client' and 'use server' are directives, not React hooks, used in Next.js to clarify the execution context of components and functions within an application. These directives are part of Next.js’s support for React Server Components, enabling developers to specify clearly whether code should run on the client side or the server side.

In the Next.js App Router, components are Server Components by default and run only on the server. This is particularly useful for operations that are sensitive or need direct access to server-side resources such as databases or environment variables that should not be exposed to the client. Here’s a quick example:

// Server Components are the default in the App Router; no directive is needed
async function ServerComponent() {
  // Fetch data or perform operations that are server-only
  const data = await fetchSecretData();

  return <div>Secret Data: {data}</div>;
}

function fetchSecretData() {
  // Pretend to fetch data that should not be exposed to the client
  return "Secret server info";
}

export default ServerComponent;

The 'use server' directive itself has a narrower role: it marks Server Actions, functions that can be invoked from client code but always execute on the server:

'use server'

export async function saveSecretData(formData) {
  // Runs only on the server, even when called from a client component
}

In this example, fetchSecretData is a function that you wouldn’t want to expose to the client side due to security concerns or computational reasons. Because Server Components run only on the server, this function is never shipped to the browser.


Conversely, the 'use client' directive marks a file as a Client Component boundary: the component (and the modules it imports) runs in the browser. This is suitable for interactions that depend on the browser’s capabilities, such as DOM manipulation, event handlers, or client-side state that doesn’t need to pre-render on the server. Here’s how you might use it:

'use client'

import { useState } from 'react';

function ClientComponent() {
  // State and event handlers require a Client Component
  const [count, setCount] = useState(0);

  // Button click handler for incrementing the count
  function handleClick() {
    setCount(count + 1);
  }

  return (
    <div>
      Count: {count}
      <button onClick={handleClick}>Increment</button>
    </div>
  );
}

export default ClientComponent;

In this example, the state management for count is purely client-side, which makes the 'use client' directive necessary for this component.

When to Use 'use server' vs. 'use client'

Deciding between server and client execution boils down to understanding where your code needs to run for optimal performance and security. Here are some guidelines:

  • Keep code on the server (the default for components; 'use server' for Server Actions) if:
    • You need to access server-side resources or perform actions securely, away from the client’s reach.
    • You want to pre-render data or perform computations during server-side rendering (SSR) for SEO benefits or faster page loads.
  • Mark a file with 'use client' if:
    • Your component or logic must interact with browser-specific APIs or client-side resources like local storage.
    • You are handling state or effects that should only occur in the client’s environment, such as animations or user input events.

Calling Python Celery Tasks from a Different Machine Using send_task


To follow along, you will need:

  • Python installed on both the client and worker machines.
  • Celery and a message broker (RabbitMQ) installed. Redis will be used as the result backend.
  • Basic knowledge of Python and familiarity with Celery.

Step 1: Setup the Worker

First, let’s set up the Celery worker. On the worker machine, create a file named tasks.py:

from celery import Celery

app = Celery("tasks", broker='amqp://username:password@localhost',
             backend='redis://localhost')

@app.task
def add(x, y):
    return x + y

Here, we define a simple task named add that takes two arguments and returns their sum. Adjust the broker and backend URLs to point to your actual RabbitMQ and Redis services.

Step 2: Start the Celery Worker

Run the following command on the worker machine to start the Celery worker:

.venv\Scripts\python.exe -m celery -A tasks worker --loglevel=info -E --pool=solo

This command starts a Celery worker that listens for tasks to execute.

Step 3: Setup the Client

On the client machine, you don’t need the full task definitions—only the Celery app configuration and the task name. Create a client script:

from celery import Celery

app = Celery("tasks", broker='amqp://username:password@localhost',
             backend='redis://localhost')

# The task name must match the name registered by the worker
result = app.send_task('tasks.add', args=[4, 4])
print(result.get(timeout=10))

Here, send_task is used to dispatch the task. It requires the name of the task (which must match the name given in the worker’s task decorator) and the arguments for the task.

Step 4: Calling the Task from the Client

Run the script on the client machine. It sends the add task to the worker machine via the message broker and then fetches the result using result.get().

Alternative: Minimal Task Definitions on the Client

On the client side, you only need a minimal definition of the tasks to send them. You can redefine the tasks in a simple module that just includes the task names, without their implementations:

from celery import Celery

app = Celery('client_tasks', broker='pyamqp://guest@your_broker_ip//')

@app.task(name='tasks.add')  # must match the name registered by the worker
def add(x, y):
    pass  # Implementation is not needed on the client

Then on the client:

from client_tasks import add
result = add.delay(4, 4)

Using Celery in Python with tasks defined in different modules



To get started, you will need Python installed on your system. Additionally, you will need RabbitMQ and Redis. You can install RabbitMQ and Redis on your local machine or use Docker containers.

Python Dependencies

Install Celery using pip:

pip install celery

Project Structure

Here’s a simple project structure to organize your Celery tasks:

├── celery_app.py    # Celery configuration and instance
├── task1.py         # Module for 'add' task
├── task2.py         # Module for 'multiply' task
└── main.py          # Main script to execute tasks

Celery Configuration

In celery_app.py, we configure our Celery application:

from celery import Celery

app = Celery("tasks", broker='amqp://username:password@localhost',
             backend='redis://localhost',
             include=['task1', 'task2'])

if __name__ == '__main__':
    app.start()

  • broker: The URL of the RabbitMQ server.
  • backend: The URL of the Redis server used to store task results.
  • include: List of modules to include so Celery knows where to find the defined tasks.

Defining Tasks

Tasks are defined in task1.py and task2.py:

# task1.py
from celery_app import app
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task
def add(x, y):
    logger.info(f'Starting to add {x} + {y}')
    result = x + y
    logger.info(f'Task completed with result {result}')
    return result

# task2.py
from celery_app import app
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task
def multiply(x, y):
    logger.info(f'Starting to multiply {x} * {y}')
    result = x * y
    logger.info(f'Task completed with result {result}')
    return result

Running Tasks

In main.py, we initiate and execute tasks asynchronously:

from task1 import add
from task2 import multiply

result1 = add.delay(1, 2)
result2 = multiply.delay(2, 3)

print("add: " + str(result1.get(timeout=10)))
print("multiply: " + str(result2.get(timeout=10)))

Running Celery Worker

To run the Celery worker, use the following command:

.venv\Scripts\python.exe -m celery -A celery_app worker --loglevel=info -E --pool=solo

Get Result from Asynchronous Celery Tasks in Python

Setting Up the Project

First, let’s set up our Celery instance in a file named tasks.py. This setup involves configuring Celery with RabbitMQ as the message broker and an RPC backend for storing task results:

from celery import Celery
from celery.utils.log import get_task_logger

# Initialize Celery application
app = Celery("tasks", broker='amqp://username:password@localhost', backend='rpc://')

# Create a logger
logger = get_task_logger(__name__)

@app.task
def add(x, y):
    try:
        logger.info(f'Starting to add {x} + {y}')
        result = x + y
        logger.info(f'Task completed with result {result}')
        return result
    except Exception as e:
        logger.error('Error occurred', exc_info=True)
        raise e

In the code above, we define a Celery application named tasks configured with a RabbitMQ broker. The logger is utilized to record the operations and any errors encountered during the execution of tasks.

Invoking Asynchronous Tasks

Next, let’s write a client script to invoke our asynchronous task and handle the result:

from celery.result import AsyncResult
from tasks import add

# Sending an asynchronous task
result: AsyncResult = add.delay(1, 2)

# Checking if the task is ready and retrieving the result
print(result.ready())  # Prints False if the task is not yet ready
print(result.get(timeout=10))  # Waits for the result up to 10 seconds

Here, add.delay(1, 2) sends an asynchronous task to add the numbers 1 and 2. The AsyncResult object allows us to check if the task is completed and to fetch the result once it is available.

Running the Celery Worker

To execute the tasks, we need to run a Celery worker. Due to compatibility issues with Windows, we use the --pool=solo option:

.venv\Scripts\python.exe -m celery -A tasks worker --loglevel=info -E --pool=solo

The --pool=solo option is crucial for running Celery on Windows as it avoids issues that arise from the default prefork pool, which is not fully supported on Windows platforms.

Simplifying Asynchronous Task Execution with Celery in Python

Setting up the Celery Application

First, we need to set up our Celery application. This involves specifying the message broker and defining tasks. A message broker is a mechanism responsible for transferring data between the application and Celery workers. In our example, we use RabbitMQ as the broker.

Here is the code snippet for setting up a Celery application, saved in a file named tasks.py:

from celery import Celery

# Create a Celery instance
app = Celery("tasks", broker='amqp://username:password@localhost')

# Define a simple task to add two numbers
@app.task
def add(x, y):
    return x + y

In this setup, Celery is initialized with a name (“tasks”) and a broker URL, which includes the username, password, and server location (in this case, localhost for local development).

Defining a Task

We define a simple task using the @app.task decorator. This task, add, takes two parameters, x and y, and returns their sum. The decorator marks this function as a task that Celery can manage.

Calling the Task Asynchronously

To call our add task asynchronously, we use the following code snippet in a separate client script:

from tasks import add

# Call the add task asynchronously
result = add.delay(1, 2)
print("Task sent to the Celery worker!")

The delay method is a convenient shortcut provided by Celery to execute the task asynchronously. When add.delay(1, 2) is called, Celery sends this task to the queue and then it’s picked up by a worker.

Running Celery Workers

To execute the tasks in the queue, we need to run Celery workers. Assuming you’ve activated a virtual environment, you can start a Celery worker using the following command:

.venv\Scripts\celery.exe -A tasks worker --loglevel=info

This command starts a Celery worker with a log level of info, which provides a moderate amount of logging output. Here, -A tasks tells Celery that our application is defined in the tasks.py file.