Understanding ‘use client’ and ‘use server’ in Next.js

'use client' and 'use server' are directives, not hooks, introduced with React Server Components and supported by the Next.js App Router. Written as a string literal at the very top of a file (or, for 'use server', at the top of an individual async function), they make the execution context explicit: they tell Next.js whether that code belongs in the client bundle or must stay on the server.

‘use server’

The 'use server' directive marks a file, or a single async function, as server-only: its exported functions become Server Actions that always execute on the server, even when invoked from client code. This is particularly useful for operations that are sensitive or need direct access to server-side resources such as databases or environment variables that should not be exposed to the client. Here’s a quick example:

// app/actions.js
'use server'

export async function fetchSecretData() {
  // Pretend to fetch data that must stay on the server,
  // e.g. from a database or an environment variable
  return 'Secret server info';
}

// app/page.js – Server Components are the default, so no directive is needed here
import { fetchSecretData } from './actions';

export default async function Page() {
  const data = await fetchSecretData();
  return <div>Secret Data: {data}</div>;
}

In this example, fetchSecretData stands in for logic you wouldn’t want to expose to the client for security or computational reasons. Because the module starts with the 'use server' directive, Next.js guarantees the function body only ever runs on the server, even if it is imported and called from a Client Component (for instance as a form action).

‘use client’

Conversely, the 'use client' directive marks a module, and everything it imports, as part of the client bundle. Components defined in such a module are Client Components: they are rendered and hydrated in the browser, so they can use state, effects, event handlers, and browser-only APIs. This is suitable for interactions that depend on the browser’s capabilities, such as DOM manipulation or client-side state handling that doesn’t need to happen on the server. Here’s how you might use it:

'use client'

import { useState } from 'react';

export default function ClientComponent() {
  // Client-side state: this component is rendered and hydrated in the browser
  const [count, setCount] = useState(0);

  // Button click handler for incrementing the count
  function handleClick() {
    setCount(count + 1);
  }

  return (
    <div>
      <button onClick={handleClick}>Increment</button>
      Count: {count}
    </div>
  );
}

In this example, the count state and its click handler live entirely in the browser, which is exactly what 'use client' is for: encapsulating interactive, client-specific logic.

When to Use ‘use server’ vs. ‘use client’

Deciding between 'use server' and 'use client' boils down to understanding where your code needs to execute for optimal performance and security. Here are some guidelines:

  • Use 'use server' if:
    • You need to access server-side resources or perform sensitive actions securely, away from the client’s reach.
    • You want to fetch data or perform computations on the server for SEO benefits or faster page loads.
  • Use 'use client' if:
    • Your component or logic must interact with browser-specific APIs or client-side resources such as local storage.
    • You are handling state or effects that should only occur in the browser, such as animations or user input events.

Calling Python Celery Tasks from a Different Machine Using send_task

Prerequisites

To follow along, you will need:

  • Python installed on both the client and worker machines.
  • Celery installed on both machines, a RabbitMQ server as the message broker, and Redis as the result backend.
  • Basic knowledge of Python and familiarity with Celery.

Step 1: Set Up the Worker

First, let’s set up the Celery worker. On the worker machine, create a file named tasks.py:

from celery import Celery

app = Celery("tasks", broker='amqp://username:password@localhost',
             backend='redis://localhost:6379/0')

@app.task(name='celery_project.tasks.add')
def add(x, y):
    return x + y

Here, we define a simple task named add that takes two arguments and returns their sum. Adjust the broker and backend URLs to point to your actual RabbitMQ and Redis services.

Step 2: Start the Celery Worker

Run the following command on the worker machine to start the Celery worker:

.venv\Scripts\python.exe -m celery -A tasks worker --loglevel=info -E --pool=solo

This command starts a Celery worker that listens for tasks to execute.

Step 3: Set Up the Client

On the client machine, you don’t need the full task definitions—only the Celery app configuration and the task signatures. Create a file named main.py:

from celery import Celery

app = Celery("tasks", broker='amqp://username:password@localhost',
             backend='redis://localhost:6379/0')

result = app.send_task('celery_project.tasks.add', args=[4, 4])
print(result.get())

Here, send_task is used to dispatch the task. It requires the name of the task (which must match the name given in the worker’s task decorator) and the arguments for the task. Since the client runs on a different machine, replace localhost in the broker and backend URLs with the addresses of the machines actually hosting RabbitMQ and Redis.
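
send_task also forwards the usual apply_async options, so the client can, for example, target a specific queue or delay execution. A minimal sketch, assuming the default queue name 'celery' and the same app object as above:

result = app.send_task(
    'celery_project.tasks.add',
    args=[4, 4],
    queue='celery',   # explicit queue name (Celery's default queue)
    countdown=5,      # worker may start the task no earlier than 5 seconds from now
)
print(result.get(timeout=30))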

Step 4: Calling the Task from the Client

Run the main.py script on the client machine:

python main.py

This script sends the add task to the worker machine via the message broker, and then fetches the result using result.get().

Alternative: Use Minimal Task Definitions

On the client side, you only need a minimal definition of the tasks to send them. You can redefine the tasks in a simple module that just includes the task names, without their implementations:

client_tasks.py:

from celery import Celery

app = Celery('client_tasks', broker='pyamqp://guest@your_broker_ip//',
             backend='redis://your_backend_ip:6379/0')  # a result backend is required for result.get()

@app.task(name='your_module_name.tasks.add')
def add(x, y):
    pass  # Implementation is not needed on the client

Then on the client:

from client_tasks import add
result = add.delay(4, 4)
print(result.get(timeout=10))
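
If you would rather not declare even stub functions, Celery signatures offer another client-side option: you build a callable from the task name alone. A small sketch, using the same placeholder broker and backend addresses as above:

from celery import Celery

app = Celery('client', broker='pyamqp://guest@your_broker_ip//',
             backend='redis://your_backend_ip:6379/0')

# Build a signature by task name only; no local task definition is required
sig = app.signature('your_module_name.tasks.add', args=(4, 4))
result = sig.delay()
print(result.get(timeout=10))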

Using Celery in Python with tasks defined in different modules

Setup

Requirements

To get started, you will need Python installed on your system. Additionally, you will need RabbitMQ and Redis. You can install RabbitMQ and Redis on your local machine or use Docker containers.

Python Dependencies

Install Celery using pip:

pip install celery

Project Structure

Here’s a simple project structure to organize your Celery tasks:

celery_project/
│
├── celery_app.py    # Celery configuration and instance
├── task1.py         # Module for 'add' task
├── task2.py         # Module for 'multiply' task
└── main.py          # Main script to execute tasks

Celery Configuration

In celery_app.py, we configure our Celery application:

from celery import Celery

app = Celery("tasks", broker='amqp://username:password@localhost',
             backend='redis://localhost:6379/0',
             include=['task1', 'task2'])

if __name__ == '__main__':
    app.start()

  • broker: The URL of the RabbitMQ server.
  • backend: The URL of the Redis server used to store task results.
  • include: List of modules to include so Celery knows where to find the defined tasks.

Defining Tasks

Tasks are defined in task1.py and task2.py:

task1.py:

from celery_app import app
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task
def add(x, y):
    logger.info(f'Starting to add {x} + {y}')
    result = x + y
    logger.info(f'Task completed with result {result}')
    return result

task2.py:

from celery_app import app
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task
def multiply(x, y):
    logger.info(f'Starting to multiply {x} * {y}')
    result = x * y
    logger.info(f'Task completed with result {result}')
    return result

Running Tasks

In main.py, we initiate and execute tasks asynchronously:

from task1 import add
from task2 import multiply

result1 = add.delay(1, 2)
result2 = multiply.delay(2, 3)

print("add: " + str(result1.get(timeout=10)))
print("multiply: " + str(result2.get(timeout=10)))

Running Celery Worker

To run the Celery worker, use the following command:

.venv\Scripts\python.exe -m celery -A celery_app worker --loglevel=info -E --pool=solo

Get Result from Asynchronous Celery Tasks in Python

Setting Up the Project

First, let’s set up our Celery instance in a file named tasks.py. This setup involves configuring Celery with RabbitMQ as the message broker and an RPC backend for storing task results:

from celery import Celery
from celery.utils.log import get_task_logger

# Initialize Celery application
app = Celery("tasks", broker='amqp://username:password@localhost', backend='rpc://')

# Create a logger
logger = get_task_logger(__name__)

@app.task
def add(x, y):
    logger.info(f'Starting to add {x} + {y}')
    try:
        result = x + y
        logger.info(f'Task completed with result {result}')
        return result
    except Exception as e:
        logger.error('Error occurred', exc_info=True)
        raise e

In the code above, we define a Celery application named tasks configured with a RabbitMQ broker. The logger is utilized to record the operations and any errors encountered during the execution of tasks.

Invoking Asynchronous Tasks

Next, let’s write a main.py to invoke our asynchronous task and handle the result:

from celery.result import AsyncResult
from tasks import add

# Sending an asynchronous task
result: AsyncResult = add.delay(1, 2)

# Checking if the task is ready and retrieving the result
print(result.ready())  # Prints False if the task is not yet ready
print(result.get(timeout=10))  # Waits for the result up to 10 seconds

Here, add.delay(1, 2) sends an asynchronous task to add the numbers 1 and 2. The AsyncResult object allows us to check if the task is completed and to fetch the result once it is available.
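
Because an AsyncResult can be rebuilt from a task id, you don’t have to keep the original object around; you can store the id and check on the task later. A minimal sketch, assuming the tasks.py module above (which exposes both app and add):

from celery.result import AsyncResult
from tasks import app, add

# Dispatch the task and keep only its id (e.g. to persist somewhere)
task_id = add.delay(1, 2).id

# Later, possibly in another process, rebuild the handle from the id
result = AsyncResult(task_id, app=app)
print(result.state)          # PENDING, STARTED, SUCCESS, FAILURE, ...
if result.ready():
    print(result.get(timeout=10))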

Running the Celery Worker

To execute the tasks, we need to run a Celery worker. Due to compatibility issues with Windows, we use the --pool=solo option:

.venv\Scripts\python.exe -m celery -A tasks worker --loglevel=info -E --pool=solo

The --pool=solo option is crucial for running Celery on Windows as it avoids issues that arise from the default prefork pool, which is not fully supported on Windows platforms.

Simplifying Asynchronous Task Execution with Celery in Python

Setting up the Celery Application

First, we need to set up our Celery application. This involves specifying the message broker and defining tasks. A message broker is a mechanism responsible for transferring data between the application and Celery workers. In our example, we use RabbitMQ as the broker.

Here is the code snippet for setting up a Celery application, saved in a file named tasks.py:

from celery import Celery

# Create a Celery instance
app = Celery("tasks", broker='amqp://username:password@localhost')

# Define a simple task to add two numbers
@app.task
def add(x, y):
    return x + y

In this setup, Celery is initialized with a name (“tasks”) and a broker URL, which includes the username, password, and server location (in this case, localhost for local development).

Defining a Task

We define a simple task using the @app.task decorator. This task, add, takes two parameters, x and y, and returns their sum. The decorator marks this function as a task that Celery can manage.

Calling the Task Asynchronously

To call our add task asynchronously, we use the following code snippet in main.py:

from tasks import add

# Call the add task asynchronously
result = add.delay(1, 2)
print("Task sent to the Celery worker!")

The delay method is a convenient shortcut provided by Celery to execute the task asynchronously. When add.delay(1, 2) is called, Celery sends this task to the queue and then it’s picked up by a worker.
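
Under the hood, delay(1, 2) is shorthand for apply_async((1, 2)); the longer form exposes extra execution options that delay does not. A small sketch of equivalent and extended calls:

from tasks import add

# Equivalent to add.delay(1, 2)
result = add.apply_async((1, 2))

# apply_async accepts additional options, for example:
result = add.apply_async((1, 2), countdown=10)   # start no earlier than 10 s from now
result = add.apply_async((1, 2), expires=60)     # discard if not started within 60 s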

Running Celery Workers

To execute the tasks in the queue, we need to run Celery workers. Assuming you’ve activated a virtual environment, you can start a Celery worker using the following command:

.venv\Scripts\celery.exe -A tasks worker --loglevel=info

This command starts a Celery worker with a log level of info, which provides a moderate amount of logging output. Here, -A tasks tells Celery that our application is defined in the tasks.py file.

Fedora 40 Post Install

Firmware

sudo fwupdmgr get-devices 
sudo fwupdmgr refresh --force 
sudo fwupdmgr get-updates 
sudo fwupdmgr update

Media Codecs

sudo dnf groupupdate 'core' 'multimedia' 'sound-and-video' --setopt='install_weak_deps=False' --exclude='PackageKit-gstreamer-plugin' --allowerasing && sync
sudo dnf swap 'ffmpeg-free' 'ffmpeg' --allowerasing
sudo dnf install gstreamer1-plugins-{bad-\*,good-\*,base} gstreamer1-plugin-openh264 gstreamer1-libav --exclude=gstreamer1-plugins-bad-free-devel ffmpeg gstreamer-ffmpeg
sudo dnf install lame\* --exclude=lame-devel
sudo dnf group upgrade --with-optional Multimedia

H/W Video Acceleration

sudo dnf install ffmpeg ffmpeg-libs libva libva-utils
sudo dnf swap libva-intel-media-driver intel-media-driver --allowerasing

Set Hostname

hostnamectl set-hostname YOUR_HOSTNAME

Disable Mitigations

sudo grubby --update-kernel=ALL --args="mitigations=off"

Modern Standby

sudo grubby --update-kernel=ALL --args="mem_sleep_default=s2idle"

Enable nvidia-modeset

sudo grubby --update-kernel=ALL --args="nvidia-drm.modeset=1"

Disable NetworkManager-wait-online.service

sudo systemctl disable NetworkManager-wait-online.service

References
https://github.com/devangshekhawat/Fedora-40-Post-Install-Guide

Disabling UAC in Windows 11 using the registry

  1. Press Windows Key + R to open the Run dialog.
  2. Type regedit and press Enter.
  3. Navigate to the following key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
  4. In the right-hand pane, find the value named EnableLUA.
  5. Double-click on EnableLUA.
  6. Change the Value data from 1 to 0.
  7. Click OK to save the changes.
  8. Restart your computer for the changes to take effect.

Hide a User from the Windows Login Screen

Getting the exact username

  1. Press Windows Key + R to open the Run dialog.
  2. Type netplwiz and press Enter.
  3. The User Accounts window will list all user accounts on your Windows machine. The usernames are displayed in the “User name” column.

Using the Registry Editor

  1. Open the Registry Editor:

    • Press the Windows key + R.
    • Type “regedit” and press Enter.
  2. Navigate to the Winlogon Key:

    • Go to this path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
  3. Create SpecialAccounts and UserList Keys:

    • Right-click on “Winlogon” and select New -> Key. Name it SpecialAccounts.
    • Right-click on “SpecialAccounts” and select New -> Key. Name it UserList.
  4. Create a DWORD Value:

    • Right-click within the “UserList” key.
    • Select New -> DWORD (32-bit) Value.
    • Name the DWORD the exact username of the account you want to hide.
    • Double-click the new DWORD and set its value data to 0.
  5. Restart: Close the Registry Editor and restart your computer.

Deploy applications in Run as Administrator mode in Windows using Visual Studio and Inno Setup

Add the following line in your [Setup] section. This is the primary way to indicate that your installer requires administrative rights.

[Setup]
...
PrivilegesRequired=admin
...

Consider embedding an appropriate manifest into your application’s executable to have it automatically request elevation when executed outside the installer. This can provide a more seamless experience for the user.

1. Create the Manifest File:

  • Right-click on your project in the Solution Explorer and select Add -> New Item….
  • Choose Application Manifest File (it might be under the General category).
  • The default name is typically app.manifest. Keep this name or adjust it if necessary.

2. Modify the Manifest:

  • Open the newly created app.manifest file. The default content will be similar to this:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
  <assemblyIdentity version="1.0.0.0" name="MyApplication.app"/>
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v1">
        <requestedExecutionLevel level="asInvoker" uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
  • Inside the <requestedPrivileges> element, replace the existing <requestedExecutionLevel> line with:
<requestedExecutionLevel level="requireAdministrator" uiAccess="false" />

3. Embed the Manifest:

The manifest is now created, but you need to tell Visual Studio to embed it into your executable:

  • Right-click on your project and select Properties.
  • Go to the Application tab.
  • In the Manifest drop-down, select your app.manifest file so that your customized manifest, rather than the default one, is embedded.

4. Build the Project:

Rebuild your project. The generated executable will now have the UAC manifest embedded, causing your application to request administrative privileges when run.