fastapi-queue 0.1.1

pip install fastapi-queue

Released: Apr 23, 2022

A task queue based on redis that can serve as a peak shaver and protect your app.

Maintainers: goodmanwen

License: MIT License

Author: WEN

Tags: fastapi-queue, fastapi, queue

Requires: Python >=3.7

Classifiers

  • License :: OSI Approved :: MIT License
  • Operating System :: Microsoft :: Windows
  • Operating System :: POSIX :: Linux
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.7
  • Programming Language :: Python :: 3.8
  • Programming Language :: Python :: 3.9
  • Programming Language :: Python :: 3.10

Project description

Fastapi-queue


A Python implementation of a task queue based on Redis that can serve as a peak shaver and protect your app.

What is fastapi-queue?

Fastapi-queue provides a high-performance, Redis-based task queue that lets requests sent by clients to the FastAPI server be cached in the queue for delayed execution. When an influx of requests hits your app in a short period of time, you don't have to worry about overwhelming your back-end data service, or about requests being rejected outright for exceeding the load limit.

Why fastapi-queue?

This module is for people who want the benefits of a task queue without adding dependencies that raise maintenance costs. If you want to keep your application lightweight and don't want to install RabbitMQ, fastapi-queue is a good choice: you only need the Python runtime and a Redis instance.

  • Separate gateway and service nodes.
  • Superior horizontal scalability.
  • Fully asynchronous framework, ultra fast.

Requirements

  • aioredis >= 2.0.0
  • ThreadPoolExecutorPlus >= 0.2.2
  • msgpack >= 1.0.0

Documentation

https://fastapi-queue.readthedocs.io (in progress)

Response sequence description

(Figure: response sequence diagram.)

Service nodes

Performance

Due to the fully asynchronous support, complex interprocedural calls exhibit a very low processing latency.

(Latency vs. number of request threads: in progress)

(Maximum requests per second vs. number of service nodes: in progress)

  • The code has been carefully debugged and functions reliably, but I haven't spent much time making it a generic module. If you encounter bugs, they are usually caused by an overlooked detail somewhere, and you'll need to modify the code yourself.
  • The service has undergone rigorous stress tests and can run for hours under concurrent requests from hundreds of clients, but for reliable protection you need to set the upper limit of your load carefully. The RateLimiter provides a cheap, rough pre-interception mechanism for this.

For example, excess requests can be rejected cheaply before they ever reach the queue. Below is a minimal, self-contained sketch of that idea; it does not use fastapi-queue's actual RateLimiter API.
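```python
# Illustrative only: NOT fastapi-queue's RateLimiter API.
from fastapi import FastAPI, HTTPException

app = FastAPI()

MAX_IN_FLIGHT = 100  # assumed load limit
in_flight = 0

@app.get("/data")
async def get_data():
    global in_flight
    if in_flight >= MAX_IN_FLIGHT:
        # Cheap rejection before any queueing or backend work happens.
        raise HTTPException(status_code=503, detail="Server overloaded")
    in_flight += 1
    try:
        # In fastapi-queue, the request would instead be pushed onto the
        # Redis-backed queue here and handled later by a service node.
        return {"status": "accepted"}
    finally:
        in_flight -= 1
```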

Project details

Release history

0.1.1 (Apr 23, 2022)

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages .

Source Distribution

Uploaded Apr 23, 2022 Source

Built Distribution

Uploaded Apr 23, 2022 Python 3

Hashes for fastapi-queue-0.1.1.tar.gz

Hashes for fastapi_queue-0.1.1-py3-none-any.whl


Asynchronous Tasks with Flask and Redis Queue


If a long-running task is part of your application's workflow, you should handle it in the background, outside the normal flow.

Perhaps your web application requires users to submit a thumbnail (which will probably need to be re-sized) and confirm their email when they register. If your application processed the image and sent a confirmation email directly in the request handler, then the end user would have to wait for them both to finish. Instead, you'll want to pass these tasks off to a task queue and let a separate worker process deal with them, so you can immediately send a response back to the client. The end user can do other things on the client-side and your application is free to respond to requests from other users.

This tutorial looks at how to configure Redis Queue (RQ) to handle long-running tasks in a Flask app.

Celery is a viable solution as well. Check out Asynchronous Tasks with Flask and Celery for more.

Project Setup


By the end of this tutorial, you will be able to:

  • Integrate Redis Queue into a Flask app and create tasks.
  • Containerize Flask and Redis with Docker.
  • Run long-running tasks in the background with a separate worker process.
  • Set up RQ Dashboard to monitor queues, jobs, and workers.
  • Scale the worker count with Docker.

Our goal is to develop a Flask application that works in conjunction with Redis Queue to handle long-running processes outside the normal request/response cycle.

  • The end user kicks off a new task via a POST request to the server-side
  • Within the view, a task is added to the queue and the task id is sent back to the client-side
  • Using AJAX, the client continues to poll the server to check the status of the task while the task itself is running in the background

(Figure: Flask and Redis Queue user flow.)

In the end, the app will look like this:

Want to follow along? Clone down the base project, and then review the code and project structure:

Since we'll need to manage three processes in total (Flask, Redis, worker), we'll use Docker to simplify our workflow so they can be managed from a single terminal window.

To test, run:

Open your browser to http://localhost:5004. You should see:

(Screenshot: the running app.)

An event handler in project/client/static/main.js is set up that listens for a button click and sends an AJAX POST request to the server with the appropriate task type: 1 , 2 , or 3 .

On the server-side, a view is already configured to handle the request in project/server/main/views.py :
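A sketch of that view (reconstructed; the blueprint name and the /tasks route are assumptions based on the surrounding text):

```python
# project/server/main/views.py (reconstructed sketch)
from flask import Blueprint, jsonify, request

main_blueprint = Blueprint("main", __name__)

@main_blueprint.route("/tasks", methods=["POST"])
def run_task():
    task_type = request.form["type"]  # "1", "2", or "3" from the AJAX handler
    return jsonify(task_type), 202
```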

We just need to wire up Redis Queue.

So, we need to spin up two new processes: Redis and a worker. Add them to the docker-compose.yml file:

Add the task to a new file called tasks.py in "project/server/main":
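A sketch of the task; the name create_task and the sleep-based simulation are assumptions (any long-running function works):

```python
# project/server/main/tasks.py (sketch)
import time

def create_task(task_type):
    # Simulate slow work: type 1 takes ~10s, type 2 ~20s, type 3 ~30s.
    time.sleep(int(task_type) * 10)
    return True
```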

Update the view to connect to Redis, enqueue the task, and respond with the id:
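A sketch of the updated view, assuming the create_task task above and a REDIS_URL config value:

```python
@main_blueprint.route("/tasks", methods=["POST"])
def run_task():
    task_type = request.form["type"]
    with Connection(redis.from_url(current_app.config["REDIS_URL"])):
        q = Queue()
        task = q.enqueue(create_task, task_type)
    response_object = {
        "status": "success",
        "data": {"task_id": task.get_id()},
    }
    return jsonify(response_object), 202
```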

Don't forget the imports:
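For instance (module paths assume the layout above):

```python
import redis
from flask import current_app
from rq import Connection, Queue

from project.server.main.tasks import create_task
```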

Update BaseConfig :
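A sketch; the redis hostname matches the docker-compose service name:

```python
class BaseConfig:
    """Base configuration."""

    REDIS_URL = "redis://redis:6379/0"
    QUEUES = ["default"]
```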

Did you notice that we referenced the redis service (from docker-compose.yml ) in the REDIS_URL rather than localhost or some other IP? Review the Docker Compose docs for more info on connecting to other services via the hostname.

Finally, we can use a Redis Queue worker to process tasks at the top of the queue.

manage.py :
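A sketch of the command, as a fragment of manage.py; the cli group and app object are assumed to exist in that file:

```python
@cli.command("run_worker")
def run_worker():
    redis_url = app.config["REDIS_URL"]
    redis_connection = redis.from_url(redis_url)
    with Connection(redis_connection):
        worker = Worker(app.config["QUEUES"])
        worker.work()
```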

Here, we set up a custom CLI command to fire the worker.

It's important to note that the @cli.command() decorator will provide access to the application context along with the associated config variables from project/server/config.py when the command is executed.

Add the imports as well:
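For the worker command above:

```python
import redis
from rq import Connection, Worker
```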

Add the dependencies to the requirements file:

Build and spin up the new containers:

To trigger a new task, run:

You should see something like:

Turn back to the event handler on the client-side:

Once the response comes back from the original AJAX request, we then continue to call getStatus() with the task id every second. If the response is successful, a new row is added to the table on the DOM.

Update the view:
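A sketch of the status endpoint, looking the job up by id with RQ's fetch_job:

```python
@main_blueprint.route("/tasks/<task_id>", methods=["GET"])
def get_status(task_id):
    with Connection(redis.from_url(current_app.config["REDIS_URL"])):
        q = Queue()
        task = q.fetch_job(task_id)
    if task:
        response_object = {
            "status": "success",
            "data": {
                "task_id": task.get_id(),
                "task_status": task.get_status(),
                "task_result": task.result,
            },
        }
    else:
        response_object = {"status": "error"}
    return jsonify(response_object)
```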

Add a new task to the queue:

Then, grab the task_id from the response and call the updated endpoint to view the status:

Test it out in the browser as well:

(Screenshot: task status shown in the table.)

RQ Dashboard is a lightweight, web-based monitoring system for Redis Queue.

To set up, first add a new directory to the "project" directory called "dashboard". Then, add a new Dockerfile to that newly created directory:

Simply add the service to the docker-compose.yml file like so:

Build the image and spin up the container:

Navigate to http://localhost:9181 to view the dashboard:

(Screenshot: RQ Dashboard.)

Kick off a few jobs to fully test the dashboard:


Try adding a few more workers to see how that affects things:

This has been a basic guide on how to configure Redis Queue to run long-running tasks in a Flask app. You should let the queue handle any processes that could block or slow down the user-facing code.

Looking for some challenges?

  • Deploy this application across a number of DigitalOcean droplets using Kubernetes or Docker Swarm.
  • Write unit tests for the new endpoints. (Mock out the Redis instance with fakeredis )
  • Instead of polling the server, try using Flask-SocketIO to open up a websocket connection.

Grab the code from the repo.



RQ ( Redis Queue ) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and designed to have a low barrier to entry. It can be integrated into your web stack easily.

RQ requires Redis >= 3.0.0.

Getting started

First, run a Redis server. You can use an existing one. To put jobs on queues, you don't have to do anything special; just define your typically lengthy or blocking function:
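A typical example, counting the words on a web page:

```python
import requests

def count_words_at_url(url):
    """A slow, blocking function: fetch a page and count its words."""
    resp = requests.get(url)
    return len(resp.text.split())
```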

Then, create an RQ queue:
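For example:

```python
from redis import Redis
from rq import Queue

q = Queue(connection=Redis())
```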

And enqueue the function call:
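Here my_module is a stand-in for wherever the function lives; it must be importable by the worker as well:

```python
from my_module import count_words_at_url

job = q.enqueue(count_words_at_url, "http://nvie.com")
```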

Scheduling jobs is similarly easy:
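For example, with enqueue_at and enqueue_in:

```python
from datetime import datetime, timedelta

# Run at a specific time, or after a delay:
q.enqueue_at(datetime(2025, 1, 1, 9, 15), count_words_at_url, "http://nvie.com")
q.enqueue_in(timedelta(seconds=10), count_words_at_url, "http://nvie.com")
```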

You can also ask RQ to retry failed jobs:
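For example:

```python
from rq import Retry

# Retry up to 3 times, with increasing delays between attempts:
q.enqueue(count_words_at_url, "http://nvie.com",
          retry=Retry(max=3, interval=[10, 30, 60]))
```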

To start executing enqueued function calls in the background, start a worker from your project’s directory:

That’s about it.

Installation

Simply use the following command to install the latest released version:

If you want the cutting edge version (that may well be broken), use this:

Project history

This project has been inspired by the good parts of Celery , Resque and this snippet , and has been created as a lightweight alternative to existing queueing frameworks, with a low barrier to entry.

How to Run Your First Task with RQ, Redis, and Python


As a developer, it can be very useful to learn how to run functions in the background while being able to monitor the queue in another tab or different system. This is incredibly helpful when managing heavy workloads that might not work efficiently when called all at once, or when making large numbers of calls to a database that returns data slowly over time rather than all at once.

In this tutorial we will implement an RQ queue in Python with the help of Redis to schedule and execute tasks in a timely manner.

Tutorial Requirements

  • Python 3.6 or newer. If your operating system does not provide a Python interpreter, you can go to python.org to download an installer.

Let’s talk about task queues

Task queues are a great way to allow tasks to work asynchronously outside of the main application flow. There are many task queues in Python to assist you in your project, however, we’ll be discussing a solution today known as RQ.

RQ, also known as Redis Queue, is a Python library that allows developers to enqueue jobs to be processed in the background by workers. The workers pick jobs off the queue and execute them when their time comes. Because it is backed by a plain Redis connection, the library is lightweight and approachable for those getting started for the first time.

By using this particular task queue, it is possible to process jobs in the background with little to no hassle.

Set up the environment

Create a project directory in your terminal called “rq-test” to follow along.

Set up a virtual environment and copy and paste the commands to install rq and related packages. If you are using a Unix or MacOS system, enter the following commands:

If you are on a Windows machine, enter the following commands in a prompt window:

RQ requires a Redis installation on your machine, which can be done with the following commands using wget. Redis was on version 6.0.6 at the time this article was published.

If you are using a Unix or MacOS system, enter these commands to install Redis. This is my personal favorite way to install Redis, but there are alternatives below:

If you have Homebrew installed, you can type brew install redis in the terminal and refer to this GitHub gist to install Redis on the Mac . For developers using Ubuntu Linux, the command sudo apt-get install redis would get the job done as well.

Run the Redis server in a separate terminal window on the default port with the command src/redis-server from the directory where it's installed.

For Windows users, you would have to follow a separate tutorial to run Redis on Windows . Download the latest zip file on GitHub  and extract the contents. Run the redis-server.exe file that was extracted from the zip file to start the Redis server.

The output should look similar to the following after running Redis:

Build out the tasks

In this case, a task for Redis Queue is merely a Python function. For this article, we'll have the task print a message to the terminal for x seconds to demonstrate the use of RQ.

Copy and paste the following code to a file named “tasks.py” in your directory.
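A reconstruction of the two tasks described below (the exact messages are assumptions):

```python
# tasks.py (reconstructed sketch)
import time

def print_task(seconds):
    print("Starting task")
    for _ in range(seconds):
        print("Hello World!")
        time.sleep(1)
    print("Task completed")

def print_numbers(seconds):
    print("Starting task")
    for num in range(seconds):
        print(num)
        time.sleep(1)
    print("Task completed")
```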

These are simple tasks that print numbers and text to the terminal so that we can see whether the tasks are executed properly. The time.sleep(1) call from the Python time library suspends the task for one second at a time, stretching out its runtime so that we can watch its progress.

Feel free to alter this code after the tutorial and create your own tasks. Some other popular tasks are sending a fax message or email by connecting to your email client.

Create your queue

Create another file in the root directory and name it “app.py”. Copy and paste the following code:
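A reconstruction based on the description below; which task gets the enqueue_in delay, and the 10-second value, are assumptions:

```python
# app.py (reconstructed sketch)
from datetime import timedelta

from redis import Redis
from rq import Queue

import tasks

queue = Queue(connection=Redis())

# Executed as soon as a worker is free:
queue.enqueue(tasks.print_task, 5)

# Scheduled to run 10 seconds from now:
queue.enqueue_in(timedelta(seconds=10), tasks.print_numbers, 5)
```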

The queue object sets up a connection to Redis and initializes a queue based on that connection. This queue can hold all the jobs required to run in the background with workers.

As seen in the code, the tasks.print_task function is added using the enqueue function. This means that the task added to the queue will be executed immediately.

The enqueue_in function is another nifty RQ function because it expects a timedelta in order to schedule the specified job. In this case, seconds is specified, but this variable can be changed according to the time schedule expected for your usage. Check out other ways to schedule a job  on this GitHub README.

Since we are testing out the RQ queue, I have enqueued both the tasks.print_task and tasks.print_numbers functions so that we can see their output on the terminal. The final argument, 5, is passed through to the respective functions. We expect print_task() to print "Hello World!" five times and print_numbers() to print 5 numbers in order.

If you have created any additional task, be sure to import your tasks at the top of the file so that all the tasks in your Python file can be accessed.

Run the queue

For the purposes of this article, the gif demo below will show a perfect execution of the tasks in queue so no exceptions will be raised.

The Redis server should still be running in a tab from earlier in the tutorial at this point. If it stopped, run the command src/redis-server inside the redis-6.0.6 folder, or, for developers with a Windows machine, start redis-server.exe again. Open another tab solely to run the RQ scheduler with the command rq worker --with-scheduler .

This should be the output after running the command above.

The worker command activates a worker process that connects to Redis and looks for any jobs assigned to the queue by the code in app.py .

Lastly, open a third tab in the terminal for the root project directory. Start up the virtual environment again with the command source venv/bin/activate . Then type python app.py to run the project.

Go back to the tab that is running rq worker --with-scheduler . Wait 5 more seconds after the first task is executed to see the next task. Although the live demo gif below wasn’t able to capture the best timing due to having to run the program and record, it is noticeable that there was a pause between tasks until execution and that both tasks were completed within 15 seconds.

Here’s the sample output inside of the rqworker tab:

As seen in the output above, if the tasks written in tasks.py return anything, the results are kept for 500 seconds, which is the default. A developer can alter a return value's time to live by passing a result_ttl parameter when adding tasks to the queue.

Handle exceptions and try again

If a job were to fail, you can always set up a log to keep track of the error messages, or you can use the RQ queue to enqueue and retry failed jobs. By using RQ's FailedJobRegistry, you can keep track of the jobs that failed during runtime. The RQ documentation discusses how it handles exceptions and how data about the job can help the developer figure out how to resubmit it.

However, RQ also supports developers in handling exceptions in their own way by injecting custom logic into the RQ workers. This may be a helpful option if you are executing many tasks in your project and those that failed are not worth retrying.

Force a failed task to retry

Since this is an introductory article to run your first task with RQ, let's try to purposely fail one of the tasks from earlier to test out RQ's retry object.

Go to the tasks.py file and alter the print_task() function so that a randomly generated number determines whether the function executes or fails. We will use the random Python library to generate the numbers. Don't forget to add import random at the top of the file.

Copy and paste the following lines of code to change the print_task() function in the tasks.py  file.
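A sketch consistent with the 50/50 behavior described below:

```python
# tasks.py -- print_task now fails roughly half the time
import random
import time

def print_task(seconds):
    if random.randint(1, 2) == 1:
        for _ in range(seconds):
            print("Hello World!")
            time.sleep(1)
    else:
        raise RuntimeError("Task failed, retrying...")
```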

Go back to the app.py file to change the queue. Instead of using the enqueue_in function to execute the tasks.print_task function, delete the line and replace it with queue.enqueue(tasks.print_task, 5, retry=Retry(max=2)) .

The Retry object ships with RQ, so make sure you add from rq import Retry at the top of the file in order to use this functionality. This object accepts max and interval arguments to specify when the particular function will be retried. In the newly changed line, we pass in the function we want to run, the argument 5 (the number of seconds of execution), and finally the maximum number of times we want the queue to retry.

The tasks in queue should now look like this:
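Matching the reconstruction above:

```python
from datetime import timedelta

from rq import Retry

queue.enqueue(tasks.print_task, 5, retry=Retry(max=2))
queue.enqueue_in(timedelta(seconds=10), tasks.print_numbers, 5)
```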

When running the print_task task, there is a 50/50 chance that tasks.print_task() will execute properly, since we're only generating a 1 or a 2 and the print statement only happens on a 1. Otherwise a RuntimeError is raised, and the queue immediately retries the task, up to the configured maximum of two retries.

What’s next for task queues?

Congratulations! You have successfully learned and implemented the basics of scheduling tasks in the RQ queue. Perhaps now you can tell the worker command to add a task that prints out an infinite number of "Congratulations" messages in a timely manner!

Otherwise, check out these different tasks that you can build in to your Redis Queue:

  • Schedule Twilio SMS to a list of contacts  quickly!
  • Use Redis Queue to generate a fan fiction with OpenAI GPT-3
  • Queue Emails with Twilio SendGrid using Redis Queue

Let me know what you have been building by reaching out to me over email!

Diane Phan is a developer on the Developer Voices team. She loves to help programmers tackle difficult challenges that might prevent them from bringing their projects to life. She can be reached at dphan [at] twilio.com or LinkedIn .



Background Tasks in Python with RQ

Last updated July 18, 2022

Table of Contents: Configuration, Create a Worker, Queuing Jobs, Troubleshooting

RQ (Redis Queue) makes it easy to add background tasks to your Python applications on Heroku. RQ uses a Redis database as a queue to process background jobs. To get started using RQ, you need to configure your application and then run a worker process in your application.

To set up RQ and its dependencies, install it using pip: pip install rq

Be sure to add rq to your requirements.txt file as well.

Now that we have everything we need to create a worker process, let’s create one.

Create a file called worker.py . This module will listen to queued tasks and process them as they are received.
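A sketch close to Heroku's canonical worker; the queue names are conventional defaults:

```python
# worker.py
import os

import redis
from rq import Connection, Queue, Worker

listen = ["high", "default", "low"]

redis_url = os.getenv("REDIS_URL", "redis://localhost:6379")
conn = redis.from_url(redis_url)

if __name__ == "__main__":
    with Connection(conn):
        worker = Worker(map(Queue, listen))
        worker.work()
```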

Now you can run your new worker process with python worker.py .

To send a job to your new Redis Queue, you need to send your jobs to Redis in your code. Let’s pretend we have a blocking function in an external module, utils.py :
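For example, the usual word-counting function:

```python
# utils.py
import requests

def count_words_at_url(url):
    resp = requests.get(url)
    return len(resp.text.split())
```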

In your application, create an RQ queue:
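Reusing the connection from worker.py:

```python
from rq import Queue

from worker import conn

q = Queue(connection=conn)
```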

And enqueue the function call:
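For example:

```python
from utils import count_words_at_url

result = q.enqueue(count_words_at_url, "http://heroku.com")
```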

The blocking function will automatically be executed in the background worker process.

To deploy your new worker system to Heroku, add the worker process to the Procfile in the root of the project, e.g. worker: python worker.py .

Additionally, provision an instance of Redis with the Heroku Data for Redis (heroku-redis) add-on and deploy with a git push.

Once everything’s pushed up, scale your workers according to your needs, e.g. heroku ps:scale worker=1 .

View worker process output by filtering the logs with the -p flag and the name of the worker process type.

The worker process can be manually invoked for further isolation.


Redis Queue (RQ)

Redis Queue (RQ) is a Python task queue implementation that uses Redis to keep track of tasks in the queue that need to be executed.


RQ resources

Asynchronous Tasks in Python with Redis Queue is a quickstart-style tutorial that shows how to use RQ to fetch data from the Mars Rover web API and process URLs for each of the photos taken by NASA's Mars rover. There is also a follow-up post on Scheduling Tasks in Python with Redis Queue and RQ Scheduler that shows how to schedule tasks in advance, which is a common way of working with task queues .

The RQ intro post contains information on design decisions and how to use RQ in your projects.

Build a Ghostwriting App for Scary Halloween Stories with OpenAI's GPT-3 Engine and Task Queues in Python is a fun tutorial that uses RQ with OpenAI's GPT-3 API to randomly write original stories inspired by creepy Halloween tales.

International Space Station notifications with Python and Redis Queue (RQ) shows how to combine the RQ task queue library with Flask to send text message notifications every time a condition is met; in this blog post's case, that the ISS is currently flying over your location on Earth.

Asynchronous Tasks with Flask and Redis Queue looks at how to configure RQ to handle long-running tasks in a Flask app.

How We Spotted and Fixed a Performance Degradation in Our Python Code is a quick story about how an engineering team moving from Celery to RQ fixed some deficiencies in their RQ performance as they started to understand the difference between how the two tools execute workers.

Flask by Example - Implementing a Redis Task Queue explains how to install and use RQ in a Flask application.

rq-dashboard is an awesome Flask-based dashboard for viewing queues, workers, and other critical information when using RQ.

Sending Confirmation Emails with Flask, Redis Queue, and Amazon SES shows how RQ fits into a real-world application that uses many libraries and third party APIs.

Background Tasks in Python using Redis Queue gives a code example for web scraping data from the Goodreads website. Note that the first sentence in the post is not accurate: it's not the Python language that is linear, but instead the way workers in WSGI servers handle a single request at a time by blocking. Nevertheless, the example is a good one for understanding how RQ can execute.


closeio/tasktiger

Python task queue using Redis

TaskTiger is a Python task queue using Redis.

(Interested in working on projects like this? Close is looking for great engineers to join our team)

Per-task fork or synchronous worker

By default, TaskTiger forks a subprocess for each task. This comes with several benefits: memory leaks caused by tasks are avoided, since the subprocess is terminated when the task finishes; a hard time limit can be set for each task, after which the task is killed if it hasn't completed; and, to keep performance up, any necessary Python modules can be preloaded in the parent process.

TaskTiger also supports synchronous workers, which can perform better because there is no forking overhead, and tasks can reuse network connections. To prevent memory leaks from accumulating, workers can be set to shut down after a certain amount of time, at which point a supervisor can restart them. Workers also automatically exit on hard timeouts to prevent an inconsistent process state.

Unique queues

TaskTiger has the option to avoid duplicate tasks in the task queue. In some cases it is desirable to combine multiple similar tasks. For example, imagine a task that indexes objects (e.g. to make them searchable). If an object is already present in the task queue and hasn't been processed yet, a unique queue will ensure that the indexing task doesn't have to do duplicate work. However, if the task is already running while it's queued, the task will be executed another time to ensure that the indexing task always picks up the latest state.

TaskTiger can ensure that no more than one instance of a task with similar arguments executes at a time by acquiring a lock. If a task hits a lock, it is requeued and scheduled for later execution after a configurable interval.

Task retrying

TaskTiger lets you retry exceptions (all exceptions or a list of specific ones) and comes with configurable retry intervals (fixed, linear, exponential, custom).

Flexible queues

Tasks can be easily queued in separate queues. Workers pick tasks from a randomly chosen queue and can be configured to only process specific queues, ensuring that all queues are processed equally. TaskTiger also supports subqueues which are separated by a period. For example, you can have per-customer queues in the form process_emails.CUSTOMER_ID and start a worker to process process_emails and any of its subqueues. Since tasks are picked from a random queue, all customers get equal treatment: If one customer is queueing many tasks it can't block other customers' tasks from being processed. A maximum queue size can also be enforced.

Batch queues

Batch queues can be used to combine multiple queued tasks into one. That way, your task function can process multiple sets of arguments at the same time, which can improve performance. The batch size is configurable.

Scheduled and periodic tasks

Tasks can be scheduled for execution at a specific time. Tasks can also be executed periodically (e.g. every five seconds).

Structured logging

TaskTiger supports JSON-style logging via structlog, allowing more flexibility for tools to analyze the log. For example, you can use TaskTiger together with Logstash, Elasticsearch, and Kibana.

The structlog processor tasktiger.logging.tasktiger_processor can be used to inject the current task id into all log messages.

Reliability

TaskTiger atomically moves tasks between queue states, and will re-execute tasks after a timeout if a worker crashes.

Error handling

If an exception occurs during task execution and the task is not set up to be retried, TaskTiger stores the execution tracebacks in an error queue. The task can then be retried or deleted manually. TaskTiger can be easily integrated with error reporting services like Rollbar.

Admin interface

A simple admin interface using flask-admin exists as a separate project ( tasktiger-admin ).

Quick start

It is easy to get started with TaskTiger.

Create a file that contains the task(s).
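A sketch close to TaskTiger's quick start:

```python
# tasks.py
import tasktiger

tiger = tasktiger.TaskTiger()

@tiger.task()
def my_task():
    print("Hello")
```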

Queue the task using the delay method.
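Either through the TaskTiger object or, since the task is decorated, Celery-style on the function itself:

```python
import tasks

tasks.tiger.delay(tasks.my_task)  # via the TaskTiger object
tasks.my_task.delay()             # Celery-compatible shortcut
```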

Run a worker (make sure the task code can be found, e.g. using PYTHONPATH ).

Configuration

A TaskTiger object keeps track of TaskTiger's settings and is used to decorate and queue tasks. The constructor takes the following arguments:

connection : Redis connection object. The connection should be initialized with decode_responses=True to avoid encoding problems on Python 3.

config : Dict with config options. Most configuration options don't need to be changed; a full list can be seen within TaskTiger 's __init__ method.

Here are a few commonly used options:

ALWAYS_EAGER

If set to True , all tasks except future tasks ( when is a future time) will be executed locally by blocking until the task returns. This is useful for testing purposes.

BATCH_QUEUES

Set up queues that will be processed in batch, i.e. multiple jobs are taken out of the queue at the same time and passed as a list to the worker method. Takes a dict where the key represents the queue name and the value represents the batch size. Note that the task needs to be declared as batch=True . Also note that any subqueues will be automatically treated as batch queues, and the batch value of the most specific subqueue name takes precedence.

ONLY_QUEUES

If set to a non-empty list of queue names, a worker only processes the given queues (and their subqueues), unless explicit queues are passed to the command line.

setup_structlog

If set to True, sets up structured logging using structlog when initializing TaskTiger. This makes writing custom worker scripts easier since it doesn't require the user to set up structlog in advance.

Task decorator

TaskTiger provides a task decorator to specify task options. Note that simple tasks don't need to be decorated. However, decorating the task allows you to use an alternative syntax to queue the task, which is compatible with Celery:
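A sketch:

```python
import tasktiger

tiger = tasktiger.TaskTiger()

@tiger.task(queue="emails")
def send_email(to, subject):
    print("sending", to, subject)

# Celery-style queueing via the decorated function:
send_email.delay("user@example.com", "Welcome!")

# Equivalent queueing through the TaskTiger object:
tiger.delay(send_email, args=("user@example.com", "Welcome!"))
```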

Task options

Tasks support a variety of options that can be specified either in the task decorator, or when queueing a task. For the latter, the delay method must be called on the TaskTiger object, and any options in the task decorator are overridden.

When queueing a task, the task needs to be defined in a module other than the Python file which is being executed. In other words, the task can't be in the __main__ module; otherwise TaskTiger will return an error.

The following options are supported by both delay and the task decorator:

queue

Name of the queue where the task will be queued.

hard_timeout

If the task runs longer than the given number of seconds, it will be killed and marked as failed.

unique

Boolean to indicate whether the task will only be queued if there is no similar task with the same function, arguments, and keyword arguments in the queue. Note that multiple similar tasks may still be executed at the same time, since the task will still be inserted into the queue if another one is being processed. Requeueing an already scheduled unique task will not change the time it was originally scheduled to execute at.

unique_key

If set, this implies unique=True and specifies the list of kwargs to use to construct the unique key. By default, all args and kwargs are serialized and hashed.

lock

Boolean to indicate whether to hold a lock while the task is being executed (for the given args and kwargs). If a task with similar args/kwargs is queued and tries to acquire the lock, it will be retried later.

lock_key

If set, this implies lock=True and specifies the list of kwargs to use to construct the lock key. By default, all args and kwargs are serialized and hashed.

max_queue_size

A maximum queue size can be enforced by setting this to an integer value. The QueueFullException exception will be raised when queuing a task if this limit is reached. Tasks in the active , scheduled , and queued states are counted against this limit.

when

Takes either a datetime (for an absolute date) or a timedelta (relative to now). If given, the task will be scheduled for the given time.

retry

Boolean to indicate whether to retry the task when it fails (either because of an exception or because of a timeout). To restrict the list of failures, use retry_on . Unless retry_method is given, the configured DEFAULT_RETRY_METHOD is used.

retry_on

If a list is given, it implies retry=True . The task will only be retried on the given exceptions (or their subclasses). To retry the task when a hard timeout occurs, use JobTimeoutException .

retry_method

If given, implies retry=True . Pass either:

  • a function that takes the retry number as an argument, or,
  • a tuple (f, args) , where f takes the retry number as the first argument, followed by the additional args.

The function needs to return the desired retry interval in seconds, or raise StopRetry to stop retrying. The following built-in functions can be passed for common scenarios and return the appropriate tuple:

fixed(delay, max_retries)

Returns a method that returns the given delay (in seconds) or raises StopRetry if the number of retries exceeds max_retries .

linear(delay, increment, max_retries)

Like fixed , but starts off with the given delay and increments it by the given increment after every retry.

exponential(delay, factor, max_retries)

Like fixed , but starts off with the given delay and multiplies it by the given factor after every retry.

For example, to retry a task 3 times (for a total of 4 executions), and wait 60 seconds between executions, pass retry_method=fixed(60, 3) .

runner_class

If given, a Python class can be specified to influence task running behavior. The runner class should inherit tasktiger.runner.BaseRunner and implement the task execution behavior. The default implementation is available in tasktiger.runner.DefaultRunner . The following behavior can be achieved:

  • Execute specific code before or after the task is executed (in the forked child process), or customize the way task functions are called in either single or batch processing.

Note that if you want to execute specific code for all tasks, you should use the CHILD_CONTEXT_MANAGERS configuration option.

  • Control the hard timeout behavior of a task.
  • Execute specific code in the main worker process after a task failed permanently.

This is an advanced feature and the interface and requirements of the runner class can change in future TaskTiger versions.

The following options can be only specified in the task decorator:

batch

If set to True , the task will receive a list of dicts with args and kwargs and can process multiple tasks of the same type at once. Example: [{"args": [1], "kwargs": {}}, {"args": [2], "kwargs": {}}] Note that the list will only contain multiple items if the worker has set up BATCH_QUEUES for the specific queue (see the Configuration section).

schedule

If given, makes a task execute periodically. Pass either:

  • a function that takes the current datetime as an argument.
  • a tuple (f, args) , where f takes the current datetime as the first argument, followed by the additional args.

The schedule function must return the next task execution datetime, or None to prevent periodic execution. The function is executed to determine the initial task execution date when a worker is initialized, and to determine the next execution date when the task is about to get executed.

For most common scenarios, the below mentioned built-in functions can be passed:

periodic(seconds=0, minutes=0, hours=0, days=0, weeks=0, start_date=None, end_date=None)

Use equal, periodic intervals, starting from start_date (defaults to 2000-01-01T00:00Z , a Saturday, if not given), ending at end_date (or never, if not given). For example, to run a task every five minutes indefinitely, use schedule=periodic(minutes=5) . To run a task every Sunday at 4am UTC, you could use schedule=periodic(weeks=1, start_date=datetime.datetime(2000, 1, 2, 4)) .

cron_expr(expr, start_date=None, end_date=None)

Use the given cron expression expr to determine run times. start_date specifies the periodic task start date; it defaults to 2000-01-01T00:00Z , a Saturday, if not given. end_date specifies the periodic task end date; the task repeats forever if end_date is not given. For example, to run a task every hour indefinitely, use schedule=cron_expr("0 * * * *") . To run a task every Sunday at 4am UTC, you could use schedule=cron_expr("0 4 * * 0") .

Custom retrying

In some cases the task retry options may not be flexible enough. For example, you might want to use a different retry method depending on the exception type, or you might like to suppress logging an error if a task fails after retries. In these cases, RetryException can be raised within the task function. The following options are supported:

method

Specify a custom retry method for this retry. If not given, the task's default retry method is used, or, if unspecified, the configured DEFAULT_RETRY_METHOD . Note that the number of retries passed to the retry method is always the total number of times this method has been executed, regardless of which retry method was used.

original_traceback

If RetryException is raised from within an except block and original_traceback is True, the original traceback will be logged (i.e. the stacktrace at the place where the caught exception was raised). False by default.

log_error

If set to False and the task fails permanently, a warning will be logged instead of an error, and the task will be removed from Redis when it completes. True by default.

Example usage:
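A sketch; the import paths for RetryException and fixed are assumptions, and fetch / NetworkError are hypothetical stand-ins:

```python
import tasktiger
from tasktiger.exceptions import RetryException  # import path assumed
from tasktiger.retry import fixed                # import path assumed

tiger = tasktiger.TaskTiger()

class NetworkError(Exception):
    """Hypothetical transient failure."""

def fetch(url):
    raise NetworkError("connection dropped")  # hypothetical helper

@tiger.task()
def process_feed(feed_url):
    try:
        fetch(feed_url)
    except NetworkError:
        # Retry twice, 5 minutes apart; warn (not error) on permanent failure.
        raise RetryException(
            method=fixed(300, 2),
            original_traceback=True,
            log_error=False,
        )
```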

Workers

The tasktiger command is used on the command line to invoke a worker. To invoke multiple workers, multiple instances need to be started. This can be done easily, e.g. via Supervisor. The following Supervisor configuration file can be placed in /etc/supervisor/tasktiger.ini and runs 4 TaskTiger workers as the ubuntu user. For more information, read Supervisor's documentation.

Workers support the following options:

-q , --queues

If specified, only the given queue(s) are processed. Multiple queues can be separated by comma. Any subqueues of the given queues will be also processed. For example, -q first,second will process items from first , second , and subqueues such as first.CUSTOMER1 , first.CUSTOMER2 .

-e , --exclude-queues

If specified, exclude the given queue(s) from processing. Multiple queues can be separated by comma. Any subqueues of the given queues will also be excluded unless a more specific queue is specified with the -q option. For example, -q email,email.incoming.CUSTOMER1 -e email.incoming will process items from the email queue and subqueues like email.outgoing.CUSTOMER1 or email.incoming.CUSTOMER1 , but not email.incoming or email.incoming.CUSTOMER2 .

-m , --module

Module(s) to import when launching the worker. This improves task performance since the module doesn't have to be reimported every time a task is forked. Multiple modules can be separated by comma.

Another way to preload modules is to set up a custom TaskTiger launch script, which is described below.

-h , --host

Redis server hostname (if different from localhost ).

-p , --port

Redis server port (if different from 6379 ).

-a , --password

Redis server password (if required).

Redis server database number (if different from 0 ).

-M , --max-workers-per-queue

Maximum number of workers that are allowed to process a queue.

--store-tracebacks/--no-store-tracebacks

Store tracebacks with execution history (config defaults to True ).

Can be fork (default) or sync . Whether to execute tasks in a separate process via fork, or execute them synchronously in the same process. See the "Features" section for the benefits of either approach.

--exit-after

Exit the worker after the time in minutes has elapsed. This is mainly useful with the synchronous executor to prevent memory leaks from accumulating.

In some cases it is convenient to have a custom TaskTiger launch script. For example, your application may have a manage.py command that sets up the environment and you may want to launch TaskTiger workers using that script. To do that, you can use the run_worker_with_args method, which launches a TaskTiger worker and parses any command line arguments. Here is an example:
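A sketch of such a script (modeled on the pattern described above; details are assumptions):

```python
# manage.py (sketch)
import sys

from tasktiger import TaskTiger

try:
    command = sys.argv[1]
except IndexError:
    command = None

if command == "tasktiger":
    tiger = TaskTiger(setup_structlog=True)
    # Forward the remaining arguments to the worker, e.g.:
    #   python manage.py tasktiger -q default -m myapp.tasks
    tiger.run_worker_with_args(sys.argv[2:])
    sys.exit(0)
```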

Inspect, requeue and delete tasks

TaskTiger provides access to the Task class which lets you inspect queues and perform various actions on tasks.

Each queue can have tasks in the following states:

  • queued : Tasks that are queued and waiting to be picked up by the workers.
  • active : Tasks that are currently being processed by the workers.
  • scheduled : Tasks that are scheduled for later execution.
  • error : Tasks that failed with an error.

To get a list of all tasks for a given queue and state, use Task.tasks_from_queue . The method gives you back a tuple containing the total number of tasks in the queue (useful if the tasks are truncated) and a list of tasks in the queue, latest first. Using the skip and limit keyword arguments, you can fetch arbitrary slices of the queue. If you know the task ID, you can fetch a given task using Task.from_id . Both methods let you load tracebacks from failed task executions using the load_executions keyword argument, which accepts an integer indicating how many executions should be loaded.

Tasks can also be constructed and queued using the regular constructor, which takes the TaskTiger instance, the function name and the options described in the Task options section. The task can then be queued using its delay method. Note that the when argument needs to be passed to the delay method, if applicable. Unique tasks can be reconstructed using the same arguments.

The Task object has the following properties:

  • id : The task ID.
  • data : The raw data as a dict from Redis.
  • executions : A list of failed task executions (as dicts). An execution dict contains the processing time in time_started and time_failed , the worker host in host , the exception name in exception_name and the full traceback in traceback .
  • serialized_func , args , kwargs : The serialized function name with all of its arguments.
  • func : The imported (executable) function.

The Task object has the following methods:

  • cancel : Cancel a scheduled task.
  • delay : Queue the task for execution.
  • delete : Remove the task from the error queue.
  • execute : Run the task without queueing it.
  • n_executions : Queries and returns the number of past task executions.
  • retry : Requeue the task from the error queue for execution.
  • update_scheduled_time : Updates a scheduled task's date to the given date.

The current task can be accessed within the task function while it's being executed: In case of a non-batch task, the current_task property of the TaskTiger instance returns the current Task instance. In case of a batch task the current_tasks property must be used which returns a list of tasks that are currently being processed (in the same order as they were passed to the task).

Example 1: Queueing a unique task and canceling it without a reference to the original task.

Example 2: Inspecting queues and retrying a task by ID.
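A sketch of Example 2, using the methods described above (exact signatures assumed):

```python
from tasktiger import TaskTiger, Task  # Task import path assumed

tiger = TaskTiger()

# Total count plus the most recent tasks in the "error" state:
n_total, tasks = Task.tasks_from_queue(tiger, "default", "error", limit=10)

# Requeue one failed task by its ID:
task = Task.from_id(tiger, "default", "error", tasks[0].id)
task.retry()
```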

Example 3: Accessing the task instances within a batch task function to determine how many times the currently processing tasks were previously executed.
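A sketch of Example 3 for a batch task (assuming the current_tasks property described above):

```python
@tiger.task(batch=True)
def my_batch_task(params):
    # How many times has each currently processing task already been executed?
    for task in tiger.current_tasks:
        print(task.n_executions())
```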

Pause queue processing

The --max-workers-per-queue option uses queue locks to control the number of workers that can simultaneously process the same queue. When using this option a system lock can be placed on a queue which will keep workers from processing tasks from that queue until it expires. Use the set_queue_system_lock() method of the TaskTiger object to set this lock.

Rollbar error handling

TaskTiger comes with Rollbar integration for error handling. When a task errors out, it can be logged to Rollbar, grouped by queue, task function name and exception type. To enable logging, initialize rollbar with the StructlogRollbarHandler provided in the tasktiger.rollbar module. The handler takes a string as an argument which is used to prefix all the messages reported to Rollbar. Here is a custom worker launch script:

Cleaning Up Error'd Tasks

Error'd tasks occasionally need to be purged from Redis, so TaskTiger exposes a purge_errored_tasks method to help. It might be useful to set this up as a periodic task as follows:
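A sketch (the periodic import path and the purge_errored_tasks parameters are assumptions):

```python
import datetime

from tasktiger import TaskTiger, periodic  # periodic import path assumed

tiger = TaskTiger()

@tiger.task(schedule=periodic(hours=1), queue="periodic")
def purge_errored_tasks():
    # Drop errored tasks whose last execution is older than 90 days.
    tiger.purge_errored_tasks(
        limit=5000,
        last_execution_before=datetime.datetime.utcnow()
        - datetime.timedelta(days=90),
    )
```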

Running The Test Suite

Tests can be run locally using the provided docker compose file. After installing docker, tests should be runnable with:

Tests can be more granularly run using normal pytest flags. For example:

Releasing a New Version

  • Make sure the code has been thoroughly reviewed and tested in a realistic production environment.
  • Update setup.py and CHANGELOG.md . Make sure you include any breaking changes.
  • Run python setup.py sdist and twine upload dist/<PACKAGE_TO_UPLOAD> .
  • Push a new tag pointing to the released commit, format: v0.13 for example.
  • Mark the tag as a release in GitHub's UI and include in the description the changelog entry for the version. An example would be: https://github.com/closeio/tasktiger/releases/tag/v0.13 .


What’s New In Python 3.13

Thomas Wouters

This article explains the new features in Python 3.13, compared to 3.12.

For full details, see the changelog .

PEP 719 – Python 3.13 Release Schedule

Prerelease users should be aware that this document is currently in draft form. It will be updated substantially as Python 3.13 moves towards release, so it’s worth checking back even after reading earlier versions.

Summary – Release Highlights

Python 3.13 beta is the pre-release of the next version of the Python programming language, with a mix of changes to the language, the implementation and the standard library. The biggest changes to the implementation include a new interactive interpreter, and experimental support for dropping the Global Interpreter Lock ( PEP 703 ) and a Just-In-Time compiler ( PEP 744 ). The library changes contain removal of deprecated APIs and modules, as well as the usual improvements in user-friendliness and correctness.

Interpreter improvements:

A greatly improved interactive interpreter and improved error messages.

Color support in the new interactive interpreter , as well as in tracebacks and doctest output. This can be disabled through the PYTHON_COLORS and NO_COLOR environment variables.

PEP 744 : A basic JIT compiler was added. It is currently disabled by default (though we may turn it on later). Performance improvements are modest – we expect to be improving this over the next few releases.

PEP 667 : The locals() builtin now has defined semantics when mutating the returned mapping. Python debuggers and similar tools may now more reliably update local variables in optimized frames even during concurrent code execution.

New typing features:

PEP 696 : Type parameters ( typing.TypeVar , typing.ParamSpec , and typing.TypeVarTuple ) now support defaults.

PEP 702 : Support for marking deprecations in the type system using the new warnings.deprecated() decorator.

PEP 742 : typing.TypeIs was added, providing more intuitive type narrowing behavior.
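For example, a user-defined narrowing function (a minimal sketch):

```python
from typing import TypeIs

def is_str(x: object) -> TypeIs[str]:
    return isinstance(x, str)

def handle(x: int | str) -> None:
    if is_str(x):
        print(x.upper())  # x is narrowed to str here...
    else:
        print(x + 1)      # ...and to int here
```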

PEP 705 : typing.ReadOnly was added, to mark an item of a typing.TypedDict as read-only for type checkers.

Free-threading:

PEP 703 : CPython 3.13 has experimental support for running with the global interpreter lock disabled when built with --disable-gil . See Free-threaded CPython for more details.

Platform support:

PEP 730 : Apple’s iOS is now an officially supported platform. Official Android support ( PEP 738 ) is in the works as well.

Removed modules:

PEP 594 : The remaining 19 “dead batteries” have been removed from the standard library: aifc , audioop , cgi , cgitb , chunk , crypt , imghdr , mailcap , msilib , nis , nntplib , ossaudiodev , pipes , sndhdr , spwd , sunau , telnetlib , uu and xdrlib .

Also removed were the tkinter.tix and lib2to3 modules, and the 2to3 program.

Release schedule changes:

PEP 602 (“Annual Release Cycle for Python”) has been updated:

Python 3.9 - 3.12 have one and a half years of full support, followed by three and a half years of security fixes.

Python 3.13 and later have two years of full support, followed by three years of security fixes.

New Features

A better interactive interpreter

On Unix-like systems like Linux or macOS, Python now uses a new interactive shell. When the user starts the REPL from an interactive terminal, and both curses and readline are available, the interactive shell now supports the following new features:

Colorized prompts.

Multiline editing with history preservation.

Interactive help browsing using F1 with a separate command history.

History browsing using F2 that skips output as well as the >>> and … prompts.

“Paste mode” with F3 that makes pasting larger blocks of code easier (press F3 again to return to the regular prompt).

The ability to issue REPL-specific commands like help , exit , and quit without the need to use call parentheses after the command name.

If the new interactive shell is not desired, it can be disabled via the PYTHON_BASIC_REPL environment variable.

For more on interactive mode, see Interactive Mode .

(Contributed by Pablo Galindo Salgado, Łukasz Langa, and Lysandros Nikolaou in gh-111201 based on code from the PyPy project.)

Improved Error Messages

The interpreter now colorizes error messages when displaying tracebacks by default. This feature can be controlled via the new PYTHON_COLORS environment variable as well as the canonical NO_COLOR and FORCE_COLOR environment variables. See also Controlling color . (Contributed by Pablo Galindo Salgado in gh-112730 .)

A common mistake is to write a script with the same name as a standard library module. When this results in errors, we now display a more helpful error message:

Similarly, if a script has the same name as a third-party module it attempts to import, and this results in errors, we also display a more helpful error message:

(Contributed by Shantanu Jain in gh-95754 .)

When an incorrect keyword argument is passed to a function, the error message now potentially suggests the correct keyword argument. (Contributed by Pablo Galindo Salgado and Shantanu Jain in gh-107944 .)

Classes have a new __static_attributes__ attribute, populated by the compiler, with a tuple of names of attributes of this class which are accessed through self.X from any function in its body. (Contributed by Irit Katriel in gh-115775 .)
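A quick illustration (the tuple's ordering is not guaranteed):

```python
class Point:
    def move(self, dx):
        self.x += dx

    def describe(self):
        return f"x={self.x}, y={self.y}"

# Names accessed via self.X anywhere in the class body:
print(Point.__static_attributes__)  # e.g. ('x', 'y')
```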

Defined mutation semantics for locals()

Historically, the expected result of mutating the return value of locals() has been left to individual Python implementations to define.

Through PEP 667 , Python 3.13 standardises the historical behaviour of CPython for most code execution scopes, but changes optimized scopes (functions, generators, coroutines, comprehensions, and generator expressions) to explicitly return independent snapshots of the currently assigned local variables, including locally referenced nonlocal variables captured in closures.
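For example, in an optimized (function) scope the returned mapping is now an independent snapshot:

```python
def f():
    x = 1
    snapshot = locals()
    x = 2
    print(snapshot["x"])  # 1 -- a snapshot, not a live view
    print(locals()["x"])  # 2 -- each call takes a fresh snapshot

f()
```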

To ensure debuggers and similar tools can reliably update local variables in scopes affected by this change, FrameType.f_locals now returns a write-through proxy to the frame’s local and locally referenced nonlocal variables in these scopes, rather than returning an inconsistently updated shared dict instance with undefined runtime semantics.

See PEP 667 for more details, including related C API changes and deprecations.

(PEP and implementation contributed by Mark Shannon and Tian Gao in gh-74929 . Documentation updates provided by Guido van Rossum and Alyssa Coghlan.)

Incremental Garbage Collection

The cycle garbage collector is now incremental. This means that maximum pause times are reduced by an order of magnitude or more for larger heaps.

Support For Mobile Platforms

iOS is now a PEP 11 supported platform. arm64-apple-ios (iPhone and iPad devices released after 2013) and arm64-apple-ios-simulator (Xcode iOS simulator running on Apple Silicon hardware) are now tier 3 platforms.

x86_64-apple-ios-simulator (Xcode iOS simulator running on older x86_64 hardware) is not a tier 3 supported platform, but will be supported on a best-effort basis.

See PEP 730 for more details.

(PEP written and implementation contributed by Russell Keith-Magee in gh-114099 .)

Experimental JIT Compiler

When CPython is configured using the --enable-experimental-jit option, a just-in-time compiler is added which may speed up some Python programs.

The internal architecture is roughly as follows.

We start with specialized Tier 1 bytecode . See What’s new in 3.11 for details.

When the Tier 1 bytecode gets hot enough, it gets translated to a new, purely internal Tier 2 IR , a.k.a. micro-ops (“uops”).

The Tier 2 IR uses the same stack-based VM as Tier 1, but the instruction format is better suited to translation to machine code.

We have several optimization passes for Tier 2 IR, which are applied before it is interpreted or translated to machine code.

There is a Tier 2 interpreter, but it is mostly intended for debugging the earlier stages of the optimization pipeline. The Tier 2 interpreter can be enabled by configuring Python with --enable-experimental-jit=interpreter .

When the JIT is enabled, the optimized Tier 2 IR is translated to machine code, which is then executed.

The machine code translation process uses a technique called copy-and-patch . It has no runtime dependencies, but there is a new build-time dependency on LLVM.

The --enable-experimental-jit flag has the following optional values:

no (default) – Disable the entire Tier 2 and JIT pipeline.

yes (default if the flag is present without optional value) – Enable the JIT. To disable the JIT at runtime, pass the environment variable PYTHON_JIT=0 .

yes-off – Build the JIT but disable it by default. To enable the JIT at runtime, pass the environment variable PYTHON_JIT=1 .

interpreter – Enable the Tier 2 interpreter but disable the JIT. The interpreter can be disabled by running with PYTHON_JIT=0 .

(On Windows, use PCbuild/build.bat --experimental-jit to enable the JIT or --experimental-jit-interpreter to enable the Tier 2 interpreter.)

See PEP 744 for more details.

(JIT by Brandt Bucher, inspired by a paper by Haoran Xu and Fredrik Kjolstad. Tier 2 IR by Mark Shannon and Guido van Rossum. Tier 2 optimizer by Ken Jin.)

Free-threaded CPython ¶

CPython will run with the global interpreter lock (GIL) disabled when configured using the --disable-gil option at build time. This is an experimental feature and therefore isn’t used by default. Users need to either compile their own interpreter, or install one of the experimental builds that are marked as free-threaded . See PEP 703 “Making the Global Interpreter Lock Optional in CPython” for more detail.

Free-threaded execution allows for full utilization of the available processing power by running threads in parallel on available CPU cores. While not all software will benefit from this automatically, programs designed with threading in mind will run faster on multicore hardware.

Work is still ongoing: expect some bugs and a substantial single-threaded performance hit.

The free-threaded build still supports optionally running with the GIL enabled at runtime using the environment variable PYTHON_GIL or the command line option -X gil .

To check if the current interpreter was configured with --disable-gil , use sysconfig.get_config_var("Py_GIL_DISABLED") . To check if the GIL is actually disabled in the running process, the sys._is_gil_enabled() function can be used.
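For example, a quick check (note that sys._is_gil_enabled() is a private helper and may be absent on other implementations):

    import sys
    import sysconfig

    # Truthy if the interpreter was built with --disable-gil
    print(sysconfig.get_config_var("Py_GIL_DISABLED"))

    # True if the GIL is actually enabled in this running process
    if hasattr(sys, "_is_gil_enabled"):
        print(sys._is_gil_enabled())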

C-API extension modules need to be built specifically for the free-threaded build. Extensions that support running with the GIL disabled should use the Py_mod_gil slot. Extensions using single-phase init should use PyUnstable_Module_SetGIL() to indicate whether they support running with the GIL disabled. Importing C extensions that don’t use these mechanisms will cause the GIL to be enabled, unless the GIL was explicitly disabled with the PYTHON_GIL environment variable or the -X gil=0 option.

pip 24.1b1 or newer is required to install packages with C extensions in the free-threaded build.

Other Language Changes ¶

Allow the count argument of str.replace() to be a keyword. (Contributed by Hugo van Kemenade in gh-106487 .)
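For example:

    # count could previously only be passed positionally
    print("banana".replace("a", "o", count=1))  # bonana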

The compiler now strips indents from docstrings. This reduces the size of the bytecode cache (e.g. .pyc files); for example, the cache file size for sqlalchemy.orm.session in SQLAlchemy 2.0 is reduced by about 5%. This change will affect tools using docstrings, like doctest . (Contributed by Inada Naoki in gh-81283 .)

The compile() built-in can now accept a new flag, ast.PyCF_OPTIMIZED_AST , which is similar to ast.PyCF_ONLY_AST except that the returned AST is optimized according to the value of the optimize argument. (Contributed by Irit Katriel in gh-108113 ).
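A small sketch; constant folding is one of the optimizations that should be reflected in the returned tree:

    import ast

    mod = compile("x = 1 + 2", "<example>", "exec",
                  flags=ast.PyCF_OPTIMIZED_AST, optimize=1)
    print(ast.dump(mod.body[0].value))  # Constant(value=3) after folding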

multiprocessing , concurrent.futures , compileall : Replace os.cpu_count() with os.process_cpu_count() to select the default number of worker threads and processes. Get the CPU affinity if supported. (Contributed by Victor Stinner in gh-109649 .)

os.path.realpath() now resolves MS-DOS style file names even if the file is not accessible. (Contributed by Moonsik Park in gh-82367 .)

Fixed a bug where a global declaration in an except block is rejected when the global is used in the else block. (Contributed by Irit Katriel in gh-111123 .)

Many functions now emit a warning if a boolean value is passed as a file descriptor argument. This can help catch some errors earlier. (Contributed by Serhiy Storchaka in gh-82626 .)

Added a new environment variable PYTHON_FROZEN_MODULES . It determines whether or not frozen modules are ignored by the import machinery, equivalent to the -X frozen_modules command-line option. (Contributed by Yilei Yang in gh-111374 .)

Add support for the perf profiler working without frame pointers through the new environment variable PYTHON_PERF_JIT_SUPPORT and command-line option -X perf_jit . (Contributed by Pablo Galindo in gh-118518 .)

The new PYTHON_HISTORY environment variable can be used to change the location of a .python_history file. (Contributed by Levi Sabah, Zackery Spytz and Hugo van Kemenade in gh-73965 .)

Add PythonFinalizationError exception. This exception, derived from RuntimeError , is raised when an operation is blocked during Python finalization .

The following functions now raise PythonFinalizationError, instead of RuntimeError :

_thread.start_new_thread() .

subprocess.Popen .

os.fork() .

os.forkpty() .

(Contributed by Victor Stinner in gh-114570 .)

Added name and mode attributes for compressed and archived file-like objects in modules bz2 , lzma , tarfile and zipfile . (Contributed by Serhiy Storchaka in gh-115961 .)

Allow controlling Expat >=2.6.0 reparse deferral ( CVE-2023-52425 ) by adding five new methods:

xml.etree.ElementTree.XMLParser.flush()

xml.etree.ElementTree.XMLPullParser.flush()

xml.parsers.expat.xmlparser.GetReparseDeferralEnabled()

xml.parsers.expat.xmlparser.SetReparseDeferralEnabled()

xml.sax.expatreader.ExpatParser.flush()

(Contributed by Sebastian Pipping in gh-115623 .)

The ssl.create_default_context() API now includes ssl.VERIFY_X509_PARTIAL_CHAIN and ssl.VERIFY_X509_STRICT in its default flags.

ssl.VERIFY_X509_STRICT may reject pre- RFC 5280 or malformed certificates that the underlying OpenSSL implementation otherwise would accept. While disabling this is not recommended, you can do so using:
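    import ssl

    ctx = ssl.create_default_context()
    # Opt out of the stricter verification behaviour (not recommended)
    ctx.verify_flags &= ~ssl.VERIFY_X509_STRICT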

(Contributed by William Woodruff in gh-112389 .)

configparser.ConfigParser now accepts unnamed sections before named ones, if configured to do so. (Contributed by Pedro Sousa Lacerda in gh-66449 .)

Annotation scopes within class scopes can now contain lambdas and comprehensions. Comprehensions that are located within class scopes are not inlined into their parent scope. (Contributed by Jelle Zijlstra in gh-109118 and gh-118160 .)

Classes have a new __firstlineno__ attribute, populated by the compiler, with the line number of the first line of the class definition. (Contributed by Serhiy Storchaka in gh-118465 .)

from __future__ import ... statements are now just normal relative imports if dots are present before the module name. (Contributed by Jeremiah Gabriel Pascual in gh-118216 .)

New Modules ¶

Improved Modules ¶

argparse ¶

Add parameter deprecated in methods add_argument() and add_parser() , which allows deprecating command-line options, positional arguments and subcommands. (Contributed by Serhiy Storchaka in gh-83648 .)

array ¶

Add 'w' type code ( Py_UCS4 ) that can be used for Unicode strings. It can be used instead of 'u' type code, which is deprecated. (Contributed by Inada Naoki in gh-80480 .)

Add clear() method in order to implement MutableSequence . (Contributed by Mike Zimin in gh-114894 .)

ast ¶

The constructors of node types in the ast module are now stricter in the arguments they accept, and have more intuitive behaviour when arguments are omitted.

If an optional field on an AST node is not included as an argument when constructing an instance, the field will now be set to None . Similarly, if a list field is omitted, that field will now be set to an empty list, and if an ast.expr_context field is omitted, it defaults to Load() . (Previously, in all cases, the attribute would be missing on the newly constructed AST node instance.)

If other arguments are omitted, a DeprecationWarning is emitted. This will cause an exception in Python 3.15. Similarly, passing a keyword argument that does not map to a field on the AST node is now deprecated, and will raise an exception in Python 3.15.

These changes do not apply to user-defined subclasses of ast.AST , unless the class opts in to the new behavior by setting the attribute ast.AST._field_types .

(Contributed by Jelle Zijlstra in gh-105858 , gh-117486 , and gh-118851 .)

ast.parse() now accepts an optional argument optimize which is passed on to the compile() built-in. This makes it possible to obtain an optimized AST. (Contributed by Irit Katriel in gh-108113 .)

asyncio ¶

asyncio.loop.create_unix_server() will now automatically remove the Unix socket when the server is closed. (Contributed by Pierre Ossman in gh-111246 .)

asyncio.DatagramTransport.sendto() will now send zero-length datagrams if called with an empty bytes object. The transport flow control also now accounts for the datagram header when calculating the buffer size. (Contributed by Jamie Phan in gh-115199 .)

Add asyncio.Server.close_clients() and asyncio.Server.abort_clients() methods, which allow closing an asyncio server more forcefully. (Contributed by Pierre Ossman in gh-113538 .)

asyncio.as_completed() now returns an object that is both an asynchronous iterator and a plain iterator of awaitables. The awaitables yielded by asynchronous iteration include original task or future objects that were passed in, making it easier to associate results with the tasks being completed. (Contributed by Justin Arthur in gh-77714 .)

When asyncio.TaskGroup.create_task() is called on an inactive asyncio.TaskGroup , the given coroutine will be closed (which prevents a RuntimeWarning about the given coroutine being never awaited). (Contributed by Arthur Tacca and Jason Zhang in gh-115957 .)

Improved behavior of asyncio.TaskGroup when an external cancellation collides with an internal cancellation. For example, when two task groups are nested and both experience an exception in a child task simultaneously, it was possible that the outer task group would hang, because its internal cancellation was swallowed by the inner task group.

In the case where a task group is cancelled externally and also must raise an ExceptionGroup , it will now call the parent task’s cancel() method. This ensures that an asyncio.CancelledError will be raised at the next await , so the cancellation is not lost.

An added benefit of these changes is that task groups now preserve the cancellation count ( asyncio.Task.cancelling() ).

In order to handle some corner cases, asyncio.Task.uncancel() may now reset the undocumented _must_cancel flag when the cancellation count reaches zero.

(Inspired by an issue reported by Arthur Tacca in gh-116720 .)

Add asyncio.Queue.shutdown() (along with asyncio.QueueShutDown ) for queue termination. (Contributed by Laurie Opperman and Yves Duprat in gh-104228 .)

Accept a tuple of separators in asyncio.StreamReader.readuntil() , stopping when one of them is encountered. (Contributed by Bruce Merry in gh-81322 .)

base64 ¶

Add base64.z85encode() and base64.z85decode() functions which allow encoding and decoding Z85 data. See Z85 specification for more information. (Contributed by Matan Perelman in gh-75299 .)
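For example:

    import base64

    data = b"hello world!"            # 12 bytes; Z85 works on 4-byte groups
    encoded = base64.z85encode(data)
    assert base64.z85decode(encoded) == data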

copy ¶

Add copy.replace() function, which allows creating a modified copy of an object; this is especially useful for immutable objects. It supports named tuples created with the factory function collections.namedtuple() , dataclass instances, various datetime objects, Signature objects, Parameter objects, code objects , and any user-defined classes which define the __replace__() method. (Contributed by Serhiy Storchaka in gh-108751 .)
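A short sketch with a hypothetical frozen dataclass:

    import copy
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Point:
        x: int
        y: int

    p = Point(1, 2)
    print(copy.replace(p, y=5))   # Point(x=1, y=5)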

dbm ¶

Add dbm.gnu.gdbm.clear() and dbm.ndbm.ndbm.clear() methods that remove all items from the database. (Contributed by Donghee Na in gh-107122 .)

Add new dbm.sqlite3 backend, and make it the default dbm backend. (Contributed by Raymond Hettinger and Erlend E. Aasland in gh-100414 .)

dis ¶

Change the output of dis module functions to show logical labels for jump targets and exception handlers, rather than offsets. The offsets can be added with the new -O command line option or the show_offsets parameter. (Contributed by Irit Katriel in gh-112137 .)

doctest ¶

Color is added to the output by default. This can be controlled via the new PYTHON_COLORS environment variable as well as the canonical NO_COLOR and FORCE_COLOR environment variables. See also Controlling color . (Contributed by Hugo van Kemenade in gh-117225 .)

The doctest.DocTestRunner.run() method now counts the number of skipped tests. Add doctest.DocTestRunner.skips and doctest.TestResults.skipped attributes. (Contributed by Victor Stinner in gh-108794 .)

email ¶

email.utils.getaddresses() and email.utils.parseaddr() now return ('', '') 2-tuples in more situations where invalid email addresses are encountered, instead of potentially inaccurate values. Add an optional strict parameter to these two functions: use strict=False to get the old behavior and accept malformed inputs. getattr(email.utils, 'supports_strict_parsing', False) can be used to check if the strict parameter is available. (Contributed by Thomas Dwyer and Victor Stinner for gh-102988 to improve the CVE-2023-27043 fix.)

fractions ¶

Formatting for objects of type fractions.Fraction now supports the standard format specification mini-language rules for fill, alignment, sign handling, minimum width and grouping. (Contributed by Mark Dickinson in gh-111320 .)
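For example:

    from fractions import Fraction

    print(format(Fraction(1, 3), ">10"))      # '       1/3'
    print(f"{Fraction(103993, 33102):.^20}")  # '....103993/33102....'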

gc ¶

The cyclic garbage collector is now incremental, which changes the meanings of the results of gc.get_threshold() and gc.set_threshold() as well as gc.get_count() and gc.get_stats() .

gc.get_threshold() returns a three-item tuple for backwards compatibility. The first value is the threshold for young collections, as before; the second value determines the rate at which the old collection is scanned (the default is 10, and higher values mean that the old collection is scanned more slowly). The third value is meaningless and is always zero.

gc.set_threshold() ignores any items after the second.

gc.get_count() and gc.get_stats() return the same format of results as before. The only difference is that instead of the results referring to the young, aging and old generations, the results refer to the young generation and the aging and collecting spaces of the old generation.

In summary, code that attempted to manipulate the behavior of the cycle GC may not work exactly as intended, but it is very unlikely to be harmful. All other code will work just fine.
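A small sketch based on the description above:

    import gc

    young, old_rate, _zero = gc.get_threshold()  # third value is now always zero
    gc.set_threshold(young, old_rate)            # items after the second are ignored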

glob ¶

Add glob.translate() function that converts a path specification with shell-style wildcards to a regular expression. (Contributed by Barney Gale in gh-72904 .)
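For example:

    import glob
    import re

    pattern = glob.translate("**/*.py", recursive=True)
    print(bool(re.match(pattern, "pkg/module.py")))  # True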

importlib ¶

Previously deprecated importlib.resources functions are un-deprecated:

is_resource()

open_binary()

open_text()

path()

read_binary()

read_text()

All now allow for a directory (or tree) of resources, using multiple positional arguments.

For text-reading functions, the encoding and errors must now be given as keyword arguments.

contents() remains deprecated in favor of the full-featured Traversable API. However, there is now no plan to remove it.

(Contributed by Petr Viktorin in gh-106532 .)

io ¶

The io.IOBase finalizer now logs close() method errors with sys.unraisablehook . Previously, errors were silently ignored by default, and only logged in Python Development Mode or on Python built in debug mode . (Contributed by Victor Stinner in gh-62948 .)

ipaddress ¶

Add the ipaddress.IPv4Address.ipv6_mapped property, which returns the IPv4-mapped IPv6 address. (Contributed by Charles Machalow in gh-109466 .)

Fix is_global and is_private behavior in IPv4Address , IPv6Address , IPv4Network and IPv6Network .

itertools ¶

Added a strict option to itertools.batched() . This raises a ValueError if the final batch is shorter than the specified batch size. (Contributed by Raymond Hettinger in gh-113202 .)
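For example:

    from itertools import batched

    print(list(batched("ABCDEFG", 3)))
    # [('A', 'B', 'C'), ('D', 'E', 'F'), ('G',)]

    # With strict=True, the short final batch raises ValueError:
    # list(batched("ABCDEFG", 3, strict=True))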

marshal ¶

Add the allow_code parameter in module functions. Passing allow_code=False prevents serialization and de-serialization of code objects which are incompatible between Python versions. (Contributed by Serhiy Storchaka in gh-113626 .)

math ¶

A new function fma() for fused multiply-add operations has been added. This function computes x * y + z with only a single round, and so avoids any intermediate loss of precision. It wraps the fma() function provided by C99, and follows the specification of the IEEE 754 “fusedMultiplyAdd” operation for special cases. (Contributed by Mark Dickinson and Victor Stinner in gh-73468 .)

mimetypes ¶

Add the guess_file_type() function which works with file path. Passing file path instead of URL in guess_type() is soft deprecated . (Contributed by Serhiy Storchaka in gh-66543 .)
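For example:

    import mimetypes

    print(mimetypes.guess_file_type("archive.tar.gz"))
    # ('application/x-tar', 'gzip')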

mmap ¶

The mmap.mmap class now has a seekable() method that can be used when a seekable file-like object is required. The seek() method now returns the new absolute position. (Contributed by Donghee Na and Sylvie Liberman in gh-111835 .)

mmap.mmap now has a trackfd parameter on Unix; if it is False , the file descriptor specified by fileno will not be duplicated. (Contributed by Zackery Spytz and Petr Viktorin in gh-78502 .)

mmap.mmap is now protected from crashing on Windows when the mapped memory is inaccessible due to file system errors or access violations. (Contributed by Jannis Weigend in gh-118209 .)

opcode ¶

Move opcode.ENABLE_SPECIALIZATION to _opcode.ENABLE_SPECIALIZATION . This field was added in 3.12, it was never documented and is not intended for external usage. (Contributed by Irit Katriel in gh-105481 .)

Removed opcode.is_pseudo , opcode.MIN_PSEUDO_OPCODE and opcode.MAX_PSEUDO_OPCODE , which were added in 3.12, were never documented or exposed through dis , and were not intended to be used externally.

os ¶

Add os.process_cpu_count() function to get the number of logical CPUs usable by the calling thread of the current process. (Contributed by Victor Stinner in gh-109649 .)
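For example:

    import os

    print(os.cpu_count())          # logical CPUs in the system
    print(os.process_cpu_count())  # logical CPUs usable by this process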

Add a low-level interface for Linux’s timer notification file descriptors via os.timerfd_create() , os.timerfd_settime() , os.timerfd_settime_ns() , os.timerfd_gettime() , and os.timerfd_gettime_ns() , along with the constants os.TFD_NONBLOCK , os.TFD_CLOEXEC , os.TFD_TIMER_ABSTIME , and os.TFD_TIMER_CANCEL_ON_SET . (Contributed by Masaru Tsuchiyama in gh-108277 .)
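A minimal Linux-only sketch of the new interface:

    import os
    import time

    fd = os.timerfd_create(time.CLOCK_MONOTONIC)
    os.timerfd_settime(fd, initial=1.0)  # fire once, one second from now
    os.read(fd, 8)                       # blocks until the timer expires
    os.close(fd)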

os.cpu_count() and os.process_cpu_count() can be overridden through the new environment variable PYTHON_CPU_COUNT or the new command-line option -X cpu_count . This option is useful for users who need to limit CPU resources of a container system without having to modify the container (application code). (Contributed by Donghee Na in gh-109595 .)

Add support of os.lchmod() and the follow_symlinks argument in os.chmod() on Windows. Note that the default value of follow_symlinks in os.lchmod() is False on Windows. (Contributed by Serhiy Storchaka in gh-59616 .)

Add support of os.fchmod() and a file descriptor in os.chmod() on Windows. (Contributed by Serhiy Storchaka in gh-113191 .)

os.posix_spawn() now accepts env=None , which makes the newly spawned process use the current process environment. (Contributed by Jakub Kulik in gh-113119 .)

os.posix_spawn() gains an os.POSIX_SPAWN_CLOSEFROM attribute for use in file_actions= on platforms that support posix_spawn_file_actions_addclosefrom_np() . (Contributed by Jakub Kulik in gh-113117 .)

os.mkdir() and os.makedirs() on Windows now support passing a mode value of 0o700 to apply access control to the new directory. This implicitly affects tempfile.mkdtemp() and is a mitigation for CVE-2024-4030 . Other values for mode continue to be ignored. (Contributed by Steve Dower in gh-118486 .)

os.path ¶

Add os.path.isreserved() to check if a path is reserved on the current system. This function is only available on Windows. (Contributed by Barney Gale in gh-88569 .)

On Windows, os.path.isabs() no longer considers paths starting with exactly one (back)slash to be absolute. (Contributed by Barney Gale and Jon Foster in gh-44626 .)

shutil ¶

Add support of dir_fd and follow_symlinks keyword arguments in shutil.chown() . (Contributed by Berker Peksag and Tahia K in gh-62308 .)

pathlib ¶

Add pathlib.UnsupportedOperation , which is raised instead of NotImplementedError when a path operation isn’t supported. (Contributed by Barney Gale in gh-89812 .)

Add pathlib.Path.from_uri() , a new constructor to create a pathlib.Path object from a ‘file’ URI ( file:// ). (Contributed by Barney Gale in gh-107465 .)

Add pathlib.PurePath.full_match() for matching paths with shell-style wildcards, including the recursive wildcard “ ** ”. (Contributed by Barney Gale in gh-73435 .)

Add pathlib.PurePath.parser class attribute that stores the implementation of os.path used for low-level path parsing and joining: either posixpath or ntpath .

Add recurse_symlinks keyword-only argument to pathlib.Path.glob() and rglob() . (Contributed by Barney Gale in gh-77609 .)

Add follow_symlinks keyword-only argument to is_file() , is_dir() , owner() , group() . (Contributed by Barney Gale in gh-105793 , and Kamil Turek in gh-107962 .)

Return files and directories from pathlib.Path.glob() and rglob() when given a pattern that ends with “ ** ”. In earlier versions, only directories were returned. (Contributed by Barney Gale in gh-70303 .)
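A couple of the new pathlib additions in action (the paths are hypothetical):

    from pathlib import Path, PurePath

    print(PurePath("src/pkg/mod.py").full_match("src/**/*.py"))  # True
    print(Path.from_uri("file:///tmp/example.txt"))              # /tmp/example.txt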

pdb ¶

Add the ability to move between chained exceptions during post-mortem debugging in pm() using the new exceptions [exc_number] command for Pdb. (Contributed by Matthias Bussonnier in gh-106676 .)

Expressions/statements whose prefix is a pdb command are now correctly identified and executed. (Contributed by Tian Gao in gh-108464 .)

sys.path[0] will no longer be replaced by the directory of the script being debugged when sys.flags.safe_path is set (via the -P command line option or PYTHONSAFEPATH environment variable). (Contributed by Tian Gao and Christian Walther in gh-111762 .)

zipapp is supported as a debugging target. (Contributed by Tian Gao in gh-118501 .)

breakpoint() and pdb.set_trace() now enter the debugger immediately rather than on the next line of code to be executed. This change prevents the debugger from breaking outside of the context when breakpoint() is positioned at the end of the context. (Contributed by Tian Gao in gh-118579 .)

queue ¶

Add queue.Queue.shutdown() (along with queue.ShutDown ) for queue termination. (Contributed by Laurie Opperman and Yves Duprat in gh-104750 .)
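A minimal sketch of a worker loop terminated via shutdown() ; the worker function is hypothetical:

    import queue
    import threading

    q = queue.Queue()

    def worker():
        while True:
            try:
                item = q.get()
            except queue.ShutDown:  # raised once the queue is shut down and drained
                return
            print("processed", item)
            q.task_done()

    t = threading.Thread(target=worker)
    t.start()
    q.put("job-1")
    q.shutdown()  # pending and future get() calls raise ShutDown once empty
    t.join()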

random ¶

Add a command-line interface . (Contributed by Hugo van Kemenade in gh-54321 .)

re ¶

Rename re.error to re.PatternError for improved clarity. re.error is kept for backward compatibility.

site ¶

.pth files are now decoded by UTF-8 first, and then by the locale encoding if the UTF-8 decoding fails. (Contributed by Inada Naoki in gh-117802 .)

sqlite3 ¶

A ResourceWarning is now emitted if a sqlite3.Connection object is not closed explicitly. (Contributed by Erlend E. Aasland in gh-105539 .)

Add filter keyword-only parameter to sqlite3.Connection.iterdump() for filtering database objects to dump. (Contributed by Mariusz Felisiak in gh-91602 .)

statistics ¶

Add statistics.kde() for kernel density estimation. This makes it possible to estimate a continuous probability density function from a fixed number of discrete samples. Also added statistics.kde_random() for sampling from the estimated probability density function. (Contributed by Raymond Hettinger in gh-115863 .)

subprocess ¶

The subprocess module now uses the os.posix_spawn() function in more situations, notably in the default case of close_fds=True on more recent versions of platforms including Linux, FreeBSD, and Solaris where the C library provides posix_spawn_file_actions_addclosefrom_np() . On Linux this should perform similarly to our existing Linux vfork() based code. A private control knob subprocess._USE_POSIX_SPAWN can be set to False if you need to force subprocess to never use os.posix_spawn() . Please report your reason and platform details in the CPython issue tracker if you set this so that we can improve our API selection logic for everyone. (Contributed by Jakub Kulik in gh-113117 .)

sys ¶

Add the sys._is_interned() function to test if the string was interned. This function is not guaranteed to exist in all implementations of Python. (Contributed by Serhiy Storchaka in gh-78573 .)

tempfile ¶

On Windows, the default mode 0o700 used by tempfile.mkdtemp() now limits access to the new directory due to changes to os.mkdir() . This is a mitigation for CVE-2024-4030 . (Contributed by Steve Dower in gh-118486 .)

time ¶

On Windows, time.monotonic() now uses the QueryPerformanceCounter() clock to have a resolution better than 1 μs, instead of the GetTickCount64() clock which has a resolution of 15.6 ms. (Contributed by Victor Stinner in gh-88494 .)

On Windows, time.time() now uses the GetSystemTimePreciseAsFileTime() clock to have a resolution better than 1 μs, instead of the GetSystemTimeAsFileTime() clock which has a resolution of 15.6 ms. (Contributed by Victor Stinner in gh-63207 .)

tkinter ¶

Add tkinter widget methods: tk_busy_hold() , tk_busy_configure() , tk_busy_cget() , tk_busy_forget() , tk_busy_current() , and tk_busy_status() . (Contributed by Miguel, klappnase and Serhiy Storchaka in gh-72684 .)

The tkinter widget method wm_attributes() now accepts the attribute name without the minus prefix to get window attributes, e.g. w.wm_attributes('alpha') , and allows specifying attributes and values to set as keyword arguments, e.g. w.wm_attributes(alpha=0.5) . Add a new optional keyword-only parameter return_python_dict : calling w.wm_attributes(return_python_dict=True) returns the attributes as a dict instead of a tuple. (Contributed by Serhiy Storchaka in gh-43457 .)

Add a new optional keyword-only parameter return_ints in the Text.count() method. Passing return_ints=True makes it always return the single count as an integer instead of a 1-tuple or None . (Contributed by Serhiy Storchaka in gh-97928 .)

Add support of the “vsapi” element type in the element_create() method of tkinter.ttk.Style . (Contributed by Serhiy Storchaka in gh-68166 .)

Add the after_info() method for Tkinter widgets. (Contributed by Cheryl Sabella in gh-77020 .)

Add the PhotoImage method copy_replace() to copy a region from one image to another image, possibly with pixel zooming and/or subsampling. Add from_coords parameter to PhotoImage methods copy() , zoom() and subsample() . Add zoom and subsample parameters to PhotoImage method copy() . (Contributed by Serhiy Storchaka in gh-118225 .)

Add the PhotoImage methods read() to read an image from a file and data() to get the image data. Add background and grayscale parameters to PhotoImage method write() . (Contributed by Serhiy Storchaka in gh-118271 .)

traceback ¶

Add show_group parameter to traceback.TracebackException.format_exception_only() to format the nested exceptions of a BaseExceptionGroup instance, recursively. (Contributed by Irit Katriel in gh-105292 .)

Add the field exc_type_str to TracebackException , which holds a string display of the exc_type . Deprecate the field exc_type which holds the type object itself. Add parameter save_exc_type (default True ) to indicate whether exc_type should be saved. (Contributed by Irit Katriel in gh-112332 .)

types ¶

The SimpleNamespace constructor now allows specifying initial values of attributes as a positional argument, which must be a mapping or an iterable of key-value pairs. (Contributed by Serhiy Storchaka in gh-108191 .)

typing ¶

Add typing.get_protocol_members() to return the set of members defining a typing.Protocol . Add typing.is_protocol() to check whether a class is a typing.Protocol . (Contributed by Jelle Zijlstra in gh-104873 .)
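For example, with a hypothetical protocol:

    from typing import Protocol, get_protocol_members, is_protocol

    class Named(Protocol):
        name: str
        def rename(self, new: str) -> None: ...

    print(is_protocol(Named))           # True
    print(get_protocol_members(Named))  # frozenset({'name', 'rename'})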

Add typing.ReadOnly , a special typing construct to mark an item of a typing.TypedDict as read-only for type checkers. See PEP 705 for more details.

Add typing.NoDefault , a sentinel object used to represent the defaults of some parameters in the typing module. (Contributed by Jelle Zijlstra in gh-116126 .)

unicodedata ¶

The Unicode database has been updated to version 15.1.0. (Contributed by James Gerity in gh-109559 .)

venv ¶

Add support for adding source control management (SCM) ignore files to a virtual environment’s directory. By default, Git is supported. This is implemented as opt-in via the API, which can be extended to support other SCMs ( venv.EnvBuilder and venv.create() ), and opt-out via the CLI (using --without-scm-ignore-files ). (Contributed by Brett Cannon in gh-108125 .)

warnings ¶

The new warnings.deprecated() decorator provides a way to communicate deprecations to static type checkers and to warn on usage of deprecated classes and functions. A runtime deprecation warning may also be emitted when a decorated function or class is used at runtime. See PEP 702 . (Contributed by Jelle Zijlstra in gh-104003 .)
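For example ( old_api is a hypothetical function):

    from warnings import deprecated

    @deprecated("Use new_api() instead")
    def old_api() -> None:
        ...

    old_api()  # emits DeprecationWarning; static type checkers also flag callers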

xml.etree.ElementTree ¶

Add the close() method for the iterator returned by iterparse() for explicit cleaning up. (Contributed by Serhiy Storchaka in gh-69893 .)

zipimport ¶

Gains support for ZIP64 format files. Everybody loves huge code, right? (Contributed by Tim Hatch in gh-94146 .)

Optimizations ¶

textwrap.indent() is now ~30% faster than before for large input. (Contributed by Inada Naoki in gh-107369 .)

The subprocess module uses os.posix_spawn() in more situations including the default where close_fds=True on many modern platforms. This should provide a noteworthy performance increase launching processes on FreeBSD and Solaris. See the subprocess section above for details. (Contributed by Jakub Kulik in gh-113117 .)

Several standard library modules have had their import times significantly improved. For example, the import time of the typing module has been reduced by around a third by removing dependencies on re and contextlib . Other modules to enjoy import-time speedups include importlib.metadata , threading , enum , functools and email.utils . (Contributed by Alex Waygood, Shantanu Jain, Adam Turner, Daniel Hollas and others in gh-109653 .)

Removed Modules And APIs ¶

PEP 594: Dead Batteries (and other module removals) ¶

PEP 594 removed 19 modules from the standard library, deprecated in Python 3.11:

aifc . (Contributed by Victor Stinner in gh-104773 .)

audioop . (Contributed by Victor Stinner in gh-104773 .)

chunk . (Contributed by Victor Stinner in gh-104773 .)

cgi and cgitb .

cgi.FieldStorage can typically be replaced with urllib.parse.parse_qsl() for GET and HEAD requests, and the email.message module or multipart PyPI project for POST and PUT .

cgi.parse() can be replaced by calling urllib.parse.parse_qs() directly on the desired query string, except for multipart/form-data input, which can be handled as described for cgi.parse_multipart() .

cgi.parse_header() can be replaced with the functionality in the email package, which implements the same MIME RFCs. For example, with email.message.EmailMessage :
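A sketch of the replacement (the header value is illustrative):

    from email.message import EmailMessage

    msg = EmailMessage()
    msg['content-type'] = 'application/json; charset="utf8"'
    main, params = msg.get_content_type(), msg['content-type'].params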

cgi.parse_multipart() can be replaced with the functionality in the email package (e.g. email.message.EmailMessage and email.message.Message ) which implements the same MIME RFCs, or with the multipart PyPI project.

(Contributed by Victor Stinner in gh-104773 .)

crypt module and its private _crypt extension. The hashlib module is a potential replacement for certain use cases. Otherwise, the following PyPI projects can be used:

bcrypt : Modern password hashing for your software and your servers.

passlib : Comprehensive password hashing framework supporting over 30 schemes.

argon2-cffi : The secure Argon2 password hashing algorithm.

legacycrypt : ctypes wrapper to the POSIX crypt library call and associated functionality.

crypt_r : Fork of the crypt module, wrapper to the crypt_r(3) library call and associated functionality.

imghdr : use the projects filetype , puremagic , or python-magic instead. (Contributed by Victor Stinner in gh-104773 .)

mailcap . The mimetypes module provides an alternative. (Contributed by Victor Stinner in gh-104773 .)

msilib . (Contributed by Zachary Ware in gh-104773 .)

nis . (Contributed by Victor Stinner in gh-104773 .)

nntplib : the nntplib PyPI project can be used instead. (Contributed by Victor Stinner in gh-104773 .)

ossaudiodev : use the pygame project for audio playback. (Contributed by Victor Stinner in gh-104780 .)

pipes : use the subprocess module instead. (Contributed by Victor Stinner in gh-104773 .)

sndhdr : use the projects filetype , puremagic , or python-magic instead. (Contributed by Victor Stinner in gh-104773 .)

spwd : the python-pam project can be used instead. (Contributed by Victor Stinner in gh-104773 .)

sunau . (Contributed by Victor Stinner in gh-104773 .)

telnetlib : use the projects telnetlib3 or Exscript instead. (Contributed by Victor Stinner in gh-104773 .)

uu : the base64 module is a modern alternative. (Contributed by Victor Stinner in gh-104773 .)

xdrlib . (Contributed by Victor Stinner in gh-104773 .)

Remove the 2to3 program and the lib2to3 module, deprecated in Python 3.11. (Contributed by Victor Stinner in gh-104780 .)

Remove the tkinter.tix module, deprecated in Python 3.6. The third-party Tix library which the module wrapped is unmaintained. (Contributed by Zachary Ware in gh-75552 .)

configparser ¶

Remove the undocumented configparser.LegacyInterpolation class, deprecated in the docstring since Python 3.2, and with a deprecation warning since Python 3.11. (Contributed by Hugo van Kemenade in gh-104886 .)

importlib ¶

Remove deprecated __getitem__() access for importlib.metadata.EntryPoint objects. (Contributed by Jason R. Coombs in gh-113175 .)

locale ¶

Remove locale.resetlocale() function deprecated in Python 3.11: use locale.setlocale(locale.LC_ALL, "") instead. (Contributed by Victor Stinner in gh-104783 .)

logging ¶

Remove undocumented and untested Logger.warn() and LoggerAdapter.warn() methods and logging.warn() function. Deprecated since Python 3.3, they were aliases to the logging.Logger.warning() method, logging.LoggerAdapter.warning() method and logging.warning() function. (Contributed by Victor Stinner in gh-105376 .)

pathlib ¶

Remove support for using pathlib.Path objects as context managers. This functionality was deprecated and made a no-op in Python 3.9.

re ¶

Remove undocumented, never working, and deprecated re.template function and re.TEMPLATE flag (and re.T alias). (Contributed by Serhiy Storchaka and Nikita Sobolev in gh-105687 .)

turtle ¶

Remove the turtle.RawTurtle.settiltangle() method, deprecated in docs since Python 3.1 and with a deprecation warning since Python 3.11. (Contributed by Hugo van Kemenade in gh-104876 .)

typing ¶

Namespaces typing.io and typing.re , deprecated in Python 3.8, are now removed. The items in those namespaces can be imported directly from typing . (Contributed by Sebastian Rittau in gh-92871 .)

Remove support for the keyword-argument method of creating typing.TypedDict types, deprecated in Python 3.11. (Contributed by Tomas Roun in gh-104786 .)

unittest ¶

Remove the following unittest functions, deprecated in Python 3.11:

unittest.findTestCases()

unittest.makeSuite()

unittest.getTestCaseNames()

Use TestLoader methods instead:

unittest.TestLoader.loadTestsFromModule()

unittest.TestLoader.loadTestsFromTestCase()

unittest.TestLoader.getTestCaseNames()

(Contributed by Hugo van Kemenade in gh-104835 .)

Remove the untested and undocumented unittest.TestProgram.usageExit() method, deprecated in Python 3.11. (Contributed by Hugo van Kemenade in gh-104992 .)

urllib ¶

Remove cafile , capath and cadefault parameters of the urllib.request.urlopen() function, deprecated in Python 3.6: pass the context parameter instead. Use ssl.SSLContext.load_cert_chain() to load specific certificates, or let ssl.create_default_context() select the system’s trusted CA certificates for you. (Contributed by Victor Stinner in gh-105382 .)

webbrowser ¶

Remove the untested and undocumented webbrowser MacOSX class, deprecated in Python 3.11. Use the MacOSXOSAScript class (introduced in Python 3.2) instead. (Contributed by Hugo van Kemenade in gh-104804 .)

Remove deprecated webbrowser.MacOSXOSAScript._name attribute. Use webbrowser.MacOSXOSAScript.name attribute instead. (Contributed by Nikita Sobolev in gh-105546 .)

Others ¶

Removed chained classmethod descriptors (introduced in gh-63272 ). This can no longer be used to wrap other descriptors such as property . The core design of this feature was flawed and caused a number of downstream problems. To “pass-through” a classmethod , consider using the __wrapped__ attribute that was added in Python 3.10. (Contributed by Raymond Hettinger in gh-89519 .)

New Deprecations ¶

array : array ’s 'u' format code, deprecated in docs since Python 3.3, emits DeprecationWarning since 3.13 and will be removed in Python 3.16. Use the 'w' format code instead. (Contributed by Hugo van Kemenade in gh-80480 .)

ctypes : Deprecate undocumented ctypes.SetPointerType() and ctypes.ARRAY() functions. Replace ctypes.ARRAY(item_type, size) with item_type * size . (Contributed by Victor Stinner in gh-105733 .)

decimal : Deprecate non-standard format specifier “N” for decimal.Decimal . It was not documented and only supported in the C implementation. (Contributed by Serhiy Storchaka in gh-89902 .)

dis : The dis.HAVE_ARGUMENT separator is deprecated. Check membership in hasarg instead. (Contributed by Irit Katriel in gh-109319 .)

Frame objects : Calling frame.clear() on a suspended frame raises RuntimeError (as has always been the case for an executing frame). (Contributed by Irit Katriel in gh-79932 .)

getopt and optparse modules: They are now soft deprecated : the argparse module should be used for new projects. Previously, the optparse module was already deprecated, its removal was not scheduled, and no warning was emitted: so there is no change in practice. (Contributed by Victor Stinner in gh-106535 .)

gettext : Emit deprecation warning for non-integer numbers in gettext functions and methods that consider plural forms even if the translation was not found. (Contributed by Serhiy Storchaka in gh-88434 .)

glob : The undocumented glob.glob0() and glob.glob1() functions are deprecated. Use glob.glob() and pass a directory to its root_dir argument instead. (Contributed by Barney Gale in gh-117337 .)

http.server : http.server.CGIHTTPRequestHandler now emits a DeprecationWarning as it will be removed in 3.15. Process-based CGI HTTP servers have been out of favor for a very long time. This code was outdated, unmaintained, and rarely used. It has a high potential for both security and functionality bugs. This includes removal of the --cgi flag to the python -m http.server command line in 3.15.

mimetypes : Passing file path instead of URL in guess_type() is soft deprecated . Use guess_file_type() instead. (Contributed by Serhiy Storchaka in gh-66543 .)

re : Passing optional arguments maxsplit , count and flags in module-level functions re.split() , re.sub() and re.subn() as positional arguments is now deprecated. In future Python versions these parameters will be keyword-only . (Contributed by Serhiy Storchaka in gh-56166 .)

pathlib : pathlib.PurePath.is_reserved() is deprecated and scheduled for removal in Python 3.15. Use os.path.isreserved() to detect reserved paths on Windows.

platform : java_ver() is deprecated and will be removed in 3.15. It was largely untested, had a confusing API, and was only useful for Jython support. (Contributed by Nikita Sobolev in gh-116349 .)

pydoc : Deprecate undocumented pydoc.ispackage() function. (Contributed by Zackery Spytz in gh-64020 .)

sqlite3 : Passing more than one positional argument to sqlite3.connect() and the sqlite3.Connection constructor is deprecated. The remaining parameters will become keyword-only in Python 3.15.

Deprecate passing name, number of arguments, and the callable as keyword arguments for the following sqlite3.Connection APIs:

create_function()

create_aggregate()

Deprecate passing the callback callable by keyword for the following sqlite3.Connection APIs:

set_authorizer()

set_progress_handler()

set_trace_callback()

The affected parameters will become positional-only in Python 3.15.

(Contributed by Erlend E. Aasland in gh-107948 and gh-108278 .)

sys : sys._enablelegacywindowsfsencoding() function. Replace it with the PYTHONLEGACYWINDOWSFSENCODING environment variable. (Contributed by Inada Naoki in gh-73427 .)

tarfile : The undocumented and unused tarfile attribute of tarfile.TarFile is deprecated and scheduled for removal in Python 3.16.

traceback : The field exc_type of traceback.TracebackException is deprecated. Use exc_type_str instead.

Creating a typing.NamedTuple class using keyword arguments to denote the fields ( NT = NamedTuple("NT", x=int, y=int) ) is deprecated, and will be disallowed in Python 3.15. Use the class-based syntax or the functional syntax instead. (Contributed by Alex Waygood in gh-105566 .)

When using the functional syntax to create a typing.NamedTuple class or a typing.TypedDict class, failing to pass a value to the ‘fields’ parameter ( NT = NamedTuple("NT") or TD = TypedDict("TD") ) is deprecated. Passing None to the ‘fields’ parameter ( NT = NamedTuple("NT", None) or TD = TypedDict("TD", None) ) is also deprecated. Both will be disallowed in Python 3.15. To create a NamedTuple class with 0 fields, use class NT(NamedTuple): pass or NT = NamedTuple("NT", []) . To create a TypedDict class with 0 fields, use class TD(TypedDict): pass or TD = TypedDict("TD", {}) . (Contributed by Alex Waygood in gh-105566 and gh-105570 .)

typing.no_type_check_decorator() is deprecated, and scheduled for removal in Python 3.15. After eight years in the typing module, it has yet to be supported by any major type checkers. (Contributed by Alex Waygood in gh-106309 .)

typing.AnyStr is deprecated. In Python 3.16, it will be removed from typing.__all__ , and a DeprecationWarning will be emitted when it is imported or accessed. It will be removed entirely in Python 3.18. Use the new type parameter syntax instead. (Contributed by Michael The in gh-107116 .)

User-defined functions : Assignment to a function’s __code__ attribute where the new code object’s type does not match the function’s type is deprecated. The different types are: plain function, generator, async generator and coroutine. (Contributed by Irit Katriel in gh-81137 .)

wave : Deprecate the getmark() , setmark() and getmarkers() methods of the wave.Wave_read and wave.Wave_write classes. They will be removed in Python 3.15. (Contributed by Victor Stinner in gh-105096 .)

Pending Removal in Python 3.14 ¶

argparse : The type , choices , and metavar parameters of argparse.BooleanOptionalAction are deprecated and will be removed in 3.14. (Contributed by Nikita Sobolev in gh-92248 .)

ast : The following features have been deprecated in documentation since Python 3.8, now cause a DeprecationWarning to be emitted at runtime when they are accessed or used, and will be removed in Python 3.14:

ast.NameConstant

ast.Ellipsis

Use ast.Constant instead. (Contributed by Serhiy Storchaka in gh-90953 .)

collections.abc : Deprecated ByteString . Prefer Sequence or Buffer . For use in typing, prefer a union, like bytes | bytearray , or collections.abc.Buffer . (Contributed by Shantanu Jain in gh-91896 .)

email : Deprecated the isdst parameter in email.utils.localtime() . (Contributed by Alan Williams in gh-72346 .)

importlib : __package__ and __cached__ will cease to be set or taken into consideration by the import system ( gh-97879 ).

importlib.abc deprecated classes:

importlib.abc.ResourceReader

importlib.abc.Traversable

importlib.abc.TraversableResources

Use importlib.resources.abc classes instead:

importlib.resources.abc.Traversable

importlib.resources.abc.TraversableResources

(Contributed by Jason R. Coombs and Hugo van Kemenade in gh-93963 .)

itertools had undocumented, inefficient, historically buggy, and inconsistent support for copy, deepcopy, and pickle operations. This will be removed in 3.14 for a significant reduction in code volume and maintenance burden. (Contributed by Raymond Hettinger in gh-101588 .)

multiprocessing : The default start method will change to a safer one on Linux, BSDs, and other non-macOS POSIX platforms where 'fork' is currently the default ( gh-84559 ). Adding a runtime warning about this was deemed too disruptive as the majority of code is not expected to care. Use the get_context() or set_start_method() APIs to explicitly specify when your code requires 'fork' . See Contexts and start methods .

pathlib : is_relative_to() and relative_to() : passing additional arguments is deprecated.

pkgutil : find_loader() and get_loader() now raise DeprecationWarning ; use importlib.util.find_spec() instead. (Contributed by Nikita Sobolev in gh-97850 .)

pty :

master_open() : use pty.openpty() .

slave_open() : use pty.openpty() .

shutil.rmtree() onerror parameter is deprecated in 3.12, and will be removed in 3.14: use the onexc parameter instead.

sqlite3 :

version and version_info .

execute() and executemany() if named placeholders are used and parameters is a sequence instead of a dict .

date and datetime adapter, date and timestamp converter: see the sqlite3 documentation for suggested replacement recipes.

types.CodeType : Accessing co_lnotab was deprecated in PEP 626 since 3.10 and was planned to be removed in 3.12, but it only got a proper DeprecationWarning in 3.12. May be removed in 3.14. (Contributed by Nikita Sobolev in gh-101866 .)

typing : ByteString , deprecated since Python 3.9, now causes a DeprecationWarning to be emitted when it is used.

urllib : urllib.parse.Quoter is deprecated: it was not intended to be a public API. (Contributed by Gregory P. Smith in gh-88168 .)

xml.etree.ElementTree : Testing the truth value of an Element is deprecated and will raise an exception in Python 3.14.

Pending Removal in Python 3.15 ¶

http.server.CGIHTTPRequestHandler will be removed along with its related --cgi flag to python -m http.server . It was obsolete and rarely used. No direct replacement exists. Anything is better than CGI to interface a web server with a request handler.

locale : locale.getdefaultlocale() was deprecated in Python 3.11 and originally planned for removal in Python 3.13 ( gh-90817 ), but removal has been postponed to Python 3.15. Use locale.setlocale() , locale.getencoding() and locale.getlocale() instead. (Contributed by Hugo van Kemenade in gh-111187 .)

threading : Passing any arguments to threading.RLock() is now deprecated. The C version allows any number of args and kwargs, but they are just ignored; the Python version does not allow any arguments. All arguments will be removed from threading.RLock() in Python 3.15. (Contributed by Nikita Sobolev in gh-102029 .)

typing.NamedTuple :

The undocumented keyword argument syntax for creating NamedTuple classes ( NT = NamedTuple("NT", x=int) ) is deprecated, and will be disallowed in 3.15. Use the class-based syntax or the functional syntax instead.

When using the functional syntax to create a NamedTuple class, failing to pass a value to the fields parameter ( NT = NamedTuple("NT") ) is deprecated. Passing None to the fields parameter ( NT = NamedTuple("NT", None) ) is also deprecated. Both will be disallowed in Python 3.15. To create a NamedTuple class with 0 fields, use class NT(NamedTuple): pass or NT = NamedTuple("NT", []) .

typing.TypedDict : When using the functional syntax to create a TypedDict class, failing to pass a value to the fields parameter ( TD = TypedDict("TD") ) is deprecated. Passing None to the fields parameter ( TD = TypedDict("TD", None) ) is also deprecated. Both will be disallowed in Python 3.15. To create a TypedDict class with 0 fields, use class TD(TypedDict): pass or TD = TypedDict("TD", {}) .

Pending Removal in Python 3.16 ¶

array.array 'u' type ( wchar_t ): use the 'w' type instead ( Py_UCS4 ).

Pending Removal in Future Versions ¶

The following APIs were deprecated in earlier Python versions and will be removed, although there is currently no date scheduled for their removal.

argparse : Nesting argument groups and nesting mutually exclusive groups are deprecated.

~bool , bitwise inversion on bool.

bool(NotImplemented) .

Generators: throw(type, exc, tb) and athrow(type, exc, tb) signature is deprecated: use throw(exc) and athrow(exc) instead, the single argument signature.

Currently Python accepts numeric literals immediately followed by keywords, for example 0in x , 1or x , 0if 1else 2 . It allows confusing and ambiguous expressions like [0x1for x in y] (which can be interpreted as [0x1 for x in y] or [0x1f or x in y] ). A syntax warning is raised if the numeric literal is immediately followed by one of keywords and , else , for , if , in , is and or . In a future release it will be changed to a syntax error. ( gh-87999 )

Support for __index__() and __int__() method returning non-int type: these methods will be required to return an instance of a strict subclass of int .

Support for __float__() method returning a strict subclass of float : these methods will be required to return an instance of float .

Support for __complex__() method returning a strict subclass of complex : these methods will be required to return an instance of complex .

Delegation of int() to __trunc__() method.

calendar : calendar.January and calendar.February constants are deprecated and replaced by calendar.JANUARY and calendar.FEBRUARY . (Contributed by Prince Roshan in gh-103636 .)

codeobject.co_lnotab : use the codeobject.co_lines() method instead.

datetime :

utcnow() : use datetime.datetime.now(tz=datetime.UTC) .

utcfromtimestamp() : use datetime.datetime.fromtimestamp(timestamp, tz=datetime.UTC) .

gettext : Plural value must be an integer.

importlib :

load_module() method: use exec_module() instead.

cache_from_source() debug_override parameter is deprecated: use the optimization parameter instead.

importlib.metadata :

EntryPoints tuple interface.

Implicit None on return values.

mailbox : Use of StringIO input and text mode is deprecated, use BytesIO and binary mode instead.

os : Calling os.register_at_fork() in a multi-threaded process.

pydoc.ErrorDuringImport : A tuple value for exc_info parameter is deprecated, use an exception instance.

re : More strict rules are now applied for numerical group references and group names in regular expressions. Only a sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters, digits and underscores. (Contributed by Serhiy Storchaka in gh-91760 .)

sre_compile , sre_constants and sre_parse modules.

ssl options and protocols:

ssl.SSLContext without protocol argument is deprecated.

ssl.SSLContext : set_npn_protocols() and selected_npn_protocol() are deprecated: use ALPN instead.

ssl.OP_NO_SSL* options

ssl.OP_NO_TLS* options

ssl.PROTOCOL_SSLv3

ssl.PROTOCOL_TLS

ssl.PROTOCOL_TLSv1

ssl.PROTOCOL_TLSv1_1

ssl.PROTOCOL_TLSv1_2

ssl.TLSVersion.SSLv3

ssl.TLSVersion.TLSv1

ssl.TLSVersion.TLSv1_1

sysconfig.is_python_build() check_home parameter is deprecated and ignored.

threading methods:

threading.Condition.notifyAll() : use notify_all() .

threading.Event.isSet() : use is_set() .

threading.Thread.isDaemon() , threading.Thread.setDaemon() : use threading.Thread.daemon attribute.

threading.Thread.getName() , threading.Thread.setName() : use threading.Thread.name attribute.

threading.currentThread() : use threading.current_thread() .

threading.activeCount() : use threading.active_count() .

typing.Text ( gh-92332 ).

unittest.IsolatedAsyncioTestCase : it is deprecated to return a value that is not None from a test case.

urllib.parse deprecated functions: use urlparse() instead

splitattr()

splithost()

splitnport()

splitpasswd()

splitport()

splitquery()

splittype()

splituser()

splitvalue()

urllib.request : URLopener and FancyURLopener style of invoking requests is deprecated. Use newer urlopen() functions and methods.

wsgiref : SimpleHandler.stdout.write() should not do partial writes.

zipimport.zipimporter.load_module() is deprecated: use exec_module() instead.

CPython Bytecode Changes ¶

The oparg of YIELD_VALUE is now 1 if the yield is part of a yield-from or await, and 0 otherwise. The oparg of RESUME was changed to add a bit indicating whether the except-depth is 1, which is needed to optimize closing of generators. (Contributed by Irit Katriel in gh-111354 .)

C API Changes ¶

You no longer have to define the PY_SSIZE_T_CLEAN macro before including Python.h when using # formats in format codes . APIs accepting the format codes always use Py_ssize_t for # formats. (Contributed by Inada Naoki in gh-104922 .)

The keywords parameter of PyArg_ParseTupleAndKeywords() and PyArg_VaParseTupleAndKeywords() now has type char * const * in C and const char * const * in C++, instead of char * * . It makes these functions compatible with arguments of type const char * const * , const char * * or char * const * in C++ and char * const * in C without an explicit type cast. This can be overridden with the PY_CXX_CONST macro. (Contributed by Serhiy Storchaka in gh-65210 .)

Add PyImport_AddModuleRef() : similar to PyImport_AddModule() , but return a strong reference instead of a borrowed reference . (Contributed by Victor Stinner in gh-105922 .)

Add PyWeakref_GetRef() function: similar to PyWeakref_GetObject() but returns a strong reference , or NULL if the referent is no longer live. (Contributed by Victor Stinner in gh-105927 .)

Add PyObject_GetOptionalAttr() and PyObject_GetOptionalAttrString() , variants of PyObject_GetAttr() and PyObject_GetAttrString() which don’t raise AttributeError if the attribute is not found. These variants are more convenient and faster if the missing attribute should not be treated as a failure. (Contributed by Serhiy Storchaka in gh-106521 .)

Add PyMapping_GetOptionalItem() and PyMapping_GetOptionalItemString() : variants of PyObject_GetItem() and PyMapping_GetItemString() which don’t raise KeyError if the key is not found. These variants are more convenient and faster if the missing key should not be treated as a failure. (Contributed by Serhiy Storchaka in gh-106307 .)

Add fixed variants of functions which silently ignore errors:

PyObject_HasAttrWithError() replaces PyObject_HasAttr() .

PyObject_HasAttrStringWithError() replaces PyObject_HasAttrString() .

PyMapping_HasKeyWithError() replaces PyMapping_HasKey() .

PyMapping_HasKeyStringWithError() replaces PyMapping_HasKeyString() .

New functions return not only 1 for true and 0 for false, but also -1 for error.

(Contributed by Serhiy Storchaka in gh-108511 .)

If Python is built in debug mode or with assertions , PyTuple_SET_ITEM() and PyList_SET_ITEM() now check the index argument with an assertion. (Contributed by Victor Stinner in gh-106168 .)

Add PyModule_Add() function: similar to PyModule_AddObjectRef() and PyModule_AddObject() but always steals a reference to the value. (Contributed by Serhiy Storchaka in gh-86493 .)

Add PyDict_GetItemRef() and PyDict_GetItemStringRef() functions: similar to PyDict_GetItemWithError() but returning a strong reference instead of a borrowed reference . Moreover, these functions return -1 on error and so checking PyErr_Occurred() is not needed. (Contributed by Victor Stinner in gh-106004 .)

Added PyDict_SetDefaultRef() , which is similar to PyDict_SetDefault() but returns a strong reference instead of a borrowed reference . This function returns -1 on error, 0 on insertion, and 1 if the key was already present in the dictionary. (Contributed by Sam Gross in gh-112066 .)

Add PyDict_ContainsString() function: same as PyDict_Contains() , but key is specified as a const char * UTF-8 encoded bytes string, rather than a PyObject * . (Contributed by Victor Stinner in gh-108314 .)

Added PyList_GetItemRef() function: similar to PyList_GetItem() but returns a strong reference instead of a borrowed reference .

Add Py_IsFinalizing() function: check if the main Python interpreter is shutting down . (Contributed by Victor Stinner in gh-108014 .)

Add PyLong_AsInt() function: similar to PyLong_AsLong() , but store the result in a C int instead of a C long . Previously, it was known as the private function _PyLong_AsInt() (with an underscore prefix). (Contributed by Victor Stinner in gh-108014 .)

Python built with configure --with-trace-refs (tracing references) now supports the Limited API . (Contributed by Victor Stinner in gh-108634 .)

Add PyObject_VisitManagedDict() and PyObject_ClearManagedDict() functions which must be called by the traverse and clear functions of a type using Py_TPFLAGS_MANAGED_DICT flag. The pythoncapi-compat project can be used to get these functions on Python 3.11 and 3.12. (Contributed by Victor Stinner in gh-107073 .)

Add PyUnicode_EqualToUTF8AndSize() and PyUnicode_EqualToUTF8() functions: compare Unicode object with a const char * UTF-8 encoded string and return true ( 1 ) if they are equal, or false ( 0 ) otherwise. These functions do not raise exceptions. (Contributed by Serhiy Storchaka in gh-110289 .)

Add PyThreadState_GetUnchecked() function: similar to PyThreadState_Get() , but don’t kill the process with a fatal error if it is NULL. The caller is responsible for checking if the result is NULL. Previously, the function was private and known as _PyThreadState_UncheckedGet() . (Contributed by Victor Stinner in gh-108867 .)

Add PySys_AuditTuple() function: similar to PySys_Audit() , but pass event arguments as a Python tuple object. (Contributed by Victor Stinner in gh-85283 .)

PyArg_ParseTupleAndKeywords() now supports non-ASCII keyword parameter names. (Contributed by Serhiy Storchaka in gh-110815 .)

Add PyMem_RawMalloc() , PyMem_RawCalloc() , PyMem_RawRealloc() and PyMem_RawFree() to the limited C API (version 3.13). (Contributed by Victor Stinner in gh-85283 .)

Add PySys_Audit() and PySys_AuditTuple() functions to the limited C API. (Contributed by Victor Stinner in gh-85283 .)

Add PyErr_FormatUnraisable() function: similar to PyErr_WriteUnraisable() , but allow customizing the warning message. (Contributed by Serhiy Storchaka in gh-108082 .)

Add PyList_Extend() and PyList_Clear() functions: similar to Python list.extend() and list.clear() methods. (Contributed by Victor Stinner in gh-111138 .)

Add PyDict_Pop() and PyDict_PopString() functions: remove a key from a dictionary and optionally return the removed value. This is similar to dict.pop() , but without the default value and not raising KeyError if the key is missing. (Contributed by Stefan Behnel and Victor Stinner in gh-111262 .)
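
A minimal sketch (hypothetical dict and key):

    PyObject *removed;
    int rc = PyDict_Pop(dict, key, &removed);
    if (rc < 0) {
        return NULL;  /* error: an exception is set */
    }
    if (rc == 1) {
        /* the key existed: removed holds a strong reference to its value */
        Py_DECREF(removed);
    }
    /* rc == 0: the key was missing; no KeyError was raised.
       Pass NULL instead of &removed to simply discard the value. */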

Add Py_HashPointer() function to hash a pointer. (Contributed by Victor Stinner in gh-111545 .)

Add PyObject_GenericHash() function that implements the default hashing function of a Python object. (Contributed by Serhiy Storchaka in gh-113024 .)

Add PyTime C API:

PyTime_t type.

PyTime_MIN and PyTime_MAX constants.

Add functions:

PyTime_AsSecondsDouble() .

PyTime_Monotonic() .

PyTime_MonotonicRaw() .

PyTime_PerfCounter() .

PyTime_PerfCounterRaw() .

PyTime_Time() .

PyTime_TimeRaw() .

(Contributed by Victor Stinner and Petr Viktorin in gh-110850 .)
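
A minimal sketch timing a block of work with the monotonic clock (names and error handling are illustrative):

    PyTime_t t1, t2;
    if (PyTime_Monotonic(&t1) < 0) {
        return NULL;  /* error: an exception is set */
    }
    /* ... do some work ... */
    if (PyTime_Monotonic(&t2) < 0) {
        return NULL;
    }
    /* convert the nanosecond difference to seconds */
    double seconds = PyTime_AsSecondsDouble(t2 - t1);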

Add PyLong_AsNativeBytes() , PyLong_FromNativeBytes() and PyLong_FromUnsignedNativeBytes() functions to simplify converting between native integer types and Python int objects. (Contributed by Steve Dower in gh-111140 .)
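
For instance, extracting a Python int into a fixed-size C integer might look like this sketch (the obj variable is hypothetical; passing -1 as the flags selects the native defaults, and int64_t comes from <stdint.h>):

    int64_t native;
    Py_ssize_t needed = PyLong_AsNativeBytes(obj, &native, sizeof(native), -1);
    if (needed < 0) {
        return NULL;  /* error: an exception is set */
    }
    if (needed > (Py_ssize_t)sizeof(native)) {
        /* the Python int did not fit into 8 bytes */
    }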

Add PyType_GetFullyQualifiedName() function to get the type’s fully qualified name. Equivalent to f"{type.__module__}.{type.__qualname__}" , or type.__qualname__ if type.__module__ is not a string or is equal to "builtins" . (Contributed by Victor Stinner in gh-111696 .)

Add PyType_GetModuleName() function to get the type’s module name. Equivalent to getting the type.__module__ attribute. (Contributed by Eric Snow and Victor Stinner in gh-111696 .)

Add support for %T , %#T , %N and %#N formats to PyUnicode_FromFormat() : %T and %#T take an object and format the fully qualified name of its type, while %N and %#N take a type directly. See PEP 737 for more information. (Contributed by Victor Stinner in gh-111696 .)
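
For example, per PEP 737, an error message can embed the fully qualified name of an object's type with %T (a sketch; obj is hypothetical):

    PyErr_Format(PyExc_TypeError, "expected str, got %T", obj);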

Add Py_GetConstant() and Py_GetConstantBorrowed() functions to get constants. For example, Py_GetConstant(Py_CONSTANT_ZERO) returns a strong reference to the constant zero. (Contributed by Victor Stinner in gh-115754 .)

Add PyType_GetModuleByDef() to the limited C API (Contributed by Victor Stinner in gh-116936 .)

Add two new functions to the C API, PyRefTracer_SetTracer() and PyRefTracer_GetTracer() , which allow tracking object creation and destruction the same way the tracemalloc module does. (Contributed by Pablo Galindo in gh-93502 .)

Build Changes ¶

The configure option --with-system-libmpdec now defaults to yes . The bundled copy of libmpdecimal will be removed in Python 3.15.

Autoconf 2.71 and aclocal 1.16.4 are now required to regenerate the configure script. (Contributed by Christian Heimes in gh-89886 .)

SQLite 3.15.2 or newer is required to build the sqlite3 extension module. (Contributed by Erlend Aasland in gh-105875 .)

Python built with configure --with-trace-refs (tracing references) is now ABI compatible with the Python release build and debug build . (Contributed by Victor Stinner in gh-108634 .)

Building CPython now requires a compiler with support for the C11 atomic library, GCC built-in atomic functions, or MSVC interlocked intrinsics.

The errno , fcntl , grp , md5 , pwd , resource , termios , winsound , _ctypes_test , _multiprocessing.posixshmem , _scproxy , _stat , _statistics , _testconsole , _testimportmultiple and _uuid C extensions are now built with the limited C API . (Contributed by Victor Stinner in gh-85283 .)

wasm32-wasi is now a PEP 11 tier 2 platform. (Contributed by Brett Cannon in gh-115192 .)

wasm32-emscripten is no longer a PEP 11 supported platform. (Contributed by Brett Cannon in gh-115192 .)

Python now bundles the mimalloc library . It is licensed under the MIT license; see mimalloc license . The bundled mimalloc has custom changes, see gh-113141 for details. (Contributed by Dino Viehland in gh-109914 .)

Porting to Python 3.13 ¶

This section lists previously described changes and other bugfixes that may require changes to your code.

Changes in the Python API ¶

An OSError is now raised by getpass.getuser() for any failure to retrieve a username, instead of ImportError on non-Unix platforms or KeyError on Unix platforms where the password database is empty.

The threading module now expects the _thread module to have an _is_main_interpreter attribute. It is a function with no arguments that returns True if the current interpreter is the main interpreter.

Any library or application that provides a custom _thread module must provide _is_main_interpreter() , just like the module’s other “private” attributes. (See gh-112826 .)

mailbox.Maildir now ignores files with a leading dot. (Contributed by Zackery Spytz in gh-65559 .)

pathlib.Path.glob() and rglob() now return both files and directories if a pattern that ends with “ ** ” is given, rather than directories only. Users may add a trailing slash to match only directories.

The value of the mode attribute of gzip.GzipFile was changed from integer ( 1 or 2 ) to string ( 'rb' or 'wb' ). The value of the mode attribute of the readable file-like object returned by zipfile.ZipFile.open() was changed from 'r' to 'rb' . (Contributed by Serhiy Storchaka in gh-115961 .)

Calling locals() in an optimized scope now produces an independent snapshot on each call, and hence no longer implicitly updates previously returned references. Obtaining the legacy CPython behaviour now requires explicit calls to update the initially returned dictionary with the results of subsequent calls to locals() . (Changed as part of PEP 667 .)

Calling locals() from a comprehension at module or class scope (including via exec or eval ) once more behaves as if the comprehension were running as an independent nested function (i.e. the local variables from the containing scope are not included). In Python 3.12, this had changed to include the local variables from the containing scope when implementing PEP 709 . (Changed as part of PEP 667 .)

Accessing FrameType.f_locals in an optimized scope now returns a write-through proxy rather than a snapshot that gets updated at ill-specified times. If a snapshot is desired, it must be created explicitly with dict or the proxy’s .copy() method. (Changed as part of PEP 667 .)

Changes in the C API ¶

Python.h no longer includes the <ieeefp.h> standard header. It was included for the finite() function, which is now provided by the <math.h> header. It should now be included explicitly if needed. The HAVE_IEEEFP_H macro has also been removed. (Contributed by Victor Stinner in gh-108765 .)

Python.h no longer includes these standard header files: <time.h> , <sys/select.h> and <sys/time.h> . If needed, they should now be included explicitly. For example, <time.h> provides the clock() and gmtime() functions, <sys/select.h> provides the select() function, and <sys/time.h> provides the futimes() , gettimeofday() and setitimer() functions. (Contributed by Victor Stinner in gh-108765 .)

On Windows, Python.h no longer includes the <stddef.h> standard header file. If needed, it should now be included explicitly. For example, it provides the offsetof() macro and the size_t and ptrdiff_t types. Including <stddef.h> explicitly was already needed on all other platforms; the HAVE_STDDEF_H macro is only defined on Windows. (Contributed by Victor Stinner in gh-108765 .)

If the Py_LIMITED_API macro is defined, Py_BUILD_CORE , Py_BUILD_CORE_BUILTIN and Py_BUILD_CORE_MODULE macros are now undefined by <Python.h> . (Contributed by Victor Stinner in gh-85283 .)

The old trashcan macros Py_TRASHCAN_SAFE_BEGIN and Py_TRASHCAN_SAFE_END were removed. They should be replaced by the new macros Py_TRASHCAN_BEGIN and Py_TRASHCAN_END .

A tp_dealloc function that uses the old macros, such as the following sketch (with a hypothetical mytype ):
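
    static void
    mytype_dealloc(mytype *p)
    {
        PyObject_GC_UnTrack(p);
        Py_TRASHCAN_SAFE_BEGIN(p);
        /* ... deallocation logic ... */
        Py_TRASHCAN_SAFE_END(p);
    }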

should migrate to the new macros as follows (same sketch):
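
    static void
    mytype_dealloc(mytype *p)
    {
        PyObject_GC_UnTrack(p);
        Py_TRASHCAN_BEGIN(p, mytype_dealloc)
        /* ... deallocation logic ... */
        Py_TRASHCAN_END
    }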

Note that Py_TRASHCAN_BEGIN has a second argument which should be the deallocation function that contains it. The new macros were added in Python 3.8 and the old macros were deprecated in Python 3.11. (Contributed by Irit Katriel in gh-105111 .)

The functions PyDict_GetItem() , PyDict_GetItemString() , PyMapping_HasKey() , PyMapping_HasKeyString() , PyObject_HasAttr() , PyObject_HasAttrString() and PySys_GetObject() , which clear all errors that occur while calling them, now report those errors using sys.unraisablehook() . You may replace them with other functions as recommended in the documentation. (Contributed by Serhiy Storchaka in gh-106672 .)

PyCode_GetFirstFree() is an unstable API now and has been renamed to PyUnstable_Code_GetFirstFree() . (Contributed by Bogdan Romanyuk in gh-115781 .)

Removed C APIs ¶

Remove many APIs (functions, macros, variables) with names prefixed by _Py or _PY (considered as private API). If your project is affected by one of these removals and you consider that the removed API should remain available, please open a new issue to request a public C API and add cc @vstinner to the issue to notify Victor Stinner. (Contributed by Victor Stinner in gh-106320 .)

Remove functions deprecated in Python 3.9:

PyEval_CallObject() , PyEval_CallObjectWithKeywords() : use PyObject_CallNoArgs() or PyObject_Call() instead. Warning: with PyObject_Call() , positional arguments must be a tuple and must not be NULL , and keyword arguments must be a dict or NULL , whereas the removed functions checked the argument types and accepted NULL positional and keyword arguments. To replace PyEval_CallObjectWithKeywords(func, NULL, kwargs) with PyObject_Call() , pass an empty tuple as positional arguments using PyTuple_New(0) , as sketched after this list.

PyEval_CallFunction() : use PyObject_CallFunction() instead.

PyEval_CallMethod() : use PyObject_CallMethod() instead.

PyCFunction_Call() : use PyObject_Call() instead.

(Contributed by Victor Stinner in gh-105107 .)
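
A minimal sketch of that replacement (func and kwargs are hypothetical):

    PyObject *args = PyTuple_New(0);  /* empty positional arguments */
    if (args == NULL) {
        return NULL;
    }
    PyObject *result = PyObject_Call(func, args, kwargs);
    Py_DECREF(args);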

Remove old buffer protocols deprecated in Python 3.0. Use Buffer Protocol instead.

PyObject_CheckReadBuffer() : Use PyObject_CheckBuffer() to test if the object supports the buffer protocol. Note that PyObject_CheckBuffer() doesn’t guarantee that PyObject_GetBuffer() will succeed. To test if the object is actually readable, see the next example of PyObject_GetBuffer() .

PyObject_AsCharBuffer() , PyObject_AsReadBuffer() : use PyObject_GetBuffer() and PyBuffer_Release() instead; a minimal sketch with a hypothetical obj :
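
    Py_buffer view;
    if (PyObject_GetBuffer(obj, &view, PyBUF_SIMPLE) < 0) {
        return NULL;  /* obj does not expose a readable buffer */
    }
    /* read from view.buf, which holds view.len bytes */
    PyBuffer_Release(&view);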

PyObject_AsWriteBuffer() : use PyObject_GetBuffer() and PyBuffer_Release() instead; the same sketch with a writable buffer request:
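
    Py_buffer view;
    if (PyObject_GetBuffer(obj, &view, PyBUF_WRITABLE) < 0) {
        return NULL;  /* obj does not expose a writable buffer */
    }
    /* write to view.buf, which holds view.len bytes */
    PyBuffer_Release(&view);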

(Contributed by Inada Naoki in gh-85275 .)

Remove the following old functions to configure the Python initialization, deprecated in Python 3.11:

PySys_AddWarnOptionUnicode() : use PyConfig.warnoptions instead.

PySys_AddWarnOption() : use PyConfig.warnoptions instead.

PySys_AddXOption() : use PyConfig.xoptions instead.

PySys_HasWarnOptions() : use PyConfig.warnoptions instead.

PySys_SetPath() : set PyConfig.module_search_paths instead.

Py_SetPath() : set PyConfig.module_search_paths instead.

Py_SetStandardStreamEncoding() : set PyConfig.stdio_encoding instead, and on Windows also set PyConfig.legacy_windows_stdio if needed.

_Py_SetProgramFullPath() : set PyConfig.executable instead.

Use the new PyConfig API of the Python Initialization Configuration instead ( PEP 587 ), added to Python 3.8. (Contributed by Victor Stinner in gh-105145 .)

Remove the PyEval_ThreadsInitialized() function, deprecated in Python 3.9. Since Python 3.7, Py_Initialize() always creates the GIL: calling PyEval_InitThreads() did nothing and PyEval_ThreadsInitialized() always returned non-zero. (Contributed by Victor Stinner in gh-105182 .)

Remove the PyEval_AcquireLock() and PyEval_ReleaseLock() functions, deprecated in Python 3.2. They didn’t update the current thread state. They can be replaced with one of the following pairs; a sketch of the PyGILState pattern follows this list:

PyEval_SaveThread() and PyEval_RestoreThread() ;

the low-level PyEval_AcquireThread() and PyEval_ReleaseThread() ;

or PyGILState_Ensure() and PyGILState_Release() .

(Contributed by Victor Stinner in gh-105182 .)
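
A minimal sketch of the PyGILState pattern, for a callback invoked from a non-Python thread (the on_event function and the callback object are hypothetical):

    static void
    on_event(PyObject *callback)
    {
        /* acquire the GIL, creating a thread state if necessary */
        PyGILState_STATE gstate = PyGILState_Ensure();
        PyObject *res = PyObject_CallNoArgs(callback);
        if (res == NULL) {
            PyErr_WriteUnraisable(callback);  /* report, don't propagate */
        }
        else {
            Py_DECREF(res);
        }
        PyGILState_Release(gstate);
    }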

Remove private _PyObject_FastCall() function: use PyObject_Vectorcall() which is available since Python 3.8 ( PEP 590 ). (Contributed by Victor Stinner in gh-106023 .)

Remove cpython/pytime.h header file: it only contained private functions. (Contributed by Victor Stinner in gh-106316 .)

Remove _PyInterpreterState_Get() alias to PyInterpreterState_Get() which was kept for backward compatibility with Python 3.8. The pythoncapi-compat project can be used to get PyInterpreterState_Get() on Python 3.8 and older. (Contributed by Victor Stinner in gh-106320 .)

The PyModule_AddObject() function is now soft deprecated : PyModule_Add() or PyModule_AddObjectRef() functions should be used instead. (Contributed by Serhiy Storchaka in gh-86493 .)

Deprecated C APIs ¶

Deprecate the old Py_UNICODE and PY_UNICODE_TYPE types: use the wchar_t type directly instead. Since Python 3.3, Py_UNICODE and PY_UNICODE_TYPE are just aliases to wchar_t . (Contributed by Victor Stinner in gh-105156 .)

Deprecate old Python initialization functions:

PySys_ResetWarnOptions() : clear sys.warnoptions and warnings.filters instead.

Py_GetExecPrefix() : get sys.exec_prefix instead.

Py_GetPath() : get sys.path instead.

Py_GetPrefix() : get sys.prefix instead.

Py_GetProgramFullPath() : get sys.executable instead.

Py_GetProgramName() : get sys.executable instead.

Py_GetPythonHome() : get PyConfig.home or PYTHONHOME environment variable instead.

These functions are scheduled for removal in Python 3.15. (Contributed by Victor Stinner in gh-105145 .)

Deprecate the PyImport_ImportModuleNoBlock() function which is just an alias to PyImport_ImportModule() since Python 3.3. Scheduled for removal in Python 3.15. (Contributed by Victor Stinner in gh-105396 .)

Deprecate the PyWeakref_GetObject() and PyWeakref_GET_OBJECT() functions, which return a borrowed reference : use the new PyWeakref_GetRef() function instead; it returns a strong reference . The pythoncapi-compat project can be used to get PyWeakref_GetRef() on Python 3.12 and older. (Contributed by Victor Stinner in gh-105927 .)

Pending Removal in Python 3.14 ¶

Creating immutable types ( Py_TPFLAGS_IMMUTABLETYPE ) with mutable bases using the C API.

Functions to configure the Python initialization, deprecated in Python 3.11:

PySys_SetArgvEx() : set PyConfig.argv instead.

PySys_SetArgv() : set PyConfig.argv instead.

Py_SetProgramName() : set PyConfig.program_name instead.

Py_SetPythonHome() : set PyConfig.home instead.

The Py_InitializeFromConfig() API should be used with PyConfig instead.

Global configuration variables:

Py_DebugFlag : use PyConfig.parser_debug

Py_VerboseFlag : use PyConfig.verbose

Py_QuietFlag : use PyConfig.quiet

Py_InteractiveFlag : use PyConfig.interactive

Py_InspectFlag : use PyConfig.inspect

Py_OptimizeFlag : use PyConfig.optimization_level

Py_NoSiteFlag : use PyConfig.site_import

Py_BytesWarningFlag : use PyConfig.bytes_warning

Py_FrozenFlag : use PyConfig.pathconfig_warnings

Py_IgnoreEnvironmentFlag : use PyConfig.use_environment

Py_DontWriteBytecodeFlag : use PyConfig.write_bytecode

Py_NoUserSiteDirectory : use PyConfig.user_site_directory

Py_UnbufferedStdioFlag : use PyConfig.buffered_stdio

Py_HashRandomizationFlag : use PyConfig.use_hash_seed and PyConfig.hash_seed

Py_IsolatedFlag : use PyConfig.isolated

Py_LegacyWindowsFSEncodingFlag : use PyPreConfig.legacy_windows_fs_encoding

Py_LegacyWindowsStdioFlag : use PyConfig.legacy_windows_stdio

Py_FileSystemDefaultEncoding : use PyConfig.filesystem_encoding

Py_HasFileSystemDefaultEncoding : use PyConfig.filesystem_encoding

Py_FileSystemDefaultEncodeErrors : use PyConfig.filesystem_errors

Py_UTF8Mode : use PyPreConfig.utf8_mode (see Py_PreInitialize() )

Pending Removal in Python 3.15 ¶

The bundled copy of libmpdecimal .

PyImport_ImportModuleNoBlock() : use PyImport_ImportModule() .

PyWeakref_GET_OBJECT() : use PyWeakref_GetRef() instead.

PyWeakref_GetObject() : use PyWeakref_GetRef() instead.

Py_UNICODE_WIDE type: use wchar_t instead.

Py_UNICODE type: use wchar_t instead.

Python initialization functions: the functions deprecated above ( PySys_ResetWarnOptions() , Py_GetExecPrefix() , Py_GetPath() , Py_GetPrefix() , Py_GetProgramFullPath() , Py_GetProgramName() and Py_GetPythonHome() ).

Pending Removal in Future Versions ¶

Py_TPFLAGS_HAVE_FINALIZE : not needed since Python 3.8.

PyErr_Fetch() : use PyErr_GetRaisedException() .

PyErr_NormalizeException() : use PyErr_GetRaisedException() .

PyErr_Restore() : use PyErr_SetRaisedException() .

PyModule_GetFilename() : use PyModule_GetFilenameObject() .

PyOS_AfterFork() : use PyOS_AfterFork_Child() .

PySlice_GetIndicesEx() .

PyUnicode_AsDecodedObject() .

PyUnicode_AsDecodedUnicode() .

PyUnicode_AsEncodedObject() .

PyUnicode_AsEncodedUnicode() .

PyUnicode_READY() : not needed since Python 3.12.

_PyErr_ChainExceptions() .

PyBytesObject.ob_shash member: call PyObject_Hash() instead.

PyDictObject.ma_version_tag member.

PyThread_create_key() : use PyThread_tss_alloc() .

PyThread_delete_key() : use PyThread_tss_free() .

PyThread_set_key_value() : use PyThread_tss_set() .

PyThread_get_key_value() : use PyThread_tss_get() .

PyThread_delete_key_value() : use PyThread_tss_delete() .

PyThread_ReInitTLS() : no longer needed.

Remove undocumented PY_TIMEOUT_MAX constant from the limited C API. (Contributed by Victor Stinner in gh-110014 .)

Regression Test Changes ¶

Python built with configure --with-pydebug now supports a -X presite=package.module command-line option. If used, it specifies a module that should be imported early in the lifecycle of the interpreter, before site.py is executed. (Contributed by Łukasz Langa in gh-110769 .)


COMMENTS

  1. Task Queues

    The Celery distributed task queue is the most commonly used Python library for handling asynchronous tasks and scheduling. The RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. RQ is backed by Redis and is designed to have a low barrier to entry. Taskmaster is a lightweight simple ...

  2. Python task queue alternatives and frameworks

    There seem to be different implementations of task/job queues for Python 3: WorQ had its last update in 2016. Then there are "cloud" based solutions like Google's Task Queue API or AWS's Cloud Watch Events, but that's more of a last resort. For my project I am looking for a stable and active task queue implementation.

  3. Welcome to Django Q

    Django Q is a native Django task queue, scheduler and worker application using Python multiprocessing. ... Redis, Disque, IronMQ, SQS, MongoDB or ORM; Rollbar and Sentry support; Django Q is tested with: Python 3.7 and 3.8, Django 2.2.x and 3.1.x. Currently available in English, German and French.

  4. Developing an Asynchronous Task Queue in Python

Download and install Redis if you do not already have it installed. Then, install the Python interface: (env)$ pip install redis==4.5.5. We'll break the logic up into four files: redis_queue.py creates new queues and tasks via the SimpleQueue and SimpleTask classes, respectively.

  5. GitHub

Taskiq is an asynchronous distributed task queue for Python. This project takes inspiration from big projects such as Celery and Dramatiq . But taskiq can send and run both sync and async functions, and has integration with popular async frameworks, such as FastAPI and AioHTTP. Also, we use PEP-612 to provide the best autosuggestions possible.

  6. Asynchronous Tasks With Django and Celery

    Celery is a distributed task queue for UNIX systems. It allows you to offload work from your Python app. Once you integrate Celery into your app, you can send time-intensive tasks to Celery's task queue. That way, your web app can continue to respond quickly to users while Celery completes expensive operations asynchronously in the background.

  7. huey

huey is: a task queue, written in Python, with a clean and simple API, and with redis, sqlite, file-system, or in-memory storage. huey supports: multi-process, multi-thread or greenlet task execution models, and scheduling tasks to execute at a given time or after a given delay.

  8. Introduction to celery

    1. Celery is a simple, flexible, and reliable distributed system to process vast amounts of messages, while providing operations with the tools required to maintain such a system. 2. It's a task ...

  9. GitHub

A simple distributed queue designed for handling one-off tasks with large sets of tasks - dcramer/taskmaster ... (using standard Python Queue's). This ensures a consistent memory pattern that can scale linearly. Requirements should be handled by setuptools, but if ... Being that Taskmaster uses gevent for both its iterator task ...

  10. GitHub

Originally it was meant to be just a brokerless job queue; recently I dove deeper into the topic and decided to add support for job persistence and extensions for Redis/RabbitMQ middlewares as well. The main advantage of using a brokerless task queue, besides latency, is the lower level of complexity of the system.

  11. Flask by Example

    Part Four: Implement a Redis task queue to handle the text processing. (current) Part Five: Set up Angular on the front-end to continuously poll the back-end to see if the request is done processing. Part Six: Push to the staging server on Heroku - setting up Redis and detailing how to run two processes (web and worker) on a single Dyno.

  12. fastapi-queue · PyPI

    fastapi-queue. A python implementation of a task queue based on Redis that can serve as a peak shaver and protect your app. What is fastapi-queue? Fastapi-queue provides a high-performance redis-based task queue that allows requests sent by clients to the FastAPI server to be cached in the queue for delayed execution. This means that you don't ...

  13. Asynchronous Tasks with Flask and Redis Queue

    Workflow. Our goal is to develop a Flask application that works in conjunction with Redis Queue to handle long-running processes outside the normal request/response cycle. The end user kicks off a new task via a POST request to the server-side. Within the view, a task is added to the queue and the task id is sent back to the client-side.

  14. RQ: Simple job queues for Python

RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and it is designed to have a low barrier to entry. It can be integrated in your web stack easily. RQ requires Redis >= 3.0.0. To get started, first run a Redis server. You can use an existing one.

  15. How to Run Your First Task with RQ, Redis, and Python

    Start up the virtual environment again with the command source venv/bin/activate. Then type python app.py to run the project. Go back to the tab that is running rq worker --with-scheduler. Wait 5 more seconds after the first task is executed to see the next task.

  16. Background Tasks in Python with RQ

RQ (Redis Queue) makes it easy to add background tasks to your Python applications on Heroku. RQ uses a Redis database as a queue to process background jobs. To get started using RQ, you need to configure your application and then run a worker process in your application. To set up RQ and its dependencies, install it using pip:

  17. Use Redis Queue for Asynchronous Tasks in a Flask App

    What is Redis Queue? Redis Queue is a python library for queueing jobs for background processing. ... and queue up processes in the background. All without the front end user having to wait for those processes to complete. ... The /enqueue route creates the task and enters the task into the queue. Redis Queue accepts things such as key-value ...

  18. Redis Queue (RQ)

Redis Queue (RQ) is a Python task queue implementation that uses Redis to keep track of tasks in the queue that need to be executed. RQ is an implementation of the task queue concept. Learn more in the web development chapter or view the table of contents for all topics.

  19. GitHub

RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily. RQ requires Redis >= 3.0.0. Full documentation can be found here.

  20. GitHub

A TaskTiger object keeps track of TaskTiger's settings and is used to decorate and queue tasks. The constructor takes the following arguments: connection (a Redis connection object; the connection should be initialized with decode_responses=True to avoid encoding problems on Python 3) and config .

  21. What's New In Python 3.13

    On Unix-like systems like Linux or macOS, Python now uses a new interactive shell. When the user starts the REPL from an interactive terminal, and both curses and readline are available, the interactive shell now supports the following new features: Colorized prompts. Multiline editing with history preservation.

  22. Redis Queue + python-rq: Right pattern to prevent high memory usage?

We use Redis with python-rq purely as a task queue to provide for delayed execution of some time-intense tasks. A task retrieves some data from a PostgreSQL database and writes the results back to it - thus no valuable data is stored in the Redis instance at all. ... Without this cast, the string representation ended up in the DB ...