qdone

Command line job queue for SQS

Features

  • Enqueue and run any command line job with parameters
  • Creates SQS queues (and failed job queues) on demand
  • Minimizes SQS API calls
  • Workers can listen to multiple queues, including wildcards
  • Efficient batch enqueueing of large numbers of jobs
  • Dynamic visibility timeout for long running jobs
  • Dynamic removal of idle queues

qdone was inspired, in part, by experiences with RQ in production.

Installing

npm install -g qdone

If your project uses CommonJS, you have to do a deep import of qdone/commonjs:

// Node CommonJS
const { enqueue } = require('qdone/commonjs')

If your project is ESM, you can import directly:

// Node ESM
import { enqueue } from 'qdone'

Examples

Enqueue a job and run it:

# Command line
$ qdone enqueue myQueue "echo hello world"
Enqueued job 030252de-8a3c-42c6-9278-c5a268660384

$ qdone worker myQueue
...
Looking for work on myQueue (https://sqs.us-east-1...)
  Found job a23c71b3-b148-47b1-bfbb-f5dbb344ef97
  Executing job command: nice echo hello world
  SUCCESS
  stdout: hello world
// Node ESM
import { enqueue } from 'qdone'
await enqueue('myQueue', 'echo hello world')

// Node CommonJS
const { enqueue } = require('qdone/commonjs')
enqueue('myQueue', 'echo hello world').then(console.log).catch(console.error)

Queues are automatically created when you use them:

$ qdone enqueue myNewQueue "echo nice to meet you"
Creating fail queue myNewQueue_failed
Creating queue myNewQueue
Enqueued job d0077713-11e1-4de6-8f26-49ad51e008b9

Notice that qdone also created a failed queue. More on that later.

To queue many jobs at once, put a queue name and command on each line of stdin or a file:

# Command line
$ qdone enqueue-batch -  # use stdin
queue_0 echo hi
queue_1 echo hi
queue_2 echo hi
queue_3 echo hi
queue_4 echo hi
queue_5 echo hi
queue_6 echo hi
queue_7 echo hi
queue_8 echo hi
queue_9 echo hi
^D
Enqueued job 14fe4e30-bd4f-4415-b902-8df29cb73066 request 1
Enqueued job 60e31392-9810-4770-bfad-6a8f44114287 request 2
Enqueued job 0f26806c-2030-4d9a-94d5-b8d4b7a89115 request 3
Enqueued job 330c3d93-0364-431a-961b-5ace83066e55 request 4
Enqueued job ef64ab68-889d-4214-9ba5-af70d84565e7 request 5
Enqueued job 0fece491-6092-4ad2-b77a-27ccb0bd8e36 request 6
Enqueued job f053b027-3f4a-4e6e-8bb5-729dc8ecafa7 request 7
Enqueued job 5f11b69e-ede1-4ea2-8a60-c994adf2c5a0 request 8
Enqueued job 5079a10a-b13c-4b31-9722-8c1d3b146c28 request 9
Enqueued job 5dfe1008-9a1e-41df-b3bc-614ec5f34660 request 10
Enqueued 10 jobs
// Node ESM
import { enqueueBatch } from 'qdone'
await enqueueBatch(
  [
    { queue: 'queue_1', command: 'echo hi' },
    { queue: 'queue_2', command: 'echo hi' },
    { queue: 'queue_3', command: 'echo hi' },
    { queue: 'queue_4', command: 'echo hi' },
    { queue: 'queue_5', command: 'echo hi' },
    { queue: 'queue_6', command: 'echo hi' },
    { queue: 'queue_7', command: 'echo hi' },
    { queue: 'queue_8', command: 'echo hi' },
    { queue: 'queue_9', command: 'echo hi' }
  ]
)
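
The stdin/file format is just a queue name, a space, and the rest of the line as the command. A minimal sketch of parsing that format into the objects enqueueBatch expects (parseBatchLines is a hypothetical helper, not part of qdone's API):

```javascript
// Hypothetical helper (not part of qdone): split each non-empty line
// at the first space into a queue name and a command string.
function parseBatchLines (text) {
  return text
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.length > 0)
    .map(line => {
      const space = line.indexOf(' ')
      return { queue: line.slice(0, space), command: line.slice(space + 1) }
    })
}

parseBatchLines('queue_0 echo hi\nqueue_1 echo hi\n')
// → [{ queue: 'queue_0', command: 'echo hi' },
//    { queue: 'queue_1', command: 'echo hi' }]
```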

If you are using the same queue, requests to SQS will be batched:

# Command line
$ qdone enqueue-batch -  # use stdin
queue_one echo hi
queue_one echo hi
queue_one echo hi
queue_one echo hi
queue_two echo hi
queue_two echo hi
queue_two echo hi
queue_two echo hi
^D
Enqueued job fb2fa6d1... request 1   # one
Enqueued job 85bfbe92... request 1   # request
Enqueued job cea6d180... request 1   # for queue_one
Enqueued job 9050fd34... request 1   #
Enqueued job 4e729c18... request 2      # another
Enqueued job 6dac2e4d... request 2      # request
Enqueued job 0252ae4b... request 2      # for queue_two
Enqueued job 95567365... request 2      #
Enqueued 8 jobs
// Node ESM
import { enqueueBatch } from 'qdone'
await enqueueBatch(
  [
    { queue: 'queue_one', command: 'echo hi' },
    { queue: 'queue_one', command: 'echo hi' },
    { queue: 'queue_one', command: 'echo hi' },
    { queue: 'queue_one', command: 'echo hi' },
    { queue: 'queue_two', command: 'echo hi' },
    { queue: 'queue_two', command: 'echo hi' },
    { queue: 'queue_two', command: 'echo hi' },
    { queue: 'queue_two', command: 'echo hi' }
  ]
)
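
SQS's SendMessageBatch call accepts at most 10 messages per request, so batching amounts to grouping jobs by queue and chunking each group. A sketch of that grouping (an illustration of the behavior described above, not qdone's actual code):

```javascript
// Sketch: group jobs by queue, then chunk each group into requests of
// at most 10 entries (the SQS SendMessageBatch limit).
function planBatchRequests (jobs, maxPerRequest = 10) {
  const byQueue = new Map()
  for (const job of jobs) {
    if (!byQueue.has(job.queue)) byQueue.set(job.queue, [])
    byQueue.get(job.queue).push(job)
  }
  const requests = []
  for (const [queue, queued] of byQueue) {
    for (let i = 0; i < queued.length; i += maxPerRequest) {
      requests.push({ queue, entries: queued.slice(i, i + maxPerRequest) })
    }
  }
  return requests
}
```

With the eight jobs above (four per queue) this plans exactly two requests, one per queue, matching the CLI output.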

Failed jobs

A command fails if it finishes with a non-zero exit code:

$ qdone enqueue myQueue "false"
Enqueued job 0e5957de-1e13-4633-a2ed-d3b424aa53fb

$ qdone worker myQueue
...
Looking for work on myQueue (https://sqs.us-east-1....)
  Found job 0e5957de-1e13-4633-a2ed-d3b424aa53fb
  Executing job command: nice false
  FAILED
  code  : 1
  error : Error: Command failed: nice false

The failed command will be placed on the failed queue.

To retry failed jobs, wait 30 seconds, then listen to the corresponding failed queue:

$ qdone worker myQueue_failed --include-failed
...
Looking for work on myQueue_failed (https://sqs.us-east-1.../qdone_myQueue_failed)
  Found job 0e5957de-1e13-4633-a2ed-d3b424aa53fb
  Executing job command: nice false
  FAILED
  code  : 1
  error : Error: Command failed: nice false

It failed again. It will go back on the failed queue.

In production you will want either to set alarms on each failed queue to make sure it doesn't grow too large, or to set all your failed queues to drain to a common failed job queue after some number of attempts, and monitor that queue as well.
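
One way to drain a failed queue after repeated attempts is SQS's native redrive (dead-letter) policy. A hedged sketch using the AWS CLI; the region, account id, and dead_jobs target queue are placeholders, not anything qdone creates:

```shell
# Sketch (placeholder ARNs/URLs): after 5 failed receives on
# qdone_myQueue_failed, SQS moves the message to a shared dead_jobs queue.
aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/qdone_myQueue_failed \
  --attributes '{
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:dead_jobs\",\"maxReceiveCount\":\"5\"}"
  }'
```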

Listening to multiple queues

It's nice sometimes to listen to a set of queues matching a prefix:

$ qdone worker 'test*'  # use single quotes to keep shell from globbing
...
Listening to queues (in this order):
  test - https://sqs.us-east-1.../qdone_test
  test1 - https://sqs.us-east-1.../qdone_test1
  test2 - https://sqs.us-east-1.../qdone_test2
  test3 - https://sqs.us-east-1.../qdone_test3
  test4 - https://sqs.us-east-1.../qdone_test4
  test5 - https://sqs.us-east-1.../qdone_test5
  test6 - https://sqs.us-east-1.../qdone_test6
  test7 - https://sqs.us-east-1.../qdone_test7
  test8 - https://sqs.us-east-1.../qdone_test8
  test9 - https://sqs.us-east-1.../qdone_test9

Looking for work on test (https://sqs.us-east-1.../qdone_test)
  Found job 2486f4b5-57ef-4290-987c-7b1140409cc6
...
Looking for work on test1 (https://sqs.us-east-1.../qdone_test1)
  Found job 0252ae4b-89c4-4426-8ad5-b1480bfdb3a2
...

The worker will listen to each queue for the --wait-time period, then start over from the beginning.

Long running jobs

Workers prevent others from processing their job by automatically extending the default SQS visibility timeout (30 seconds) as long as the job is still running. You can see this when running a long job:

# Command line
$ qdone enqueue test "sleep 35"
Enqueued job d8e8927f-5e42-48ae-a1a8-b91e42700942

$ qdone worker test --kill-after 300
...
  Found job d8e8927f-5e42-48ae-a1a8-b91e42700942
  Executing job command: nice sleep 35
  Ran for 15.009 seconds, requesting another 60 seconds
  SUCCESS
...
// Node ESM
import { enqueue } from 'qdone'
await enqueue('test', 'sleep 35', { killAfter: 300 })

The SQS API call to extend this timeout (ChangeMessageVisibility) is called at the halfway point before the message becomes visible again. The timeout doubles every subsequent call but never exceeds --kill-after.
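
The doubling-with-cap schedule can be sketched as a pure function (an illustration of the description above, not qdone's actual code). With the default 30-second initial timeout and --kill-after 300, the first extension happens 15 seconds in and requests 60 seconds, matching the worker output shown earlier:

```javascript
// Sketch: extensions happen at the halfway point of the current
// visibility window; each call doubles the timeout, capped at killAfter.
function visibilitySchedule (killAfter, initial = 30) {
  const extensions = []
  let timeout = initial
  let elapsed = 0
  while (elapsed + timeout < killAfter) {
    elapsed += timeout / 2                    // halfway through the window
    timeout = Math.min(timeout * 2, killAfter) // double, but never exceed cap
    extensions.push({ atSecond: elapsed, newTimeout: timeout })
  }
  return extensions
}

visibilitySchedule(300)
// → [ { atSecond: 15, newTimeout: 60 },
//     { atSecond: 45, newTimeout: 120 },
//     { atSecond: 105, newTimeout: 240 } ]
```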

Dynamically removing queues

If you have workers listening on a dynamic number of queues, then any idle queues will slow down how quickly jobs can be dequeued and/or increase the number of unnecessary API calls. You can discover which queues are idle using the idle-queues command:

$ qdone idle-queues 'test*' --idle-for 60 > idle-queues.txt
Resolving queues: test*
  done

Checking queues (in this order):
  test - https://sqs.us-east-1.../qdone_test
  test2 - https://sqs.us-east-1.../qdone_test2

Queue test2 has been idle for the last 60 minutes.
Queue test has been idle for the last 60 minutes.
Queue test_failed has been idle for the last 60 minutes.
Queue test2_failed has been idle for the last 60 minutes.
Used 4 SQS and 28 CloudWatch API calls.

$ cat idle-queues.txt
test
test2

Accurate discovery of idle queues cannot be done through the SQS API alone, and requires the use of the more-expensive CloudWatch API (at the time of this writing, ~$0.40/1M calls for SQS API and ~$10/1M calls on CloudWatch). The idle-queues command attempts to make as few CloudWatch API calls as possible, exiting as soon as it discovers evidence of messages in the queue during the idle period.

You can use the --delete option to actually remove a queue if it has been idle:

$ qdone idle-queues 'test*' --idle-for 60 --delete > deleted-queues.txt
...
Deleted test
Deleted test_failed
Deleted test2
Deleted test2_failed
Used 8 SQS and 28 CloudWatch API calls.

$ cat deleted-queues.txt
test
test2

Because of the higher cost of CloudWatch API calls, you may wish to plan your deletion schedule accordingly. For example, at the time of this writing, running the above command (two idle queues, 28 CloudWatch calls) every 10 minutes would cost around $1.20/month. However, if most of the queues are actively used, the number of CloudWatch calls needed goes down. On one of my setups, there are around 60 queues with a dozen queues idle over a two-hour period, and this translates to about 200 CloudWatch API calls every 10 minutes, or about $8/month.
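
The arithmetic behind those estimates, assuming ~$10 per million CloudWatch calls and a 30-day month (the helper name is ours, for illustration):

```javascript
// Monthly cost of a polling schedule: calls per run × runs per hour
// × 24 hours × 30 days, priced per million API calls.
function monthlyCloudWatchCost (callsPerRun, runsPerHour, dollarsPerMillion = 10) {
  const callsPerMonth = callsPerRun * runsPerHour * 24 * 30
  return callsPerMonth * dollarsPerMillion / 1e6
}

monthlyCloudWatchCost(28, 6)   // every 10 minutes → ≈ $1.21/month
monthlyCloudWatchCost(200, 6)  // ≈ $8.64/month
```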

FIFO Queues

The enqueue and enqueue-batch commands can create FIFO queues with limited features controlled by
