Mark duplicate jobs in Sidekiq
This marks a job as a duplicate when a job with the same worker, queue, and arguments is already scheduled. When a job gets scheduled, we store a key in Redis derived from the worker, queue, and arguments. If another job gets scheduled while that key still exists, we mark the new job as a duplicate. When a job starts, we delete its key from Redis. Later we can use this for dropping duplicate jobs from Redis if they are idempotent.
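The scheduling/deduplication flow can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: it uses a plain dict as a stand-in for Redis, and the key format and function names (`duplicate_key`, `schedule`, `start`) are assumptions made for the example.

```python
import hashlib
import json

# In-memory dict standing in for Redis (illustrative only).
redis_store = {}

def duplicate_key(queue, worker, args):
    # Hypothetical key scheme: queue + worker + digest of the arguments.
    digest = hashlib.sha256(json.dumps(args, sort_keys=True).encode()).hexdigest()
    return f"duplicate:{queue}:{worker}:{digest}"

def schedule(queue, worker, args):
    # If the key already exists, an identical job is pending: mark as duplicate.
    key = duplicate_key(queue, worker, args)
    is_duplicate = key in redis_store
    if not is_duplicate:
        redis_store[key] = True  # store the key when the first job is scheduled
    return {"queue": queue, "worker": worker, "args": args, "duplicate": is_duplicate}

def start(job):
    # Delete the key when the job starts, so newly scheduled jobs are not
    # marked as duplicates of one that is already running.
    redis_store.pop(duplicate_key(job["queue"], job["worker"], job["args"]), None)
```

Scheduling the same worker with the same arguments twice marks the second job as a duplicate; once the first job starts and its key is deleted, a third identical job is accepted again.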