Commit 57c065f8 authored by Dylan Griffith

Increase ProcessBookkeepingService batch to 10_000

Since we've learnt through monitoring that batches of 1000 jobs take no
more than 5.5s on average, with a median around 4s, we should be safe
to increase this ten-fold and still process batches in the desired time
window: extrapolating linearly, a 10,000-job batch should take roughly
40-55s.

This started from a conversation at
https://gitlab.com/gitlab-org/gitlab/-/merge_requests/28511#note_315475648
where we initially wanted to make this configurable. It turns out,
though, that this number is bounded by how quickly a single core can
marshal this many jobs and send them to Elasticsearch, so it is
unlikely to benefit much from configuration. For now we'll increase it
to 10k, which still seems like a safe number and gives us plenty of
scaling headroom before we have to figure out how to parallelize this
work.
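
As a rough sketch of why this is single-core bound: each run boils down
to popping up to LIMIT refs from the Redis sorted set, marshalling them,
and sending one bulk request. A minimal Ruby illustration follows; the
indexer object and its bulk_index method are hypothetical stand-ins, not
the actual implementation.

require 'redis'

LIMIT = 10_000
REDIS_SET_KEY = 'elastic:incremental:updates:0:zset'

# Drain up to LIMIT queued refs and submit them in one bulk request.
# All marshalling happens on this one core, so batch duration scales
# roughly linearly with LIMIT.
def process_batch(redis, indexer)
  specs = redis.zrangebyscore(REDIS_SET_KEY, '-inf', '+inf', limit: [0, LIMIT])
  return 0 if specs.empty?

  indexer.bulk_index(specs)        # hypothetical: marshal + send to Elasticsearch
  redis.zrem(REDIS_SET_KEY, specs) # remove only what was processed
  specs.size
end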
parent 0b10caf3
@@ -4,7 +4,7 @@ module Elastic
   class ProcessBookkeepingService
     REDIS_SET_KEY = 'elastic:incremental:updates:0:zset'
     REDIS_SCORE_KEY = 'elastic:incremental:updates:0:score'
-    LIMIT = 1000
+    LIMIT = 10_000

     class << self
       # Add some records to the processing queue. Items must be serializable to
---
title: Increase ProcessBookkeepingService batch to 10_000
merge_request: 30817
author:
type: changed