Commit f2c063d1 authored by Stan Hu

Use process-wide memory cache for feature flags

When we switched from a single-threaded application server (Unicorn) to
a multithreaded one (Puma), we did not realize that Puma often reaps
threads after a request is done and recreates them later. This makes the
thread-local cache ineffective, as the cache does not store anything
beyond the lifetime of the thread.

Since `ActiveSupport::Cache::MemoryStore` is thread-safe, we should be
able to switch the L1 cache for feature flags to use this to reduce load
on Redis.

Since read and write access to the store is synchronized, this change
does have the side effect of adding lock contention when feature flags
are accessed.

Discovered in
https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9414
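The process-wide cache described above can be sketched in plain Ruby, without ActiveSupport. This is a minimal illustration only: the `SimpleProcessCache` name and the explicit `Mutex` are stand-ins for `ActiveSupport::Cache::MemoryStore`'s internal locking, not GitLab code. The mutex is also where the contention mentioned above comes from.

```ruby
# Minimal sketch of a process-wide, thread-safe memoizing cache.
# A single store is shared by all threads and guarded by one lock,
# so cached values survive Puma reaping and recreating threads.
class SimpleProcessCache
  def initialize
    @store = {}
    @mutex = Mutex.new # every read and write synchronizes here
  end

  # Return the cached value, or compute it once and store it.
  def fetch(key)
    @mutex.synchronize do
      return @store[key] if @store.key?(key)

      @store[key] = yield
    end
  end
end

CACHE = SimpleProcessCache.new

# Because CACHE lives at the process level, a value written by one
# (possibly reaped) thread is visible to any thread created later.
threads = 4.times.map do
  Thread.new { CACHE.fetch(:feature_enabled) { true } }
end
results = threads.map(&:value)
```

A thread-local cache, by contrast, would start empty in every freshly created thread, forcing a Redis round-trip per thread.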
parent 0f758429
---
title: Use process-wide memory cache for feature flags
merge_request: 26935
author:
type: performance
@@ -134,7 +134,11 @@ class Feature
       end

       def l1_cache_backend
-        Gitlab::ThreadMemoryCache.cache_backend
+        if Gitlab::Utils.to_boolean(ENV['USE_THREAD_MEMORY_CACHE'])
+          Gitlab::ThreadMemoryCache.cache_backend
+        else
+          Gitlab::ProcessMemoryCache.cache_backend
+        end
       end

       def l2_cache_backend
...
# frozen_string_literal: true

module Gitlab
  class ProcessMemoryCache
    # ActiveSupport::Cache::MemoryStore is thread-safe:
    # https://github.com/rails/rails/blob/2f1fefe456932a6d7d2b155d27b5315c33f3daa1/activesupport/lib/active_support/cache/memory_store.rb#L19
    @cache = ActiveSupport::Cache::MemoryStore.new

    def self.cache_backend
      @cache
    end
  end
end
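The two-tier lookup these backends support can also be sketched without Rails. The `TwoTierFlags` class and the `Hash` stand-ins below are illustrative assumptions, not the actual `Feature` implementation: L1 is the in-process memory cache, L2 plays the role of Redis.

```ruby
# Illustrative two-tier cache lookup: consult the fast in-process L1
# first, and only fall back to the shared L2 (Redis in GitLab) on a miss.
class TwoTierFlags
  def initialize(l1, l2)
    @l1 = l1 # per-process memory cache (Hash stand-in)
    @l2 = l2 # shared cache, Redis in GitLab (Hash stand-in)
  end

  def enabled?(flag)
    return @l1[flag] if @l1.key?(flag)  # L1 hit: no Redis round-trip

    @l1[flag] = @l2.fetch(flag, false)  # L1 miss: read through and memoize
  end
end

l2 = { enabled_feature_flag: true }
flags = TwoTierFlags.new({}, l2)

a = flags.enabled?(:enabled_feature_flag) # reads through to L2
l2.delete(:enabled_feature_flag)
b = flags.enabled?(:enabled_feature_flag) # answered from L1 alone
```

With a process-wide L1, every request served by the process after the first lookup avoids the Redis round-trip, which is the load reduction this commit is after.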
@@ -146,7 +146,15 @@ describe Feature do
       expect(described_class.enabled?(:enabled_feature_flag)).to be_truthy
     end

-    it { expect(described_class.l1_cache_backend).to eq(Gitlab::ThreadMemoryCache.cache_backend) }
+    context 'with USE_THREAD_MEMORY_CACHE defined' do
+      before do
+        stub_env('USE_THREAD_MEMORY_CACHE', '1')
+      end
+
+      it { expect(described_class.l1_cache_backend).to eq(Gitlab::ThreadMemoryCache.cache_backend) }
+    end
+
+    it { expect(described_class.l1_cache_backend).to eq(Gitlab::ProcessMemoryCache.cache_backend) }
     it { expect(described_class.l2_cache_backend).to eq(Rails.cache) }

     it 'caches the status in L1 and L2 caches',
...
@@ -193,8 +193,10 @@ RSpec.configure do |config|
     # expect(Gitlab::Git::KeepAround).to receive(:execute).and_call_original
     allow(Gitlab::Git::KeepAround).to receive(:execute)

-    # Clear thread cache and Sidekiq queues
-    Gitlab::ThreadMemoryCache.cache_backend.clear
+    [Gitlab::ThreadMemoryCache, Gitlab::ProcessMemoryCache].each do |cache|
+      cache.cache_backend.clear
+    end
+
     Sidekiq::Worker.clear_all

     # Temporary patch to force admin mode to be active by default in tests when
...