Story #13925


Default keep cache scales with requested container size

Added by Peter Amstutz over 6 years ago. Updated almost 2 years ago.

Status: New
Priority: Normal
Assigned To: -
Category: -
Target version: -
Start date:
Due date:
% Done: 0%
Estimated time:
Story points: -
Release:
Release relationship: Auto

Description

The default keep cache size is 256 MiB. For certain workloads this is much too small; in particular, multithreaded workloads that read from multiple files experience severe cache contention. Unfortunately, it is difficult for users to diagnose performance problems caused by the keep cache, and the usual response is simply to request more resources via runtime_constraints. However, because the keep cache does not scale with the container/machine size, this has no effect.

Based on the observation that (a) users request more VCPUs for multithreaded workloads and (b) users' typical response to performance problems is to request more resources, we should scale the default keep cache based on runtime_constraints.

The default cache size should be either a percentage of the requested RAM (say 12.5%) or proportional to the number of cores (say 384 MiB per core).

This could be computed by arvados-cwl-runner (a-c-r) or on the API server.
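A minimal sketch of how such a default could be derived, assuming Python and a runtime_constraints dict with "vcpus" and "ram" (bytes) fields; the per_core and ram_fraction values, and the use of max() to combine the two proposals with the existing 256 MiB floor, are illustrative choices rather than part of the proposal:

    MIB = 1024 * 1024

    def default_keep_cache(runtime_constraints, per_core=384 * MIB, ram_fraction=0.125):
        """Scale the default keep cache size (bytes) with the requested container size."""
        vcpus = runtime_constraints.get("vcpus", 1)
        ram = runtime_constraints.get("ram", 0)  # requested RAM in bytes

        by_cores = per_core * vcpus         # e.g. 384 MiB per VCPU
        by_ram = int(ram * ram_fraction)    # e.g. 12.5% of requested RAM

        # Never fall below the current 256 MiB default.
        return max(256 * MIB, by_cores, by_ram)

For example, a container requesting 4 VCPUs and 8 GiB of RAM would get max(256 MiB, 1536 MiB, 1024 MiB) = 1536 MiB of keep cache under these illustrative numbers.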
