Feature #9066
[Keep] keepstore -max-requests limits concurrent requests
Status: Closed (100% done)
Description
Background
Currently, keepstore accepts an unbounded number of client requests, each handled by a goroutine that blocks until a memory buffer is available. If too many clients connect, keepstore runs out of file descriptors. That means it can't accept new connections, and, even worse, it sabotages the requests at the front of the queue: keepstore can't open files on disk or make connections to remote storage services.
When there are lots of queued requests, the ones at the back of the queue will eventually time out anyway; it would be better if those clients could find out right away that the server is overloaded.
Proposed fix
Add a -max-requests argument. When that many requests are already being handled or queued, respond to new requests immediately with "503 Service Unavailable".
If the -max-requests argument is not given, or is set to 0, use 2x max-buffers.
Note this applies to all requests, even ones that don't wait for buffers (e.g., index and status).
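A minimal sketch of the defaulting rule above, assuming flag variables named maxBuffers and maxRequests; the real keepstore flag wiring and the -max-buffers default shown here are illustrative:

<pre>
// Hypothetical sketch of the -max-requests default; names and the
// -max-buffers default value are illustrative, not keepstore's actual code.
package main

import "flag"

var (
	maxBuffers  = flag.Int("max-buffers", 128, "maximum buffers held for data transfers")
	maxRequests = flag.Int("max-requests", 0, "maximum concurrent requests (0 = 2 * max-buffers)")
)

func effectiveMaxRequests() int {
	if *maxRequests > 0 {
		return *maxRequests
	}
	// Flag omitted or set to 0: fall back to twice the buffer pool size.
	return 2 * *maxBuffers
}
</pre>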
Implementation
Add a connection-limiting handler constructor, NewConnectionLimiter, to source:sdk/go/httpserver:
<pre>
package httpserver

import "net/http"

type limiterHandler struct {
	clients chan struct{}
	handler http.Handler
}

func NewConnectionLimiter(maxClients int, handler http.Handler) http.Handler {
	return &limiterHandler{
		clients: make(chan struct{}, maxClients),
		handler: handler,
	}
}

func (h *limiterHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) {
	select {
	case h.clients <- struct{}{}:
	default:
		// Already serving maxClients requests: reject this one right away.
		resp.WriteHeader(http.StatusServiceUnavailable)
		return
	}
	h.handler.ServeHTTP(resp, req)
	// Release the slot once the wrapped handler returns.
	<-h.clients
}
</pre>
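A throwaway sketch of how the 503 behavior could be exercised with net/http/httptest, assuming the NewConnectionLimiter sketch above is available in the same package; the timings are illustrative:

<pre>
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A deliberately slow handler so the single client slot stays occupied.
	slow := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second)
	})
	srv := httptest.NewServer(NewConnectionLimiter(1, slow))
	defer srv.Close()

	go http.Get(srv.URL) // first request takes the only slot
	time.Sleep(200 * time.Millisecond)

	resp, err := http.Get(srv.URL) // second request should be rejected immediately
	if err == nil {
		fmt.Println(resp.StatusCode) // expect 503 while the first request is in flight
		resp.Body.Close()
	}
}
</pre>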
In source:services/keepstore/logging_router.go, wrap MakeRESTRouter() with this handler.
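A hypothetical sketch of that wiring; MakeRESTRouter() is keepstore's existing router constructor (referenced in a comment, not defined here), and the import path and maxRequests variable are assumptions:

<pre>
package main

import (
	"net/http"

	// Import path assumed; adjust to wherever sdk/go/httpserver actually lives.
	"git.curoverse.com/arvados.git/sdk/go/httpserver"
)

// wrapWithLimiter shows where the limiter sits: every incoming request,
// including index and status, passes through it before reaching the router,
// e.g. http.Handle("/", wrapWithLimiter(maxRequests, MakeRESTRouter())).
func wrapWithLimiter(maxRequests int, router http.Handler) http.Handler {
	return httpserver.NewConnectionLimiter(maxRequests, router)
}
</pre>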
Additional features, time permitting:
- Add a Len() method ("return len(h.clients)") to limiterHandler, and report that value in keepstore's status.json (see the sketch below).
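A minimal sketch of that addition, assuming the limiterHandler type from the implementation above; how keepstore's status.json handler consumes the value is left out:

<pre>
// Len reports how many requests are currently being handled or queued.
// The value is a snapshot of the channel length and may change immediately,
// which is acceptable for status.json reporting.
func (h *limiterHandler) Len() int {
	return len(h.clients)
}
</pre>

Since NewConnectionLimiter returns a plain http.Handler, keepstore would need to keep a reference to the concrete *limiterHandler (or have the constructor return it) in order to call Len() when building the status response.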