Bug #11283 (closed): duplicate key value violates unique constraint "index_nodes_on_slot_number"
Start date: 03/17/2017
Due date:
% Done: 100%
Estimated time: (Total: 0.00 h)
Story points: -
Description
This is part of a 2.6G log file:
e51c5:/var/log/postgresql# ls -lh postgresql-9.3-main.log-20170326
-rw-r----- 1 postgres adm 2.6G Mar 26 06:49 postgresql-9.3-main.log-20170326
2017-03-16 21:55:37 UTC ERROR: duplicate key value violates unique constraint "index_nodes_on_slot_number"
2017-03-16 21:55:37 UTC DETAIL: Key (slot_number)=(6) already exists.
2017-03-16 21:55:37 UTC STATEMENT: UPDATE "nodes" SET "last_ping_at" = '2017-03-16 21:55:37.327129', "ip_address" = '10.38.64.23', "first_ping_at" = '2017-03-16 21:55:37.327129', "slot_number" = 6, "updated_at" = '2017-03-16 21:55:37.347028', "modified_at" = '2017-03-16 21:55:37.347028', "modified_by_client_uuid" = NULL, "info" = '--- ec2_instance_id: compute-z10duswzkdoq87w-e51c5 last_action: Prepared by Node Manager ping_secret: 2louyeuuylbqdz1c6swx6fmfm93ocbhcppnb9onn7qgh47k7vr ', "properties" = '--- cloud_node: price: 0.149 size: Standard_D11_v2 ' WHERE "nodes"."id" = 552
The theory is that this is actually a normal condition (the uniqueness conflict gets handled after the statement returns), combined with lazy/chatty logging on the Rails side; see the sketch below.
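For illustration only, a minimal sketch of the pattern assumed here, not the actual API server code: two processes race to claim the same slot_number, the loser's UPDATE hits the unique index, Rails raises ActiveRecord::RecordNotUnique, and the caller retries with another slot. The helper next_free_slot_number is hypothetical; the model and column names come from the log excerpt above.

# Hypothetical sketch of the assumed retry-after-conflict pattern.
# next_free_slot_number is a made-up helper for illustration.
def assign_slot(node)
  loop do
    node.slot_number = next_free_slot_number
    begin
      node.save!                       # issues the UPDATE seen in the log
      return node.slot_number
    rescue ActiveRecord::RecordNotUnique
      # Another node took this slot first. PostgreSQL has already written
      # the ERROR/DETAIL/STATEMENT lines to its log; we just retry.
    end
  end
end

If this is roughly what happens, every collision costs one full statement dump in the PostgreSQL log even though the application recovers correctly.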
We should fix this so the pg logs don't fill up the disk and/or burn CPU compacting the logfile; even if Rails ends up doing the right thing, it is CPU and I/O wasted for no reason. If we can't deal with it at the Rails level, could there at least be a flag to avoid logging these errors?
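One candidate "flag", assuming nothing better can be done on the Rails side: PostgreSQL's log_min_error_statement controls whether the offending SQL statement (the bulky STATEMENT line above) is written to the server log when an error occurs, while the one-line ERROR/DETAIL entries are governed by log_min_messages. A superuser could raise the threshold for the API server's database role so expected duplicate-key errors no longer dump the full UPDATE. Sketch only; the role name "arvados" is an assumption, and this would normally be run by a DBA rather than from the application connection.

# Hypothetical sketch -- requires a superuser connection, and the role name
# "arvados" is an assumption. With log_min_error_statement raised to PANIC,
# ordinary ERRORs from this role no longer log the offending STATEMENT;
# the ERROR/DETAIL lines are still subject to log_min_messages.
ActiveRecord::Base.connection.execute(
  "ALTER ROLE arvados SET log_min_error_statement = 'panic'"
)

Note this is a blunt instrument: it suppresses statement logging for all errors from that role, not just this particular constraint violation.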