So I have this table:
=> \d tokens
                Table "public.tokens"
   Column   |           Type           | Modifiers
------------+--------------------------+-----------
 session_id | character varying        | not null
 username   | character varying        | not null
 expires    | timestamp with time zone | not null
 id         | character varying        | not null
 created    | timestamp with time zone | not null
Indexes:
    "tokens_pk" PRIMARY KEY, btree (id)
I measured the performance of UPDATEs/SELECTs on this table using the primary key in the WHERE clause. Most of the time the query completes in 2-3 ms, but sometimes it takes 500-1000 ms (only 0.3% of the time, but still).
My first two reactions were to REINDEX, and to DELETE the old records (they are useless to me) followed by VACUUM/ANALYZE. Neither changed anything.
According to my measurements over the last 250k SELECTs/UPDATEs, the average query execution time was 3 ms, with a standard deviation of roughly 30 ms (variance ≈ 1000 ms²).
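To illustrate the shape of this distribution: a tiny fraction of 500-1000 ms outliers among mostly 2-3 ms queries is enough to inflate the standard deviation to tens of ms while the bulk of queries stays fast (illustrative Python; the latency samples below are made up to roughly match the proportions described, not my actual log):

```python
import statistics

# Simulated latency log: 99.7% fast queries, 0.3% slow outliers,
# roughly mimicking the distribution described above (made-up values).
latencies_ms = [3.0] * 997 + [500.0, 750.0, 1000.0]

mean = statistics.fmean(latencies_ms)
stdev = statistics.pstdev(latencies_ms)

print(f"mean     = {mean:.1f} ms")
print(f"stdev    = {stdev:.1f} ms")
print(f"variance = {stdev**2:.0f} ms^2")
```

The mean barely moves, but the variance is dominated entirely by the rare slow queries, which is why the standard deviation looks so large next to the average.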
Now, something worth knowing is that my id is quite large. The query plan is the following:
QUERY PLAN
-------------------------------------------------------------------------------------------------------
Index Scan using tokens_pk on tokens (cost=0.00..8.27 rows=1 width=128)
Index Cond: ((id)::text = '3aaa6faaab3aaadbaaaaaae181faaabc6aaaaa7c7aaaacce1da3aaa1222a5aaa'::text)
(2 rows)
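For reference on the id size: the value in the plan is 64 hex characters, i.e. 32 bytes of entropy stored as roughly twice that in text form (a quick Python illustration; this assumes the ids really are always hex):

```python
# The id from the query plan above: 64 hex characters.
token = "3aaa6faaab3aaadbaaaaaae181faaabc6aaaaa7c7aaaacce1da3aaa1222a5aaa"

as_text = token.encode("ascii")   # how a varchar column stores it
as_bytes = bytes.fromhex(token)   # how a bytea column could store it

print(len(as_text), "bytes as text")   # 64
print(len(as_bytes), "bytes as raw")   # 32
```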
I should add that the load on the server is quite low: there's CPU/RAM to spare, and I/O is low as well.
Would anyone know what's wrong with this table?