Only PostgreSQL may require an action here, and only if you reach 2 billion records.

Applies to Requirement Yogi for Confluence, version 2.6.5 and above.

Introduction

Prior to version 2.6.5, all our database records were identified by integer IDs, which limit a table to about 2.1 billion rows (2,147,483,647).

In version 2.6.5, we performed the first step of upgrading those IDs to the type "long" (64-bit), which supports an effectively unlimited number of rows. This first step covers the table AO_32F7CE_AOINTEGRATION_QUEUE, which contains the messages sent to Jira.

In future versions, we will change the rest of the tables.

As a database administrator, when should I act?

Expected error on PostgreSQL

On PostgreSQL only: when the queue reaches 2 billion records, PostgreSQL refuses to insert new messages.

Users may come to you with an error such as "The requirements couldn't be reindexed for Requirement Yogi. (...) addToQueue()".

Or you may see, in the logs, an error such as:

2021-05-11 12:54:36.043 CEST [71956] ERROR:  nextval: reached maximum value of sequence "AO_32F7CE_AOINTEGRATION_QUEUE_ID_seq" (2147483647)
2021-05-11 12:54:36.043 CEST [71956] STATEMENT:  SELECT NEXTVAL('"AO_32F7CE_AOINTEGRATION_QUEUE_ID_seq"')
2021-05-11 12:54:36.044 CEST [71956] ERROR:  current transaction is aborted, commands ignored until end of transaction block
2021-05-11 12:54:36.044 CEST [71956] STATEMENT:  select 1

From that point on, records are no longer saved. You must increase the maximum value of the sequence.
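To check how close the sequence is to its limit before the error occurs, you can query its current value and compare it to the integer maximum of 2,147,483,647 (a sketch; run it against the Confluence database):

SELECT last_value FROM "AO_32F7CE_AOINTEGRATION_QUEUE_ID_seq";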

How to fix it in PostgreSQL

The type of the column is already "big integer" and doesn't need to be changed. Only the sequence needs to have its maximum value increased:

ALTER SEQUENCE "AO_32F7CE_AOINTEGRATION_QUEUE_ID_seq" AS bigint;
ALTER SEQUENCE "AO_32F7CE_AOINTEGRATION_QUEUE_ID_seq" MAXVALUE 9223372036854775807;
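To verify the change, you can read the sequence's new maximum from the pg_sequences system view (available in PostgreSQL 10 and above); it should now report 9223372036854775807:

SELECT sequencename, max_value FROM pg_sequences WHERE sequencename = 'AO_32F7CE_AOINTEGRATION_QUEUE_ID_seq';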

As of version 2.6.5, only the table AO_32F7CE_AOINTEGRATION_QUEUE is affected, and only on PostgreSQL.