Currently a daily cron job resets the serial used for the primary key back to 1 to prevent overflow, but the correct long-term solution is probably to use a UUID instead.
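For reference, a minimal sketch of what the daily reset could look like, assuming the store is PostgreSQL and the sequence behind the serial column is named `ingest_records_id_seq` (the DSN, table, and sequence names here are placeholders, not taken from the actual schema):

```python
# Hypothetical sketch of the daily reset job (assumes PostgreSQL + psycopg2;
# names are placeholders, not the real schema).
import psycopg2


def reset_pk_sequence(dsn: str) -> None:
    """Restart the serial sequence backing the primary key at 1."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            # Wind the sequence back so the serial column cannot overflow.
            cur.execute("ALTER SEQUENCE ingest_records_id_seq RESTART WITH 1")


if __name__ == "__main__":
    reset_pk_sequence("postgresql://localhost/ingest")
```

This only works because processed rows are eventually removed; otherwise restarting the sequence would immediately collide with existing keys, which is the edge case described next.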
Note that the worst case with the cron-job hack in place is when a primary key of 1 already exists in the database (extremely unlikely given the volume of records we're writing to Kinesis). In that case, ingesting a new file violates the unique constraint on the primary key column and the file fails to be ingested. This is only a temporary issue: files are ingested transactionally (i.e. this won't result in duplicate writes to Kinesis), and ingestion will proceed on another worker instance or once the records with conflicting primary keys have all been processed.
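A sketch of the proposed UUID fix, assuming the table is mapped with SQLAlchemy on PostgreSQL (the `IngestRecord` model and table name are hypothetical, not the project's actual model): with a client-generated UUID primary key there is no sequence to overflow, no reset job to run, and the conflict scenario above goes away.

```python
# Hypothetical sketch only; assumes SQLAlchemy + PostgreSQL, and the model
# and table names are placeholders.
import uuid

from sqlalchemy import Column
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class IngestRecord(Base):
    __tablename__ = "ingest_records"

    # UUIDs are generated client-side per row, so there is no serial/sequence
    # involved and nothing to overflow or reset.
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
```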