Storing rows with a start and end date is nothing unusual. Many applications (hotel bookings, conference room reservations, working time tracking) must store time ranges in a database without overlaps. You should prevent overlapping rows because, for example, guests will be furious when they arrive and learn they can't stay due to a double room reservation. However, ensuring those ranges never overlap is not easy because of race conditions. Luckily, you can ask the database to guarantee this constraint for you.
-- PostgreSQL (not supported by MySQL or MariaDB)
CREATE EXTENSION btree_gist;

CREATE TABLE rooms (
  room int,
  period tstzrange,
  EXCLUDE USING gist (room WITH =, period WITH &&)
);

INSERT INTO rooms (room, period) VALUES (
  5, '[2023-03-27 16:00:00+00,2023-03-28 10:00:00+00]'
); -- OK: 1 row affected

INSERT INTO rooms (room, period) VALUES (
  5, '[2023-03-28 16:00:00+00,2023-03-29 10:00:00+00]'
); -- OK: 1 row affected

INSERT INTO rooms (room, period) VALUES (
  5, '[2023-03-28 18:00:00+00,2023-03-31 10:00:00+00]'
); -- ERROR: conflicting value

INSERT INTO rooms (room, period) VALUES (
  6, '[2023-03-28 18:00:00+00,2023-03-31 10:00:00+00]'
); -- OK: 1 row affected
Many applications store time ranges: a start and an end time are combined whether you make hotel bookings, reserve meeting rooms or record your working time. And many of them have a strict requirement that the database must never contain overlapping rows. Resolving working time that was accidentally recorded for two clients at once is simple. But it is much more complicated if you overbook your hotel by assigning a room twice: the travelers may have already reserved a flight, rented a car and told business partners they will attend the meeting.
At first, enforcing non-overlapping time ranges sounds very simple: you only insert a new row if no conflicting one can be found. This may work in tests, but it ignores that your application will be used by more than one user at the same time. A race condition between searching for overlapping rows and inserting the new one will violate the non-overlapping requirement, as sketched below.
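Here is a minimal sketch of that naive check-then-insert approach, reusing the rooms table and values from the example above; only the flow is illustrative:

-- Naive approach: check for conflicts, then insert. Racy, because another
-- transaction can insert a conflicting booking between the SELECT and the INSERT.
SELECT count(*) FROM rooms
WHERE room = 5
AND period && '[2023-03-28 18:00:00+00,2023-03-31 10:00:00+00]'::tstzrange;

-- The application sees count = 0 and proceeds:
INSERT INTO rooms (room, period) VALUES (
  5, '[2023-03-28 18:00:00+00,2023-03-31 10:00:00+00]'
);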
The only safe way to prevent those conflicts in application code is pessimistic locking. But error-free locking is not easy to implement, as the tip Transactional Locking to Prevent Race Conditions explains. And whether you use Redis or the more efficient database transactions for locking, you will run into a new problem: it is not easy to choose the resource that should be locked. The most efficient approach would be to lock the affected time range itself, but locks only work on simple values, not ranges. So in some cases you will have to lock the complete parent resource (e.g. the hotel), which blocks any simultaneous operation on it for a single booking. With our hotel example, we could instead lock a single room within the hotel, which would be much more efficient, but depending on the application and the complexity of the booking workflows, this may be a lot of work. The simple requirement of non-overlapping ranges ends in either reduced performance (broad locking) or added complexity (implementation overhead) when implemented in application code.
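As a rough sketch of the per-room locking variant, assuming a hypothetical room_catalog table with one row per room, it could look like this in PostgreSQL:

BEGIN;

-- Serialize all bookings for room 5 by locking its catalog row
-- (room_catalog is a hypothetical table holding one row per room).
SELECT 1 FROM room_catalog WHERE room = 5 FOR UPDATE;

-- Only now is the check-then-insert sequence safe against concurrent
-- bookings of the same room.
SELECT count(*) FROM rooms
WHERE room = 5
AND period && '[2023-03-28 18:00:00+00,2023-03-31 10:00:00+00]'::tstzrange;

INSERT INTO rooms (room, period) VALUES (
  5, '[2023-03-28 18:00:00+00,2023-03-31 10:00:00+00]'
);

COMMIT;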
Most tips I share have a common trait: you shouldn't re-implement functionality your database already provides, because the database's built-in implementation will be much more efficient.
The PostgreSQL developers recognized this as a common application problem and built a solution.
They first added range types, which let you store a time range as a single value in your schema.
With a range like '[2023-03-27 16:00:00+00,2023-03-28 10:00:00+00]', you can natively express a check-in at 16:00 and a check-out at 10:00 the next day in a way the database can understand.
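As a small illustration (not part of the original example), the stored range can be queried directly with PostgreSQL's range functions and operators:

-- Which rooms are occupied at a given moment? The @> operator checks
-- whether the range contains the timestamp.
SELECT room, lower(period) AS check_in, upper(period) AS check_out
FROM rooms
WHERE period @> '2023-03-27 20:00:00+00'::timestamptz;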
And the database will prevent overlapping time ranges when you add an exclusion constraint on the time range (period WITH &&) for the same room number (room WITH =).
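If the table already exists, the same constraint can be added afterwards; the constraint name below is just an illustrative choice:

-- Requires the btree_gist extension so that plain columns like room
-- can be used in a GiST exclusion constraint.
ALTER TABLE rooms
ADD CONSTRAINT rooms_no_overlapping_periods
EXCLUDE USING gist (room WITH =, period WITH &&);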
It is now impossible to save overlapping time ranges in the database.
You should still do your manual check for better usability, but the database will catch anything that slips through because of race conditions.
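A violated exclusion constraint is reported as SQLSTATE 23P01 (exclusion_violation), which your application can catch to show a friendly message. A minimal server-side sketch:

-- Try the insert and react to the exclusion constraint instead of
-- relying only on a pre-check (sketch using a PL/pgSQL block).
DO $$
BEGIN
  INSERT INTO rooms (room, period) VALUES (
    5, '[2023-03-28 18:00:00+00,2023-03-31 10:00:00+00]'
  );
EXCEPTION WHEN exclusion_violation THEN
  RAISE NOTICE 'Room 5 is already booked for an overlapping period.';
END;
$$;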