DanLevy.net

The Timestamp That Destroyed a Database

A post-mortem on precision and why your timestamps lie.

It was 3:47 AM when the alerts started firing. Order confirmations were appearing before orders were placed. Payment timestamps predated checkout. Our distributed system had somehow achieved time travel—and not the fun kind.

The culprit? An 8-byte timestamp field that we’d been using for years without a second thought. Turns out, there’s a lot hiding in those 8 bytes that can absolutely wreck your database, your microservices, and your sleep schedule.

The Setup: A Tale of Two Services

Our architecture was textbook 2025: order service in New York, inventory service in Dublin, analytics in Singapore. Each service wrote timestamps when recording events. We’d periodically sync data to a central warehouse for reporting.

-- Order service schema
CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  created_at TIMESTAMP NOT NULL DEFAULT NOW(),
  user_id INTEGER NOT NULL
);

-- Inventory service schema
CREATE TABLE inventory_events (
  id SERIAL PRIMARY KEY,
  created_at TIMESTAMP NOT NULL DEFAULT NOW(),
  product_id INTEGER NOT NULL
);

Looks reasonable, right? We thought so too. For 18 months.

When Things Fall Apart

The first sign something was wrong came from our data team: “Why are inventory reservations happening milliseconds after payment confirmation, even though the app flow is reversed?”

We dug in. The timestamps weren’t just slightly off—they were chaotically scrambled when events crossed service boundaries. Orders placed at “11:00 AM” in New York appeared as “4:00 PM” in our Dublin logs, but get this: both were being stored without timezone information.
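The scrambling is easy to reproduce outside the database. Here's a minimal Node sketch (assuming a full-ICU build, the default in modern Node releases): the same instant, rendered as naive wall-clock text in each region, produces two strings that can never be reconciled once the zone information is dropped.

```javascript
// One instant: 15:00 UTC on June 1, 2025
const instant = new Date('2025-06-01T15:00:00Z');

// Render it as a naive "YYYY-MM-DD HH:mm:ss" string, which is
// effectively what a TIMESTAMP-without-time-zone column stores
const naive = (tz) => instant.toLocaleString('sv-SE', { timeZone: tz });

console.log(naive('America/New_York')); // "2025-06-01 11:00:00"
console.log(naive('Europe/Dublin'));    // "2025-06-01 16:00:00"
// Same event, two different strings; drop the zone and they can
// never be matched back up.
```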

PostgreSQL has two timestamp types: TIMESTAMP and TIMESTAMPTZ. We’d been using the former. Big mistake.

The Difference That Matters

TIMESTAMP (also known as TIMESTAMP WITHOUT TIME ZONE) stores the literal wall-clock value you hand it, with no zone attached. The session's timezone setting is ignored on both input and output: "11:00 AM" goes in, "11:00 AM" comes out, whether it was written from New York or Dublin.

TIMESTAMPTZ (also known as TIMESTAMP WITH TIME ZONE) converts input to UTC using the session's timezone setting, stores that UTC instant, and converts back to the session timezone on output. Despite the name, it stores no timezone at all; it stores an unambiguous point in time.

As Markus Winand explains in SQL Performance Explained (2012), “The WITH TIME ZONE types automatically convert timestamps to UTC for storage and back to the local time zone for display… a feature that saves developers from timezone hell.”

We’d skipped the feature that would have saved us.

Cross-database reality check: MySQL’s DATETIME vs TIMESTAMP distinction is similar but not identical—MySQL’s TIMESTAMP converts to UTC but has a narrower range (1970-2038). Oracle uses TIMESTAMP WITH TIME ZONE but actually stores the timezone offset rather than converting to UTC. SQL Server’s DATETIMEOFFSET stores the offset explicitly. The lesson is universal: know your database’s timestamp semantics before you’re debugging production at 3 AM.

The Microsecond Problem

But wait—there’s more! Even when we started migrating to TIMESTAMPTZ, we ran into precision issues.

PostgreSQL timestamps support microsecond precision (6 decimal places). JavaScript’s Date object? Millisecond precision (3 decimal places). When our Node.js services wrote timestamps, PostgreSQL happily accepted them and padded with zeros.

// JavaScript - millisecond precision
const now = new Date(); // 2025-12-29T14:30:00.123Z
// Gets stored in Postgres as:
// 2025-12-29 14:30:00.123000

Not a huge deal until we started doing timestamp arithmetic for things like rate limiting or deduplication. Events that should have been different got silently truncated to identical timestamps.

-- These two events happened in the same millisecond
-- but different microseconds. Postgres can see it:
SELECT
  '2025-12-29 14:30:00.123456'::TIMESTAMP,
  '2025-12-29 14:30:00.123789'::TIMESTAMP;

-- But our app code couldn't distinguish them:
WHERE created_at > '2025-12-29 14:30:00.123'
-- Matches both! Silent data corruption.

Martin Kleppmann’s Designing Data-Intensive Applications (2017) warns about this exact scenario: “Different systems have different notions of time… even within a single datacenter, clocks can drift by hundreds of milliseconds.”

We had microsecond precision in the database but were making millisecond-level decisions in code. Recipe for chaos.
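The truncation is visible directly in Node. A small sketch (this relies on V8's lenient date parsing, which accepts extra fractional digits and keeps only milliseconds; the ECMAScript spec doesn't guarantee this behavior):

```javascript
// Two Postgres timestamps that differ only at microsecond precision
const a = new Date('2025-12-29T14:30:00.123456Z');
const b = new Date('2025-12-29T14:30:00.123789Z');

// JavaScript Date holds milliseconds, so the distinction is gone
console.log(a.getTime() === b.getTime()); // true
console.log(a.toISOString());             // "2025-12-29T14:30:00.123Z"
```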

DST: The Gift That Keeps on Breaking

Then came daylight saving time.

Our European services all ran with timezone-aware timestamps (TIMESTAMPTZ). Perfect, right? Except for the 2 AM "spring forward" hour that doesn't exist on local clocks during the transition.

-- March 9, 2025 at 2:30 AM doesn't exist in America/New_York
SET timezone = 'America/New_York';
SELECT '2025-03-09 02:30:00'::TIMESTAMPTZ;
-- PostgreSQL helpfully "adjusts" this to 03:30:00
-- Your app logic? Not so helpful.

We had scheduled jobs that were supposed to run at 2:15 AM during DST transitions. PostgreSQL silently adjusted them to 3:15 AM. Our scheduled maintenance windows? Missed. Our backup jobs? Delayed by an hour.

The PostgreSQL docs warn about this: “Input values that are ambiguous or invalid during a daylight-savings-time transition are resolved using the pre-transition offset.” Translation: Postgres guesses, and you deal with the consequences.
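Application runtimes make the same guess. A Node sketch (assuming a platform where assigning process.env.TZ takes effect, as it does in Node on Linux and macOS):

```javascript
process.env.TZ = 'America/New_York';

// 2:30 AM on March 9, 2025 doesn't exist: clocks jump from 02:00 to 03:00.
// Months are 0-based, so 2 = March.
const scheduled = new Date(2025, 2, 9, 2, 30);

// The engine resolves the gap using the pre-transition offset (EST),
// so the wall-clock reading comes back as 3:30 AM EDT
console.log(scheduled.getHours()); // 3
```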

The Migration: Fixing Our Mess

Here’s what we did to fix it:

1. Audit Every Timestamp Column

-- Find all TIMESTAMP columns (without timezone)
SELECT
  table_name,
  column_name,
  data_type
FROM information_schema.columns
WHERE data_type = 'timestamp without time zone'
  AND table_schema = 'public';

We found 247 columns across 83 tables. Yep.

2. Migrate to TIMESTAMPTZ

-- For each table, a careful migration:
BEGIN;

-- Add new column
ALTER TABLE orders
  ADD COLUMN created_at_tz TIMESTAMPTZ;

-- Backfill, interpreting naive values in the service's known zone
UPDATE orders
SET created_at_tz = created_at AT TIME ZONE 'America/New_York';

-- Verify data looks correct
SELECT created_at, created_at_tz FROM orders LIMIT 100;

-- Drop old, rename new, and restore the original constraints
ALTER TABLE orders DROP COLUMN created_at;
ALTER TABLE orders RENAME COLUMN created_at_tz TO created_at;
ALTER TABLE orders ALTER COLUMN created_at SET NOT NULL;
ALTER TABLE orders ALTER COLUMN created_at SET DEFAULT NOW();

COMMIT;

Critical: We had to know what timezone each service was writing from. If you don’t know, you’re guessing. Guessing with production data is bad.

3. Standardize Application Code

All services now explicitly convert to UTC before writing:

// Before: implicit timezone chaos
const now = new Date();
await db.query('INSERT INTO orders (created_at) VALUES ($1)', [now]);

// After: explicit UTC
const now = new Date();
const utc = now.toISOString(); // Always UTC
await db.query('INSERT INTO orders (created_at) VALUES ($1)', [utc]);

We also standardized on Temporal for new JavaScript code. No more Date confusion.

import { Temporal } from '@js-temporal/polyfill';
const now = Temporal.Now.zonedDateTimeISO('America/New_York');
const utc = now.toInstant().toString();
// e.g. "2025-12-29T19:30:00.123Z" - an explicit UTC instant
// (Temporal.Instant carries up to nanosecond precision when available)

4. Add Timezone Validation

We added database constraints to prevent future mistakes:

-- Ensure all new timestamp columns include timezone
CREATE OR REPLACE FUNCTION check_timestamp_tz()
RETURNS event_trigger AS $$
DECLARE
  obj record;
BEGIN
  FOR obj IN
    SELECT * FROM pg_event_trigger_ddl_commands()
    WHERE command_tag IN ('CREATE TABLE', 'ALTER TABLE')
  LOOP
    -- Check for TIMESTAMP WITHOUT TIME ZONE usage.
    -- object_identity is schema-qualified (e.g. 'public.orders'),
    -- so match on schema and table name together.
    IF EXISTS (
      SELECT 1 FROM information_schema.columns
      WHERE table_schema || '.' || table_name = obj.object_identity
      AND data_type = 'timestamp without time zone'
    ) THEN
      RAISE EXCEPTION 'Use TIMESTAMPTZ, not TIMESTAMP';
    END IF;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER prevent_timestamp_without_tz
ON ddl_command_end
WHEN TAG IN ('CREATE TABLE', 'ALTER TABLE')
EXECUTE FUNCTION check_timestamp_tz();

Yep, we went nuclear. No more TIMESTAMP WITHOUT TIME ZONE allowed. Period.

Lessons from the Trenches

1. Always use TIMESTAMPTZ for events
Unless you have a really good reason (like storing user-entered birthdays that have no time component), use TIMESTAMPTZ. The overhead is identical—both are 8 bytes—but TIMESTAMPTZ saves you from yourself.

2. Be mindful of precision mismatches
If your app uses millisecond precision and your database supports microseconds, make sure you’re not inadvertently creating duplicates or missing events.

3. UTC everywhere, convert at the edges
Store UTC in databases. Convert to local timezones only when displaying to users. This is the standard advice from teams at Etsy, Stack Overflow, and pretty much everyone who’s been burned.
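In code, that means one conversion on the way in and one on the way out; everything in between compares plain UTC instants. A sketch of both edges:

```javascript
// Write path: normalize to UTC before anything touches storage
const stored = new Date('2025-12-29T14:30:00-05:00').toISOString();
console.log(stored); // "2025-12-29T19:30:00.000Z"

// Read path: convert to the viewer's zone only at render time
const display = new Date(stored).toLocaleString('en-US', {
  timeZone: 'Europe/Dublin',
  dateStyle: 'medium',
  timeStyle: 'short',
});
console.log(display); // e.g. "Dec 29, 2025, 7:30 PM"
```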

4. Test around DST boundaries
March and November aren’t just months—they’re edge-case gold mines. Write tests that specifically check DST transitions.

5. Let the database set timestamps
Application servers can have clock skew. Use DEFAULT NOW() in Postgres so the timestamp reflects the database server's canonical time, not some app server halfway around the world.

6. Consider application-level timestamps for distributed systems
In microservices, sometimes you want to know when the event occurred in the originating service, not when it hit the database. Use both: occurred_at (app timestamp) and recorded_at (DB timestamp).
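With both columns recorded, a small helper can surface clock skew between the originating service and the database. A sketch (the occurred_at/recorded_at names follow the convention above; the 5-second threshold is illustrative):

```javascript
// Milliseconds of disagreement between the app clock and the DB clock
function clockSkewMs(occurredAt, recordedAt) {
  return Math.abs(Date.parse(recordedAt) - Date.parse(occurredAt));
}

// Flag events whose skew exceeds a tolerance worth alerting on
const suspicious = (event, thresholdMs = 5000) =>
  clockSkewMs(event.occurred_at, event.recorded_at) > thresholdMs;

console.log(
  clockSkewMs('2025-12-29T14:30:00.000Z', '2025-12-29T14:30:02.500Z')
); // 2500
```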

The Aftermath

It took us 6 weeks to fully migrate. We had to coordinate downtime windows across three continents. We developed migration scripts, tested extensively in staging, and still managed to break reporting queries for 48 hours (wrong timezone assumptions in analytics code).

But the result? No more time travel bugs. No more “impossible” event orderings. No more 3 AM pages because daylight saving time did something weird.

Those 8 bytes matter more than you think.

Further Reading

- Markus Winand, SQL Performance Explained (2012)
- Martin Kleppmann, Designing Data-Intensive Applications (O'Reilly, 2017)
- PostgreSQL documentation, "Date/Time Types"

The bottom line: If you’re still using TIMESTAMP WITHOUT TIME ZONE in production, go check your schema right now. Your future self will thank you—probably at 3 AM when something breaks.
