Timestamp Decoder

Decode any timestamp or time-based ID — auto-detect 16 formats

What is a Unix Timestamp?

A Unix timestamp (also known as Epoch time or POSIX time) is a system for tracking time as a running total of seconds since the Unix epoch — January 1, 1970, 00:00:00 UTC. This simple numeric representation has become the universal standard for storing and transmitting time data in computing systems worldwide.

Unix timestamps come in three common precisions: seconds (10 digits, e.g., 1350508407), milliseconds (13 digits, e.g., 1350508407000), and microseconds (16 digits). Most programming languages and databases use either seconds or milliseconds as their default precision. The timestamp is timezone-independent, always representing UTC, which makes it ideal for distributed systems where servers span multiple time zones.
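The digit-count heuristic above can be sketched in a few lines. This is a minimal illustration (the function name decodeUnix is ours, not part of any library), and the heuristic assumes dates in the modern era, since a short number could in principle be an early-1970s millisecond value:

```javascript
// Sketch: classify a numeric timestamp by digit count, then convert to a Date.
// ~10 digits -> seconds, ~13 -> milliseconds, ~16 -> microseconds.
function decodeUnix(input) {
  const digits = String(input).replace(/\D/g, "");
  let ms;
  if (digits.length <= 10) {
    ms = Number(digits) * 1000;          // seconds -> ms
  } else if (digits.length <= 13) {
    ms = Number(digits);                 // already ms
  } else {
    ms = Number(BigInt(digits) / 1000n); // microseconds -> ms (truncating)
  }
  return new Date(ms);
}

// decodeUnix(1350508407) and decodeUnix("1350508407000") name the same instant.
```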

Supported Formats

This tool supports auto-detection and conversion between 16 time-based formats used across different technologies:

Format                       Structure                   Used By
Unix (s/ms)                  10 or 13 digits             Universal
ISO 8601                     YYYY-MM-DDTHH:mm:ssZ        APIs, JSON
MongoDB ObjectId             24 hex chars                MongoDB
UUID v1 / v7                 8-4-4-4-12 hex with dashes  RFC 9562
ULID                         26 Crockford Base32 chars   Distributed systems
Snowflake (Twitter/Discord)  17-19 digit number          Twitter, Discord
KSUID                        27 base62 chars             Segment
CUID v1                      c + 24 chars                Web applications
XID                          20 base32hex chars          Go applications
Windows FILETIME             18-digit 100ns intervals    Windows, Active Directory
Excel Serial Date            5-6 digit decimal           Microsoft Excel
.NET Ticks                   18-digit 100ns intervals    .NET Framework
macOS CFAbsoluteTime         9-10 digit decimal          macOS, iOS
Sonyflake                    63-bit integer              Sony

How to Use This Tool

Simply paste any timestamp or time-based ID into the input field. The tool automatically detects the format and displays the decoded time along with conversions to all 16 supported formats. You can also click the Now button to see the current time in all formats, or use the date/time picker to convert a specific date. Each converted value has a Copy button for quick clipboard access. If the input is ambiguous (e.g., a large number that could be a Snowflake ID or .NET Ticks), the tool presents a dropdown to let you choose the correct format.

MongoDB ObjectId Explained

A MongoDB ObjectId is a 12-byte (24 hex character) identifier automatically generated for every document in MongoDB. Its structure encodes useful metadata:

  • Bytes 0-3: Unix timestamp in seconds (4 bytes) — this is what we decode
  • Bytes 4-8: Random value unique to the machine and process (5 bytes)
  • Bytes 9-11: Incrementing counter initialized to a random value (3 bytes)

For example, the ObjectId 507f1f77bcf86cd799439011 starts with 507f1f77, which converts to Unix timestamp 1350508407 — October 17, 2012, 21:13:27 UTC. This makes ObjectIds naturally sortable by creation time and allows you to extract the exact creation timestamp from any MongoDB document without storing a separate createdAt field.
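The extraction described above is a one-liner once the hex is validated. A minimal sketch (the function name is ours):

```javascript
// Sketch: pull the creation time out of a MongoDB ObjectId.
// The first 8 hex chars (bytes 0-3) are a big-endian Unix timestamp in seconds.
function objectIdTimestamp(objectId) {
  if (!/^[0-9a-fA-F]{24}$/.test(objectId)) {
    throw new Error("not a 24-char hex ObjectId");
  }
  const seconds = parseInt(objectId.slice(0, 8), 16);
  return new Date(seconds * 1000);
}

// objectIdTimestamp("507f1f77bcf86cd799439011")
//   -> 2012-10-17T21:13:27.000Z, matching the example above
```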

Snowflake ID Explained

Snowflake IDs are 64-bit integers designed for distributed ID generation at massive scale. Originally created by Twitter in 2010, the format has been adopted by Discord and other platforms with different epoch start dates.

The bit structure is identical for both: 1 unused sign bit + 41 bits for millisecond timestamp + 10 bits for worker/datacenter ID + 12 bits for sequence number. The key difference is the epoch:

  • Twitter: epoch = November 4, 2010 (1288834974657 ms) — the date Snowflake was launched
  • Discord: epoch = January 1, 2015 (1420070400000 ms) — Discord's own epoch

To extract the timestamp: right-shift the ID by 22 bits to remove the worker and sequence bits, then add the platform-specific epoch offset. The same numeric Snowflake ID will decode to completely different dates depending on which epoch you use, so selecting the correct platform is essential.
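The shift-and-offset step above can be sketched as follows, using BigInt because Snowflake IDs exceed JavaScript's Number.MAX_SAFE_INTEGER (the function name and epoch table are ours; the example ID is the one used in Discord's API documentation):

```javascript
// Sketch: recover the creation time from a Snowflake ID.
// Right-shifting by 22 bits discards the 10 worker + 12 sequence bits,
// leaving milliseconds since the platform's epoch.
const SNOWFLAKE_EPOCHS = {
  twitter: 1288834974657n, // 2010-11-04
  discord: 1420070400000n, // 2015-01-01
};

function snowflakeTimestamp(id, platform) {
  const ms = (BigInt(id) >> 22n) + SNOWFLAKE_EPOCHS[platform];
  return new Date(Number(ms));
}

// snowflakeTimestamp("175928847299117063", "discord") -> 2016-04-30T11:18:25.796Z
```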

UUID v1 & v7 Explained

UUID (Universally Unique Identifier) versions 1 and 7 both embed timestamps, but in very different ways. UUID v1, defined in RFC 4122 and updated in RFC 9562, uses a 60-bit timestamp measured in 100-nanosecond intervals since October 15, 1582 (the Gregorian calendar reform). The timestamp is split across three fields: time_low (32 bits), time_mid (16 bits), and time_hi (12 bits), requiring reassembly before conversion.

UUID v7, introduced in RFC 9562 (2024), takes a simpler approach: the first 48 bits contain a standard Unix millisecond timestamp. This makes UUID v7 naturally sortable by creation time and much easier to decode. Both versions include a version nibble (1 or 7) at a fixed position, allowing automatic detection.
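Both decode paths can be sketched together, branching on the version nibble (the 13th hex digit once dashes are removed). The function name is ours; the Gregorian offset is the number of 100-nanosecond intervals between 1582-10-15 and the Unix epoch:

```javascript
// Sketch: decode the embedded timestamp from a UUID v1 or v7.
const GREGORIAN_OFFSET = 122192928000000000n; // 100ns intervals, 1582 -> 1970

function uuidTimestamp(uuid) {
  const hex = uuid.replace(/-/g, "").toLowerCase();
  const version = hex[12]; // version nibble at a fixed position
  if (version === "7") {
    // v7: the first 48 bits are a plain Unix millisecond timestamp.
    return new Date(parseInt(hex.slice(0, 12), 16));
  }
  if (version === "1") {
    // v1: reassemble time_hi (12 bits) | time_mid (16) | time_low (32).
    const ticks =
      (BigInt("0x" + hex.slice(13, 16)) << 48n) |
      (BigInt("0x" + hex.slice(8, 12)) << 32n) |
      BigInt("0x" + hex.slice(0, 8));
    return new Date(Number((ticks - GREGORIAN_OFFSET) / 10000n));
  }
  throw new Error("no timestamp in UUID version " + version);
}
```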

ULID Explained

ULID (Universally Unique Lexicographically Sortable Identifier) is a 26-character string encoded in Crockford Base32. The first 10 characters encode a 48-bit Unix millisecond timestamp, while the remaining 16 characters contain 80 bits of cryptographic randomness. Unlike UUIDs, ULIDs are designed to be lexicographically sortable — meaning alphabetical sorting equals chronological sorting. The Crockford Base32 alphabet excludes I, L, O, and U to avoid confusion with digits 1, 0, and each other.
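Decoding the ULID timestamp is a straightforward base-32 accumulation over the first 10 characters (the function name is ours; the alphabet is the standard Crockford Base32 set, which is why I, L, O, and U are absent):

```javascript
// Sketch: decode the 48-bit millisecond timestamp from a ULID.
const CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

function ulidTimestamp(ulid) {
  let ms = 0; // 48 bits fits safely in a JS Number (< 2^53)
  for (const ch of ulid.slice(0, 10).toUpperCase()) {
    const v = CROCKFORD.indexOf(ch);
    if (v < 0) throw new Error("invalid Crockford Base32 character: " + ch);
    ms = ms * 32 + v;
  }
  return new Date(ms);
}
```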

KSUID, CUID & XID

KSUID (K-Sortable Unique Identifier) by Segment is a 27-character base62 string representing 20 bytes: 4 bytes for a Unix timestamp with a custom epoch (May 13, 2014) plus 16 bytes of random payload. KSUIDs are naturally sortable and designed for high-throughput distributed systems.

CUID v1 starts with the letter "c" followed by 24 characters. The first 8 characters after "c" encode the creation timestamp in base36 (milliseconds since Unix epoch), followed by a counter, machine fingerprint, and random data. Note: CUID v1 is deprecated in favor of CUID v2 due to timestamp leakage concerns.

XID is a 20-character base32hex-encoded identifier from the Go ecosystem. Its 12-byte binary structure contains a 4-byte Unix timestamp (seconds), 3-byte machine ID, 2-byte process ID, and 3-byte counter. XID is compact and globally unique without coordination.
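The three layouts above can be sketched as small decoders. Function names are ours; the byte layouts and the KSUID epoch (1,400,000,000 seconds, i.e. May 13, 2014) follow the descriptions in this section:

```javascript
const BASE62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
const BASE32HEX = "0123456789abcdefghijklmnopqrstuv";

// KSUID: 27 base62 chars -> 160-bit value; the top 32 bits are seconds
// since the custom epoch.
function ksuidTimestamp(ksuid) {
  let n = 0n;
  for (const ch of ksuid) n = n * 62n + BigInt(BASE62.indexOf(ch));
  const seconds = (n >> 128n) + 1400000000n; // KSUID epoch: 2014-05-13
  return new Date(Number(seconds) * 1000);
}

// CUID v1: the 8 base36 chars after the leading "c" are Unix milliseconds.
function cuidTimestamp(cuid) {
  return new Date(parseInt(cuid.slice(1, 9), 36));
}

// XID: the first 8 base32hex chars hold the leading 40 bits of the 12-byte
// value; the top 32 of those bits are a Unix timestamp in seconds.
function xidTimestamp(xid) {
  let n = 0n;
  for (const ch of xid.slice(0, 8)) n = n * 32n + BigInt(BASE32HEX.indexOf(ch));
  return new Date(Number(n >> 8n) * 1000);
}
```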

Windows FILETIME, Excel & .NET Ticks

Windows FILETIME counts 100-nanosecond intervals since January 1, 1601 UTC. It appears in Active Directory timestamps and Windows file metadata; Chrome's internal timestamps use the same 1601 epoch, though in microsecond units. To convert to Unix time, subtract the epoch offset (116,444,736,000,000,000) and divide by 10,000 for milliseconds.

Excel Serial Date counts days since January 1, 1900 (serial 1), with a fractional part for time of day. Note the Lotus 1-2-3 compatibility bug: Excel incorrectly treats 1900 as a leap year, making serial 60 = February 29, 1900 (a date that never existed). Serial 25569 corresponds to January 1, 1970 (Unix epoch).

.NET DateTime.Ticks counts 100-nanosecond intervals since January 1, 0001 UTC. The offset to Unix epoch is 621,355,968,000,000,000 ticks. This format is used throughout the .NET ecosystem for high-precision time storage.
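The three conversions above are all epoch-offset arithmetic. A minimal sketch (function and constant names are ours; sub-millisecond precision is truncated, since JavaScript Dates carry milliseconds):

```javascript
// Epoch offsets, as given in the text.
const FILETIME_EPOCH_OFFSET = 116444736000000000n; // 1601 -> 1970, in 100ns ticks
const DOTNET_EPOCH_OFFSET = 621355968000000000n;   // 0001 -> 1970, in 100ns ticks
const EXCEL_UNIX_SERIAL = 25569;                   // Excel serial for 1970-01-01

const fromFiletime = (ft) =>
  new Date(Number((BigInt(ft) - FILETIME_EPOCH_OFFSET) / 10000n));

const fromDotnetTicks = (ticks) =>
  new Date(Number((BigInt(ticks) - DOTNET_EPOCH_OFFSET) / 10000n));

// Whole days since 1900, fractional part = time of day.
const fromExcelSerial = (serial) =>
  new Date(Math.round((serial - EXCEL_UNIX_SERIAL) * 86400000));
```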

macOS CFAbsoluteTime & Sonyflake

macOS CFAbsoluteTime (Core Foundation Absolute Time) measures seconds since January 1, 2001 00:00:00 UTC — Apple's reference date. It appears in macOS/iOS system logs, Core Data timestamps, and Safari browser data. The offset from Unix epoch is 978,307,200 seconds.

Sonyflake, created by Sony, is a 63-bit ID with 39 bits for time (in 10-millisecond units since September 1, 2014), 8 bits for sequence, and 16 bits for machine ID. To extract the timestamp: right-shift by 24 bits, multiply by 10 for milliseconds, and add the epoch offset (1,409,529,600,000 ms).
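Both conversions above can be sketched in a few lines (function names are ours; the offsets are the ones given in the text):

```javascript
// CFAbsoluteTime: seconds since 2001-01-01, so just add the epoch offset.
const CF_EPOCH_OFFSET = 978307200; // seconds, 1970 -> 2001

function fromCFAbsoluteTime(cf) {
  return new Date((cf + CF_EPOCH_OFFSET) * 1000);
}

// Sonyflake: shift out the 8 sequence + 16 machine bits, scale the
// 10ms units to milliseconds, then add the 2014-09-01 epoch.
const SONYFLAKE_EPOCH_MS = 1409529600000n;

function sonyflakeTimestamp(id) {
  const ms = (BigInt(id) >> 24n) * 10n + SONYFLAKE_EPOCH_MS;
  return new Date(Number(ms));
}
```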

Privacy & Security

This tool processes everything entirely in your browser using client-side JavaScript. No data is transmitted to any server. Your timestamps, IDs, and decoded results never leave your device. The source code is fully visible in the page — you can inspect it using your browser's developer tools to verify this claim. We believe developer tools should be transparent and trustworthy.