Unix Epoch Time Explained: Everything Developers Need to Know
Every timestamp in your database, every Date.now() call in your JavaScript, every time.time() in your Python script — they all trace back to a single concept: Unix epoch time. It is the invisible backbone of how computers measure time, and understanding it deeply will make you a better developer. It will also save you from some spectacularly frustrating debugging sessions.
This guide covers everything: what epoch time is, why the epoch starts on January 1, 1970, how to convert it in eight programming languages, the Y2038 problem, timezone pitfalls, and the most common mistakes developers make when working with timestamps.
What Is Unix Epoch Time?
Unix epoch time (also called Unix time, POSIX time, or simply "epoch time") is a system for tracking time as a single running total of seconds. It counts the number of seconds that have elapsed since January 1, 1970 at 00:00:00 UTC — a moment known as the Unix epoch.
That is it. There are no months, no days of the week, no time zones, no daylight saving transitions. Just a single integer that increases by one every second. This simplicity is exactly what makes epoch time so powerful and so widely used in computing.
Right now, as you read this sentence, the Unix timestamp is somewhere around 1,771,000,000. By the time you finish this article, it will be a few hundred seconds higher. The number never stops, never resets, and never goes backward (leap seconds aside, which we will get to later).
A Brief History: Why January 1, 1970?
The choice of January 1, 1970 as the epoch is not the result of some grand philosophical decision. It is a pragmatic choice made by the engineers who built Unix at Bell Labs in the early 1970s.
The original Unix system, developed by Ken Thompson and Dennis Ritchie, needed a way to represent time internally. The earliest versions of Unix actually used January 1, 1971 as the epoch and stored time as 1/60th-of-a-second intervals in a 32-bit integer. That gave the system a range of about 2.5 years before the counter overflowed — which was a real problem even in 1971.
The engineers soon switched to counting whole seconds instead of 60ths, which extended the range dramatically. They also moved the epoch back to January 1, 1970 — a round date that was recent enough to be useful but far enough back to cover the system's own creation. There was no deep significance to the date. It was simply the start of the decade during which Unix was born, a convenient and clean starting point.
This decision, made in a lab at Murray Hill, New Jersey over fifty years ago, now affects every smartphone, every web server, every IoT device, and virtually every database on the planet.
How Epoch Time Works Internally
At its core, epoch time is just arithmetic. To determine what time it is, a system reads the current epoch value and does simple math to extract year, month, day, hour, minute, and second.
The Basic Calculation
Given the Unix timestamp 1700000000, here is what happens behind the scenes:
Total seconds: 1,700,000,000
Days since epoch: 1,700,000,000 / 86,400 = 19,675.925...
Full days: 19,675
Remaining seconds: 80,000 (22 hours, 13 minutes, 20 seconds)
From the day count, calculate:
- Years elapsed (accounting for leap years)
- Month within the year
- Day within the month
Result: November 14, 2023 at 22:13:20 UTC
The number 86,400 is the number of seconds in a standard day (60 × 60 × 24). This calculation is what every programming language's date library does when you convert a timestamp to a human-readable date. The tricky part is handling leap years and varying month lengths, but libraries have been doing this reliably for decades.
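The day/seconds split described above can be sketched in a few lines of Python; this is a simplified illustration that leaves the leap-year and month-length bookkeeping to the standard library:

```python
from datetime import datetime, timezone

def split_epoch(ts):
    """Split an epoch timestamp into full days and leftover h/m/s."""
    days, rem = divmod(ts, 86400)        # 86,400 seconds per day
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return days, hours, minutes, seconds

print(split_epoch(1_700_000_000))
# (19675, 22, 13, 20)

# The standard library does the full calendar math for you:
print(datetime.fromtimestamp(1_700_000_000, tz=timezone.utc))
# 2023-11-14 22:13:20+00:00
```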
Before the Epoch: Negative Timestamps
Dates before January 1, 1970 are represented as negative numbers. For example:
- -1 → December 31, 1969 at 23:59:59 UTC
- -86400 → December 31, 1969 at 00:00:00 UTC
- -2208988800 → January 1, 1900 at 00:00:00 UTC
Most modern languages handle negative timestamps correctly, but some older systems and databases do not. If your application deals with historical dates (before 1970), always test your timestamp handling with negative values.
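A quick Python check of the negative-timestamp examples above (note the platform caveat in the comment, which is exactly the kind of inconsistency the previous paragraph warns about):

```python
from datetime import datetime, timezone

# Negative timestamps reach back before 1970.
# Caveat: on some platforms (notably Windows), fromtimestamp() can raise
# OSError for negative values — test before relying on them.
one_second_before = datetime.fromtimestamp(-1, tz=timezone.utc)
print(one_second_before)   # 1969-12-31 23:59:59+00:00

year_1900 = datetime.fromtimestamp(-2208988800, tz=timezone.utc)
print(year_1900)           # 1900-01-01 00:00:00+00:00
```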
Seconds vs. Milliseconds: The 1000x Confusion
This is one of the most common sources of bugs when working with epoch time. Different systems use different precisions:
- Seconds (10 digits), e.g. 1700000000 — used by Unix/Linux, PHP, Python time.time(), Ruby, C, Go, most APIs, PostgreSQL
- Milliseconds (13 digits), e.g. 1700000000000 — used by JavaScript Date.now(), Java System.currentTimeMillis(), Elasticsearch, MongoDB
The quick way to tell them apart: count the digits. If the number has 10 digits, it is in seconds. If it has 13 digits, it is in milliseconds. As of early 2026, a seconds-based timestamp is a 10-digit number starting with 177, and the equivalent milliseconds-based timestamp is the same number with three more digits appended.
1700000000 interpreted as milliseconds gives you January 20, 1970 — not November 2023. If your dates are showing up in January 1970, you are almost certainly feeding seconds where milliseconds are expected.
Some systems go even further in precision:
- Microseconds (16 digits) — Python's datetime.timestamp() returns a float with microsecond precision
- Nanoseconds (19 digits) — Go's time.Now().UnixNano(); Java's Instant carries nanosecond precision (via getNano())
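Because the units differ only in magnitude, you can guess the unit from the digit count. Here is a small Python sketch of that heuristic — not a standard function, and it assumes timestamps for dates between roughly 2001 and 2286 (it breaks for earlier dates and negative values):

```python
def normalize_to_seconds(ts):
    """Heuristic: infer the unit of an epoch value from its digit count.

    Assumes dates between ~2001 and ~2286, where seconds have 10 digits,
    milliseconds 13, microseconds 16, and nanoseconds 19.
    """
    ts = int(ts)
    digits = len(str(abs(ts)))
    if digits <= 10:
        return ts                        # already seconds
    if digits <= 13:
        return ts // 1_000               # milliseconds
    if digits <= 16:
        return ts // 1_000_000           # microseconds
    return ts // 1_000_000_000           # nanoseconds

print(normalize_to_seconds(1_700_000_000_000))          # 1700000000
print(normalize_to_seconds(1_700_000_000_000_000_000))  # 1700000000
```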
Converting Epoch Time in 8 Programming Languages
Here is how to get the current epoch time and convert between timestamps and human-readable dates in every major language. Bookmark this section — you will come back to it.
JavaScript
// Current epoch time in MILLISECONDS
Date.now(); // 1739318400000
// Current epoch time in SECONDS
Math.floor(Date.now() / 1000); // 1739318400
// Epoch to Date object
const date = new Date(1739318400 * 1000); // Note: multiply by 1000
console.log(date.toISOString()); // "2025-02-12T00:00:00.000Z"
console.log(date.toLocaleString()); // locale-specific string
// Date to epoch (seconds)
Math.floor(new Date('2025-02-12').getTime() / 1000); // 1739318400
// Date to epoch (milliseconds)
new Date('2025-02-12').getTime(); // 1739318400000
new Date() expects milliseconds, not seconds. If you pass a 10-digit seconds timestamp directly, you will get a date in January 1970. Always multiply by 1000.
Python
import time
from datetime import datetime, timezone
# Current epoch time (float with microsecond precision)
time.time() # 1739318400.123456
# Current epoch time (integer seconds)
int(time.time()) # 1739318400
# Epoch to datetime (UTC)
dt = datetime.fromtimestamp(1739318400, tz=timezone.utc)
print(dt) # 2025-02-12 00:00:00+00:00
print(dt.isoformat()) # 2025-02-12T00:00:00+00:00
# Datetime to epoch
dt = datetime(2025, 2, 12, tzinfo=timezone.utc)
int(dt.timestamp()) # 1739318400
# String to epoch
dt = datetime.strptime("2025-02-12", "%Y-%m-%d").replace(tzinfo=timezone.utc)
int(dt.timestamp()) # 1739318400
Always pass tz=timezone.utc to fromtimestamp(). Without it, Python assumes local time, which leads to different results on different servers.
Java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.LocalDateTime;
import java.time.ZonedDateTime;
// Current epoch time in MILLISECONDS
System.currentTimeMillis(); // 1739318400000L
// Current epoch time in SECONDS
Instant.now().getEpochSecond(); // 1739318400L
// Epoch to Instant
Instant instant = Instant.ofEpochSecond(1739318400L);
System.out.println(instant); // 2025-02-12T00:00:00Z
// Epoch to LocalDateTime (UTC)
LocalDateTime ldt = LocalDateTime.ofInstant(
Instant.ofEpochSecond(1739318400L), ZoneOffset.UTC
);
// LocalDateTime to epoch
long epoch = ldt.toEpochSecond(ZoneOffset.UTC); // 1739318400
Go
package main
import (
"fmt"
"time"
)
func main() {
// Current epoch time in seconds
now := time.Now().Unix() // 1739318400
// Current epoch time in milliseconds
nowMs := time.Now().UnixMilli() // 1739318400000
// Current epoch time in nanoseconds
nowNs := time.Now().UnixNano() // 1739318400000000000
// Epoch to Time
t := time.Unix(1739318400, 0)
fmt.Println(t.UTC()) // 2025-02-12 00:00:00 +0000 UTC
// Time to epoch
epoch := t.Unix() // 1739318400
// Parse string to epoch
t, _ = time.Parse("2006-01-02", "2025-02-12")
fmt.Println(t.Unix()) // 1739318400
}
PHP
// Current epoch time in seconds
time(); // 1739318400
// Current epoch time with microseconds
microtime(true); // 1739318400.123456
// Epoch to date string
echo date('Y-m-d H:i:s', 1739318400); // 2025-02-12 00:00:00
// Epoch to DateTime object
$dt = new DateTime('@1739318400');
$dt->setTimezone(new DateTimeZone('UTC'));
echo $dt->format('Y-m-d H:i:s'); // 2025-02-12 00:00:00
// Date string to epoch
echo strtotime('2025-02-12 00:00:00 UTC'); // 1739318400
// DateTime to epoch
$dt = new DateTime('2025-02-12', new DateTimeZone('UTC'));
echo $dt->getTimestamp(); // 1739318400
Ruby
# Current epoch time (float)
Time.now.to_f # 1739318400.123456
# Current epoch time (integer)
Time.now.to_i # 1739318400
# Epoch to Time object
t = Time.at(1739318400).utc
puts t # 2025-02-12 00:00:00 UTC
puts t.iso8601 # 2025-02-12T00:00:00Z
# Time to epoch
Time.utc(2025, 2, 12).to_i # 1739318400
# Parse string to epoch
require 'time'
Time.parse('2025-02-12T00:00:00Z').to_i # 1739318400
C
#include <stdio.h>
#include <time.h>
int main() {
// Current epoch time
time_t now = time(NULL); // 1739318400
// Epoch to struct tm (UTC)
struct tm *utc = gmtime(&now);
printf("%d-%02d-%02d %02d:%02d:%02d\n",
utc->tm_year + 1900, utc->tm_mon + 1, utc->tm_mday,
utc->tm_hour, utc->tm_min, utc->tm_sec);
// Epoch to formatted string
char buf[64];
strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S UTC", utc);
printf("%s\n", buf); // 2025-02-12 00:00:00 UTC
// struct tm to epoch
struct tm t = {0};
t.tm_year = 2025 - 1900;
t.tm_mon = 1; // February (0-indexed)
t.tm_mday = 12;
time_t epoch = timegm(&t); // 1739318400
return 0;
}
Bash
# Current epoch time
date +%s # 1739318400
# Epoch to human-readable (GNU date)
date -d @1739318400 # Wed Feb 12 00:00:00 UTC 2025
date -d @1739318400 --iso-8601=seconds # 2025-02-12T00:00:00+00:00
# Epoch to human-readable (macOS/BSD date)
date -r 1739318400 # Wed Feb 12 00:00:00 UTC 2025
# Human-readable to epoch (GNU date)
date -d "2025-02-12 00:00:00 UTC" +%s # 1739318400
# Epoch in milliseconds (GNU date)
date +%s%3N # 1739318400000
# Time arithmetic: 24 hours from now
echo $(( $(date +%s) + 86400 ))
The date command differs between GNU (Linux) and BSD (macOS). GNU uses date -d @TIMESTAMP while BSD uses date -r TIMESTAMP. When writing scripts that must run on both, consider using Python or another cross-platform language instead.
Notable Epoch Timestamps Reference
Certain epoch values carry special significance in computing history. Here is a short list of notable timestamps, past and future:

| Timestamp | Date (UTC) | Significance |
|---|---|---|
| 0 | January 1, 1970 00:00:00 | The Unix epoch |
| 1000000000 | September 9, 2001 01:46:40 | One billion seconds |
| 1234567890 | February 13, 2009 23:31:30 | "1234567890 day," celebrated by programmers |
| 2147483647 | January 19, 2038 03:14:07 | Maximum signed 32-bit value (the Y2038 problem) |
The Y2038 Problem: The Next Y2K
If you have heard of Y2K, you should know about Y2038. It is a real problem, it is well-understood, and parts of it are already being fixed — but it is not fully solved.
What Happens
Many systems store Unix timestamps as a signed 32-bit integer. A signed 32-bit integer can hold values from −2,147,483,648 to 2,147,483,647. The maximum positive value, 2,147,483,647, corresponds to January 19, 2038 at 03:14:07 UTC.
One second later, the counter overflows. Instead of incrementing to 2,147,483,648, it wraps around to −2,147,483,648, which corresponds to December 13, 1901. Any system still using a 32-bit time_t will suddenly think it is 1901.
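The wraparound can be simulated in Python by reinterpreting the counter as a signed 32-bit value, the way a 32-bit C time_t would behave:

```python
import struct
from datetime import datetime, timezone

def as_int32(value):
    """Reinterpret an integer as signed 32-bit, wrapping like a 32-bit time_t."""
    return struct.unpack('<i', struct.pack('<I', value & 0xFFFFFFFF))[0]

last_valid = 2_147_483_647
print(datetime.fromtimestamp(last_valid, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

wrapped = as_int32(last_valid + 1)
print(wrapped)  # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```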
What Systems Are Affected
The Y2038 problem primarily affects:
- Embedded systems — IoT devices, industrial controllers, automotive computers, and medical devices running 32-bit processors with no upgrade path
- Legacy databases — Any database schema storing timestamps as 32-bit INT rather than BIGINT or native TIMESTAMP types
- Older file systems — ext2/ext3 store timestamps as 32-bit values (ext4 has been patched)
- 32-bit Linux kernels — The Linux kernel's 32-bit time_t was a known issue (addressed in kernel 5.6+)
- Network protocols — NTP originally used 32-bit timestamps (NTPv4 addresses this)
- C programs using time_t — on 32-bit systems where time_t is still 32 bits
What Has Already Been Fixed
The good news: mainstream computing has largely moved to 64-bit timestamps. A signed 64-bit integer can represent dates up to approximately 292 billion years in the future — well beyond the heat death of the universe.
- Linux kernel 5.6+ (2020) uses 64-bit time_t on 32-bit architectures
- Modern languages (Java, Go, Rust, Python 3) use 64-bit timestamps internally
- Modern databases (PostgreSQL TIMESTAMP, MySQL BIGINT) handle dates well beyond 2038
- 64-bit operating systems (which is nearly everything now) use 64-bit time_t natively
What You Should Do
If you are writing new code today:
- Use 64-bit integer types for any timestamp storage
- In databases, use BIGINT instead of INT for epoch columns, or use native TIMESTAMP/TIMESTAMPTZ types
- If you work with embedded systems, audit whether your platform's time_t is 32-bit or 64-bit
- Test your application with dates beyond 2038 to catch any hidden assumptions
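The last point — testing with post-2038 dates — can be as simple as a round-trip check. A minimal Python sketch:

```python
from datetime import datetime, timezone

# Round-trip a post-2038 date to flush out hidden 32-bit assumptions
future = datetime(2040, 1, 1, tzinfo=timezone.utc)
ts = int(future.timestamp())
print(ts)                   # 2208988800 — does not fit in a signed 32-bit int
print(ts > 2_147_483_647)   # True
print(datetime.fromtimestamp(ts, tz=timezone.utc) == future)  # True
```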
Epoch Time in APIs and Databases
Epoch timestamps are the most common way to represent time in APIs and data storage. Here is how they are used in practice and what to watch out for.
REST APIs
Most REST APIs use one of two formats for timestamps:
// Option 1: ISO 8601 string (human-readable)
{
"created_at": "2025-02-12T00:00:00Z",
"updated_at": "2025-02-12T15:30:00+05:30"
}
// Option 2: Unix epoch (compact, unambiguous)
{
"created_at": 1739318400,
"updated_at": 1739374200
}
Epoch timestamps are smaller in payload size and immune to timezone formatting inconsistencies. ISO 8601 strings are more human-readable when inspecting API responses. Both are valid choices; the important thing is to document which one your API uses and whether epoch values are in seconds or milliseconds.
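Converting between the two formats is straightforward; here is a Python sketch (note the caveat in the comment: datetime.fromisoformat() only accepts a trailing "Z" on Python 3.11+, so older versions need an explicit offset):

```python
from datetime import datetime, timezone

# Epoch → ISO 8601 string for a response payload
ts = 1_739_318_400
iso = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat().replace('+00:00', 'Z')
print(iso)  # 2025-02-12T00:00:00Z

# ISO 8601 → epoch when consuming a payload
# (normalize 'Z' to '+00:00' for pre-3.11 compatibility)
parsed = datetime.fromisoformat(iso.replace('Z', '+00:00'))
print(int(parsed.timestamp()))  # 1739318400
```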
Some well-known APIs and their timestamp formats:
| API / Service | Timestamp Format | Precision |
|---|---|---|
| Stripe | Unix epoch | Seconds |
| Twilio | ISO 8601 / RFC 2822 | Seconds |
| GitHub API | ISO 8601 | Seconds |
| Slack API | Unix epoch (string) | Seconds with decimal |
| Discord API | ISO 8601 | Milliseconds |
| Twitter/X API | RFC 2822 | Seconds |
Databases
How you store timestamps in your database has long-term implications:
-- PostgreSQL: Use TIMESTAMPTZ (timestamp with time zone)
CREATE TABLE events (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW(),
scheduled_for TIMESTAMPTZ
);
-- MySQL: BIGINT for epoch seconds, or DATETIME/TIMESTAMP
CREATE TABLE events (
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(255) NOT NULL,
created_at BIGINT DEFAULT (UNIX_TIMESTAMP()),
scheduled_for DATETIME
);
-- SQLite: Store as INTEGER (epoch seconds)
CREATE TABLE events (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
created_at INTEGER DEFAULT (strftime('%s', 'now'))
);
General advice: Use your database's native timestamp type when possible. It gives you built-in timezone handling, date arithmetic functions, and indexing optimizations. Store epoch integers when you need language-agnostic portability or when working with systems that expect raw numbers.
Timezone Considerations
Epoch time is inherently timezone-free. The number 1739318400 means the same instant everywhere in the world: February 12, 2025 at 00:00:00 UTC. The local representation of that instant varies by timezone, but the epoch value itself does not.
This is one of the greatest advantages of epoch time and also the source of its most common bugs.
The Golden Rule
Always store and transmit timestamps in UTC. Convert to local time only at the presentation layer, when showing dates to humans. This rule alone will prevent the majority of timezone-related bugs.
// BAD: Converting to local time before storage
const localDate = new Date('2025-02-12T00:00:00'); // Depends on server timezone!
saveToDatabase(localDate.getTime());
// GOOD: Always work in UTC
const utcDate = new Date('2025-02-12T00:00:00Z'); // Explicit UTC
saveToDatabase(utcDate.getTime());
// GOOD: Display in user's timezone only at render time
const stored = 1739318400000;
const display = new Date(stored).toLocaleString('en-US', {
timeZone: 'America/New_York'
}); // "2/11/2025, 7:00:00 PM"
Daylight Saving Time Traps
Epoch time handles DST perfectly because it does not care about time zones. The problems arise when you convert between epoch time and local time:
- Spring forward: When clocks jump from 2:00 AM to 3:00 AM, the local time 2:30 AM does not exist. Converting "March 9, 2025 2:30 AM EST" to epoch will either fail or silently produce the wrong result.
- Fall back: When clocks go from 2:00 AM back to 1:00 AM, the local time 1:30 AM occurs twice. Converting "November 2, 2025 1:30 AM EST" is ambiguous.
The solution: never store local times. Store epoch (which is always UTC) and convert to local only for display.
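The fall-back ambiguity can be seen directly in Python's zoneinfo module (Python 3.9+), where the fold attribute selects which of the two occurrences you mean:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # rules come from the IANA tzdata

ny = ZoneInfo('America/New_York')

# Fall back, 2025: 1:30 AM local time happens twice on November 2.
first = datetime(2025, 11, 2, 1, 30, tzinfo=ny)           # fold=0 → first pass (EDT)
second = datetime(2025, 11, 2, 1, 30, fold=1, tzinfo=ny)  # fold=1 → second pass (EST)

# The two interpretations are exactly one hour apart in epoch time
print(int(second.timestamp()) - int(first.timestamp()))   # 3600
```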
Timezone Database (tzdata)
Systems that convert epoch time to local time rely on the IANA timezone database (also known as the Olson database or tzdata). This database is updated several times per year as countries change their timezone rules. If your system has an outdated tzdata, conversions may be wrong for recently changed time zones.
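In Python, this lookup happens through the zoneinfo module, which reads the system's IANA database (falling back to the tzdata package on PyPI if the system has none). One epoch value, many local representations:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # backed by the IANA tzdata

ts = 1_739_318_400  # 2025-02-12 00:00:00 UTC
utc = datetime.fromtimestamp(ts, tz=timezone.utc)

print(utc.astimezone(ZoneInfo('America/New_York')))  # 2025-02-11 19:00:00-05:00
print(utc.astimezone(ZoneInfo('Asia/Kolkata')))      # 2025-02-12 05:30:00+05:30
```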
When exchanging timestamps as strings, always include an explicit timezone (e.g., 2025-02-12T00:00:00Z or 2025-02-12T05:30:00+05:30). Never use bare date strings like "2025-02-12 00:00:00" without a timezone indicator.
Leap Seconds and Epoch Time
Leap seconds are one of those topics that developers usually do not need to worry about — until they do. Here is what you need to know.
What Are Leap Seconds?
Earth's rotation is gradually slowing down, which means solar time (based on the Earth's position relative to the sun) drifts away from atomic time (based on cesium atom oscillations). To keep the two in sync, the International Earth Rotation and Reference Systems Service (IERS) occasionally inserts a leap second — an extra second added at the end of June 30 or December 31.
Since 1972, 27 leap seconds have been added. The most recent was on December 31, 2016. Notably, in November 2022, the General Conference on Weights and Measures voted to abolish leap seconds by 2035, so they may become a thing of the past.
How Unix Time Handles Leap Seconds
The short answer: it pretends they do not exist.
POSIX specifies that every day has exactly 86,400 seconds. When a leap second occurs, Unix time effectively repeats a second — the timestamp does not increment during the leap second. This means Unix time is technically not a perfect count of SI seconds since the epoch, but it is always a simple and predictable mapping from timestamp to date.
In practice, most systems handle leap seconds using one of these approaches:
- Smearing: Google, AWS, and most cloud providers "smear" the leap second across a longer period (typically 24 hours), adding tiny fractions of a second to each clock tick. From the application's perspective, no leap second occurs.
- Stepping: The system clock jumps backward by one second at the leap second boundary. This can cause issues with software that assumes time only moves forward.
- Ignoring: Many systems simply let NTP correct the clock after the leap second, resulting in a brief period of slightly incorrect time.
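You can observe POSIX's "pretend it does not exist" rule directly: a leap second was inserted at the end of December 31, 2016, yet in Unix time that day still spans exactly 86,400 seconds:

```python
from datetime import datetime, timezone

# 2016-12-31 contained a leap second, but POSIX time insists every
# day has exactly 86,400 seconds — the leap second is invisible:
dec_31 = datetime(2016, 12, 31, tzinfo=timezone.utc).timestamp()
jan_1 = datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp()
print(int(jan_1 - dec_31))  # 86400
```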
Debugging Timestamp Issues: Common Pitfalls
Here is a field guide to the most common epoch time bugs and how to identify and fix them. If you work with timestamps regularly, you will encounter most of these at some point.
Pitfall 1: Seconds vs. Milliseconds Mismatch
Symptom: Dates showing up as January 1970, or dates millions of years in the future.
// Bug: passing seconds to a function expecting milliseconds
new Date(1739318400);
// Result: Mon Jan 20 1970 ... (WRONG!)
// Fix: multiply by 1000
new Date(1739318400 * 1000);
// Result: Wed Feb 12 2025 ... (Correct)
Quick check: If a timestamp has 10 digits, it is seconds. If 13 digits, milliseconds. If your date is in 1970, you are probably feeding seconds where milliseconds are expected. If your date is in the year 55,000+, you are feeding milliseconds where seconds are expected.
Pitfall 2: Timezone Assumption Differences
Symptom: Timestamps are off by a fixed number of hours (e.g., consistently 5 hours off, 8 hours off).
# Bug: Python's fromtimestamp() without timezone uses local time
from datetime import datetime
dt = datetime.fromtimestamp(1739318400)
# On a server in UTC-5: 2025-02-11 19:00:00 (no timezone info!)
# Fix: Always specify UTC
from datetime import datetime, timezone
dt = datetime.fromtimestamp(1739318400, tz=timezone.utc)
# Everywhere: 2025-02-12 00:00:00+00:00
Quick check: If dates are consistently off by whole hours, it is a timezone issue. Count the offset hours to identify which timezone is being incorrectly applied.
Pitfall 3: String Parsing Without Timezone
Symptom: The same date string produces different epoch values on different servers.
// Bug: parsing without timezone specification
new Date('2025-02-12'); // Midnight UTC (per spec)
new Date('2025-02-12 00:00:00'); // Local time! (implementation varies)
// Fix: always include timezone
new Date('2025-02-12T00:00:00Z'); // Explicitly UTC
Pitfall 4: Integer Overflow in 32-bit Systems
Symptom: Dates after 2038 wrap around to 1901 or produce negative values.
// In a 32-bit C program:
time_t t = 2147483647; // 2038-01-19 03:14:07 UTC
t += 1; // Overflows to -2147483648 (1901!)
// Fix: Use 64-bit types
#include <stdint.h>
int64_t t = 2147483648LL; // Works correctly
Pitfall 5: Floating-Point Precision Loss
Symptom: Timestamps lose precision when stored as floating-point numbers, especially at millisecond or microsecond precision.
// Bug: JavaScript number precision at large values
const ts = 1739318400123; // 13-digit millisecond timestamp
const recovered = JSON.parse(JSON.stringify(ts));
console.log(ts === recovered); // true (safe up to 2^53)
// But beware with microsecond precision:
const micro = 1739318400123456; // 16 digits
console.log(micro); // 1739318400123456 (OK in JS)
// In some languages with 32-bit floats, this WILL lose precision
Quick check: If timestamps appear correct to the second but wrong at the millisecond level, suspect floating-point precision loss. Always use integer types (BIGINT in SQL, long in Java) for timestamp storage.
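The threshold is concrete: IEEE 754 doubles represent integers exactly only up to 2^53 (about 9.0 × 10^15), which millisecond timestamps are safely below but nanosecond timestamps exceed. A Python demonstration:

```python
# Doubles hold integers exactly only up to 2**53 (~9.0e15)
ns = 1_739_318_400_123_456_789   # 19-digit nanosecond timestamp
print(ns > 2**53)                # True — outside the exact range
print(int(float(ns)) == ns)      # False — the float round-trip loses precision

ms = 1_739_318_400_123           # 13-digit millisecond timestamp
print(int(float(ms)) == ms)      # True — well within 2**53
```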
Pitfall 6: Confusing Created Time with Modified Time
Symptom: Records appear to have been created in the future, or modification timestamps are earlier than creation timestamps.
This is not an epoch-specific bug, but it happens frequently with epoch timestamps because they are harder to eyeball than human-readable dates. Use an epoch converter to quickly sanity-check values during debugging.
Pitfall 7: Epoch Zero as a Default Value
Symptom: Records show dates of "January 1, 1970 00:00:00 UTC" when they should have a real timestamp.
This happens when an uninitialized integer (defaulting to 0) is interpreted as a timestamp. Epoch zero is a valid timestamp, but it almost never represents a real date in application data. If you see January 1, 1970 in your data, it almost certainly means a timestamp field was not populated.
// Defensive check
function isValidTimestamp(epoch) {
// Reject 0 and negative values for "created at" type fields
// Adjust the minimum based on your application's valid date range
const MIN_VALID = 946684800; // 2000-01-01 (reasonable minimum for modern apps)
const MAX_VALID = 4102444800; // 2100-01-01
return epoch >= MIN_VALID && epoch <= MAX_VALID;
}
Epoch Time Quick Reference
Bookmark this section for the commands you will use most often:
Get Current Epoch Time
# Bash
date +%s
# JavaScript
Math.floor(Date.now() / 1000)
# Python
import time; int(time.time())
# Go
time.Now().Unix()
# PHP
time()
# Ruby
Time.now.to_i
# Java
Instant.now().getEpochSecond()
# C
time(NULL)
Common Time Intervals in Seconds
| Interval | Seconds | Notes |
|---|---|---|
| 1 minute | 60 | |
| 1 hour | 3600 | 60 × 60 |
| 1 day | 86400 | 60 × 60 × 24 |
| 1 week | 604800 | 86400 × 7 |
| 30 days | 2592000 | 86400 × 30 |
| 365 days | 31536000 | 86400 × 365 (not accounting for leap years) |
| Leap year | 31622400 | 86400 × 366 |
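These intervals translate directly into epoch arithmetic; a small Python sketch using them as named constants:

```python
import time

# Common intervals as constants for epoch arithmetic
MINUTE, HOUR, DAY = 60, 3_600, 86_400
WEEK = 7 * DAY   # 604800

now = int(time.time())
in_a_week = now + WEEK
thirty_days_ago = now - 30 * DAY
print(in_a_week - now)  # 604800
```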
Conclusion
Unix epoch time is one of those foundational concepts that touches nearly everything in software development. Whether you are building APIs, debugging database queries, writing cron jobs, or diagnosing why a user's account shows a creation date of January 1, 1970, a solid understanding of epoch time will serve you well.
The key takeaways:
- Epoch time counts seconds since January 1, 1970 00:00:00 UTC. It is timezone-free, unambiguous, and universally understood.
- Watch for seconds vs. milliseconds. Count the digits: 10 digits = seconds, 13 digits = milliseconds. Getting this wrong is the single most common timestamp bug.
- Always work in UTC. Store and transmit in UTC. Convert to local time only at the presentation layer.
- The Y2038 problem is real but mostly addressed in modern 64-bit systems. Audit any 32-bit embedded systems or legacy databases.
- Use native date types in your database when possible. Fall back to BIGINT epoch columns when you need cross-system portability.
- Leap seconds are handled by your OS and cloud provider. Do not try to account for them manually unless you are building scientific or financial millisecond-precision systems.
The next time you see a mysterious 10-digit number in a log file or API response, you will know exactly what it means — and exactly how to convert it.
Free Timestamp Tools
Put your epoch time knowledge into practice with our browser-based tools.