Squeezing Pixels: The Math Behind JPEG Compression


February 28, 2026

5 min read

compression
algorithms
images

JPEG compression is a cornerstone of digital imaging, powering everything from smartphone photos to web graphics. Understanding its inner workings demystifies how we handle image data efficiently. Let's break it down step by step, focusing on the math and algorithms that make it tick.


Why JPEG Matters

JPEG dominates the digital image landscape. Roughly 86% of images on the web are JPEGs, and it's the default format for most cameras and smartphones. This ubiquity stems from its ability to shrink file sizes dramatically, often to just 10% of the original, while preserving visual quality at high resolutions.

For developers, grasping JPEG is crucial when building apps that process images, optimize bandwidth, or integrate with media pipelines. It exploits human vision quirks to discard imperceptible data, achieving compression ratios that lossless formats like PNG can't match for photos. Without it, streaming services and social media would grind to a halt under the weight of raw pixel data.


The Human Eye's Weaknesses

JPEG's magic lies in exploiting how our eyes perceive light and color. The retina has two types of photoreceptors: rods (about 100 million per eye) and cones (about 6 million). Rods handle brightness (luminance) and low-light vision but aren't color-sensitive. Cones detect color (chrominance) via red, green, and blue receptors but are fewer in number and less sensitive to fine color details.

This imbalance means we're far better at discerning luminance variations than chrominance ones. A black-and-white version of an image often looks as detailed as the colored one, while isolating color channels reveals blurrier, less precise data. JPEG leverages this by prioritizing luminance data and aggressively compressing chrominance, reducing file size without noticeable quality loss.

The full JPEG pipeline transforms an RGB image into a compressed file through these steps:

JPEG Pipeline Diagram

Color Space Conversion

JPEG starts by converting the image from RGB (red, green, blue) to YCbCr (luminance Y, blue chrominance Cb, red chrominance Cr). This separates brightness from color, aligning with our visual sensitivities.

The conversion is lossless and reversible, using this matrix (from ITU-T T.81):

\begin{aligned} Y &= 0.299R + 0.587G + 0.114B \\ C_b &= -0.169R - 0.331G + 0.500B + 128 \\ C_r &= 0.500R - 0.419G - 0.081B + 128 \end{aligned}

For 8-bit channels (0-255), Cb and Cr are naturally centered around zero; the +128 offset shifts them into the unsigned 0-255 range. To reverse:

\begin{aligned} R &= Y + 1.403(C_r - 128) \\ G &= Y - 0.714(C_r - 128) - 0.344(C_b - 128) \\ B &= Y + 1.773(C_b - 128) \end{aligned}

No data is lost here. It's just a coordinate transform to prepare for compression.
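As a sketch, the two transforms above translate directly into code (the function names are illustrative, not from any particular library):

```python
# Per-pixel RGB <-> YCbCr conversion using the ITU-T T.81 coefficients above.
def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.403 * (cr - 128)
    g = y - 0.714 * (cr - 128) - 0.344 * (cb - 128)
    b = y + 1.773 * (cb - 128)
    return r, g, b
```

Round-tripping a pixel recovers the original values up to small floating-point error, confirming the transform itself loses nothing.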


Chrominance Downsampling (4:2:0)

With YCbCr in hand, JPEG downsamples the chrominance channels (Cb and Cr) since our eyes are less sensitive to them. The common 4:2:0 scheme averages 2x2 blocks in Cb and Cr, reducing each to quarter size while keeping Y full resolution.

Chrominance Downsampling Diagram

Mathematically, for a 2x2 block in Cb (values c1, c2, c3, c4):

\bar{C}_b = \frac{c_1 + c_2 + c_3 + c_4}{4}

This step alone halves the data volume: Y keeps one sample per pixel, while Cb and Cr drop to a quarter each, so three samples per pixel become 1.5. During reconstruction, Cb and Cr are upsampled (e.g., via bilinear interpolation), but the loss is imperceptible for most images.
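A minimal sketch of the 2x2 averaging with NumPy, assuming a chroma plane whose dimensions are even:

```python
import numpy as np

# 4:2:0 chroma downsampling: average each 2x2 block of a chroma plane,
# producing a plane with half the width and half the height.
def downsample_420(chroma):
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

The reshape groups each 2x2 neighborhood so a single `mean` call averages all blocks at once, with no Python-level loop.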


Discrete Cosine Transform

Each channel (Y, Cb, Cr) is divided into 8x8 blocks, and each sample is level-shifted by subtracting 128 to center values around zero. The DCT then transforms each block from the spatial domain (pixel values) to the frequency domain (coefficients representing patterns).

Think of it like recreating a painting with base "colors" (basis functions). JPEG uses 64 fixed 8x8 basis patterns, from low-frequency (smooth gradients) top-left to high-frequency (sharp details) bottom-right. Each block's pixels are expressed as a weighted sum of these bases.

DCT Flow Diagram

The 2D DCT formula (for pixel s(x,y) at position x,y in the block):

F(u,v) = \frac{1}{4} C(u) C(v) \sum_{x=0}^{7} \sum_{y=0}^{7} s(x,y) \cos\left(\frac{(2x+1)u\pi}{16}\right) \cos\left(\frac{(2y+1)v\pi}{16}\right)

Where C(0) = 1/√2 and C(k) = 1 otherwise. The DCT itself discards nothing (beyond floating-point rounding); it rearranges the data so the next step can compress it more easily.

For a 4032x3024 image, that's (4032/8) x (3024/8) = 504 x 378 = 190,512 blocks per channel (before downsampling).
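The formula translates directly into a slow, textbook implementation. Real codecs use fast factored DCTs, but this naive version shows the math (function name is illustrative):

```python
import numpy as np

# Direct 2D DCT-II of one 8x8 block, following the formula above.
def dct2d(block):
    c = np.ones(8)
    c[0] = 1 / np.sqrt(2)  # C(0) = 1/sqrt(2), C(k) = 1 otherwise
    F = np.zeros((8, 8))
    for u in range(8):
        for v in range(8):
            acc = 0.0
            for x in range(8):
                for y in range(8):
                    acc += (block[x, y]
                            * np.cos((2 * x + 1) * u * np.pi / 16)
                            * np.cos((2 * y + 1) * v * np.pi / 16))
            F[u, v] = 0.25 * c[u] * c[v] * acc
    return F
```

A flat block produces a single DC coefficient (top-left) and zeros everywhere else, which is exactly why smooth regions compress so well.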


Quantization

This is JPEG's lossy core: dividing DCT coefficients by a quantization table and rounding, discarding high-frequency details our eyes miss.

The standard luminance table (Annex K, ITU-T T.81):

Q_{lum} = \begin{bmatrix} 16 & 11 & 10 & 16 & 24 & 40 & 51 & 61 \\ 12 & 12 & 14 & 19 & 26 & 58 & 60 & 55 \\ 14 & 13 & 16 & 24 & 40 & 57 & 69 & 56 \\ 14 & 17 & 22 & 29 & 51 & 87 & 80 & 62 \\ 18 & 22 & 37 & 56 & 68 & 109 & 103 & 77 \\ 24 & 35 & 55 & 64 & 81 & 104 & 113 & 92 \\ 49 & 64 & 78 & 87 & 103 & 121 & 120 & 101 \\ 72 & 92 & 95 & 98 & 112 & 100 & 103 & 99 \end{bmatrix}

Chrominance table:

Q_{chrom} = \begin{bmatrix} 17 & 18 & 24 & 47 & 99 & 99 & 99 & 99 \\ 18 & 21 & 26 & 66 & 99 & 99 & 99 & 99 \\ 24 & 26 & 56 & 99 & 99 & 99 & 99 & 99 \\ 47 & 66 & 99 & 99 & 99 & 99 & 99 & 99 \\ 99 & 99 & 99 & 99 & 99 & 99 & 99 & 99 \\ 99 & 99 & 99 & 99 & 99 & 99 & 99 & 99 \\ 99 & 99 & 99 & 99 & 99 & 99 & 99 & 99 \\ 99 & 99 & 99 & 99 & 99 & 99 & 99 & 99 \end{bmatrix}

For each coefficient F(u,v) and table entry Q(u,v):

F_q(u,v) = \text{round}\!\left(\frac{F(u,v)}{Q(u,v)}\right)

Higher bottom-right values (high frequencies) produce more zeros. The quality slider scales the table: for quality q (1-100), the scale factor is S = (q < 50 ? 5000/q : 200 − 2q), and each entry becomes Q_new = round(Q_base × S / 100), clamped to at least 1. Lower quality means larger divisors, more zeros, smaller files, but more artifacts.
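A sketch of quantization with the quality scaling described above (the `scale_table` helper is illustrative; libjpeg-style encoders clamp scaled entries into a valid range, here [1, 255]):

```python
import numpy as np

# Scale a base quantization table for a given quality setting (1-100),
# using the IJG-style rule described above, then clamp entries to [1, 255].
def scale_table(base, quality):
    s = 5000 / quality if quality < 50 else 200 - 2 * quality
    scaled = np.floor((base * s + 50) / 100)
    return np.clip(scaled, 1, 255)

# Divide DCT coefficients by the table and round -- the lossy step.
def quantize(F, q_table):
    return np.round(F / q_table).astype(int)
```

At quality 50 the scale factor is 100, so the base table passes through unchanged; at quality 10 every divisor grows fivefold, zeroing most high-frequency coefficients.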


Entropy Encoding

To compact the quantized coefficients, JPEG uses a zigzag scan to group zeros, then run-length encoding (RLE) and Huffman coding.

The zigzag scan reads the 8x8 block diagonally, starting from the DC coefficient (top-left) and ending at the highest frequency (bottom-right). This clusters non-zero low-frequency values at the front and pushes long runs of zeros to the tail:

Z = \begin{bmatrix} 0 & 1 & 5 & 6 & 14 & 15 & 27 & 28 \\ 2 & 4 & 7 & 13 & 16 & 26 & 29 & 42 \\ 3 & 8 & 12 & 17 & 25 & 30 & 41 & 43 \\ 9 & 11 & 18 & 24 & 31 & 40 & 44 & 53 \\ 10 & 19 & 23 & 32 & 39 & 45 & 52 & 54 \\ 20 & 22 & 33 & 38 & 46 & 51 & 55 & 60 \\ 21 & 34 & 37 & 47 & 50 & 56 & 59 & 61 \\ 35 & 36 & 48 & 49 & 57 & 58 & 62 & 63 \end{bmatrix}

This turns the 8x8 matrix into a 1D sequence with trailing zeros. RLE encodes runs like "5 zeros, value 3" as (run, value). Huffman assigns shorter codes to common patterns (e.g., short runs, small values).

This variable-length coding squeezes the data further, lossless within this step.
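The zigzag order can be generated rather than hard-coded by sorting positions along anti-diagonals. Here is a sketch, together with a simplified run-length pass (real JPEG RLE packs run/size pairs into Huffman symbols, which this omits):

```python
# Generate the 8x8 zigzag scan order: sort positions by anti-diagonal
# (x + y); odd diagonals run top-right to bottom-left, even ones reverse.
def zigzag_order():
    return sorted(((x, y) for x in range(8) for y in range(8)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

# Flatten a block (list of 8 rows) into the 1D zigzag sequence.
def zigzag(block):
    return [block[x][y] for x, y in zigzag_order()]

# Simplified run-length encoding: (zero-run, value) pairs, with a trailing
# zero run collapsed into an end-of-block marker.
def rle(seq):
    out, run = [], 0
    for v in seq:
        if v == 0:
            run += 1
        else:
            out.append((run, v))
            run = 0
    if run:
        out.append("EOB")
    return out
```

After quantization most blocks end in a long zero tail, so the single "EOB" marker replaces dozens of coefficients.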


Decompression: The Reverse Path

Decompression mirrors encoding: decode the Huffman/RLE stream to recover the quantized coefficients, multiply by the quantization table (Fq · Q ≈ F; the rounding loss is permanent), apply the inverse DCT to get pixels, upsample the chrominance, and convert YCbCr back to RGB.

The inverse DCT:

s(x,y) = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} C(u)\, C(v)\, F(u,v) \cos\!\left[\frac{(2x+1)u\pi}{16}\right] \cos\!\left[\frac{(2y+1)v\pi}{16}\right]

Add 128 to undo level shift. High-frequency losses cause artifacts at low quality.
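A naive inverse DCT matching the formula (again the textbook form, not a fast implementation; function name is illustrative):

```python
import numpy as np

# Direct inverse 2D DCT of one 8x8 coefficient block; the caller adds 128
# afterward to undo the encoder's level shift.
def idct2d(F):
    c = np.ones(8)
    c[0] = 1 / np.sqrt(2)  # C(0) = 1/sqrt(2), C(k) = 1 otherwise
    s = np.zeros((8, 8))
    for x in range(8):
        for y in range(8):
            acc = 0.0
            for u in range(8):
                for v in range(8):
                    acc += (c[u] * c[v] * F[u, v]
                            * np.cos((2 * x + 1) * u * np.pi / 16)
                            * np.cos((2 * y + 1) * v * np.pi / 16))
            s[x, y] = 0.25 * acc
    return s
```

Feeding it a block with only a DC coefficient reproduces a flat block, the inverse of the forward-DCT behavior noted earlier.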


Where JPEG Falls Short

At low quality, blocky artifacts emerge around edges due to 8x8 boundaries and aggressive quantization. It struggles with vector graphics, text, or sharp lines. Reconstructing them from frequency bases introduces ringing or blurring.

For screenshots, diagrams, or line art, PNG excels with lossless compression and alpha support. JPEG's lossy nature amplifies noise in non-photographic images.


Connection to Video: H.264

H.264 (AVC) builds on JPEG principles for video, using chrominance downsampling, DCT-like transforms, and quantization. But it adds inter-frame compression: I-frames are like JPEGs (intra-coded), P-frames predict from prior frames, B-frames from both directions. This handles 24-60 frames/sec efficiently, with I-frames every 30 or at scene changes.


Summary

JPEG compresses images by converting RGB to YCbCr, downsampling chrominance, applying DCT to 8x8 blocks, quantizing to discard high frequencies, and entropy encoding the result. This exploits visual perception for 10x size reductions with minimal quality loss in photos, though it falters on sharp graphics. Understanding these steps equips you to optimize image handling in code, from quality tuning to custom implementations.

