zlib
'zlib' is a data compression and decompression module provided by the Node.js standard library. It supports compression algorithms such as gzip, deflate, and brotli, and is used for compressing HTTP responses and saving files in compressed form. Stream-based, callback-based, and synchronous APIs are available.
Basic Syntax
var zlib = require('zlib');

// Compress a buffer with gzip (callback style)
zlib.gzip(data, function(err, compressedBuffer) {
  // Processing after compression
});

// Decompress a gzip-compressed buffer (callback style)
zlib.gunzip(compressedBuffer, function(err, decompressedBuffer) {
  // Processing after decompression
});

// Compression using streams (piped together)
inputStream.pipe(zlib.createGzip()).pipe(outputStream);
Key Methods
| Method | Overview |
|---|---|
| zlib.gzip(buf, cb) | Compresses a buffer in gzip format. The compressed buffer is passed to the callback. |
| zlib.gunzip(buf, cb) | Decompresses a gzip-compressed buffer. |
| zlib.deflate(buf, cb) | Compresses a buffer in deflate format. |
| zlib.inflate(buf, cb) | Decompresses a deflate-compressed buffer. |
| zlib.brotliCompress(buf, cb) | Compresses a buffer in Brotli format (Node.js v10.16+). |
| zlib.brotliDecompress(buf, cb) | Decompresses a Brotli-compressed buffer. |
| zlib.createGzip() | Creates a Transform stream that performs gzip compression. |
| zlib.createGunzip() | Creates a Transform stream that performs gzip decompression. |
| zlib.createDeflate() | Creates a Transform stream that performs deflate compression. |
| zlib.createInflate() | Creates a Transform stream that performs deflate decompression. |
| zlib.gzipSync(buf) | Synchronously compresses a buffer with gzip. Returns the compressed buffer. |
| zlib.gunzipSync(buf) | Synchronously decompresses a gzip-compressed buffer. |
gzip / gunzip — Compressing and Decompressing Buffers
Compress a string or buffer with 'gzip()' and restore it with 'gunzip()'. The compressed size and compression ratio can be verified.
gzip_basic.js
var zlib = require('zlib');

// Prepare data to compress (quote from Shinji Ikari)
var original = 'I mustn\'t run away. I mustn\'t run away. I mustn\'t run away. I must not flee from battle with the Angels. I must ride Eva.';
var buf = Buffer.from(original, 'utf8');
console.log('Original size:', buf.length, 'bytes');

// Compress with gzip
zlib.gzip(buf, function(err, compressed) {
  if (err) {
    console.error('Compression error:', err);
    return;
  }
  console.log('Compressed size:', compressed.length, 'bytes');
  console.log('Compression ratio:', Math.round((1 - compressed.length / buf.length) * 100) + '%');

  // Decompress the compressed data
  zlib.gunzip(compressed, function(err, decompressed) {
    if (err) {
      console.error('Decompression error:', err);
      return;
    }
    console.log('Decompressed text:', decompressed.toString('utf8'));
  });
});
node gzip_basic.js
Original size: 168 bytes
Compressed size: 113 bytes
Compression ratio: 33%
Decompressed text: I mustn't run away. I mustn't run away. I mustn't run away. I must not flee from battle with the Angels. I must ride Eva.
createGzip / createGunzip — Stream-Based Compression
For large files, the stream API is more appropriate. By piping data through the Transform stream returned by 'createGzip()', a file can be compressed without loading it entirely into memory.
gzip_stream.js
var zlib = require('zlib');
var fs = require('fs');

// Specify the file paths for compression
var inputPath = 'evangelion_report.txt';
var outputPath = 'evangelion_report.txt.gz';

// Create the input file for this example
fs.writeFileSync(inputPath, [
  'Third New Tokyo City — Special Agency NERV Operations Report',
  'Pilot: Ikari Shinji (First Children candidate)',
  'Unit: Humanoid Decisive Weapon Evangelion Unit-01',
  'Target Angel: Sachiel (Third Angel)',
  'Operation result: Successful annihilation',
  'Officer in charge: Lt. Col. Katsuragi Misato',
].join('\n'), 'utf8');

// Pipe: read stream -> gzip compression -> write stream
var readStream = fs.createReadStream(inputPath);
var writeStream = fs.createWriteStream(outputPath);
var gzip = zlib.createGzip();

readStream.pipe(gzip).pipe(writeStream);

writeStream.on('finish', function() {
  var originalSize = fs.statSync(inputPath).size;
  var compressedSize = fs.statSync(outputPath).size;
  console.log('Compression complete:', outputPath);
  console.log('Original size:', originalSize, 'bytes');
  console.log('Compressed size:', compressedSize, 'bytes');
});
node gzip_stream.js
Compression complete: evangelion_report.txt.gz
Original size: 302 bytes
Compressed size: 273 bytes
deflate / inflate
DEFLATE is the compression algorithm (LZ77 plus Huffman coding) that underlies gzip as well. 'zlib.deflate()' produces the zlib format (RFC 1950), which wraps the same compressed data in a much smaller header and trailer than gzip's. It is sometimes used with HTTP's Content-Encoding: deflate. The synchronous APIs 'deflateSync()' and 'inflateSync()' are also available.
deflate_inflate.js
var zlib = require('zlib');

var message = 'Ayanami Rei: I don\'t know how to smile. I don\'t know what makes me smile.';
var buf = Buffer.from(message, 'utf8');

// Synchronously compress with deflate (fine for small data)
var compressed = zlib.deflateSync(buf);
console.log('Before deflate compression:', buf.length, 'bytes');
console.log('After deflate compression:', compressed.length, 'bytes');

// Synchronously decompress with inflate
var decompressed = zlib.inflateSync(compressed);
console.log('Decompressed text:', decompressed.toString('utf8'));

// Compare compressed sizes of gzip and deflate (same data)
var gzipped = zlib.gzipSync(buf);
console.log('\n--- Compression format comparison ---');
console.log('Original size:', buf.length, 'bytes');
console.log('gzip compressed:', gzipped.length, 'bytes');
console.log('deflate compressed:', compressed.length, 'bytes');
console.log('Note: deflate is slightly smaller because gzip adds a larger header and trailer');
node deflate_inflate.js
Before deflate compression: 80 bytes
After deflate compression: 76 bytes
Decompressed text: Ayanami Rei: I don't know how to smile. I don't know what makes me smile.

--- Compression format comparison ---
Original size: 80 bytes
gzip compressed: 88 bytes
deflate compressed: 76 bytes
Note: deflate is slightly smaller because gzip adds a larger header and trailer
Compressing HTTP Responses
Compressing responses in a web server reduces the amount of data transferred. The following example compresses a response with gzip when the client sends an Accept-Encoding: gzip header.
http_gzip.js
var http = require('http');
var zlib = require('zlib');

var server = http.createServer(function(req, res) {
  // Prepare a longer response text
  var body = [
    'Welcome to Special Agency NERV.',
    'Pilot roster:',
    '  Ikari Shinji — Evangelion Unit-01',
    '  Ayanami Rei — Evangelion Unit-00',
    '  Soryu Asuka Langley — Evangelion Unit-02',
    '  Nagisa Kaworu — Evangelion Unit-13',
    'Operations command: Lt. Col. Katsuragi Misato',
  ].join('\n');

  // Check if the client accepts gzip
  var acceptEncoding = req.headers['accept-encoding'] || '';
  if (acceptEncoding.indexOf('gzip') !== -1) {
    // Return gzip-compressed response
    res.writeHead(200, {
      'Content-Type': 'text/plain; charset=utf-8',
      'Content-Encoding': 'gzip',
      'Vary': 'Accept-Encoding', // tell caches the body depends on this request header
    });
    zlib.gzip(Buffer.from(body, 'utf8'), function(err, compressed) {
      if (err) { res.end(); return; }
      res.end(compressed);
    });
  } else {
    // Return uncompressed response
    res.writeHead(200, {
      'Content-Type': 'text/plain; charset=utf-8',
      'Vary': 'Accept-Encoding',
    });
    res.end(body);
  }
});

server.listen(3000, function() {
  console.log('Server started: http://localhost:3000/');
});
node http_gzip.js
Server started: http://localhost:3000/
Common Mistakes
Not handling the error event on zlib streams
When an error occurs in a zlib Transform stream, the process crashes if no handler is attached to the 'error' event. Use stream.pipeline(), which cleans up all streams on failure, or attach an 'error' handler to each stream yourself.
var zlib = require('zlib');
var fs = require('fs');
var stream = require('stream');

// OK: pipeline() automatically cleans up all streams on error
stream.pipeline(
  fs.createReadStream('evangelion_report.txt'),
  zlib.createGzip(),
  fs.createWriteStream('evangelion_report.txt.gz'),
  function(err) {
    if (err) {
      console.error('Ikari Shinji: Pipeline error —', err.message);
    } else {
      console.log('Katsuragi Misato: Compression complete');
    }
  }
);
When using pipe() manually, each stream must have its own error event handler.
Blocking the event loop by processing large files with the synchronous API
zlib.gzipSync() and zlib.gunzipSync() occupy the CPU, blocking the event loop during execution. For data that is several MB or larger, use the stream API or the asynchronous API (zlib.gzip()).
var zlib = require('zlib');

// NG: using the synchronous API on a large buffer blocks the event loop
var hugeData = Buffer.alloc(10 * 1024 * 1024, 'Ayanami Rei');
var compressed = zlib.gzipSync(hugeData); // All other processing stops during this

// OK: use the asynchronous API
zlib.gzip(hugeData, function(err, compressed) {
  if (err) {
    console.error('Compression error:', err.message);
    return;
  }
  console.log('Compression complete:', compressed.length, 'bytes');
});
Summary
The zlib module supports three compression algorithms: gzip, deflate, and brotli. Three API styles are available: callback (zlib.gzip() etc.), synchronous (zlib.gzipSync() etc.), and stream (zlib.createGzip() etc.); which to choose depends on the use case.
Loading a large file entirely into memory degrades performance. In such cases, using the stream API to process data as a flow — fs.createReadStream().pipe(zlib.createGzip()).pipe(fs.createWriteStream()) — is the standard approach.
To compress HTTP responses, check the Accept-Encoding header sent by the client, and if it contains gzip, return the gzip-compressed data with a Content-Encoding: gzip header. Web frameworks such as Express handle the same processing through the compression middleware.