One of the main factors in website optimization is reducing page size by compressing images. High-quality images with large file sizes can slow a website's pages down, especially since many web users browse on mobile devices over 3G connections.
Website speed is one of the main factors in Google's search ranking.
Google research shows that a page that takes 6 seconds or more to load can increase the probability of a visitor bouncing by more than 100%.
Source: Google/SOASTA Research, 2017.
You can test your website with Google's Test My Site tool to get a report on its speed and estimated bounce rate: https://testmysite.withgoogle.com
One such optimization technique is compressing images to reduce the overall page size.
According to Google:
Compressing images and text can be a game changer—30% of pages could save more than 250KB that way. Our analysis shows that the automotive, technology, and business and industrial market sectors have the most room for improvement.
Source: ThinkWithGoogle
jpegoptim is a JPEG optimization and compression tool, available for Linux and FreeBSD. Its lossless mode optimizes the Huffman tables inside the JPEG file; Huffman encoding works by assigning the shortest bit encodings to the most frequent values in an input stream.
jpegoptim can be installed on Ubuntu with apt, on yum-based distributions from the EPEL repository, and on FreeBSD from the graphics/jpegoptim port.
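For example (a quick sketch; exact package and port commands may vary by release):

# Ubuntu / Debian
sudo apt install jpegoptim

# yum-based distributions (enable EPEL first)
sudo yum install epel-release
sudo yum install jpegoptim

# FreeBSD, from the ports tree
cd /usr/ports/graphics/jpegoptim && make install clean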
I created this script to find all JPG images under the current directory (which can be a website's root directory), then loop over the found files and compress each one with jpegoptim. Finally, jpegoptim's output for each optimized image is appended to a "jpegoptimLog" log file instead of being printed to the terminal.
#!/bin/bash
# Find all .jpg files (case-insensitive) under the current directory
# and pipe them to a loop; reading line by line handles paths that
# contain spaces, and quoting '*.jpg' stops the shell expanding it.
find . -type f -iname '*.jpg' | while read -r f
do
    echo "Optimizing $f"
    jpegoptim "$f" >> jpegoptimLog    # append the report to the log
done
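Assuming the script is saved as optimize-jpgs.sh in the website's root directory (the filename is just an example), make it executable and run it:

chmod +x optimize-jpgs.sh
./optimize-jpgs.sh
tail jpegoptimLog    # review the per-file optimization reports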
From the jpegoptimLog log file:
The jpegoptim command in the bash script can be customized using jpegoptim's command-line options; an example follows the option listing below.
jpegoptim options from help:
jpegoptim v1.3.0 Copyright (c) Timo Kokkonen, 1996-2013.
Usage: jpegoptim [options] <filenames>
-d<path>, --dest=<path>
specify alternative destination directory for
optimized files (default is to overwrite originals)
-f, --force force optimization
-h, --help display this help and exit
-m<quality>, --max=<quality>
set maximum image quality factor (disables lossless
optimization mode, which is by default on)
Valid quality values: 0 - 100
-n, --noaction don't really optimize files, just print results
-S<size>, --size=<size>
Try to optimize file to given size (disables lossless
optimization mode). Target size is specified either in
kilobytes (1 - n) or as percentage (1% - 99%)
-T<threshold>, --threshold=<threshold>
keep old file if the gain is below a threshold (%)
-o, --overwrite overwrite target file even if it exists
-p, --preserve preserve file timestamps
-q, --quiet quiet mode
-t, --totals print totals after processing all files
-v, --verbose enable verbose mode (positively chatty)
-V, --version print program version
--strip-all strip all (Comment & Exif) markers from output file
--strip-com strip Comment markers from output file
--strip-exif strip Exif markers from output file
--strip-iptc strip IPTC markers from output file
--strip-icc strip ICC profile markers from output file
--all-normal force all output files to be non-progressive
--all-progressive force all output files to be progressive
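For example, here is a lossy variant of the loop's jpegoptim call; the flags come from the listing above, but the values are only illustrative:

# Cap quality at 80, strip Comment/Exif/IPTC/ICC markers, and keep
# the original file when the size gain would be below 10%.
jpegoptim --max=80 --strip-all --threshold=10 "$f" >> jpegoptimLog

# Preview the potential savings without modifying any files:
jpegoptim --noaction --totals *.jpg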