badblocks

searches a device (e.g. an HDD partition) for bad blocks; the filesystem must be unmounted. In general, running e2fsck (or mke2fs) with the -c option is preferred.
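
For example, a non-destructive check via e2fsck (the partition name /dev/sda4 is only a placeholder; the filesystem must be unmounted):

e2fsck -c /dev/sda4

run a read-only badblocks scan of /dev/sda4 and add any bad blocks found to the filesystem's bad block list;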

badblocks /dev/sda4

check /dev/sda4 for bad blocks (read only);

badblocks -w -v -o /root/badblk.t /dev/sda2

check /dev/sda2 for bad blocks (read/write, destructive), save the list of the found bad blocks to /root/badblk.t;

badblocks [opts] device [last_blk] [first_blk]
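
To illustrate the optional block-range arguments (device name and block numbers are arbitrary):

badblocks -v /dev/sda4 20000 10000

check only blocks 10000 through 20000 of /dev/sda4; note the argument order: the last block comes before the first block;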

Options
-v verbose;
-b n the size of the block in bytes (the default is 1024);
-c n num of blocks that are tested at a time (the default is 64);
-e n max num of bad blocks before aborting the test; the default is 0 (continue until the end of the test range);

-i file

read a list of already known bad blocks from file; these blocks will be skipped during the test;

-n run non-destructive read/write test (time-consuming);

-o file

write the list of found bad blocks to file;

-p n repeat scanning until no new bad blocks are found in n consecutive passes; the default is 0 (exit after the first pass);
-s show the progress;
-w destructive read/write test; faster than -n, but all user data on the tested partition will be destroyed;

Note!

If the output of badblocks is going to be fed to e2fsck or mke2fs, the block size must be specified carefully: it has to match the block size of the filesystem!
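
A sketch under the assumption of a 4096-byte filesystem block size on /dev/sda2 (verify the real value with tune2fs -l /dev/sda2):

badblocks -b 4096 -o /root/badblk.t /dev/sda2
e2fsck -l /root/badblk.t /dev/sda2

scan with the block size matching the filesystem, then feed the resulting list to e2fsck, which adds those blocks to the bad block inode;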

base64

encodes/decodes a file (or stdin) using the base64 encoding scheme. Base64 encoding is used to represent binary data that has to be stored or transferred over media designed to deal with textual data (e-mail, XML, etc).

base64 pic01.jpg > pic01jpg.enc

convert a JPG image file to a text file;

base64 -d pic01jpg.enc > pic01.jpg

convert a base64-encoded file to its original state (JPG);
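
Since base64 also reads stdin, it works directly in a pipe (the string below is arbitrary):

printf 'user:secret' | base64

encode a string from stdin; prints dXNlcjpzZWNyZXQ= followed by a newline;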

Options

--help    --version

-d, --decode

decode data;

-i, --ignore-garbage

when decoding, ignore non-alphabet chars;

-w n, --wrap=n

wrap encoded lines after n chars (76 by default; 0 disables line wrapping);
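
As a usage note, -w 0 is handy when the encoded data must stay on one line (e.g. for embedding in a shell variable); the filename is reused from the examples above:

base64 -w 0 pic01.jpg > pic01jpg.enc

encode pic01.jpg without line wrapping, producing a single long line;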

bzip2

compresses files using the Burrows-Wheeler block sorting text compression algorithm and Huffman coding; bunzip2 decompresses bzip2 files; bzcat decompresses bzip2 files to stdout. bzip2recover recovers data from damaged bzip2 files.

tar cvf - . | bzip2 -c > ../arc01.tar.bz2

archive and compress the whole contents of the current dir;

tar cvf - ./arc01 | bzip2 -c > arc01.tar.bz2

archive and compress the whole contents of the arc01 subdir;

By default bzip2 replaces each processed file with its compressed version, retaining the same mod date, perms and ownership (when possible). The new filename is derived from the original by appending .bz2 (orig_name.bz2). If no filenames are specified, bzip2 compresses from stdin to stdout, but it refuses to write compressed output to a terminal.

bzip2 [options] [filenames]

bzip2 *

compress all files in the current dir (src files will be deleted);

bzip2 -k *

compress all files in the current dir, do not remove src files;

bzip2 -k -9 *

compress all files in the current dir, use the max block size (900K) for better compression, do not remove the source files;

bunzip2 [options] [filenames]

bunzip2 *.bz2

decompress all files with .bz2 extension in the current dir;

bzip2 and bunzip2 by default do not overwrite existing files. bzcat (or bzip2 -dc) decompresses all specified files to stdout.

bzcat [-s] [filenames]
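
A common pattern (archive name reused from the examples above): pipe bzcat into tar to unpack a compressed archive without an intermediate .tar file:

bzcat arc01.tar.bz2 | tar xvf -

decompress the archive to stdout and let tar extract it;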

bzip2 uses 32-bit CRCs to detect corruption; a CRC cannot help to recover a damaged file. In such a case try bzip2recover, which may help you restore the undamaged blocks of a corrupted file.

bzip2recover filenames
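
A sketch of a typical recovery session (the damaged filename is arbitrary; bzip2recover writes each extracted block into its own small file, named like rec00001broken.bz2):

bzip2recover broken.bz2
bzip2 -t rec*broken.bz2

split the damaged file into per-block pieces, then test which pieces are intact before decompressing them;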

Some options

-h    --help    -V    --version    -v    --verbose

-c, --stdout

compress or decompress to stdout;

-d, --decompress

force bzip2 to decompress regardless of the invocation name (bzip2, bunzip2 and bzcat are really the same program; the invocation name determines what it does by default);

-f, --force

overwrite existing output files (normally bzip2 refuses to do so) and break hard links to files, which otherwise would not be done;

-k, --keep

don't delete (keep) input (original) files during compression or decompression;

-L, --license

display the bzip2 license terms and quit;

-q, --quiet

suppress non-essential warnings; I/O errors and other critical events will not be suppressed;

-s, --small

reduce memory usage; senseless unless you are really short of memory; speed/ratio is below average;

-t, --test

check file integrity without writing the decompressed output; decompression is actually performed, but the result is thrown away;

-z, --compress

force compression regardless of the invocation name; complementary to -d;

-n, --fast, --best

set the block size to 100k (-1) .. 900k (-9, the default) when compressing; --fast and --best exist for compatibility with GNU gzip and have little effect; larger block sizes give better compression but require more memory, while speed stays nearly the same; compression takes approximately twice as much memory as decompression;

-- treat all subsequent arguments as file names;
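
A minimal illustration of -- (the leading-dash filename is contrived):

bzip2 -- -odd-name.txt

compress a file whose name starts with a dash and would otherwise be parsed as an option;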