Backup script question... -- public key authentication, afio and star, back2cd
Ken Keefe wrote:
> I wrote a very basic bash script that tars a set of critical files and
> then scp's the tar file to a server of mine. For right now it asks me
> for a password for the remote user and I enter it by hand. How can I
> accomplish this same sort of thing but in an automated fashion?
Well, there are several ways. Public key (DSA) SSHv2 authentication is
probably the most straightforward.
- If you want to "just get running now," do the following:
(client = SSH client system, server = SSH server, assumes same username
on both):
Generate the key:
client$ ssh-keygen -t dsa
Enter NO passphrase -- i.e. just hit enter for the passphrase
Copy the public key to the server as an authorized key:
client$ scp ~/.ssh/id_dsa.pub server:.ssh/authorized_keys
Now when you do a:
client$ ssh server
Or:
client$ scp file server:
You should see _no_ password prompt.
If not, check the logs on both systems; it is often a permissions issue
(the SSH server refuses to use the key for security reasons if ~/.ssh or
authorized_keys is writable by anyone else). Also make sure public key
authentication is enabled. I'd make it preferred (or the only way).
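Here's a quick sketch of the usual fixes, assuming OpenSSH with its default
paths (adjust to your setup):
server$ chmod 700 ~/.ssh
server$ chmod 600 ~/.ssh/authorized_keys
And in /etc/ssh/sshd_config on the server (reload sshd afterwards):
PubkeyAuthentication yes
PasswordAuthentication no        # optional: makes keys the only way in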
- Detailed information:
DSA is the asymmetric key type for SSH v2.
When you use ssh-keygen, you are going to generate a pair, one private,
one public, in ~/.ssh (id_dsa and id_dsa.pub).
If you enter a passphrase, the private key file is encrypted with it,
and can only be used after the passphrase is supplied.
Without a passphrase, the private key can be used directly.
That's what you want for non-interactive login.
The public key file (id_dsa.pub) should be copied from ~/.ssh to
the server's ~/.ssh directory as "authorized_keys".
I'd make sure there is not an existing one, because you don't want to
blow away any others.
If it already exists, just append the key.
This can be as simple as opening ~/.ssh/id_dsa.pub in a terminal window
on the client and ~/.ssh/authorized_keys in a terminal window on the
server, and cutting and pasting.
The key is ASCII encoded, so you can do this.
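Or, a one-liner to append it in one shot (assuming OpenSSH's default file
names; adjust if yours differ):
client$ cat ~/.ssh/id_dsa.pub | ssh server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'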
From now on, when client tries to ssh into server, the server will
encrypt a challenge (random, one-time password) with the client's public
key (which you've uploaded) which only the client's private key can
decode. When it does, it will send it back using the server's public
key (which you saved when you connected the first time to the SSH
server), which only the server's private key can decode.
That's mutual authentication with a random, one-time password. The
challenge itself is called a "one-time password." Even if it is
captured and eventually cracked, it is useless after this connection is
established. It's very secure, as long as the private keys for each
party are _never_ disclosed (not even to each other). The only
"non-repudiation" issue is uploading the public key file in the first
place, because that's where you say you "trust" the public key enough
to let it log in to the server.
SIDE NOTE: Configuration Management
Depending on the criticality and format of the files, don't forget about
using revision control instead. Setting up CVS (using SSH) or
Subversion is not a bad idea for network-wide configuration
management. It's not just for developers.
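For example, CVS runs happily over ssh -- the repository path and module
name here are just placeholders:
client$ export CVS_RSH=ssh
client$ cvs -d :ext:server:/var/cvsroot checkout site-configs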
> I'd like to be able to add a cron job so it does this everyday at such
> and such time even when I am not around to feed it the password... for
> obvious security reasons, I'd rather not store my password in the
> script or some other file...
Correct. Public key authentication is the best way, and once you've done
it a few times, it's simple. The only issue is the non-repudiation of
the key when you first upload it to the server. But if you're the admin
of both servers, then it's not so much of an issue.
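To tie it back to your script, here's a minimal sketch of the cron side,
assuming the key is already in place; the host name, paths, and schedule
are just placeholders. The script (saved as, say,
/usr/local/bin/nightly-backup.sh):
#!/bin/sh
# tar up the critical files and push them to the backup box over scp
STAMP=$(date +%Y%m%d)
tar -czf "/tmp/critical-$STAMP.tar.gz" /etc /home/ken/docs
scp "/tmp/critical-$STAMP.tar.gz" server:backups/
rm -f "/tmp/critical-$STAMP.tar.gz"
And the crontab entry (crontab -e) to run it every day at 02:30, with no
password prompt:
30 2 * * *  /usr/local/bin/nightly-backup.sh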
> What other suggestions do people have for backing stuff up?
I use a combination of things.
For full backups to local tape [ or remote storage/tape ]:
afio -P lzop [ | ssh [buffer] ]
- afio HOWTO with lots of examples
Back in 2002, I wrote an extensive "HOWTO" on using afio in a variety of ways,
with lots of examples. It covers why I use afio, the lzop compressor, and,
when I'm piping over ssh to a system that writes directly to tape, "buffer"
to buffer the I/O:
http://www.matrixlist.com/pipermail/leaplist/2002-December/026072.html
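Roughly, the pipelines look like this (device names and paths are examples;
double-check the flags against the man pages):
# local tape, per-file compression (-Z compress, -P picks the program):
find /home -xdev -print | afio -o -v -Z -P lzop /dev/st0
# or stream it over ssh to a host that buffers the writes to its tape drive:
find /home -xdev -print | afio -o -v -Z -P lzop - | ssh backuphost 'buffer -o /dev/st0'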
- POSIX/SUS ustar streaming format: afio, cpio, pax, star and tar
Just FYI, the underlying format of cpio (modern SysV) and tar is "ustar."
It is a streaming format which makes it very recoverable, but very slow to
access (no indexing). cpio uses 5KB block sizes by default, tar uses 10KB
block sizes by default. Compression is _not_ inherent to _either_ utility,
and any implementation that offers compression only does it as part of the
final stream output. I.e., the _entire_ archive itself is compressed
(even in GNU tar or star).
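For example (GNU tar and cpio; paths are placeholders), the compressor wraps
the whole stream, not the files inside it:
tar -c -b 20 -f - /etc | gzip > etc.tar.gz    # -b 20 = 20 x 512 bytes = 10KB records
find /etc | cpio -o -B > etc.cpio             # -B = 5120-byte (5KB) records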
The new, unified IEEE POSIX and Single UNIX Specification (SUS) standard
utility is "pax" (*1* more on that below).
The best tar implementation on Linux right now is "star", because it
implements the full POSIX 2001 ustar specification. GNU tar is rather
non-standard, because it has its own workarounds for limitations in POSIX
1988 tar. Those limitations are gone in the newer POSIX ustar revisions,
but GNU tar has not been updated accordingly. (*2* more on that below)
Otherwise, afio is ustar with per-file compression. Each file is compressed
_before_ it goes into the archive. And in the worst case, you can still
unarchive it with cpio/pax/tar on any system, and then decompress the
individual files as necessary.
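A sketch of that worst case (the suffix on the compressed members depends on
how afio stored them -- adjust the find pattern to match):
# pull the archive apart with plain cpio on any system...
cpio -i -d -v < backup.afio
# ...then decompress the individual members as needed
find . -name '*.lzo' -print0 | xargs -0 lzop -d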
ADDITIONAL NOTES:
*1* pax replaces both cpio and tar. Like cpio and tar before it, pax does
not offer native compression built-in. It uses ustar, and the POSIX 2001
standard supports storing extended attributes (EA) like access control
lists (ACLs) used in Ext3, XFS (also supported by Samba). pax also comes
standard with NT 5.x (2000, XP, 2003), although not everything is
implemented in the Win32 version.
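A quick pax sketch (paths are examples; -x picks the archive format):
# write a ustar-format archive of /etc (no compression -- pax doesn't do that)...
pax -w -x ustar -f /tmp/etc.pax /etc
# ...list it, or read it back out
pax -f /tmp/etc.pax
pax -r -f /tmp/etc.pax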
*2* star supports the full POSIX 2001 spec, including ACLs on Linux (Ext3, XFS),
Solaris (UFS) and select other platforms. It can do most of the GNU tar options
and format options too, at least for reading GNU tar archives (but not all).
Again, GNU tar is rather non-standard, because it created its own
workarounds for path and file size limitations in POSIX 1988 tar. I
recommend people use star instead of cpio, pax or tar when they use ACLs
in Ext3 or XFS, and adopting star by default is a good idea in general.
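A star sketch along those lines -- this is from memory, so double-check the
exact keywords against star's man page; the paths are examples:
# back up with ACLs preserved, using star's extended ustar format
star -c -acl artype=exustar f=/tmp/etc.star /etc
# restore, re-applying the ACLs
star -x -acl f=/tmp/etc.star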
> I made the silly mistake of typing rm -rf name * instead of rm -rf name*
> and I need to get serious about backing up my data. Luckily I had, just
> by chance, created a tar file of the stuff I lost.
- Directly browsable/restorable CD (and DVD) backups
I wrote "back2cd" for 3 reasons:
1. It allows users to easily make point backups of their own data
2. It allows me to easily backup/image project directories
3. It allows me to easily backup specific portions of a server tree
Unlike doing an archive and then making a .iso image of that archive,
"back2cd" makes a directly browsable CD (or DVD) image. The files go
directly into the .iso file, in their original tree, but are compressed
individually as they are put in. It dumps out a .iso file that anyone
with a CD-R drive can record (burn), or drop into a directory that an
automated process watches (which is what I did).
It does per-file compression, but _no_ archiving. There is no reason to
archive onto a CD (or DVD), because ISO9660 "Yellow Book" (.iso files) are
archives in themselves! When you "record" (burn), that's the "unarchiving"
step. The .iso file is broken out and written onto the CD (or DVD). I
cringe every time someone tars everything up and then images that single
tar file into an .iso. Kinda defeats the purpose of a CD (random access), eh?
With back2cd, when you need to recover, you stick in the CD (or DVD) and bam!
You've got any directory tree you backed up in a _directly_accessible_
from, just like it was on the hard drive. The only difference is that the
files are compressed. You just copy (via cp -R or, better yet, find|cpio,
which preserves many things better) and decompress, recursively if needbe,
using _standard_ UNIX commands.
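For instance, a recovery sketch, assuming the disc is mounted at /mnt/cdrom
and the files were stored with a .lzo suffix (adjust the mount point and
suffix to your setup):
# copy the backed-up tree off the disc, preserving what cpio can
cd /mnt/cdrom && find home/ken -print | cpio -p -d -m -v /restore
# then decompress the individual files in place, recursively
find /restore -name '*.lzo' -print0 | xargs -0 lzop -d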
I get about 1.5GB of data per CD-R, or 12GB of data per DVD-R, depending
on the data being backed up, using the LZO compressor.
For more on back2cd, see Sys Admin, April 2002:
http://www.sysadminmag.com/articles/2002/0204/
The article is on-line here, with the corresponding listing and sidebar:
http://www.samag.com/documents/sam0204c/
http://www.samag.com/documents/sam0204c/sam0204c_l1.htm
http://www.samag.com/documents/sam0204c/sam0204c_s1.htm
For more on how LZO (lzop) differs from LZ77 (gzip) and BWT (bzip2), see:
http://www.matrixlist.com/pipermail/leaplist/2001-December/016244.html
-- Bryan J. Smith
General Annoyance
P.S. If I ever get time, I'm going to come up with a new universal backup,
archiver, copy (nubac) program. It will continue to use ustar as its
streaming format, but offer built-in, per-file compression, and offer
two indexes -- one for each volume (e.g., tape), so each tape can be
independent, and one for each volume set (e.g., last tape), so the
entire directory can be read as well. It will balance the best of all
worlds. The algorithm in the prototype I started is a bit complex, because
I have to predict how many and what files I can get in the current volume
(of a multiple volume set), estimating compression, while leaving room for
the per-volume index at the end. The problem with afio is performance: it
calls the compressor externally (when libz, libbz2 and liblzo should be
used directly), but it does the job for now.
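For the curious, the volume-filling estimate is roughly this kind of loop
(a bash sketch only; the volume size, compression ratio, and index reserve
are made-up numbers):
#!/bin/bash
# predict which files fit on the current volume, assuming a rough
# compression ratio and holding back room for the per-volume index
VOL_SIZE=$((700 * 1024 * 1024))   # e.g. one CD-R volume, in bytes
RATIO=60                          # assume ~60% of original size after compression
RESERVE=$((10 * 1024 * 1024))     # space held back for the index at the end
budget=$((VOL_SIZE - RESERVE))
used=0
find /data -type f -printf '%s %p\n' | while read -r size path; do
    est=$((size * RATIO / 100))                 # estimated compressed size
    [ $((used + est)) -gt "$budget" ] && break  # volume predicted full
    used=$((used + est))
    echo "$path"                                # this file goes on the current volume
done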
--
Linux Enthusiasts call me anti-Linux.
Windows Enthusisats call me anti-Microsoft.
They both must be correct because I have over a
decade of experience with both in mission critical
environments, resulting in a bigotry dedicated to
mitigating risk and focusing on technologies ...
not products or vendors
--------------------------------------------------
Bryan J. Smith, E.I. b.j.smith@ieee.org