
Re: nfs exports -- clients, AUTH_*, NFSv4, iptables, SFS,



On Mon, 2004-11-22 at 02:03, Casey Boone wrote:
> teaching myself how to use nfs exports (previously i just used samba)

My general rule of thumb is that you always serve up the native protocol
of the client.  E.g., SMB to Windows clients, NFS to UNIX clients.

One thing to remember about UNIX v. Windows clients is:
- UNIX:  single-byte ISO charsets (or 1-6 byte UTF-8) and case sensitivity
- Windows:  2-byte Unicode (UTF-16) and case insensitivity (NTFS preserves case)

Also remember that "smbfs" is _only_ offered on Linux clients, not on
most other UNIX flavors.  I have regularly run into issues using "smbfs"
beyond basic file access -- i.e., running UNIX programs and
accessing data over the mounts.  So you should always deploy _both_ SMB
and NFS on your server for their respective clients (use Services for
UNIX on Windows servers; it's based on the old Sun PC-NFS and is the
most compatible).

The Andrew File System (AFS) is also a unifying consideration when you
already deploy Kerberos.  OpenAFS offers Freedomware clients and servers
for most UNIX flavors (including MacOS X), and clients for Windows.  AFS
works a little differently than Samba or NFS in that it does not share
out the filesystems of the underlying server directly (it has its own,
virtual filesystems not directly accessible locally on the server). 
This has many advantages and disadvantages (especially for locking,
replication, etc...).

> the userlists are the same for each box (as both have my normal user
> account as the only account not created during the install, so both
> have uid of 500)

This is the traditional Remote Procedure Call (RPC) _authentication_
method, known as "AUTH_SYS."  After trusting the client to
mount from the server (system authorization), the client merely passes
the UserID to the server for _user_ authentication and eventual
_authorization_ to the file.  I.e., once we "trust" the client, we
basically trust all the users on it for NFS v2/v3.  The exception is
UserID 0 (Root), which has limited (or no) access, unless
"no_root_squash" or other options are used on the export.
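As a sketch of what that looks like in practice (the export path and
hostnames below are hypothetical examples, not from the original post):

```shell
# /etc/exports -- hypothetical sketch
#
# Default behavior: client root is "squashed" to the anonymous user
/srv/export   client1.example.com(rw,root_squash)
#
# Only with no_root_squash does the client's root get real root
# access on the export -- use only for explicitly trusted clients
/srv/export   trusted.example.com(rw,no_root_squash)
```

Run "exportfs -ra" after editing to make the server re-read the file.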

> what are the security implications of using nfs?

All network filesystems typically introduce various access issues,
including remote procedure call (RPC) exposure, assumed access
privileges and other concerns.

A major differentiator between UNIX and Windows systems is the
"per-system" (because we have multiple users) v. "per-user" (because it
was designed for a single user) mentality.  In UNIX, the "mount" is a
privileged operation, whereas in Windows, a "map" typically is not.  The
latter can be far worse in some cases, although the former requires
explicit "trust" of the entire client.
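To illustrate the per-system v. per-user point (server name, export and
mount point below are examples):

```shell
# UNIX client: attaching the share is a privileged (root-only) operation
mount -t nfs server.example.com:/srv/export /mnt/export

# Windows client: any logged-in user can typically "map" a drive, e.g.:
#   net use X: \\server\share
```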

People complain this is "poor security."  The reality is that CIFS/SMB
does the same, damn thing!  Once you "trust" the client, you open a
CIFS/SMB service to all sorts of "trust" from that client.  In fact, the
_default_ of "NULL Sessions" is a _major_issue_ with legacy CIFS/SMB.  A
lot of pre-ActiveDirectory/Kerberos approaches in CIFS/SMB were "false
security" and did little other than allow marketing to the contrary.

I talked about this a bit over on LEAP awhile back:  
http://lists.leap-cf.org/pipermail/leaplist/2004-October/040554.html  

Today, there are new RPC AUTH_* approaches, and even some new
authorization capabilities.

The new one in NFS v4, as leveraged in the Linux 2.6 kernel, is known as
RPCSEC_GSS.  GSSAPI is a generic security services API (hence the
acronym), which is a unifying API for various approaches.  Most common
GSSAPI approaches tie into Kerberos as a password store and ticketing
server, along with SASL and other features (SASL is often
interchangeably used with GSSAPI, although the two are different --
GSSAPI can offer SASL, but not vice-versa).
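As a sketch of what RPCSEC_GSS looks like in use (assuming a working
Kerberos realm, keytabs in place and the GSS daemons running; names and
paths are examples):

```shell
# Server side, in /etc/exports: require Kerberos on the export
#   /srv/export  *.example.com(rw,sec=krb5)

# Client side: mount NFSv4 with RPCSEC_GSS/Kerberos.
# sec=krb5  = authentication only
# sec=krb5i = adds per-message integrity checking
# sec=krb5p = adds privacy (encryption) as well
mount -t nfs4 -o sec=krb5 server.example.com:/ /mnt/export
```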

Red Hat CL4 (FC2+) and other distributions take it one step further.  In
addition to RPCSEC_GSS, they integrate IDMAP and the generic GSS daemons
-- e.g., rpcgssd, rpcidmapd and rpcsvcgssd.  Red Hat was one of the first
distributors to Kerberosize most of its services, and has now done so
with GSS/IDMAP -- from FTP to Samba -- in CL4.  Once its new directory
service (AOL-Netscape Directory Server) is GPL'd in CL5 (FC5+) around
2005Apr30, the circle will be complete.  It should be well integrated
out-of-the-box at that time.
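On FC2+, the moving parts look roughly like this (the domain name is an
example; the service names are the Red Hat init scripts mentioned above):

```shell
# /etc/idmapd.conf (sketch) -- maps NFSv4 "user@domain" name strings
# to local UIDs/GIDs on both client and server:
#   [General]
#   Domain = example.com

# Enable the RPC GSS/IDMAP daemons via the Red Hat init scripts
chkconfig rpcgssd on       # client-side GSS context handling
chkconfig rpcidmapd on     # NFSv4 ID <-> name mapping
chkconfig rpcsvcgssd on    # server side only
```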

If you want heterogeneous bliss today with not only single sign-on (SSO)
authentication, but single authorization too, set up LDAP+Kerberos
services, unify around IDMAP, and use only GSSAPI (or traditionally
Kerberized) services and clients on your subnet(s).

> is there anything i should do (asside from iptables) in order to secure
> these shares?

On the NFS server, other than filtering by IP address (which you
_should_ do), NetFilter (iptables) and TCP Wrappers (/etc/hosts.*) don't
offer much.
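The IP filtering itself happens in the exports (addresses below are
examples for a typical private LAN):

```shell
# /etc/exports -- restrict each export to the local subnet only:
#   /srv/export  192.168.1.0/255.255.255.0(rw)

# TCP Wrappers can add a little for the portmapper itself:
#   /etc/hosts.allow:   portmap: 192.168.1.
#   /etc/hosts.deny:    portmap: ALL
```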

On the NFS client, NetFilter (iptables) can "deny-all incoming" traffic,
while still allowing RPC/portmap services like NFS to work.  The _only_
issue I've seen is with the NIS client (ypbind); the Red Hat iptables
script doesn't seem to accommodate its usage of RPC well.
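A minimal sketch of such a deny-all client policy (run as root; this is
an illustration, not a complete firewall):

```shell
# Default-deny all incoming traffic on the client
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT

# Allow replies to connections the client itself initiated -- this
# covers NFS/RPC traffic back from the server, even on the dynamic
# ports the portmapper assigns
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```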

For data-level encryption and encapsulation, as well as the "protect"
RPC services from "direct" access, I recommend you use the
Self-certifying File System (SFS):  
  http://www.fs.net/sfswww/  

In a nutshell, SFS encapsulates and encrypts _all_ RPC services over a
single tunnel.  Tunneling RPC services is difficult because the whole
purpose of the "portmapper" is to _dynamically_ assign ports.  It is
_far_better_ and _far_less_troublesome_ than tunneling NFS over SSH
(which cannot accommodate RPC well).  SFS does _not_ replace the
inherent RPC-NFS capabilities of your OS (BSD/MacOSX, Linux, etc...),
but merely tunnels them over an encrypted channel.

SFS requires you issue certificates (a very good practice, one most
Windows networks do not use), so you will want to have a good X.509
hierarchy already deployed in your organization (or do so now if you
have not).

> i want the shares rw between any boxes on that local network that have
> the right local users (i will keep the local users for each box in
> sync manually as i only have a handful of machines on my local lan). 

Well, NFS will _always_ do RPC calls to "verify access" to a file.  The
question is how much do you want the server to "trust" that the UserID
sent by the client is "actual"?

With NFS v2/v3, this is pretty difficult with AUTH_SYS.  There were
Kerberosized AUTH_* versions, but rarely were they well-standardized. 
The new approach is NFS v4 with RPCSEC_GSS and, optionally, IDMAP.  I
have used this to a limited extent, but am really waiting on Red Hat CL5
(FC5+) to integrate the directory component into the "total solution."

> i also want the root user to have access to the shares just like they
> are local drive mounts (ie being able to override permissions and
> whatnot).

First off, note this is _not_ recommended unless you _explicitly_ trust
your clients.

Secondly, this is _not_ allowed in NFS v4 (AFAICT).

> i really dont know the full implications of nfs like i do with samba,

CIFS/SMB is full of "false security" approaches that are "marketed
against" NFS as "more secure," but they are often not at all.  In fact,
until NT5.1/2003 Server, a lot of the "advanced" capabilities like
SMB+IPSec, SMB Signing and turning off NULL Sessions really _broke_
badly!  And even for NT5.1/2003, they only work well with NT5.1/XP
clients.

Again, see my LEAP post here:  
http://lists.leap-cf.org/pipermail/leaplist/2004-October/040554.html  

> other than it used to carry the stigma of really standing
> for "no fscking security".

No worse than "Common Ignorance of Fscking Security" (CIFS).

-- 
Bryan J. Smith                                    b.j.smith@ieee.org 
-------------------------------------------------------------------- 
Subtotal Cost of Ownership (SCO) for Windows being less than Linux
Total Cost of Ownership (TCO) assumes experts for the former, costly
retraining for the latter, omitted "software assurance" costs in 
compatible desktop OS/apps for the former, no free/legacy reuse for
latter, and no basic security, patch or downtime comparison at all.


