Partition imaging mystery
Hi folks,
I have a real head-scratcher for you. I wouldn't believe it myself if I
hadn't witnessed it with my own eyes. As some of you know, I plan to get
out of the federal rent-a-brain business and start a third career as a
computer forensics consultant. I've run into a bit of a technical
problem along the way...
For practice -- and as a safety precaution before upgrading to FC3 --
I'm booting my laptop with the Helix 1.5 forensics CD and exporting
images of its partitions to a very large drive array on my big server
system. I'm using dd and nc just as in the SANS Track 8 course:
listener (large array):
# nc -l -p 30000 > hda5.img
source (laptop):
# dd if=/dev/hda5 bs=2048 | nc 192.168.1.2 30000 -w 3
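One precaution worth bolting onto this pipeline (my own sketch, not part
of the SANS procedure): hash the stream on the source side as dd emits
it, and hash the landed file on the listener, so a truncated image is
caught immediately. Demonstrated here on a scratch file standing in for
/dev/hda5, since the idea is the same:

```shell
# Verify-by-hash sketch, using a scratch file instead of /dev/hda5.
# Create a small stand-in image:
dd if=/dev/zero of=scratch.img bs=2048 count=16 2>/dev/null
# Hash the stream exactly as dd would emit it (the source side):
dd if=scratch.img bs=2048 2>/dev/null | md5sum | cut -d' ' -f1 > sent.md5
# Hash the landed file (what the listener computes afterward):
md5sum scratch.img | cut -d' ' -f1 > received.md5
cmp -s sent.md5 received.md5 && echo MATCH
```

On the real transfer you'd run the md5sum on /dev/hda5 itself and on
hda5.img; mismatched sums flag a short or corrupted image right away.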
The size of the hda5 partition is 35,486,608 1k blocks, or exactly
36,338,286,592 bytes.
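For the record, that arithmetic checks out (one 1k block = 1024 bytes):

```shell
# 35,486,608 1K blocks converted to bytes:
echo $((35486608 * 1024))    # 36338286592
```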
The netcat transfer of the hda5 image consistently aborts when the
destination file reaches 9,883,033,600 bytes. This is smaller than the
hda2 image file, for which this process works perfectly (see directory
entries below).
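Some exploratory arithmetic on the cutoff (no conclusions drawn, just
checking whether it lands on a block boundary):

```shell
echo $((36338286592 - 9883033600))   # 26455252992 bytes never arrive
echo $((9883033600 % 2048))          # 0: cutoff is an exact multiple
echo $((9883033600 / 2048))          # 4825700 full 2048-byte dd blocks
```

So the file stops on a clean dd block boundary, which is consistent
with the pipe being cut between blocks rather than mid-block.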
The path is composed of the following sequence of links:
1. Helix 1.5 boot CD running Knoppix 2.6.7 kernel, _OR_
FC1 2.4.22-1.2199.nptl kernel (same results for both)
2. Intel Pro/100 (e100.o) driver v2.3.18 dated 8/25/03
configured as 192.168.1.4, 100Mbps, and full duplex
3. Cat5e cable
4. Linksys WRT54G router (and default gateway 192.168.1.1)
5. Cat5e cable
6. 3Com 3c920/980 (3c59x.o) driver v1.1.18ac dated 7/2/01
configured as 192.168.1.2, 100Mbps, and full duplex
7. RHEL3 Update3 2.4.21-10.ELsmp kernel
After two tries and two identically truncated files, df shows the large
array's status as:
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdb1           1056888680 210811160 792390704  22% /pub
# ls -gG hda2* hda5*
-rw-r--r-- 1 10489651200 Nov 10 17:35 hda2.img
-rw-r--r-- 1 9883033600 Nov 10 21:17 hda5.img
-rw-r--r-- 1 9883033600 Nov 11 03:02 hda5a.img
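Note that disk space isn't the culprit: df still shows 792,390,704
1K-blocks free on /pub, far more than the full 36 GB image needs:

```shell
echo $((792390704 * 1024))                 # 811408080896 bytes free
echo $((792390704 * 1024 > 36338286592))   # 1 -> not a disk-space problem
```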
While netcat is copying the hda5 image to the large array, I see
multiple errors like this in the listener's /var/log/messages:
Nov 11 02:58:23 lion kernel: eth0: memory shortage
Nov 11 03:00:03 lion last message repeated 3 times
Before anyone dashes off to Google searching for this error message,
I've already done that. It's generated by the 3c59x.o driver when its
receive buffer is depleted. I've Bugzilla'd it (#137270), and the guys
at Red Hat have confirmed that this is an obscure bug that's been around
for quite some time.
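If the driver really is starving its receive ring, one thing that might
be worth trying (purely speculative -- a 3c59x of this vintage may not
implement these ioctls at all) is inspecting and enlarging the RX ring
with ethtool:

```shell
ethtool -g eth0          # show current and maximum ring sizes
ethtool -G eth0 rx 256   # ask the driver for a larger receive ring
```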
OK...here's the mystery. Given all these facts, can anyone explain why
multiple transfer attempts all fail after exactly 9,883,033,600 bytes?
What's special about that number??
--Doc
P.S. About the only thing I haven't tried is slowing the Ethernet port
speeds at each end to 10Mbps, but I've forgotten how to do that. Who
remembers?
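Re the P.S.: the usual tools for forcing link speed are mii-tool (common
on 2.4-era systems) and ethtool; the interface name eth0 here is an
assumption, and both commands need root:

```shell
# Force 10 Mbps full duplex -- pick whichever tool your system has:
mii-tool -F 10baseT-FD eth0
# or:
ethtool -s eth0 speed 10 duplex full autoneg off
```

Remember to force the same speed/duplex on both ends (or on the switch
port), since a forced port won't autonegotiate with its peer.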
-
To unsubscribe, send email to majordomo@silug.org with
"unsubscribe silug-discuss" in the body.