SUMMARY: Wiping disks (update)....
Rick.Brashear at ercgroup.com
Thu Nov 13 12:37:54 EST 2003
Some additional responses came in after my summary that I felt would be
beneficial to the archives:
Niall O Broin [niall at makalumedia.com] wrote:
>If you want to be sure that the disks are wiped beyond retrieval, you want
>to do this a number of times. There is a Mil. standard for this. AFAIK it
>specifies writing X number of times with value Y, followed by A number of
>times with value B etc.
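The multi-pass approach Niall describes can be sketched in shell. This is a
hypothetical illustration run against a small scratch file, not a vetted
implementation of any mil-spec; on a real wipe the target would be the raw
disk device, and the pass counts and patterns would come from the standard:

```shell
# Alternating fixed-pattern passes followed by a random pass, demonstrated
# on a 1 KB scratch file (a real wipe would target the raw device).
TARGET=/tmp/scratch.img
dd if=/dev/zero of="$TARGET" bs=1024 count=1 2>/dev/null   # create scratch target

for pass in 1 2 3; do
    # pattern pass: all ones (0xFF bytes, built by translating zeros)
    tr '\0' '\377' < /dev/zero | dd of="$TARGET" bs=1024 count=1 conv=notrunc 2>/dev/null
    # pattern pass: all zeros
    dd if=/dev/zero of="$TARGET" bs=1024 count=1 conv=notrunc 2>/dev/null
done
# final pass with pseudorandom data
dd if=/dev/urandom of="$TARGET" bs=1024 count=1 conv=notrunc 2>/dev/null
```

Note conv=notrunc, which keeps dd overwriting in place instead of truncating
the target on each pass.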
Hearn, Stan (CEI-Atlanta) [Stan.Hearn at cox.com] wrote:
Shred of Gnu File Utils
From the shred documentation is the following enlightening information:
shred: Remove files more securely
shred overwrites devices or files, to help prevent even very expensive
hardware from recovering the data.
Ordinarily when you remove a file (see rm invocation), the data is not
actually destroyed. Only the index listing where the file is stored is
destroyed, and the storage is made available for reuse. There are undelete
utilities that will attempt to reconstruct the index and can bring the file
back if the parts were not reused.
On a busy system with a nearly-full drive, space can get reused in a few
seconds. But there is no way to know for sure. If you have sensitive data,
you may want to be sure that recovery is not possible by actually
overwriting the file with non-sensitive data.
However, even after doing that, it is possible to take the disk back to a
laboratory and use a lot of sensitive (and expensive) equipment to look for
the faint "echoes" of the original data underneath the overwritten data. If
the data has only been overwritten once, it's not even that hard.
The best way to remove something irretrievably is to destroy the media it's
on with acid, melt it down, or the like. For cheap removable media like
floppy disks, this is the preferred method. However, hard drives are
expensive and hard to melt, so the shred utility tries to achieve a similar
effect non-destructively.
This uses many overwrite passes, with the data patterns chosen to maximize
the damage they do to the old data. While this will work on floppies, the
patterns are designed for best effect on hard drives. For more details, see
the source code and Peter Gutmann's paper Secure Deletion of Data from
Magnetic and Solid-State Memory, from the proceedings of the Sixth USENIX
Security Symposium (San Jose, California, 22-25 July, 1996). The paper is
also available online.
Please note that shred relies on a very important assumption: that the
filesystem overwrites data in place. This is the traditional way to do
things, but many modern filesystem designs do not satisfy this assumption.
* Log-structured or journaled filesystems, such as those supplied with
AIX and Solaris.
* Filesystems that write redundant data and carry on even if some
writes fail, such as RAID-based filesystems.
* Filesystems that make snapshots, such as Network Appliance's NFS server.
* Filesystems that cache in temporary locations, such as NFS version 3
clients.
* Compressed filesystems.
If you are not sure how your filesystem operates, then you should assume
that it does not overwrite data in place, which means that shred cannot
reliably operate on regular files in your filesystem.
Generally speaking, it is more reliable to shred a device than a file, since
this bypasses the problem of filesystem design mentioned above. However,
even shredding devices is not always completely reliable. For example, most
disks map out bad sectors invisibly to the application; if the bad sectors
contain sensitive data, shred won't be able to destroy it.
shred makes no attempt to detect or report these problems, just as it makes
no attempt to do anything about backups. However, since it is more reliable
to shred devices than files, shred by default does not truncate or remove
the output file. This default is more suitable for devices, which typically
cannot be truncated and should not be removed.
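For reference, a minimal shred invocation along the lines the documentation
describes, shown here against a throwaway file. The flags are standard GNU
shred options; the Solaris device path in the comment is illustrative only:

```shell
# Shred a file: 3 overwrite passes, a final zeroing pass (-z) to hide the
# shredding, then unlink it (-u).
echo "sensitive" > /tmp/secret.txt
shred -n 3 -z -u /tmp/secret.txt

# Shredding a whole device (the more reliable case, per the docs above)
# would look like, e.g.:
#   shred -v -n 3 /dev/rdsk/c0t0d0s2
```

The -u flag only applies to files; as noted above, devices should not be
removed, which is why shred does not truncate or unlink by default.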
Mike Demarco [mdemarco at tritonpcs.com] wrote:
>This really depends on how secure you need this to be. The problem with
>using either of the mentioned methods is that data can still be retrieved
>by changing thermal properties of the disk. If data was written to a track
>at, let's say, 110 degrees, the head position over the track is at a given
>location. If you cool the disk down to 60 degrees it will move the head
>ever so slightly off track and you will see old information ghosting. One
>of the problems with doing a format-analyze is that it lays down a pattern
>on the disk at the current temperature, and when you have a given pattern
>it is much easier to read the ghost. The only way to guarantee the data
>can not be read is to destroy the disk.
Jason.Santos at aps.com wrote:
>Just some corrections/additions --
>/dev/random only exists on Solaris 9, or on Solaris 8 with patch 112438.
>It will be much faster to dd from /dev/urandom if you wish to overwrite
>the disk with pseudorandom data, because /dev/random is a source of
>"higher quality" random data, which means that it takes longer to
>produce, thus slowing down your dd.
>Also, you cannot use dd if=/dev/null, because you cannot read anything
>from /dev/null. You can use /dev/zero instead to get a stream of zero
>bytes. This will be much faster than using /dev/urandom.
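Jason's points can be demonstrated with a small sketch, run against a
scratch file rather than a real device (the device paths and sizes here are
illustrative only):

```shell
TARGET=/tmp/wipe-demo.img

# Fast: stream of zero bytes.
dd if=/dev/zero of="$TARGET" bs=512 count=4 2>/dev/null

# Pseudorandom data: slower than /dev/zero, much faster than /dev/random,
# which blocks waiting for "higher quality" entropy.
dd if=/dev/urandom of="$TARGET" bs=512 count=4 conv=notrunc 2>/dev/null

# /dev/null does NOT work as an input: it yields EOF immediately,
# so dd writes nothing.
dd if=/dev/null of=/tmp/null-demo.img bs=512 count=4 2>/dev/null
wc -c < /tmp/null-demo.img    # 0 bytes written
```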
I am tasked with ensuring no data is left on a large number of disks on
servers we are returning on lease expiration. I have done some preliminary
searching for techniques or tools for this task without success.
What say my brothers/sisters at arms on this subject?
dd if=/dev/random of=/dev/rdsk/<spanning partition of whatever disk>
format - analyze - write/compare/purge/verify
Thanks again to one and all!
Some suggested newfs but, as documented in this sunmanagers archive article,
newfs does very little to remove data (should have checked here first - Tim).
Thanks to these respondents:
ippy at optonline.net
Bruntel, Mitchell L, ALABS [mbruntel at att.com]
Steven Hill [sjh at waroffice.net]
Stephen Moccio [svm at lucent.com]
neil quiogue [neil at quiogue.com]
Eric Paul [epaul at profitlogic.com]
Gwyn Price [gwyn at glyndwr.com]
Steve Elliott [se at comp.lancs.ac.uk]
Pablo Jejcic [pablo.jejcic at smartweb.rgu.ac.uk]
Tim Evans [tkevans at tkevans.com]
Ungaro, Matt [mjungaro at capitolindemnity.com]
joe.fletcher at btconnect.com
Dave Mitchell [davem at fdgroup.com]
Smith, Kevin [Kevin.Smith at sbs.siemens.co.uk]
Information Technology Department
Employers Reinsurance Corporation
Overland Park, Kansas 66201
* 913 676-6418
* rick.brashear at ercgroup.com
sunmanagers mailing list
sunmanagers at sunmanagers.org