From: Nicholas Clark Date: 21:32 on 21 Nov 2004 Subject: HFS+ CALL THIS A FUCKING FILESYSTEM? $ ls -l ../parrot-clean/ops/core.ops -rw-r--r-- 8 nick nick 25106 19 Nov 15:35 ../parrot-clean/ops/core.ops $ ln ../parrot-clean/ops/core.ops ops/core.ops $ ls -l ops/core.ops ---------- 1 root wheel 0 6 Oct 2003 ops/core.ops Yes, that's right. If I hardlink a file owned by *me*, then it's quite correct for it to be owned by *root* now. And I can repeat this little loop ad nauseam. Nothing short of rebooting will cure it. I KNOW, BECAUSE I'VE BEEN HERE FUCKING SO MANY TIMES BEFORE. You're corrupting your data structures in RAM, FUCKTARD. I'm praying you don't manage to SHIT the corruption to the disk, given how awe-inspiring the OS X disk recovery tools are. People pay money for this CRAP? I can get disk corruption on Linux for free, if that's what I want. Nicholas Clark
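(For contrast, this is roughly what a hard link is supposed to look like on a filesystem that isn't eating its own data structures: both names point at the same inode, so the owner, mode, size and timestamps are identical by definition and only the link count changes. The inode number and paths below are invented for illustration.)

    $ ls -li core.ops
    1234 -rw-r--r--  1 nick  nick  25106 19 Nov 15:35 core.ops
    $ ln core.ops ops/core.ops
    $ ls -li core.ops ops/core.ops
    1234 -rw-r--r--  2 nick  nick  25106 19 Nov 15:35 core.ops
    1234 -rw-r--r--  2 nick  nick  25106 19 Nov 15:35 ops/core.ops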
From: peter (Peter da Silva) Date: 22:41 on 21 Nov 2004 Subject: Re: HFS+ > CALL THIS A FUCKING FILESYSTEM? You're too polite. I really wish they'd move the API from HFS+ to a higher layer so you could transparently fiddle with the magic HFS+ metadata in dotfiles or something without having to have HFS+ under the hood. I mean, they ported the new UFS from FreeBSD for Panther, BUT YOU CAN'T BLOODY USE IT because Carbon apps freak out if they can't see any damn resource forks and finder info! Carbon apps are freaky. They'll even cough up blood if the application is on a different volume than the data, sometimes. If I wanted to have weird interactions between apps and system internals I'd just run Windows.
From: Nicholas Clark Date: 15:16 on 11 May 2005 Subject: HFS+ Hateful Fucking Shit (I've just unpacked all of CPAN onto this "gcc 2.96" of "filesystems" and it's already decided that some files are unreadable. People pay money for this?) Nicholas Clark
From: Arthur Bergman Date: 15:18 on 11 May 2005 Subject: Re: HFS+ On 11 May 2005, at 15:16, Nicholas Clark wrote: > Hateful > Fucking > Shit You forgot the plus :-) Hateful Fucking Shit Plus! > > > (I've just unpacked all of CPAN onto this "gcc 2.96" of "filesystems" > and it's > already decided that some files are unreadable. People pay money for > this?) > Not really ;) We pay for UFS and then realise how painful that is! ----- CTO @ Fotango Ltd +447834716919 http://www.fotango.com/
From: peter (Peter da Silva) Date: 17:50 on 11 May 2005 Subject: Re: HFS+ > (I've just unpacked all of CPAN onto this "gcc 2.96" of "filesystems" and it's > already decided that some files are unreadable. People pay money for this?) Case insensitivity leading to missing files, or actual OS errors?
From: Chris Nandor Date: 18:09 on 11 May 2005 Subject: Re: HFS+ At 11:50 -0500 2005.05.11, Peter da Silva wrote: >> (I've just unpacked all of CPAN onto this "gcc 2.96" of "filesystems" >>and it's >> already decided that some files are unreadable. People pay money for this?) > >Case insensitivity leading to missing files, or actual OS errors? I've never had HFS+ render a file "unreadable," that I can recall. Hardware disk errors, yes, HFS+, no. Case-insensitivity can be a problem of course, but 10.3 introduced case-sensitivity as an option (for a price ... some disk utils would not work with such volumes, like DiskWarrior). I've had other HFS+ problems, specifically directory corruption and other such things that don't render a file unreadable so much as lost.
From: peter (Peter da Silva) Date: 18:46 on 11 May 2005 Subject: Re: HFS+ > I've never had HFS+ render a file "unreadable," that I can recall. I wouldn't put it past it. I've had file system corruption on HFS+ render the file system mountable but unrepairable, though. Which is something completely alien to me on any UNIX system new enough to have fsck... if your file system is so broken that fsck can't fix it then it's broken indeed, and you have to have done something really appalling to get it that messed up... we're talking about things like trashing the partition table and a significant chunk of the beginning of the file system in a horrible format accident. I've had that happen to me a few times, and I always knew why. I've had HFS+ eat its brain badly enough that I had to copy everything to a new filesystem and format it at least as often as that... over a tenth as many years and maybe a hundredth as many systems.
From: Chris Nandor Date: 19:03 on 11 May 2005 Subject: Re: HFS+ At 12:46 -0500 2005.05.11, Peter da Silva wrote: >I've had file system corruption on HFS+ render the file system >mountable but unrepairable, though. I have, but not in a long time, not since I started using DiskWarrior. The only time I've not been able to repair with DiskWarrior -- apart from when I had case-sensitive HFS+, which it didn't recognize -- is when I've had hardware problems.
From: peter (Peter da Silva) Date: 23:50 on 11 May 2005 Subject: Re: HFS+ > I have, but not in a long time, not since I started using DiskWarrior. The > only time I've not been able to repair with DiskWarrior -- apart from when > I had case-sensitive HFS+, which it didn't recognize -- is when I've had > hardware problems. I have really resisted buying DiskWarrior. The idea that Apple can't provide an adequate repair tool of their own just disgusts me, I have to think happy thoughts until the nausea goes away.
From: Chris Nandor Date: 02:04 on 12 May 2005 Subject: Re: HFS+ At 17:50 -0500 2005.05.11, Peter da Silva wrote: >I have really resisted buying DiskWarrior. The idea that Apple can't >provide an adequate repair tool of their own just disgusts me, I have >to think happy thoughts until the nausea goes away. I agree entirely. However, there are worse feelings, such as those of not being able to recover your data ...
From: peter (Peter da Silva) Date: 12:26 on 12 May 2005 Subject: Re: HFS+ > I agree entirely. However, there are worse feelings, such as those of not > being able to recover your data ... If I've got data that is only on one disk, that has no backups anywhere, that data doesn't really exist yet anyway.
From: Chris Nandor Date: 16:09 on 12 May 2005 Subject: Re: HFS+ At 6:26 -0500 2005.05.12, Peter da Silva wrote: >> I agree entirely. However, there are worse feelings, such as those of not >> being able to recover your data ... > >If I've got data that is only on one disk, that has no backups anywhere, >that data doesn't really exist yet anyway. And speaking of hating software, let me count the ways I hate all backup software ... (that said, I have at least one, usually two, and in critical cases 3-4 backups of all my data these days :-).
From: David King Date: 05:56 on 13 May 2005 Subject: Re: HFS+ > And speaking of hating software, let me count the ways I hate all backup > software ... (that said, I have at least one, usually two, and in critical Oh, goodness. At work we use VERITAS BackupExec for backing up. I don't like it from a preference standpoint, but not until recently have I had occasion to hate it with the bile of a thousand, uh, things with lots of bile. It is attached to an Exabyte VXA-2 1U tape loader. Great device: instead of swapping tapes every night, it keeps 10 tapes in it and swaps them itself. Every now and then the drive needs to be cleaned. The drive reports, "Hey, I'm dirty! I need to be cleaned." One of its 10 slots is set aside to automagically clean itself, so it cleans itself. But now BackupExec has it in its head that it needs to clean the drive, which it does. So now the drive has been cleaned twice, using the cleaning tapes twice as fast. That one's my fault: misconfigured. Have auto-clean on for one or the other, but not both. But after doing this, the software decides, out of nowhere, "Hey, tape #1 is a cleaning tape!" The library doesn't report this (I've checked). The software hasn't seen that the cleaning tape has moved. It decides that one of the COMPLETELY UNRELATED bar codes identifies a cleaning tape. When I notice this, and tell BE, "Hey, look again," it recognises the tape as a brand new tape, and calls it blank. So the data on there is gone. Not because the tape has melted, or is too old, has been stepped on, or eaten by one of the techs. Because the software deletes the catalogue for it when that bar code is re-identified. Worse yet, it may take days to notice this (since it doesn't notify anybody, return an error, etc), so eventually all but two or three tapes are identified as cleaning media, and they are quickly filled. So instead of having five working days of backups on this library, I have one. > cases 3-4 backups of all my data these days :-). Yes, fortunately this is not our only backup system. But that makes it no more enjoyable.
From: Aaron J. Grier Date: 07:42 on 13 May 2005 Subject: Re: HFS+ On Thu, May 12, 2005 at 08:09:53AM -0700, Chris Nandor wrote: > And speaking of hating software, let me count the ways I hate all > backup software ... (that said, I have at least one, usually two, and > in critical cases 3-4 backups of all my data these days :-). I've tried unsuccessfully now TWICE to do a backup to a remote (NFS mounted) filesystem and have had disk utility completely lock up on me. of course I can't use dump(8) since the previous owner of this laptop formatted with HFS. (does dump even work under OSX with UFS?) I could use the oft-suggested carbon copy cloner, but it doesn't want to write to a network path, and I really don't want to buy a USB or firewire device enclosure. (and the ibook doesn't have SCSI on it.) that's what I have a network for, dammit. so I plod back to the command line to see what I can find that I can run in verbose mode and either figure out why the thing is locking up or maybe it'll finally work. asr? it can restore from images, but it doesn't appear to be able to generate them. so I hit http://www.macos.utah.edu/Documentation/ASROSX/commandline.html#restore which gives me a bunch of steps for creating a blank image, mounting it, and using ditto to copy files around. remember, I just want to make a block backup of the drive to a file here, in some format that can be restored without much trouble to a different device later (i.e. _not_ dd). something like dump. so I poke through the steps and I'm scratching my head wondering what the fuck is going on until I discover that hdiutil claims to do what I want: generate an image file from a source device. it's written roughly 6GB now and seems to be plodding along steadily at about 4MB/s. I'm keeping my fingers crossed. if it works I'll have a little less reason to hate apple. but just a little. there's not even a goddamn pager (more or less) on the tiger install DVD. but there's a perl interpreter. apple can't give me a fucking pager in /usr/bin, but they can give me /usr/bin/perl? (and finger? and openssl?) still hateful.
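(A minimal sketch of the hdiutil route Aaron is describing, for anyone else digging out of the same hole. The device node, NFS mount point and image name are invented for the example; hdiutil create -srcdevice images a whole device into a .dmg, and asr can later lay that image back onto a target volume, though the exact asr syntax has shifted between OS X releases.)

    $ # image the laptop's data partition to the NFS-mounted backup volume
    $ hdiutil create -srcdevice /dev/disk0s3 -format UDZO /Volumes/netbackup/ibook.dmg
    $ # much later, restore it onto some other volume, erasing that volume first
    $ sudo asr -source /Volumes/netbackup/ibook.dmg -target /Volumes/NewDisk -erase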
From: Matt McLeod Date: 08:33 on 13 May 2005 Subject: Re: HFS+ Aaron J. Grier wrote: > I'm keeping my fingers crossed. if it works I'll have a little less > reason to hate apple. but just a little. Dunno about using hdiutil to do this to a network volume, but doing it to HFS+ at least barfs at ~2GB. Could've sworn they had largefile support on HFS+ now as I've got lots of bloody huge video files, but apparently hdiutil hasn't caught up. I wound up giving in and using tar. Fortunately only a few things had resource forks so it worked OK. Matt
From: Michael G Schwern Date: 08:50 on 13 May 2005 Subject: Re: HFS+ On Fri, May 13, 2005 at 05:33:59PM +1000, Matt McLeod wrote: > I wound up giving in and using tar. Fortunately only a few things > had resource forks so it worked OK. I backup using rsyncX which is rsync + resource fork awareness + a little GUI. http://archive.macosxlabs.org/rsyncx/rsyncx.html
From: peter (Peter da Silva) Date: 12:17 on 13 May 2005 Subject: Re: HFS+ > I backup using rsyncX which is rsync + resource fork awareness + a little > GUI. Noooo... the tentacles...
From: Chris Nandor Date: 15:19 on 13 May 2005 Subject: Re: HFS+ At 0:50 -0700 2005.05.13, Michael G Schwern wrote: >On Fri, May 13, 2005 at 05:33:59PM +1000, Matt McLeod wrote: >> I wound up giving in and using tar. Fortunately only a few things >> had resource forks so it worked OK. > >I backup using rsyncX which is rsync + resource fork awareness + a little >GUI. >http://archive.macosxlabs.org/rsyncx/rsyncx.html I use psync, which is what Carbon Copy Cloner uses under the hood, and is included with MacOSX::File.
From: David Champion Date: 17:23 on 13 May 2005 Subject: Re: HFS+ * On 2005.05.13, in <20050513075031.GA10214@xxxxxxxx.xxxxxxx.xxx>, * "Michael G Schwern" <schwern@xxxxx.xxx> wrote: > On Fri, May 13, 2005 at 05:33:59PM +1000, Matt McLeod wrote: > > I wound up giving in and using tar. Fortunately only a few things > > had resource forks so it worked OK. > > I backup using rsyncX which is rsync + resource fork awareness + a little > GUI. > http://archive.macosxlabs.org/rsyncx/rsyncx.html I've had many problems with rsyncx, most of which I haven't cared enough to learn more about. I now use ditto(1) to copy my MacOS files if I suspect some bastard stuck a resource fork in there anywhere. Ditto has its own body of hate, but at least there's no intersection with this thread. Copying through a pipe, you can force ditto to read or write from a cpio-formatted stream, so you can pipe ditto into ssh ditto, or whatever else you need.
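(A sketch of the pipeline David is describing, with invented paths and host name. It assumes the Tiger-era ditto(1), where -c and -x create and extract CPIO archives, --rsrc carries resource forks and Finder info along, and a lone '-' means stdin or stdout; worth checking the man page on your own release before trusting a backup to it.)

    $ ditto -c --rsrc /Users/me/Projects - | ssh backuphost 'ditto -x --rsrc - /backups/Projects'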
From: peter (Peter da Silva) Date: 12:16 on 13 May 2005 Subject: Re: HFS+ > I wound up giving in and using tar. Fortunately only a few things > had resource forks so it worked OK. Hie thee to http://www.metaobject.com/Products.html then scroll to the bottom or jump to http://www.metaobject.com/downloads/macos-x/ and grab hfstar. There's also an hfspax. Just stay clear of rsyncx. You see, when one makes incompatible changes in the rsync protocol, one is normally expected to change the version number so one doesn't use it with normal rsync and then discover that one's backup consists almost entirely of empty files. Because how rsyncx "handles" resource forks and finder info is to send each file three times with the same name and inode number, and the receiving rsyncx goes "oh, I already have this, this must be the resource fork". It turns out that normal rsync is entirely happy to accept the same file multiple times and write over the already synced file with its finder info and then its usually empty resource fork. I have a patched version of rsync somewhere that sends the resource fork with a synthetic inode number and a new name, so it works with rsync. It doesn't send the finder info, though, because you can't just open the file with a different name and read that like you can with the resource fork, so I hadn't got that far before I found hfspax and hfstar and switched to Amanda for my Mac backups.
From: Matt McLeod Date: 12:43 on 13 May 2005 Subject: Re: HFS+ Peter da Silva wrote: > > I wound up giving in and using tar. Fortunately only a few things > > had resource forks so it worked OK. > > Hie thee to http://www.metaobject.com/Products.html then then scroll to > the bottom or jump to http://www.metaobject.com/downloads/macos-x/ and > grab hfstar. For work I'm probably just going to install the TSM client kit and configure it to back up a subset of my home directory. Fortunately we don't keep much on desktops anyway: my mail is all on the IMAP server, and anything else I'm working on will be on the machine behind the curtain (a Solaris box that's properly backed-up). For home I'm sorry to say that while I have quite a lot of data most of it isn't backed up for the simple reason that solutions that handle half a terabyte are either expensive or inconvenient. But most of it isn't important -- again, the stuff I really care about is on remote hosts that are properly backed-up. The using-tar thing was for the Tiger upgrade. The machine had two and a half years of cruft on it and I felt it was time for a clean install, but there are always little random files with useful notes in them that would be inconvenient to lose. That's where TSM will come in in the longer run. Matt
From: Chris Nandor Date: 15:19 on 13 May 2005 Subject: Re: HFS+ At 6:16 -0500 2005.05.13, Peter da Silva wrote: >I have a patched version of rsync somewhere that sends the resource >fork Tiger's rsync has an option to handle resource forks (well, metadata in general) now. Same with cp etc. Unfortunately, tar does too, but it picks up metadata without an option for you to tell it not to ... Casey West can tell you all about his hate over this.
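(For the record, the switch Chris seems to be referring to is -E, spelled --extended-attributes, in the rsync build Apple ships with Tiger; the paths below are hypothetical. How gracefully that metadata survives a trip to a stock rsync on the far end is exactly the sort of thing worth testing before believing in the backup.)

    $ rsync -aE /Users/me/Documents/ backuphost:/backups/Documents/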
From: peter (Peter da Silva) Date: 16:42 on 13 May 2005 Subject: Re: HFS+ > Tiger's rsync has an option to handle resource forks (well, metadata in > general) now. Same with cp etc. I hope they didn't just pick up rsyncx. That would be bad and evil. What's the wire protocol look like?
From: Michael G Schwern Date: 21:20 on 13 May 2005 Subject: Re: HFS+ On Fri, May 13, 2005 at 06:16:15AM -0500, Peter da Silva wrote: > You see, when one makes incompatible changes in the rsync protocol, > one is normally expected to change the version number so one doesn't > use it with normal rsync and then discover that ones backup consists > almost entirely of empty files. > > Because how rsyncx "handles" resource forks and finder info is to > send each file three times with the same name and inode number, > and the receiving rsyncx goes "oh, I already have this, this must > be the resource fork". It turns out that normal rsync is entirely > happy to accept the same file multiple times and write over the > already synced file with its finder info and then its usually empty > resource fork. I just uploaded a text file with a resource fork from my Mac using rsyncx to my Debian machine. It went across fine with no resource fork. rsyncx is 2.1 (2.6.0 protocol 27) and on the Debian side its 2.6.4 protocol 29. Maybe you had an old version of rsync on one side or the other.
From: peter (Peter da Silva) Date: 21:29 on 13 May 2005 Subject: Re: HFS+ > I just uploaded a text file with a resource fork from my Mac using rsyncx > to my Debian machine. It went across fine with no resource fork. rsyncx > is 2.1 (2.6.0 protocol 27) and on the Debian side its 2.6.4 protocol 29. You apparently didn't tell rsyncx to "handle resource forks". If you don't ask it to do its "x" stuff, it acts just like standard rsync. I don't remember the option now, but it's not on by default.
From: Michael G Schwern Date: 23:31 on 13 May 2005 Subject: Re: HFS+ On Fri, May 13, 2005 at 03:29:55PM -0500, Peter da Silva wrote: > > I just uploaded a text file with a resource fork from my Mac using rsyncx > > to my Debian machine. It went across fine with no resource fork. rsyncx > > is 2.1 (2.6.0 protocol 27) and on the Debian side its 2.6.4 protocol 29. > > You apparently didn't tell rsyncx to "handle resource forks". If you don't > ask it to do its "x" stuff, it acts just like standard rsync. I don't remember > the option now, but it's not on by default. I don't see anything in the man page or --help listing about "fork" or "resource" but rsyncx seems to have the habit of not documenting its new, magic switches. Hate. When I do an rsync to a local filesystem it does copy the resource fork... but then again so does "cp". I don't have a remote Mac to try it out to see if this also happens in a remote rsync but I suspect you're right. However, using the Quick RsyncX Script Generator the script it generates has a mysterious --eahfs option. If I use that it just doesn't work because the remote rsync doesn't have that switch. It also uses some --showtogo option which the Debian rsync also doesn't know, despite being a newer rsync. That's hateful. But if it doesn't do it by default isn't that the right thing to do? If you asked it to send resource forks to a filesystem which can't handle them... well, it's your gun and you pointed it at your foot.... but how did you manage that? Speaking of hate, the RsyncX Script Generator has the interesting option to "scp rsync to Destination First" which would seem to nicely handle the problem of incompatible versions of rsync or the remote not having rsync at all. I expected it to go into /tmp/username or something. It tries to put it into /usr/local/bin!!! GAH! Only my lack of permissions to do so saved me from that fate. There's a "Test Mode - Dry Run Only" option. If I choose that it STILL tries to copy rsync into dest:/usr/local/bin! Finally, this Quick RsyncX Script Generator doesn't actually generate a script. I expected some way to save the resulting script so I could run it later rather than having to run it immediately, kill it and copy it from the terminal window.
From: peter (Peter da Silva) Date: 00:35 on 14 May 2005 Subject: Re: HFS+ > If you asked it to send resource forks to a filesystem which can't handle > them... well, its your gun and you pointed it at your foot.... but how did > you manage that? I used the "--hfs" option, with an rsync server. I expected that it would do the same thing that it does when you copy files to a LOCAL disk that doesn't handle resource forks and put the resource fork and the finder info in an appledouble file with a "._" prefix. What it SHOULD do is use its own protocol version, then if that's rejected have a way to fall back to sending appledouble files using the standard protocol.
From: Aaron J. Grier Date: 22:02 on 18 May 2005 Subject: Re: HFS+ On Fri, May 13, 2005 at 05:33:59PM +1000, Matt McLeod wrote: > Aaron J. Grier wrote: > > I'm keeping my fingers crossed. if it works I'll have a little less > > reason to hate apple. but just a little. > > Dunno about using hdiutil to do this to a network volume, but doing it > to HFS+ at least barfs at ~2GB. Could've sworn they had largefile > support on HFS+ now as I've got lots of bloody huge video files, but > apparently hdiutil hasn't caught up. it worked. I gave it another half a day and apple disk util was happy with it too. so... less hateful. but I just installed 10.4 (or whatever the fuck the new one is) and it's already playing music and flashing animations at me. I just had two molars extracted under general anaesthesia last friday and am still in an incredible amount of pain even with the hydrocodone, and am not in the mood for fast-moving socket-fucking spinny animations right now.
From: Robert G. Werner Date: 03:47 on 14 May 2005 Subject: Re: HFS+ Aaron J. Grier wrote: [snip] > there's not even a goddamn pager (more or less) on the tiger install > DVD. but there's a perl interpreter. apple can't give me a fucking > pager in /usr/bin, but they can give me /usr/bin/perl? (and finger? > and openssl?) > > still hateful. > Maybe it's a test. Maybe they want you to write your own version of more or less in perl.
From: David King Date: 06:46 on 14 May 2005 Subject: Re: HFS+ On Fri, 13 May 2005, Robert G. Werner wrote: >> there's not even a goddamn pager (more or less) on the tiger install >> DVD. but there's a perl interpreter. apple can't give me a fucking >> pager in /usr/bin, but they can give me /usr/bin/perl? (and finger? >> and openssl?) >> >> still hateful. >> > Maybe it's a test. Maybe they want you to write your own version of more or > less in perl. I can't very well imagine tech support calls to Apple that go like, "Okay, now I want you to type "find space forward slash pipe pee ee are ell dash ee single-quote while diamond..."
From: Robert G. Werner Date: 21:14 on 14 May 2005 Subject: Re: HFS+ David King wrote: > On Fri, 13 May 2005, Robert G. Werner wrote: > >>> there's not even a goddamn pager (more or less) on the tiger install >>> DVD. but there's a perl interpreter. apple can't give me a fucking >>> pager in /usr/bin, but they can give me /usr/bin/perl? (and finger? >>> and openssl?) >>> >>> still hateful. >>> >> Maybe it's a test. Maybe they want you to write your own version of >> more or less in perl. > > > I can't very well imagine tech support calls to Apple that go like, > "Okay, now I want you to type "find space forward slash pipe pee ee are > ell dash ee single-quote while diamond..." > > ROTFLOL!!!!!
From: David Champion Date: 03:13 on 15 May 2005 Subject: Re: HFS+ * On 2005.05.13, in <428566DE.8000006@xxxxxxxxxxx.xxx>, * "Robert G. Werner" <robert@xxxxxxxxxxx.xxx> wrote: > >there's not even a goddamn pager (more or less) on the tiger install > >DVD. but there's a perl interpreter. apple can't give me a fucking > >pager in /usr/bin, but they can give me /usr/bin/perl? (and finger? > > Maybe it's a test. Maybe they want you to write your own version of > more or less in perl. I did IRIX support in a shop with 150 or so SGI boxes, around the time it became impossible to delay any longer in upgrading our various hardware-specific releases of 5.3, 6.2, 6.3, and 6.4 to 6.5, the Grand Unified IRIX. SGI's manual installation procedure is a superlative pain in all exposed body parts simultaneously; to upgrade all the systems manually would have easily cost us between 600 and 1000 man-hours, depending on individual machine complexity and on how parallelized the procedure could be made for any given roomful of machines. Our network was spread over several square miles of labs, so a quick jaunt between CD 2 and CD 3 wasn't really likely. So I developed a network auto-installer, similar to Jumpstart, or Kickboot or Tirekick or whatever those Red Hut people call theirs. SGI had recently developed one called RoboInst, but it was really quite shoddy, as though it were one front-line support staffer's weekend project, yet it cost $10,000 even under contract, and while we had $10,000, it sadly wasn't on the ledger line for gutter sludge. The problem was that my installer, not being really a full-scale product and certainly not something I wished to maintain into the future, still depended upon the genuine SGI CD miniroot that it TFTPed upon netboot. I'd discovered a procedure for building one's own miniroot from dd(1)ed efs images and some yarn and beeswax, but I wasn't really comfortable deploying it since I was sole possessor of the arcana and was planning to jump ship in a month or two. And SGI's miniroot, being intended just to bootstrap their manual installation process, had no pager. So I wrote one. In Bourne shell, the only interpreter on the miniroot. It had to run with no external dependencies. This is not in itself very much to speak of, since there's basically nothing you can do in such straits but copy lines, while read echo, but then I discovered that expr was on the miniroot, and dd and stty, and soon I was adding features of more, like regular expression searches and "--Much-- (47%)". ("Much is a pager that's even less than more.") It was an interesting experiment, but I could never really understand why they didn't just put a pager on the miniroot. Sometimes I wonder, though, whether in the end there is more hate owed to the chicken, or to the egg that chicken hate inspires.
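(Roughly what a pager built from nothing but Bourne shell plus expr, dd and stty looks like; a sketch under those assumptions, not David's actual miniroot code, and the 23-line screen height is a guess where a real one would ask stty. Feed it from a pipe, e.g. find / | sh much.sh, since the keyboard is read from /dev/tty rather than stdin.)

    #!/bin/sh
    # much.sh -- pause after each screenful; 'q' quits, any other key continues
    rows=23
    count=0
    while read line; do
        echo "$line"
        count=`expr $count + 1`
        if [ $count -ge $rows ]; then
            echo "--Much--" > /dev/tty
            old=`stty -g < /dev/tty`                    # save terminal settings
            stty -icanon -echo min 1 time 0 < /dev/tty  # one raw keypress, no echo
            key=`dd if=/dev/tty bs=1 count=1 2>/dev/null`
            stty "$old" < /dev/tty
            [ "$key" = q ] && exit 0
            count=0
        fi
    done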
From: Robert G. Werner Date: 04:56 on 15 May 2005 Subject: Re: HFS+ David Champion wrote: [snip] > It was an interesting experiment, but I could never really understand > why they didn't just put a pager on the miniroot. Sometimes I wonder, > though, whether in the end there is more hate owed to the chicken, or to > the egg that chicken hate inspires. > Anyone who can write more in Bourne shell gets my hat tipped to them. Wow!!!
Generated at 10:26 on 16 Apr 2008 by mariachi