At 15:56 -0500 13/5/04, Cloyce D. Spradling wrote:
>On Fri, May 14, 2004 at 06:49:20AM +1000, Mark Gibson wrote:
>
>: OK. I'll give rm -f * a go (as I don't want to confirm each deletion).
>
>You won't need -f if you aren't using the 'rm' alias that comes standard.
>Invoke it with its full path (probably /bin/rm or /usr/bin/rm; no OSX system
>on hand to check), and no confirmation will be needed.
>
>You probably _do_ want the -f, though, in order to make it return success
>no matter what happens.
>
>: My concern is that there seems to be an upper limit to the number of
>: files that rm can cope with.
>
>It's actually a shell issue. To get around it, use xargs:
>
>  cd <CUPS dir>; ls | xargs rm -f
>
>Of course, once you've started using the Dreaded Pipe, there's no reason
>to stop there:
>
>  cd <CUPS dir>; ls | grep -v '^tmp$' | xargs rm -rf
>
>and get that directory _REALLY_ clean!
>
>--
>Cloyce

Cloyce,

Thanks. The problem is that I have to clean /tmp more frequently (due to exponential file build-up) than /cups. What I'm looking at for /tmp is:

  /usr/bin/find /private/var/spool/cups/tmp -mmin +5 -delete

which deletes files over 5 minutes old (the cron task runs this every 5 minutes, making it a rolling cleanup). But I'll try:

  cd <CUPS dir>; ls | xargs rm -f

to clean /cups on a daily basis.

Thanks to all.

--
Regards,

Mark (}-:
AIM / iChat: gibsonm1

Guy 1: "Man, you have a horrible virus on your computer!"
Guy 2: "I do?"
Guy 1: "Oh, my bad. It's just Windows©."
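For anyone assembling this into a single cron-driven script, the two cleanups above can be sketched as one function. This is just a sketch, not an official CUPS mechanism: `clean_spool` is a hypothetical name, and the caller supplies the spool directory (on Mark's system, /private/var/spool/cups). Letting `find` run `rm` itself sidesteps the argument-list limit that makes a plain `rm *` fail on huge directories, and unlike `ls | xargs` it copes with filenames containing spaces.

```shell
#!/bin/sh
# clean_spool: sketch of the cleanup discussed above.
# $1 is the CUPS spool directory, e.g. /private/var/spool/cups (assumption:
# it contains a tmp/ subdirectory, as on Mark's OS X box).
clean_spool() {
    spool="$1"
    # Rolling cleanup: delete regular files in tmp/ older than 5 minutes
    # (-mmin and -delete are the same find primaries Mark's cron job uses).
    find "$spool/tmp" -type f -mmin +5 -delete
    # Daily cleanup: remove everything in the spool except tmp/ itself.
    # -mindepth 1 skips the spool directory; -exec ... {} + batches the
    # arguments for rm, so no argument-list limit and no confirmation prompts.
    find "$spool" -mindepth 1 -maxdepth 1 ! -name tmp -exec rm -rf {} +
}
```

Run from cron every 5 minutes as e.g. `clean_spool /private/var/spool/cups`; files younger than 5 minutes in tmp/ survive each pass, so active print jobs are left alone.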