Working with files and directories

Running commands in a pipeline

A pipeline is a sequence of commands that operate concurrently on a stream of data. All the processes are started simultaneously, but instead of reading from or writing to a file or terminal, they read and write through pipes. As the first process begins to produce output, that output is fed to the second process as input, so both processes work at the same time. For example:

   $ ls | sort > list1
Here, the ls command sends its output straight to sort, which processes it and sends its own output to the file list1. Unlike the similar command line in ``Entering commands on the same line'', no intermediate file called list is created. Writing to a temporary file is a comparatively slow process because it involves transferring data to disk, and the second process must then access the file and read it back into memory. Pipes, in contrast, transfer data directly from one process to another without writing it to the disk.
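The same pipe behavior can be seen with any fixed stream of input; the sample lines below are made up for illustration, standing in for the output of ls:

```shell
# Three unsorted lines (hypothetical) are piped straight into sort,
# which writes its output to list1 -- no intermediate file is created.
printf 'cherry\napple\nbanana\n' | sort > list1
cat list1
```

Running this prints the three lines in sorted order: apple, banana, cherry.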

More than one pipe operation can appear on a single command line, as follows:

   $ sort -u file | grep basilisk | wc -l > words
This pipe sequence creates a file called words, containing a count of the unique lines in file that contain the string ``basilisk''. The sort -u command removes duplicate lines, grep selects only the lines containing ``basilisk'', and wc -l counts them.
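A sketch of this three-stage pipeline on hypothetical sample data (the file contents here are invented for illustration):

```shell
# Create a sample input file containing a duplicate line.
printf 'a basilisk\na basilisk\nbasilisk eggs\nthe dragon\n' > file

# sort -u drops the duplicate "a basilisk" line, grep keeps the two
# remaining lines containing "basilisk", and wc -l counts them;
# the count is written to the file words.
sort -u file | grep basilisk | wc -l > words
cat words
```

The file words ends up containing the count 2: the duplicate line is counted only once, and ``the dragon'' is filtered out by grep.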

© 2005 The SCO Group, Inc. All rights reserved.
SCO OpenServer Release 6.0.0 -- 03 June 2005