Before you start your experiment, you must decide what type of experiment you want to run and what parameters the logger should use. There are four parameters: ``Logging'', ``StartState'', ``Accuracy'', and ``LogPrefix''.
The ``Logging'' and ``StartState'' parameters serve a similar purpose: both control whether logging is on or off. The distinction between the two is that if ``Logging'' is set to ``off'', no log is created for the experiment at all, so logging cannot be turned on later. When ``StartState'' is set to ``off'' (but ``Logging'' is on), the experiment creates the header of the log but writes no logging records; logging may then be turned on later. See ``During the experiment'' for details on how to turn logging on and off while the experiment runs. By default, both ``Logging'' and ``StartState'' are set to ``on''.
Why would one want to turn logging on or off during an experiment? It is often prudent to factor out startup costs from the data (for example, the cost of page faults or of reading initialization files) or to exclude parts of the program's run that have already been analyzed (such as a particularly expensive operation). It is possible to do this while reading the log (see ``Analyzing the results of the experiment'' for more details), but it is often more convenient to do it by turning logging on and off judiciously.
The ``Accuracy'' parameter selects between accurate and normal logs. Accurate time stamp logs are so called because the logger stores time stamps that are accurate to at least the microsecond. This accuracy comes at a cost: achieving it takes a few microseconds per log entry. The overhead is constant and can therefore be factored out during the analysis phase, but if your software is highly time-sensitive, you may need to consider the effect of this extra overhead. Also, accurate time stamp experiments require root privilege to run, because the logger talks directly to the hardware to get its timing information.
Normal time stamp logs store less accurate data: the granularity of the timing is 1/100 of a second. This is similar to prof analysis (which uses a granularity of 1/10 of a second). As with prof, the assumption is that if experiments run for a significant amount of time, statistical variation will average out. Normal logs are especially useful for locality-of-reference tuning, which barely uses the time stamp information at all. This type of experiment does not require root privilege to run. By default, the ``Accuracy'' parameter is set to ``normal''.
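The difference between the two granularities is easy to see with ordinary shell tools. The sketch below captures the current time at microsecond and at 1/100-second resolution; it is only an illustration of the two precisions, not part of fprof, and it assumes GNU date (which supports the %N format).

```shell
# Capture the current time at the two granularities (requires GNU date).
t_accurate=$(date +%s.%6N)   # microsecond resolution, as in an "accurate" log
t_normal=$(date +%s.%2N)     # 1/100-second resolution, as in a "normal" log
echo "accurate: $t_accurate"
echo "normal:   $t_normal"
```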
The ``LogPrefix'' parameter lets you choose where the logs are stored. When a program starts with logging turned on, an output file is created with the name LogPrefix.pid (where pid is the process ID of the running program). By default, ``LogPrefix'' is set to /tmp/out, so a process with a pid of 12 writes its log to /tmp/out.12.
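The naming scheme is easy to reproduce by hand. The sketch below builds a log file name the same way (prefix, a dot, then the pid); here the shell's own pid stands in for the profiled program's:

```shell
# Build a log file name the way the logger does: LogPrefix.pid
prefix=/tmp/out        # the default value of LogPrefix
pid=$$                 # stand-in for the profiled program's process ID
logfile="$prefix.$pid"
echo "$logfile"        # e.g. /tmp/out.12 for a process with pid 12
```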
There are two different ways of setting up these parameters: system-wide and per-experiment. To set up system-wide parameters, use
fprof -C [Logging=on|off],[StartState=on|off],[Accuracy=accurate|normal],[LogPrefix=pathname-for-prefix]

Then, just run the programs to be profiled normally. Using the example started earlier:
$ fprof -C StartState=off,Accuracy=normal
$ travel
Reading in data . . .
Processing data done.
$
This will run travel with logging on, but the actual logging of calls does not start until a subsequent command turns it on. The log will contain entries with time stamps accurate to 1/100 of a second.
To set up per-experiment parameters, use
fprof -s -C [Logging=on|off],[StartState=on|off],[Accuracy=accurate|normal],[LogPrefix=pathname-for-prefix] command arguments

This will run the command with the designated parameters. To repeat the same configuration as above, do this:
$ fprof -s -C StartState=off,Accuracy=normal travel
Reading in data . . .
Processing data done.
$
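In both forms, the argument to -C is a comma-separated list of key=value pairs. The sketch below shows how such a string breaks down into its parameters; it is a plain shell illustration of the format, not fprof's own parser:

```shell
# Split an fprof -C parameter string into its key=value pairs.
params="StartState=off,Accuracy=normal,LogPrefix=/tmp/out"
IFS=','
for pair in $params; do
    key=${pair%%=*}      # text before the first '='
    value=${pair#*=}     # text after the first '='
    echo "$key -> $value"
done
# prints:
#   StartState -> off
#   Accuracy -> normal
#   LogPrefix -> /tmp/out
```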