
How to redirect and append both standard output and standard error to a file with Bash

To redirect standard output to a truncated file in Bash, I know to use:

cmd > file.txt

To redirect standard output in Bash, appending to a file, I know to use:

cmd >> file.txt

To redirect both standard output and standard error to a truncated file, I know to use:

cmd &> file.txt

How do I redirect both standard output and standard error appending to a file? cmd &>> file.txt did not work for me.

I would like to note that &>outfile is Bash-specific (and supported by a few other shells), not portable. The portable way (similar to the appending answers) always was and still is >outfile 2>&1
… and the ordering of that is important.
Does this answer your question? Redirect stderr and stdout in Bash
@BrettHale I marked that one as a duplicate of this question instead, mainly because the accepted answer here is portable to other shells, and this question is better articulated. Weird that the same user asked the same question twice, and didn't get noticed until now.

Fritz
cmd >>file.txt 2>&1

Bash executes the redirects from left to right as follows:

>>file.txt: Open file.txt in append mode and redirect stdout there.

2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, the &1 reuses the file descriptor which stdout currently uses.
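
A minimal sketch to see that left-to-right ordering in action (the command and file name are just placeholders):

# Appends both streams: stdout goes to the file first, then stderr follows stdout into it.
ls /tmp /does-not-exist >>out.log 2>&1

# Wrong order for this purpose: stderr is pointed at the terminal (where stdout was
# at that moment), and only afterwards is stdout sent to the file, so errors stay on screen.
ls /tmp /does-not-exist 2>&1 >>out.log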


Works great! But is there a way to make sense of this, or should I treat it like an atomic Bash construct?
It's simple redirection; redirection statements are evaluated, as always, from left to right. >>file: redirect STDOUT to file (append mode) (short for 1>>file). 2>&1: redirect STDERR to "where stdout currently goes". Note that the interpretation "redirect STDERR to STDOUT" is wrong.
It says "append output (stdout, file descriptor 1) onto file.txt and send stderr (file descriptor 2) to the same place as fd1".
@TheBonsai However, what if I need to redirect STDERR to another file, but appending? Is this possible?
If you do cmd >>file1 2>>file2, it should achieve what you want.
Matthias Braun

There are two ways to do this, depending on your Bash version.

The classic and portable (Bash pre-4) way is:

cmd >> outfile 2>&1

A nonportable way, starting with Bash 4, is:

cmd &>> outfile

(analogous to &> outfile)

For good coding style, you should

decide if portability is a concern (then use the classic way)

decide if portability even to Bash pre-4 is a concern (then use the classic way)

no matter which syntax you use, don't change it within the same script (confusion!)

If your script already starts with #!/bin/sh (no matter if intended or not), then the Bash 4 solution, and in general any Bash-specific code, is not the way to go.

Also remember that Bash 4 &>> is just shorter syntax — it does not introduce any new functionality or anything like that.

The syntax is (besides other redirection syntax) described in the Bash Hackers Wiki.
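
A hedged side-by-side sketch of the two forms (the command name and log file are made up):

# Classic, portable form: POSIX, works with /bin/sh (dash, ksh, ...) and any Bash version.
some_build_step >> build.log 2>&1

# Bash 4+ shorthand with the same effect: not POSIX; a plain /bin/sh would parse
# the & as "run in background" instead.
some_build_step &>> build.log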


I prefer &>> as it's consistent with &> and >>. It's also easier to read 'append output and errors to this file' than 'send errors to output, append output to this file'. Note that while Linux generally has a current version of Bash, on OS X, at the time of writing, Bash 4 still has to be installed manually via Homebrew etc.
I like it more because it is shorter and touches only two places per line. So what would, for example, zsh make out of "&>>"?
Also important to note: in a cron job, you have to use the pre-4 syntax, even if your system has Bash 4.
@zsero cron doesn't use bash at all... it uses sh. You can change the default shell by prepending SHELL=/bin/bash to the crontab -e file.
Aaron R.

In Bash you can also explicitly specify your redirects to different files:

cmd >log.out 2>log_error.out

Appending would be:

cmd >>log.out 2>>log_error.out

Redirecting two streams to the same file using your first option will cause the first one to write "on top" of the second, overwriting some or all of the contents. Use cmd >> log.out 2> log.out instead.
Thanks for catching that; you're right, one will clobber the other. However, your command doesn't work either. I think the only way to write to the same file is as has been given before cmd >log.out 2>&1. I'm editing my answer to remove the first example.
The reason cmd > my.log 2> my.log doesn't work is that the redirects are evaluated from left to right: > my.log says "open my.log, truncating it, and redirect stdout there", and after that has already been done, 2> my.log says "open my.log again, truncating it, and redirect stderr there". The command then writes through two independent file descriptors with independent offsets into the same file, so the two streams overwrite each other's output instead of interleaving cleanly.
On the other hand, cmd > my.log 2>&1 works because > my.log says "open my.log, truncating it, and redirect stdout to that file", and after that has already been done, the 2>&1 says "point file descriptor 2 to file descriptor 1". According to POSIX rules, file descriptor 1 is always stdout and 2 is always stderr, so stderr then points to the already opened my.log from the first redirect. Notice that the 2>&1 syntax doesn't create or modify actual files, so there's no need for a >>& variant. (If the first redirect had been >> my.log, the file would simply have been opened in append mode.)
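
A small sketch of the difference described in these comments (the command and file name are only illustrative):

# Broken: my.log is opened (and truncated) twice, once per redirect. The two file
# descriptors keep independent offsets, so the streams overwrite each other's output.
ls /tmp /does-not-exist > my.log 2> my.log

# Correct: my.log is opened once for stdout, and stderr is then duplicated from that
# same descriptor, so both streams share one offset and interleave cleanly.
ls /tmp /does-not-exist > my.log 2>&1
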
Peter Mortensen

This should work fine:

your_command 2>&1 | tee -a file.txt

It will store all logs in file.txt as well as dump them in the terminal.
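
One caveat, as a hedged sketch: in a pipeline, $? reflects the exit status of tee, not of your command, unless you account for that (your_command is a placeholder):

set -o pipefail                        # pipeline fails if any command in it fails
your_command 2>&1 | tee -a file.txt
echo "exit status: $?"                 # now reflects a failing your_command as well

# Without pipefail, the original status is still available in PIPESTATUS:
your_command 2>&1 | tee -a file.txt
echo "exit status: ${PIPESTATUS[0]}"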


This is the correct answer if you want to see the output in the terminal, too. However, this was not the question originally asked.
Peter Mortensen

In Bash 4 (as well as Z shell (zsh) 4.3.11):

cmd &>> outfile

It just works out of the box.


@all: this is a good answer, since it works with bash and is brief, so I've edited to make sure it mentions bash explicitly.
@mikemaccana: TheBonsai's answer shows bash 4 solution since 2009
Why does this answer even exist when it's included in TheBonsai's answer? Please consider deleting it. You'll get a disciplined badge.
Peter Mortensen

Try this:

You_command 1> output.log  2>&1

Your usage of &> x.file does work in Bash 4. Sorry for that :(

Here are some additional tips.

0, 1, 2, ..., 9 are file descriptors in Bash.

0 stands for standard input, 1 for standard output, and 2 for standard error. Descriptors 3-9 are spare, for any other temporary usage.

Any file descriptor can be redirected to another file descriptor or to a file by using the operator > (truncate) or >> (append).

Please see the reference in Chapter 20. I/O Redirection.
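
As a hedged illustration of the spare descriptors mentioned above, you can park the current stdout on descriptor 3, log for a while, and then restore it (output.log is just an example name):

exec 3>&1               # save the current stdout on spare descriptor 3
exec 1>>output.log      # stdout now appends to output.log
echo "this line goes into the log"
exec 1>&3 3>&-          # restore stdout from descriptor 3, then close descriptor 3
echo "this line is back on the terminal"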


Your example will do something different than the OP asked for: It will redirect the stderr of You_command to stdout and the stdout of You_command to the file output.log. Additionally it will not append to the file but it will overwrite it.
Correct: a file descriptor for any other file can be any value from 3 up.
Your answer shows the most common output redirection error: redirecting STDERR to where STDOUT is currently pointing and only after that redirecting STDOUT to file. This will not cause STDERR to be redirected to the same file. Order of the redirections matters.
Does it mean I should first redirect STDERR to STDOUT, and then redirect STDOUT to a file? 1> output.log 2>&1
@Quintus.Zhou Yup. Your version redirects err to out, and at the same time out to file.
Peter Mortensen

Another approach:

If using older versions of Bash where &>> isn't available, you also can do:

(cmd 2>&1) >> file.txt

This spawns a subshell, so it's less efficient than the traditional approach of cmd >> file.txt 2>&1, and it consequently won't work for commands that need to modify the current shell (e.g. cd, pushd), but this approach feels more natural and understandable to me:

Redirect standard error to standard output. Redirect the new standard output by appending to a file.

Also, the parentheses remove any ambiguity of order, especially if you want to pipe standard output and standard error to another command instead.

To avoid starting a subshell, you could instead use curly braces rather than parentheses to create a group command:

{ cmd 2>&1; } >> file.txt

(Note that a semicolon (or newline) is required to terminate the group command.)
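
And, as hinted above, the same grouping works when you want to pipe both streams to another command rather than into a file; a sketch (the grep filter is just an example):

# Both stdout and stderr of cmd are fed into grep through the pipe.
{ cmd 2>&1; } | grep -i error

# The subshell form behaves the same way for piping purposes.
(cmd 2>&1) | grep -i error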


This implementation causes the system to run one extra process. Using the syntax cmd >> file 2>&1 works in all shells and does not need an extra process.
@MikkoRantalainen I already explained that it spawns a subshell and is less efficient. The point of this approach is that if efficiency isn't a big deal (and it rarely is), this way is easier to remember and harder to get wrong.
@MikkoRantalainen I've updated my answer with a variant that avoids spawning a subshell.
If you truly cannot remember whether the syntax is cmd >> file 2>&1 or cmd 2>&1 >> file, I think it would be easier to do cmd 2>&1 | cat >> file instead of using braces or parentheses. For me, once you understand that the implementation of cmd >> file 2>&1 is literally "redirect STDOUT to file" followed by "redirect STDERR to whatever file STDOUT is currently pointing to" (which is obviously file after the first redirect), it's immediately obvious which order to put the redirects in. UNIX does not support redirecting to a stream, only to the file descriptor pointed to by a stream.
F. Hauri - Give Up GitHub

Redirection from the script itself

You can set up the redirection from within the script itself:

#!/bin/bash

exec 1>>logfile.txt
exec 2>&1

/bin/ls -ld /tmp /tnt

Running this will create or append to logfile.txt, containing:

/bin/ls: cannot access '/tnt': No such file or directory
drwxrwxrwt 2 root root 4096 Apr  5 11:20 /tmp
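
If you only want part of the script redirected, rather than everything after the exec, a group command is an alternative; a hedged sketch, with illustrative names:

#!/bin/bash

# Only the grouped commands are appended to the log; lines outside the group
# still go wherever the script's stdout and stderr already point.
{
    ls -ld /tmp /tnt
    echo "this line also lands in logfile.txt"
} >> logfile.txt 2>&1

echo "this line is not redirected"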

Log to many different files

You could create two different log files, appending to an overall log while recreating a last-run log each time:

#!/bin/bash

if [ -e last.log ] ;then
    mv -f last.log last.old
fi
exec 1> >(tee -a overall.log /dev/tty >last.log)
exec 2>&1

ls -ld /tnt /tmp

Running this script will

if last.log already exists, rename it to last.old (overwriting last.old if it exists).

create a new last.log.

append everything to overall.log

output everything to the terminal.

Simple and combined logs

#!/bin/bash

[ -e last.err ] && mv -f last.err lasterr.old
[ -e last.log ] && mv -f last.log lastlog.old

exec 2> >(tee -a overall.err combined.log /dev/tty >last.err)
exec 1> >(tee -a overall.log combined.log /dev/tty >last.log)

ls -ld /tnt /tmp

So you have

last.log: last run log file

last.err: last run error file

lastlog.old: previous run log file

lasterr.old: previous run error file

overall.log: appended overall log file

overall.err: appended overall error file

combined.log: appended combined log and error file

...and output still goes to the terminal.

For an interactive session, use stdbuf:

If you plan to use this in an interactive shell, you must tell tee not to buffer its input/output:

# Source this to multi-log your session
[ -e last.err ] && mv -f last.err lasterr.old
[ -e last.log ] && mv -f last.log lastlog.old
exec 2> >(exec stdbuf -i0 -o0 tee -a overall.err combined.log /dev/tty >last.err)
exec 1> >(exec stdbuf -i0 -o0 tee -a overall.log combined.log /dev/tty >last.log)

Once this is sourced, you can try:

ls -ld /tnt /tmp

See also: Pipe output to two different commands, then follow the link in the comments there to a more detailed answer on that duplicate.
Ana Nimbus

If you care about the ordering of the content of the two streams, see @ed-morton's answer to a similar question here.