
How can I pipe stderr, and not stdout?

I have a program that writes information to stdout and stderr, and I need to process the stderr with grep, leaving stdout aside.

Using a temporary file, one could do it in two steps:

command > /dev/null 2> temp.file
grep 'something' temp.file

But how can this be achieved without temp files, using one command and pipes?

A similar question, but retaining stdout: unix.stackexchange.com/questions/3514/…
This question was for Bash but it's worth mentioning this related article for Bourne / Almquist shell.
I was expecting something like this: command 2| othercommand. Bash is so perfect that development ended in 1982, so we'll never see that in bash, I'm afraid.
@Rolf What do you mean? Bash gets updates fairly regularly; the syntax you propose is not very good, because it conflicts with existing conventions, but you can actually use |& to pipe both stderr and stdout (which isn't what the OP is asking exactly, but pretty close to what I guess your proposal could mean).
@Z4-tier Thanks. 2 | is not 2| indeed, I would not call it ambiguous, more like potentially error-inducing, just like echo 2 > /myfile and echo 2> /myfile which is even more of an issue. Anyway it's not about saving a few keystrokes, I find the other solutions convoluted and quirky and have yet to wrap my head around them which is why I would just fire up rc which has a straightforward syntax for determining the stream that you want to redirect.

jtepe

First redirect stderr to stdout — the pipe; then redirect stdout to /dev/null (without changing where stderr is going):

command 2>&1 >/dev/null | grep 'something'

For the details of I/O redirection in all its variety, see the chapter on Redirections in the Bash reference manual.

Note that the sequence of I/O redirections is interpreted left-to-right, but pipes are set up before the I/O redirections are interpreted. File descriptors such as 1 and 2 are references to open file descriptions. The operation 2>&1 makes file descriptor 2 aka stderr refer to the same open file description as file descriptor 1 aka stdout is currently referring to (see dup2() and open()). The operation >/dev/null then changes file descriptor 1 so that it refers to an open file description for /dev/null, but that doesn't change the fact that file descriptor 2 refers to the open file description which file descriptor 1 was originally pointing to — namely, the pipe.
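A quick way to see this in action is with a compound command that writes one placeholder line to each stream (the "out:"/"err:" strings are just stand-ins for real program output):

```shell
# 2>&1 first points stderr at the pipe; >/dev/null then silences stdout,
# so only the stderr line reaches grep.
{ echo "out: normal output"; echo "err: something went wrong" >&2; } \
  2>&1 >/dev/null | grep 'something'
```

Running this prints only the "err:" line, confirming that stderr alone went through the pipe.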


I just stumbled across /dev/stdout /dev/stderr /dev/stdin the other day, and I was curious if those are good ways of doing the same thing? I always thought 2>&1 was a bit obfuscated. So something like: command 2> /dev/stdout 1> /dev/null | grep 'something'
You could use /dev/stdout et al, or use /dev/fd/N. They will be marginally less efficient unless the shell treats them as special cases; the pure numeric notation doesn't involve accessing files by name, but using the devices does mean a file name lookup. Whether you could measure that is debatable. I like the succinctness of the numeric notation - but I've been using it for so long (more than a quarter century; ouch!) that I'm not qualified to judge its merits in the modern world.
@Jonathan Leffler: I take a little issue with your plain text explanation 'Redirect stderr to stdout and then stdout to /dev/null' -- Since one has to read redirection chains from right to left (not from left to right), we should also adapt our plain text explanation to this: 'Redirect stdout to /dev/null, and then stderr to where stdout used to be'.
@KurtPfeifle: au contraire! One must read the redirection chains from left to right since that is the way the shell processes them. The first operation is the 2>&1, which means 'connect stderr to the file descriptor that stdout is currently going to'. The second operation is 'change stdout so it goes to /dev/null', leaving stderr going to the original stdout, the pipe. The shell splits things at the pipe symbol first, so, the pipe redirection occurs before the 2>&1 or >/dev/null redirections, but that's all; the other operations are left-to-right. (Right-to-left wouldn't work.)
The thing that really surprises me about this is that it works on Windows, too (after renaming /dev/null to the Windows equivalent, nul).
Peter Mortensen

Or to swap the output from standard error and standard output over, use:

command 3>&1 1>&2 2>&3

This creates a new file descriptor (3) and assigns it to the same place as 1 (standard output), then assigns fd 1 (standard output) to the same place as fd 2 (standard error) and finally assigns fd 2 (standard error) to the same place as fd 3 (standard output).

Standard error is now available as standard output and the old standard output is preserved in standard error. This may be overkill, but it hopefully gives more details on Bash file descriptors (there are nine available to each process).
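As a quick check of the swap (the echo strings below are placeholders, not part of the original answer):

```shell
# Pipe is set up first, so fd 1 initially points at the pipe.
# 3>&1 saves the pipe in fd 3; 1>&2 sends stdout to the original stderr;
# 2>&3 sends stderr to the saved pipe. grep therefore sees the stderr line,
# while "to-stdout" escapes via stderr.
{ echo "to-stdout"; echo "to-stderr" >&2; } 3>&1 1>&2 2>&3 | grep 'stderr'
```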


A final tweak would be 3>&- to close the spare descriptor that you created from stdout
Can we create a file descriptor that has stderr and another that has the combination of stderr and stdout? In other words can stderr go to two different files at once?
@JonathanLeffler Out of curiosity, does your tweak serve any purpose performance-wise, other than perhaps clarifying the role of file descriptor (3) for an observer?
@JonasDahlbæk: the tweak is primarily an issue of tidiness. In truly arcane situations, it might make the difference between a process detecting and not detecting EOF, but that requires very peculiar circumstances.
Caution: this assumes FD 3 is not already in use, doesn't close it, and doesn't undo the swapping of file descriptors 1 and 2, so you can't go on to pipe this to yet another command. See this answer for further detail and work-around. For a much cleaner syntax for {ba,z}sh, see this answer.
Camille Goudeseune

In Bash, you can also redirect to a subshell using process substitution:

command > >(stdout pipe)  2> >(stderr pipe)

For the case at hand:

command 2> >(grep 'something') >/dev/null
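A minimal sketch of the same form with a placeholder command writing to both streams (requires Bash, since process substitution is not POSIX sh):

```shell
# stderr is fed to grep through the process substitution;
# stdout is discarded, so only the matching stderr line appears.
{ echo "regular output"; echo "something failed" >&2; } \
  2> >(grep 'something') >/dev/null
```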

Works very well for output to the screen. Do you have any idea why the ungrepped content appears again if I redirect the grep output into a file? After command 2> >(grep 'something' > grep.log) grep.log contains the same output as ungrepped.log from command 2> ungrepped.log
Use 2> >(stderr pipe >&2). Otherwise the output of the "stderr pipe" will go through the "stdout pipe".
Yeah! 2> >(...) works; I tried 2>&1 > >(...) but it didn't.
Here's a small example that may help me next time I look up how to do this. Consider the following ... awk -f /new_lines.awk <in-content.txt > out-content.txt 2> >(tee new_lines.log 1>&2 ) In this instance I wanted to also see what was coming out as errors on my console. But STDOUT was going to the output file. So inside the sub-shell, you need to redirect that STDOUT back to STDERR inside the parentheses. While that works, the STDOUT output from the tee command winds up at the end of the out-content.txt file. That seems inconsistent to me.
@datdinhquoc I did it somehow like 2>&1 1> >(dest pipe)
Pinko

Combining the best of these answers, if you do:

command 2> >(grep -v something 1>&2)

...then all stdout is preserved as stdout and all stderr is preserved as stderr, but you won't see any lines in stderr containing the string "something".

This has the unique advantage of not reversing or discarding stdout and stderr, nor smushing them together, nor using any temporary files.
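A small self-contained check of this pattern, using placeholder echoes (the "noise" string stands in for the unwanted message):

```shell
# stdout passes through untouched; stderr is filtered by grep -v,
# and the surviving stderr lines are routed back to stderr via 1>&2.
{ echo "stdout line"; echo "real error" >&2; echo "noise line" >&2; } \
  2> >(grep -v noise 1>&2)
```

Afterward, "stdout line" is still on stdout, "real error" is still on stderr, and the "noise" line is gone.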


Isn't command 2> >(grep -v something) (without 1>&2) the same?
No, without that, the filtered stderr ends up being routed to stdout.
This is what I needed - tar outputs "file changed as we read it" for a directory always, so just want to filter out that one line but see if any other errors occur. So tar cfz my.tar.gz mydirectory/ 2> >(grep -v 'changed as we read it' 1>&2) should work.
Michael Martinez

It's much easier to visualize things if you think about what's really going on with "redirects" and "pipes." Redirects and pipes in bash do one thing: modify where the process file descriptors 0, 1, and 2 point to (see /proc/[pid]/fd/*).

When a pipe or "|" operator is present on the command line, the first thing to happen is that bash creates a fifo and points the left side command's FD 1 to this fifo, and points the right side command's FD 0 to the same fifo.

Next, the redirect operators for each side are evaluated from left to right, and the current settings are used whenever duplication of the descriptor occurs. This is important because since the pipe was set up first, the FD1 (left side) and FD0 (right side) are already changed from what they might normally have been, and any duplication of these will reflect that fact.

Therefore, when you type something like the following:

command 2>&1 >/dev/null | grep 'something'

Here is what happens, in order:

1. A pipe (fifo) is created. "command FD 1" is pointed to this pipe.
2. "grep FD 0" is also pointed to this pipe.
3. "command FD 2" is pointed to where "command FD 1" currently points (the pipe).
4. "command FD 1" is pointed to /dev/null.

So, all output that "command" writes to its FD 2 (stderr) makes its way to the pipe and is read by "grep" on the other side. All output that "command" writes to its FD 1 (stdout) makes its way to /dev/null.

If instead, you run the following:

command >/dev/null 2>&1 | grep 'something'

Here's what happens:

1. A pipe is created, and "command FD 1" and "grep FD 0" are pointed to it.
2. "command FD 1" is pointed to /dev/null.
3. "command FD 2" is pointed to where FD 1 currently points (/dev/null).

So, all stdout and stderr from "command" go to /dev/null. Nothing goes to the pipe, and thus "grep" will close out without displaying anything on the screen.
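The difference between the two orderings can be checked directly with placeholder echoes:

```shell
# 2>&1 before >/dev/null: stderr keeps the pipe, so grep prints "err".
{ echo out; echo err >&2; } 2>&1 >/dev/null | grep .

# >/dev/null before 2>&1: both streams end up at /dev/null,
# so grep prints nothing.
{ echo out; echo err >&2; } >/dev/null 2>&1 | grep .
```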

Also note that redirects (file descriptors) can be read-only (<), write-only (>), or read-write (<>).

A final note. Whether a program writes something to FD1 or FD2, is entirely up to the programmer. Good programming practice dictates that error messages should go to FD 2 and normal output to FD 1, but you will often find sloppy programming that mixes the two or otherwise ignores the convention.
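In a shell script, following that convention is a one-character affair (the messages here are placeholders):

```shell
echo "processing complete"             # normal output -> FD 1 (stdout)
echo "warning: disk nearly full" >&2   # diagnostic    -> FD 2 (stderr)
```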


Really nice answer. My one suggestion would be to replace your first use of "fifo" with "fifo (a named pipe)". I've been using Linux for a while but somehow never managed to learn that is another term for named pipe. This would have saved me from looking it up, but then again I wouldn't have learned the other stuff I saw when I found that out!
@MarkEdington Please note that FIFO is only another term for named pipe in the context of pipes and IPC. In a more general context, FIFO means First in, first out, which describes insertion and removal from a queue data structure.
@Loomchild Of course. The point of my comment was that even as a seasoned developer, I had never seen FIFO used as a synonym for named pipe. In other words, I didn't know this: en.wikipedia.org/wiki/FIFO_(computing_and_electronics)#Pipes - Clarifying that in the answer would have saved me time.
Peter Mortensen

If you are using Bash, then use:

command >/dev/null |& grep "something"

http://www.gnu.org/software/bash/manual/bashref.html#Pipelines


Nope, |& is equal to 2>&1 which combines stdout and stderr. The question explicitly asked for output without stdout.
„If ‘|&’ is used, the standard error of command1 is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |” Taken verbatim from the fourth paragraph at your link.
@Profpatsch: Ken's answer is correct; note that he redirects stdout to null before combining stdout and stderr, so the pipe receives only stderr, because stdout was previously dropped to /dev/null.
But I still find the answer misleading: >/dev/null |& expands to >/dev/null 2>&1 |, which means nothing reaches the pipe, because both FD 1 and FD 2 end up tied to /dev/null and nothing is tied to the pipe (e.g. ls -R /tmp/* >/dev/null 2>&1 | grep i gives empty output, but ls -R /tmp/* 2>&1 >/dev/null | grep i lets FD 2, now tied to the original stdout, feed the pipe).
Ken Sharp, I tested, and ( echo out; echo err >&2 ) >/dev/null |& grep "." gives no output (where we want "err"). man bash says If |& is used … is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command. So first we redirect command's FD1 to null, then we redirect command's FD2 to where FD1 pointed, ie. null, so grep's FD0 gets no input. See stackoverflow.com/a/18342079/69663 for a more in-depth explanation.
JBD

For those who want to redirect stdout and stderr permanently to files, grep on stderr, but keep the stdout to write messages to a tty:

# save tty-stdout to fd 3
exec 3>&1
# switch stdout and stderr, grep (-v) stderr for nasty messages and append to files
exec 2> >(grep -v "nasty_msg" >> std.err) >> std.out
# goes to the std.out
echo "my first message" >&1
# goes to the std.err
echo "a error message" >&2
# goes nowhere
echo "this nasty_msg won't appear anywhere" >&2
# goes to the tty
echo "a message on the terminal" >&3

theDolphin

This will redirect command1 stderr to command2 stdin, while leaving command1 stdout as is.

exec 3>&1
command1 2>&1 >&3 3>&- | command2 3>&-
exec 3>&-

Taken from LDP
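With placeholder stand-ins for command1 and command2, the three lines look like this in practice:

```shell
exec 3>&1                              # fd 3 saves the current stdout
# stderr goes to the pipe (grep), stdout is restored from fd 3,
# and the spare descriptor is closed on both sides.
{ echo keep-on-stdout; echo pipe-me >&2; } 2>&1 >&3 3>&- | grep 'pipe' 3>&-
exec 3>&-                              # close the saved descriptor
```

Both "keep-on-stdout" (directly) and "pipe-me" (via grep) end up on the terminal's stdout.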


So if I'm understanding this correctly, we start by duplicating the stdout of the current process (3>&1). Next redirect command1's error to its output (2>&1), then point stdout of command1 to the parent process's copy of stdout (>&3). Clean up the duplicated file descriptor in the command1 (3>&-). Over in command2, we just need to also delete the duplicated file descriptor (3>&-). These duplicates are caused when the parent forked itself to create both processes, so we just clean them up. Finally in the end, we delete the parent process's file descriptor (3>&-).
In the end, we have command1's original stdout pointer, now pointing to the parent process's stdout, while its stderr is pointing to where its stdout used to be, making it the new stdout for command2.
Tripp Kinetics

I just came up with a solution for sending stdout to one command and stderr to another, using named pipes.

Here goes.

mkfifo stdout-target
mkfifo stderr-target
cat < stdout-target | command-for-stdout &
cat < stderr-target | command-for-stderr &
main-command 1>stdout-target 2>stderr-target

It's probably a good idea to remove the named pipes afterward.
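A self-contained sketch of the idea, using a temporary directory and simple stand-ins for the two consumer commands (tr and cat here are placeholders):

```shell
dir=$(mktemp -d)
mkfifo "$dir/stdout-target" "$dir/stderr-target"
tr 'a-z' 'A-Z' < "$dir/stdout-target" &   # stand-in for command-for-stdout
cat < "$dir/stderr-target" &              # stand-in for command-for-stderr
{ echo normal; echo oops >&2; } > "$dir/stdout-target" 2> "$dir/stderr-target"
wait                                      # let both background readers finish
rm -r "$dir"                              # remove the named pipes afterward
```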


Upvote for FIFO use
Peter Mortensen

You can use the rc shell.

First install the package (it's less than 1 MB).

This is an example of how you would discard standard output and pipe standard error to grep in rc:

find /proc/ >[1] /dev/null |[2] grep task

You can do it without leaving Bash:

rc -c 'find /proc/ >[1] /dev/null |[2] grep task'

As you may have noticed, you can specify which file descriptor you want piped by using brackets after the pipe.

Standard file descriptors are numbered as follows:

0 : Standard input

1 : Standard output

2 : Standard error


Suggesting installing an entirely different shell seems kind of drastic to me.
@xdhmoore What's so drastic about it? It does not replace the default shell and the software only takes up a few K of space. The rc syntax for piping stderr is way better than what you would have to do in bash so I think it is worth a mention.
lasteye

I tried the following and found that it works as well:

command > /dev/null 2>&1 | grep 'something'

Doesn't work. It just sends stderr to the terminal. Ignores the pipe.