
Concatenate multiple files but include filename as section headers

I would like to concatenate a number of text files into one large file in terminal. I know I can do this using the cat command. However, I would like the filename of each file to precede the "data dump" for that file. Anyone know how to do this?

what I currently have:

file1.txt = bluemoongoodbeer

file2.txt = awesomepossum

file3.txt = hownowbrowncow

cat file1.txt file2.txt file3.txt

desired output:

file1

bluemoongoodbeer

file2

awesomepossum

file3

hownowbrowncow
I'm actually using uuencode & uudecode to do something similar; I don't need the intermediate file to be readable, only the result, which I pipe to another command again.
bat is a great alternative to cat when you just want to view multiple files' contents.

John Diamond

I was looking for the same thing and found this suggestion:

tail -n +1 file1.txt file2.txt file3.txt

Output:

==> file1.txt <==
<contents of file1.txt>

==> file2.txt <==
<contents of file2.txt>

==> file3.txt <==
<contents of file3.txt>

If there is only a single file then the header will not be printed. If using GNU utils, you can use -v to always print a header.
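A minimal sketch of the GNU-only -v behavior, using a throwaway temp file (the file name and contents here are just examples):

```shell
# Create a sample file in a temporary directory
tmp=$(mktemp -d)
printf 'bluemoongoodbeer\n' > "$tmp/file1.txt"

# Plain tail prints no header when given a single file...
tail -n +1 "$tmp/file1.txt"

# ...but GNU tail's -v flag forces the "==> name <==" header even for one file:
tail -v -n +1 "$tmp/file1.txt"

rm -rf "$tmp"
```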


This works with the GNU tail (part of GNU Coreutils) as well.
Awesome -n +1 option! An alternative: head -n-0 file1 file2 file3.
Works great with both BSD tail and GNU tail on Mac OS X. You can leave out the space between -n and +1, as in -n+1.
tail -n +1 * was exactly what I was looking for, thanks!
Works on macOS 10.14.4: sudo ulimit -n 1024; find -f . -name "*.rb" | xargs tail -n+1 > ./source_ruby.txt
Theo

I used grep for something similar:

grep "" *.txt

It does not give you a 'header', but prefixes every line with the filename.


Output breaks if *.txt expands to only one file. In this regard, I'd advise grep '' /dev/null *.txt
+1 for showing me a new use for grep. This met my needs perfectly. In my case, each file only contained one line, so it gave me neatly formatted output that was easily parsable.
grep will only print file headers if there is more than one file. If you want to make sure to print the file path always, use -H. If you don't want the headers use -h. Also note it will print (standard input) for STDIN.
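To illustrate the -H flag mentioned above, here's a small sketch with a throwaway file (the name and contents are arbitrary):

```shell
tmp=$(mktemp -d)
printf 'alpha\n' > "$tmp/a.txt"

# With only one file argument, plain grep omits the filename prefix:
grep "" "$tmp/a.txt"        # -> alpha

# -H forces the "filename:" prefix even for a single file:
grep -H "" "$tmp/a.txt"     # -> /tmp/.../a.txt:alpha

rm -rf "$tmp"
```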
You can also use ag (the silver searcher): by default, ag . *.txt prefixes each file with its name, and each line with its number.
Of note, passing -n to grep also yields line numbers which allows you to write simple linters with pinpointing that could be picked up by, e.g., emacs.
Flimm

This should do the trick as well:

$ find . -type f -print -exec cat {} \;
./file1.txt
Content of file1.txt
./file2.txt
Content of file2.txt

Here is the explanation for the command-line arguments:

find    = linux `find` command finds filenames, see `man find` for more info
.       = in current directory
-type f = only files, not directories
-print  = show found file
-exec   = additionally execute another linux command
cat     = linux `cat` command, see `man cat`, displays file contents
{}      = placeholder for the currently found filename
\;      = tell `find` command that it ends now here

You can further combine searches through boolean operators like -and or -or. find -ls is nice, too.


Could you explain more what this command does? It's exactly what I needed.
This is Linux's standard find command. It searches all files in the current directory, prints each one's name, then cats the file. Omitting the -print won't print the filename before the cat.
You can also use -printf to customize the output. For example: find *.conf -type f -printf '\n==> %p <==\n' -exec cat {} \; to match the output of tail -n +1 *
-printf doesn't work on mac, unless you want to brew install findutils and then use gfind instead of find.
If you want colors you can use that fact that find allows multiple -execs: find -name '*.conf' -exec printf '\n\e[33;1m%s\e[0m\n' {} \; -exec cat {} \;
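As a sketch of combining find tests with boolean operators (the file names and extensions here are just examples; -o is the POSIX spelling of -or):

```shell
tmp=$(mktemp -d)
printf 'text\n' > "$tmp/notes.txt"
printf 'conf\n' > "$tmp/app.conf"
printf 'skip\n' > "$tmp/image.png"

# Print the name and contents of every .txt or .conf file, skipping the rest
find "$tmp" -type f \( -name '*.txt' -o -name '*.conf' \) -print -exec cat {} \;

rm -rf "$tmp"
```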
Chris Eberle

This should do the trick:

for filename in file1.txt file2.txt file3.txt; do
    echo "$filename"
    cat "$filename"
done > output.txt

or to do this for all text files recursively:

find . -type f -name '*.txt' -print | while read filename; do
    echo "$filename"
    cat "$filename"
done > output.txt

didn't work. I just wrote some really ugly awk code: for i in $listoffiles do awk '{print FILENAME,$0,$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11}' $i >> concat.txt done
...care to elaborate? That's about as simple as bash code gets.
@Nick: your awk line shouldn't even work, considering that $0 is the entire line, so you've actually got repeating columns in there...
@Nick: Nifty solution otherwise :)
@Chris: yes, but its a lot uglier than I would like it to be. Maybe your code wasn't working for me because I'm using >> to catch the stdout?
Asclepius

When there is more than one input file, the more command concatenates them and also includes each filename as a header.

To concatenate to a file:

more *.txt > out.txt

To concatenate to the terminal (piping through cat stops more from paginating):

more *.txt | cat

Example output:

::::::::::::::
file1.txt
::::::::::::::
This is
my first file.
::::::::::::::
file2.txt
::::::::::::::
And this is my
second file.

@Acumenus For myself I had to use: String="${String//$'\n'::::::::::::::$'\n'/|}" then: String="${String//::::::::::::::$'\n'/}" and finally: String="${String//$'\n'/|}" to make into a YAD array: IFS='|' read -ra OLD_ARR <<< "$String"
@Acumenus First I had to build the string field using: String=$(sudo -u "$SUDO_USER" ssh "$SUDO_USER"@"$TRG_HOST" \ "find /tmp/$SUDO_USER/scp.*/*.Header -type f \ -printf '%Ts\t%p\n' | sort -nr | cut -f2 | \ xargs more | cat | cut -d'|' -f2,3" \ )
@WinEunuuchs2Unix I don't follow. If you'd like, you can explain more clearly in a gist.github.com, and link to it instead.
@Acumenus Better yet I'll upload the script to github when done. It's just to copy root files between hosts using sudo because hosts don't have root accounts. The code in comments was to select headers for previous payloads. Kind of a fringe thing that won't interest most users.
This does not work for me on macOS. I only get the file contents.
Kevin
find . -type f -print0 | xargs -0 -I % sh -c 'echo %; cat %'

This will print the full filename (including path), then the contents of the file. It is also very flexible, as you can use -name "expr" for the find command, and run as many commands as you like on the files.


It is also quite straightforward to combine with grep. To use with bash: find . -type f -print | grep PATTERN | xargs -n 1 -I {} bash -c 'echo ==== {} ====; cat {}; echo'
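One caveat: substituting the raw filename into the sh -c string breaks on names containing spaces or quotes. A sketch of a safer variant that passes the names as positional arguments instead (the file name below is an example chosen to show the problem):

```shell
tmp=$(mktemp -d)
printf 'hello\n' > "$tmp/file with spaces.txt"

# Filenames arrive as "$@" inside the inline script, so the shell never
# re-parses them; the trailing "sh" fills in $0.
find "$tmp" -type f -print0 |
    xargs -0 sh -c 'for f in "$@"; do echo "$f"; cat "$f"; done' sh

rm -rf "$tmp"
```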
kvantour

And the missing awk solution is:

$ awk '(FNR==1){print ">> " FILENAME " <<"}1' *

Thanks for your great answer. Can you explain the meaning of the 1 at the end of expression?
Thanks. Have a look at "What is the meaning of 1 at the end of an awk script?"
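For reference, the trailing 1 is an always-true pattern with no action, so awk applies its default action ({print}) to every line. These two commands produce identical output (the sample file here is just an illustration):

```shell
tmp=$(mktemp -d)
printf 'line a\nline b\n' > "$tmp/file1.txt"

# Shorthand: the bare 1 prints every line
awk '(FNR==1){print ">> " FILENAME " <<"}1' "$tmp/file1.txt"

# Spelled out: an explicit {print} rule does the same thing
awk '(FNR==1){print ">> " FILENAME " <<"} {print}' "$tmp/file1.txt"

rm -rf "$tmp"
```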
MorpheousMarty

This is how I normally handle formatting like that:

for i in *; do echo "$i"; echo ; cat "$i"; echo ; done ;

I generally pipe the cat into a grep for specific information.


Be careful here. for i in * won't include subdirectories.
ch3ll0v3k

I like this option:

for x in ./*.php; do echo "$x"; grep -i 'menuItem' "$x"; done

Output looks like this:

./debug-things.php
./Facebook.Pixel.Code.php
./footer.trusted.seller.items.php
./GoogleAnalytics.php
./JivositeCode.php
./Live-Messenger.php
./mPopex.php
./NOTIFICATIONS-box.php
./reviewPopUp_Frame.php
            $('#top-nav-scroller-pos-<?=$activeMenuItem;?>').addClass('active');
            gotToMenuItem();
./Reviews-Frames-PopUps.php
./social.media.login.btns.php
./social-side-bar.php
./staticWalletsAlerst.php
./tmp-fix.php
./top-nav-scroller.php
$activeMenuItem = '0';
        $activeMenuItem = '1';
        $activeMenuItem = '2';
        $activeMenuItem = '3';
./Waiting-Overlay.php
./Yandex.Metrika.php

keyser

You can use this simple command instead of a for loop:

ls -ltr | awk '{print $9}' | xargs head

drkvogel

If the files all have the same name or can be matched by find, you can do (e.g.):

find . -name create.sh | xargs tail -n +1

to find, show the path of and cat each file.


sjas

If you like colors, try this:

for i in *; do echo; echo $'\e[33;1m'"$i"$'\e[0m'; cat "$i"; done | less -R

or:

tail -n +1 * | grep -e $ -e '==.*'

or: (with package 'multitail' installed)

multitail *

For the sake of colors highlighting the filename: find . -type f -name "*.txt" | xargs -I {} bash -c "echo $'\e[33;1m'{}$'\e[0m';cat {}"
Trey Brister

Here is a really simple way. You said you want to cat, which implies you want to view the entire file. But you also need the filename printed.

Try this

head -n99999999 * or head -n99999999 file1.txt file2.txt file3.txt

Hope that helps


A cleaner syntax would be head -n -0 file1.txt file2.txt file3.txt. Works with head from GNU coreutils.
boroboris

If you want to replace those ugly ==> <== markers with something else:

tail -n +1 *.txt | sed -e 's/==>/\n###/g' -e 's/<==/###/g' >> "files.txt"

explanation:

tail -n +1 *.txt - output all files in folder with header

sed -e 's/==>/\n###/g' -e 's/<==/###/g' - replace ==> with new line + ### and <== with just ###

>> "files.txt" - output all to a file
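A quick sketch of the substitution applied to a single header line (note: the \n in the replacement text is a GNU sed feature; BSD sed treats it differently):

```shell
# Feed one tail-style header through the two substitutions:
# "==>" becomes a newline plus "###", and "<==" becomes "###"
printf '==> file1.txt <==\n' | sed -e 's/==>/\n###/g' -e 's/<==/###/g'
```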


Matt Clark
find . -type f -exec cat {} \; -print

Fekete Sumér

If you want the result in the same format as your desired output you can try:

for file in `ls file{1..3}.txt`; do
    echo $file | cut -d '.' -f 1
    cat $file
done

Result:

file1
bluemoongoodbeer
file2
awesomepossum
file3
hownowbrowncow

You can put echo -e before and after the cut so you have the spacing between the lines as well:

$ for file in `ls file{1..3}.txt`; do echo $file | cut -d '.' -f 1; echo -e; cat $file; echo -e  ; done;

Result:

file1

bluemoongoodbeer

file2

awesomepossum

file3

hownowbrowncow

Explanation: the for loop goes through the list produced by the ls command. $ ls file{1..3}.txt gives: file1.txt file2.txt file3.txt. Each iteration echoes the $file string, which is piped into a cut command using . as the field separator; that breaks fileX.txt into two pieces and prints field 1 (field 2 is the txt). The rest should be clear.
JustinKaisse

AIX 7.1 ksh

... glomming onto those who've already mentioned head works for some of us:

$ r head
head file*.txt
==> file1.txt <==
xxx
111

==> file2.txt <==
yyy
222
nyuk nyuk nyuk

==> file3.txt <==
zzz
$

My need is to read the first line; as noted, if you want more than 10 lines, you'll have to add options (head -9999, etc).

Sorry for posting a derivative comment; I don't have sufficient street cred to comment/add to someone's comment.


serenesat

This method will print filename and then file contents:

tail -f file1.txt file2.txt

Output:

==> file1.txt <==
contents of file1.txt ...
contents of file1.txt ...

==> file2.txt <==
contents of file2.txt ...
contents of file2.txt ...

The -f is useful if you want to track a file which is being written to, but not really within the scope of what the OP asked.
This solution only prints the last few lines - not the whole files' contents.
Igor

For solving this tasks I usually use the following command:

$ cat file{1..3}.txt >> result.txt

It's a very convenient way to concatenate files if the number of files is quite large.


But this does not answer the OP's question.
John Ray

First I created each file: echo 'information' > file1.txt for each file[123].txt.

Then I printed each file to make sure the information was correct: tail file?.txt

Then I did this: tail file?.txt >> Mainfile.txt. This created the Mainfile.txt to store the information in each file into a main file.

cat Mainfile.txt confirmed it was okay.

==> file1.txt <==
bluemoongoodbeer

==> file2.txt <==
awesomepossum

==> file3.txt <==
hownowbrowncow