
Delete specific line number(s) from a text file using sed?

I want to delete one or more specific line numbers from a file. How would I do this using sed?

Can you give a more specific example of what you want? How will you decide which lines to remove?
Maybe see also stackoverflow.com/questions/13272717/… and just apply it in reverse (print if key not in associative array).

firedev

If you want to delete lines 5 through 10 and line 12:

sed -e '5,10d;12d' file

This will print the results to the screen. If you want to save the results to the same file:

sed -i.bak -e '5,10d;12d' file

This will store the unmodified file as file.bak, and delete the given lines.

Note: Line numbers start at 1. The first line of the file is 1, not 0.


Not all Unix systems have GNU sed with "-i". Don't make the mistake of falling back to "sed cmd file > file", which will wipe out your file.
What if I wanted to delete from the 5th line to the last line?
@WearetheWorld sed -e '5,$d' file
@KanagaveluSugumar sed -e '5d' file. The syntax is <address><command>; where <address> can be either a single line like 5 or a range of lines like 5,10, and the command d deletes the given line or lines. The addresses can also be regular expressions, or the dollar sign $ indicating the last line of the file.
Note that the lines from the 5th to 10th are all inclusive.
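As a quick check, the commands above can be exercised on a throwaway file (the file name here is just for illustration):

```shell
# Make a 15-line sample file, one number per line
seq 15 > sample.txt

# Delete lines 5-10 and line 12; the result goes to stdout,
# sample.txt itself is untouched
sed -e '5,10d;12d' sample.txt
```

Note that the addresses refer to the original input line numbers, so `12d` removes the line that was line 12 before the range deletion, not after it.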
binaryfunt

You can delete a single line by its line number with

sed -i '33d' file

This will delete line 33 and save the updated file.


In my case "sed" removed the wrong line, so I use this approach instead: sed -i '0,/<TARGET>/{/<NEW_VALUE>/d;}' '<SOME_FILE_NAME>'. Thanks!
Same here: I wrote a loop, and strangely some files lost the correct line while others lost an extra line as well; no clue what went wrong (GNU/Linux, bash 4.2). The awk command below worked fine in a loop.
Be really careful to use sort -r if you are deleting from a list of lines, otherwise your first sed will change the line numbers of everything else!...
To comments about wrong lines being deleted within a loop : be sure to start with the largest line number, otherwise each deleted line will offset the line numbering…
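The descending-order advice from the comments above can be sketched like this (file name and the list of line numbers are just for illustration):

```shell
# Delete lines 3, 5 and 7 from a file. Sorting the numbers in
# descending order first means each deletion cannot shift the
# line numbers of the deletions still to come.
seq 10 > data.txt
for n in $(printf '%s\n' 3 7 5 | sort -rn); do
    sed -i "${n}d" data.txt    # GNU sed; BSD/macOS needs: sed -i '' "${n}d"
done
cat data.txt
```

If the loop ran in ascending order instead, deleting line 3 would turn the original line 5 into line 4, and the wrong lines would be removed.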
On my system, when processing large files, sed appears an order of magnitude slower than a simple combination of head and tail: here's an example of the faster way (without in-place mode): delete-line() { local filename="$1"; local lineNum="$2"; head -n $((lineNum-1)) "$filename"; tail +$((lineNum+1)) "$filename"; }
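That comment's helper can be written out as a small function; `tail -n +N` is the portable spelling of `tail +N` (the function and file names are just for illustration):

```shell
# Print file $1 with line $2 removed, using head + tail
# (no in-place editing; the result goes to stdout)
delete_line() {
    head -n "$(($2 - 1))" "$1"
    tail -n +"$(($2 + 1))" "$1"
}

seq 5 > nums.txt
delete_line nums.txt 3
```

To edit in place, redirect the output to a temp file and move it over the original, as with the awk approach below.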
ghostdog74

and awk as well

awk 'NR!~/^(5|10|25)$/' file

NB: That awk line worked more reliably for me than the sed variant (between OS-X and Ubuntu Linux)
Note that this doesn't delete anything in the file. It just prints the file without these lines to stdout. So you also need to redirect the output to a temp file, and then move the temp file to replace the original.
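A sketch of that redirect-and-replace pattern (the file name is just for illustration):

```shell
# Print everything except lines 5, 10 and 25 to a temp file,
# then move the temp file over the original
seq 30 > file.txt
awk 'NR!~/^(5|10|25)$/' file.txt > file.txt.tmp && mv file.txt.tmp file.txt
```

The `&&` ensures the original is only replaced if awk succeeded, so a failure leaves the file untouched.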
Matthew Slattery
$ cat foo
1
2
3
4
5
$ sed -e '2d;4d' foo
1
3
5
$ 

tripleee

This is very often a symptom of an antipattern. The tool which produced the line numbers may well be replaced with one which deletes the lines right away. For example:

grep -nh error logfile | cut -d: -f1 | deletelines logfile

(where deletelines is the utility you are imagining you need) is the same as

grep -v error logfile

Having said that, if you are in a situation where you genuinely need to perform this task, you can generate a simple sed script from the file of line numbers. Humorously (but perhaps slightly confusingly) you can do this with sed.

sed 's%$%d%' linenumbers

This accepts a file of line numbers, one per line, and produces, on standard output, the same line numbers with d appended after each. This is a valid sed script, which we can save to a file, or (on some platforms) pipe to another sed instance:

sed 's%$%d%' linenumbers | sed -f - logfile

On some platforms, sed -f does not understand the option argument - to mean standard input, so you have to redirect the script to a temporary file, and clean it up when you are done, or maybe replace the lone dash with /dev/stdin or /proc/$pid/fd/1 if your OS (or shell) has that.
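Here is the pipeline end to end, plus the temp-file variant for platforms where sed does not understand -f - (file names are just for illustration):

```shell
seq 10 > logfile
printf '%s\n' 2 4 6 > linenumbers

# Turn every line number N into the sed command "Nd", then apply it
sed 's%$%d%' linenumbers | sed -f - logfile

# Temp-file fallback for seds without "-f -"
script=$(mktemp)
sed 's%$%d%' linenumbers > "$script"
sed -f "$script" logfile
rm -f "$script"
```

Both variants print the file with lines 2, 4 and 6 removed; neither modifies logfile itself.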

As always, you can add -i before the -f option to have sed edit the target file in place, instead of producing the result on standard output. On *BSDish platforms (including OSX) you need to supply an explicit argument to -i as well; a common idiom is to supply an empty argument: -i ''.


I don't quite agree with "symptom of an antipattern". Markup-based file types (e.g. XML or JSON) require specific lines at the end in order to be valid files. In that case, it's often the most reasonable approach to remove those lines, put into the file what you want to be added and then re-add those lines, because putting the lines in between straight away can be much more effort, and goes against the potential desire to avoid extra tools like sed as much as you can.
I don't quite understand what sort of scenario you are imagining. There are scenarios where this is a legitimate approach but the vast majority of cases I have seen are newbies who do more or less exactly what my first example demonstrates. (Perhaps they come from some really low-level language and are used to dividing their problem way past the molecular level, because you have to in asm or C.)
Removing stuff by line number from XML or JSON sounds extremely brittle, if not outright dangerous.
What I basically mean by that, is that as the creator of such a file, you know what has to be at the end of the document (i.e. the set of closing braces/square brackets in the last few lines for JSON, or the exact closing tags for XML). Being aware of that, the most simple approach to extend such a document is 1) remove the last few lines, 2) add the new content, 3) re-add the last few lines. This way, the document can be valid both before and after it has been extended, without needing to find a way of adding lines mid-document.
So far this is the only answer with an appropriate solution for a large number of lines (i.e. provided by a file). And the foreword makes sense too. It deserves more upvotes. BTW, if you want to print lines rather than delete them, use p instead of d, along with option -n (it won't work without -n, and !d won't work either).
Hastur

I would like to propose a generalization with awk.

When the file is made of blocks of a fixed size, and the lines to delete are repeated for each block, awk works well, like so:

awk '{nl=((NR-1)%2000)+1; if ((nl<714) || ((nl>1025)&&(nl<1029))) print $0}' OriginFile.dat > MyOutputCuttedFile.dat

In this example the block size is 2000, and from each block I want to print the lines [1..713] and [1026..1028].

NR is the variable used by awk to store the current line number.

% gives the remainder (or modulus) of the division of two integers;

nl=((NR-1)%BLOCKSIZE)+1 Here we write in the variable nl the line number inside the current block. (see below)

|| and && are the logical OR and AND operators.

print $0 writes the full line

Why ((NR-1)%BLOCKSIZE)+1:
(NR-1) We need a shift of one because 1%3=1, 2%3=2, but 3%3=0.
  +1   We add again 1 because we want to restore the desired order.

+-----+------+----------+------------+
| NR  | NR%3 | (NR-1)%3 | (NR-1)%3+1 |
+-----+------+----------+------------+
|  1  |  1   |    0     |     1      |
|  2  |  2   |    1     |     2      |
|  3  |  0   |    2     |     3      |
|  4  |  1   |    0     |     1      |
+-----+------+----------+------------+
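The table can be reproduced directly; here with a block size of 3 on six input lines (the variable name nl follows the answer above):

```shell
# Print NR and the within-block line number nl for each input line
seq 6 | awk '{nl = ((NR - 1) % 3) + 1; print NR, nl}'
```

nl cycles 1, 2, 3, 1, 2, 3 as NR runs from 1 to 6, matching the last column of the table.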


I admire the way you live up to your madness-inducing name.
Timo

The shortest: deleting the first line with sed

sed -i '1d' file

As Brian states here, the syntax is <address><command>; here <address> is 1 and <command> is d, which deletes that line.


shane sontr

cat -b /etc/passwd | sed -E 's/^( )+(<line_number>)(\t)(.*)/--removed---/g;s/^( )+([0-9]+)(\t)//g'

cat -b -> print the file with line numbers (blank lines are left unnumbered)

s/^( )+(<line_number>)(\t)(.*)/--removed---/g -> replace the numbered target line with --removed--- (fill in <line_number>)

s/^( )+([0-9]+)(\t)//g -> strip the line numbers that cat added