
UnicodeDecodeError: 'utf8' codec can't decode byte 0x9c

I have a socket server that is supposed to receive valid UTF-8 characters from clients.

The problem is that some clients (mainly hackers) are sending all the wrong kinds of data over it.

I can easily distinguish genuine clients, but I am logging all the data sent to files so I can analyze it later.

Sometimes I get characters like œ that cause the UnicodeDecodeError.

I need to be able to make the string UTF-8 with or without those characters.

Update:

For my particular case, the socket service was an MTA, and thus I only expect to receive ASCII commands such as:

EHLO example.com
MAIL FROM: <john.doe@example.com>
...

I was logging all of this in JSON.

Then some folks out there without good intentions decided to send all kinds of junk.

That is why, for my specific case, it is perfectly OK to strip the non-ASCII characters.
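
For illustration, a minimal sketch of that kind of sanitizing before the JSON logging (assuming Python 3; the function name and JSON field are hypothetical):

import json

def log_command(raw_bytes, logfile):
    # Drop any byte that is not valid ASCII; acceptable here because
    # legitimate MTA commands are ASCII-only anyway.
    text = raw_bytes.decode('ascii', errors='ignore')
    logfile.write(json.dumps({'command': text}) + '\n')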

Does the string come out of a file or a socket? Could you please post code examples of how the string is encoded and decoded before it is sent through the socket/file handler?
Did I write or didn't I write that the string comes over the socket? I simply read the string from the socket, wanted to put it in a dictionary, and then JSON it to send it along. The JSON function failed due to those characters.
Can you please post sample data of the problem?

Max Ghenis

http://docs.python.org/howto/unicode.html#the-unicode-type

str = unicode(str, errors='replace')

or

str = unicode(str, errors='ignore')

Note: the 'ignore' variant will strip out the characters in question, returning the string without them.

For me this is the ideal case, since I'm using it as protection against non-ASCII input, which is not allowed by my application.
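
On Python 3, where unicode() no longer exists, a rough equivalent works on the raw bytes (variable names here are illustrative):

# 'replace' substitutes U+FFFD for each undecodable byte;
# 'ignore' silently drops them.
text = raw_bytes.decode('utf-8', errors='replace')
text = raw_bytes.decode('utf-8', errors='ignore')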

Alternatively: use the open method from the codecs module to read in the file:

import codecs
with codecs.open(file_name, 'r', encoding='utf-8',
                 errors='ignore') as fdata:
    data = fdata.read()

Yes, though this is usually bad practice/dangerous, because you'll just lose characters. Better to determine or detect the encoding of the input string and decode it to unicode first, then encode as UTF-8, for example: str.decode('cp1252').encode('utf-8')
In some cases, yes, you are right, it might cause problems. In my case I don't care about them, as they seem to be extra characters originating from the bad formatting and programming of the clients connecting to my socket server.
This one actually helps if the content of the string is actually invalid; in my case '\xc0msterdam', which turns into u'\ufffdmsterdam' with replace.
If you ended up here because you are having problems reading a file, opening the file in binary mode might help: open(file_name, "rb") and then apply Ben's approach from the comments above.
How can I import unicode?
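
Combining the two suggestions above, a small sketch: read the raw bytes in binary mode, then decode with an explicit guess at the source encoding (cp1252 here is only an assumption):

# Read raw bytes so nothing is decoded implicitly, then decode with a
# guessed source encoding and re-encode as UTF-8.
with open(file_name, 'rb') as f:
    raw = f.read()
text = raw.decode('cp1252')        # assumption: the input was cp1252
utf8_bytes = text.encode('utf-8')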
Doğuş

Changing the engine from C to Python did the trick for me.

Engine is C:

pd.read_csv(gdp_path, sep='\t', engine='c')

'utf-8' codec can't decode byte 0x92 in position 18: invalid start byte

Engine is Python:

pd.read_csv(gdp_path, sep='\t', engine='python')

No errors for me.


That's actually a good solution. I don't know why it was downvoted.
This might not be a good idea if you have a huge CSV file; it could lead to an out-of-memory error or an automatic restart of your notebook's kernel. You should set the encoding in that case (see the sketch after these comments).
Excellent answer, thank you. This worked for me. I had a "?" inside a diamond-shaped character (the Unicode replacement character) that was causing the issue; to the naked eye it looked like a plain double quote (an inch mark). I did two things to figure it out: a) df = pd.read_csv('test.csv', nrows=10000) worked perfectly without the engine option, so I incremented nrows to find which row had the error; b) df = pd.read_csv('test.csv', engine='python') worked, and I printed the offending row using df.iloc[36145].
This worked for me too... Not sure what is happening 'under the hood' and if this is actually a nice/good/proper solution in all cases, but it did the trick for me ;)
Although it worked for me, I find it very unintuitive. How in the world would I figure that out without someone pointing it out? I am curious to know where it comes from...
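
Following the comment above about huge files, a sketch that sets the encoding explicitly and reads in chunks (the cp1252 guess and the chunk size are assumptions):

import pandas as pd

# An explicit encoding avoids the decode error; chunking keeps memory
# bounded for very large files.
chunks = pd.read_csv('test.csv', sep='\t', encoding='cp1252',
                     chunksize=100_000)
df = pd.concat(chunks, ignore_index=True)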
James McCormac

This type of issue crops up for me now that I've moved to Python 3. I had no idea Python 2 was simply steamrolling over any issues with file encoding.

I found this nice explanation of the differences and how to find a solution after none of the above worked for me.

http://python-notes.curiousefficiency.org/en/latest/python3/text_file_processing.html

In short, to make Python 3 behave as similarly as possible to Python 2 use:

with open(filename, encoding="latin-1") as datafile:
    # work on datafile here

However, read the article; there is no one-size-fits-all solution.


The link is broken as of 2021-10-09.
As of 2022-02-12, using Python 3.8, I have no problems.
Ignacio Vazquez-Abrams
>>> '\x9c'.decode('cp1252')
u'\u0153'
>>> print '\x9c'.decode('cp1252')
œ
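
The same idea on Python 3, where the literal must be a bytes object:

>>> b'\x9c'.decode('cp1252')
'œ'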

I'm confused, how did you choose cp1252? It worked for me, but why? I don't know and now I'm lost :/. Could you elaborate? Thanks a lot! :)
Could you present an option that works for all characters? Is there a way to detect the characters that need to be decoded so more generic code can be implemented? I see many people are looking at this, and I bet for some of them discarding is not the desired option the way it is for me.
As you can see, this question has quite the popularity. Do you think you could expand your answer with a more generic solution?
There is no more generic solution to the "guess the encoding" roulette.
Found it using a combination of web search, luck, and intuition: cp1252 was used by default in legacy components of Microsoft Windows for English and some other Western languages.
Ivan Lee

First, use get_encoding_type to detect the file's encoding:

from chardet import detect

# get file encoding type
def get_encoding_type(file):
    with open(file, 'rb') as f:
        rawdata = f.read()
    return detect(rawdata)['encoding']

Second, open the file with that encoding:

with open(current_file, 'r', encoding=get_encoding_type(current_file),
          errors='ignore') as f:
    data = f.read()

What happens when it returns None?
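
To the question above: chardet's detect() can return None for the encoding of empty or highly ambiguous input, so a fallback helps; a minimal sketch, assuming UTF-8 as the default:

# Fall back to UTF-8 when detection fails.
encoding = get_encoding_type(current_file) or 'utf-8'
with open(current_file, 'r', encoding=encoding, errors='ignore') as f:
    data = f.read()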
maiky_forrester

I had the same problem with UnicodeDecodeError and I solved it with this line. I don't know if it is the best way, but it worked for me.

str = str.decode('unicode_escape').encode('utf-8')
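
Note this is Python 2 syntax. On Python 3 the same trick starts from bytes; a sketch with a made-up sample value:

raw = b'caf\\xe9'                      # bytes containing a literal \xe9 escape
text = raw.decode('unicode_escape')    # 'café'
utf8_bytes = text.encode('utf-8')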

Community

This solution works nicely with Latin American accents, such as 'ñ'.

I have solved this problem just by adding

df = pd.read_csv(fileName,encoding='latin1')

Worked for me too, but I wonder what's going to happen to the Chinese-, Greek-, and Russian-named media on my drive. To be continued...
Sathiamoorthy

I resolved this problem using this code:

df = pd.read_csv(path, engine='python')

http8086

Just in case someone has the same problem: I'm using vim with YouCompleteMe, and it failed to start ycmd with this error message. What I did was export LC_CTYPE="en_US.UTF-8", and the problem was gone.


How does this relate to this question?
It's exactly the same problem, if you know how YouCompleteMe works. The YCM plugin has a socket architecture; communication between client and server goes over a socket, both sides are Python modules, and they are not able to decode the packets if the encoding setting is incorrect.
I have the same problem. Can you please tell me where to put export LC_CTYPE="en_US.UTF-8"?
@Remonn Hi, you know bash has a profile file? Put it in there.
@hylepo, I'm on a windows system :)
Krisztián Balla

What can you do if you need to make a change to a file, but don’t know the file’s encoding? If you know the encoding is ASCII-compatible and only want to examine or modify the ASCII parts, you can open the file with the surrogateescape error handler:

with open(fname, 'r', encoding="ascii", errors="surrogateescape") as f:
    data = f.read()
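
A nice property of surrogateescape is that the undecodable bytes survive a round trip; a sketch (the EHLO/HELO edit is just a hypothetical ASCII-only change):

with open(fname, 'r', encoding='ascii', errors='surrogateescape') as f:
    data = f.read()

data = data.replace('EHLO', 'HELO')    # hypothetical ASCII-only edit

with open(fname, 'w', encoding='ascii', errors='surrogateescape') as f:
    f.write(data)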

This caused my notebook to crash.