
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa5 in position 0: invalid start byte

I am using Python 2.6 CGI scripts, but found this error in the server log while doing json.dumps():

Traceback (most recent call last):
  File "/etc/mongodb/server/cgi-bin/getstats.py", line 135, in <module>
    print json.dumps(__getdata())
  File "/usr/lib/python2.7/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/usr/lib/python2.7/json/encoder.py", line 201, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python2.7/json/encoder.py", line 264, in iterencode
    return _iterencode(o, 0)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa5 in position 0: invalid start byte

Here,

the __getdata() function returns a dictionary.

Before posting this question, I referred to this question on SO.

UPDATES

The following line is tripping up the JSON encoder:

now = datetime.datetime.now()
now = datetime.datetime.strftime(now, '%Y-%m-%dT%H:%M:%S.%fZ')
print json.dumps({'current_time': now}) # this is the culprit

I got a temporary fix for it:

print json.dumps( {'old_time': now.encode('ISO-8859-1').strip() })

But I am not sure whether that is the correct way to do it.
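A safer variant of the same idea (an editorial sketch, assuming the stray bytes really are Latin-1) is to decode instead of encode, producing a unicode object that json.dumps can serialize:

# decode, not encode: every byte maps to a Latin-1 code point, so this
# cannot raise UnicodeDecodeError, and json.dumps handles unicode fine
print json.dumps({'old_time': now.decode('ISO-8859-1').strip()})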

It looks like you have some string data in the dictionary that can't be encoded/decoded. What's in the dict?
@mgilson yup, I understand the issue, but I don't know how to deal with it. The dict has lists, dicts, and Python timestamp values.
@Pilot -- Not really. The real problem is buried somewhere in __getdata. I don't know why you're getting a non-decodable character. You can try to come up with patches on the dict to make it work, but those are mostly just asking for more problems later. I would try printing the dict to see where the non-ascii character is. Then figure out how that field got calculated/set and work backward from there.
I had that same error when trying to read a .csv file which had some non-ascii characters in it. Removing those characters (as suggested below) solved the issue.
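Following the suggestion above of printing the dict to locate the non-ASCII value, here is a minimal sketch (Python 2; find_undecodable is a hypothetical helper name) that walks the returned structure and reports every byte string that won't decode as UTF-8:

def find_undecodable(obj, path='root'):
    # recurse through dicts and lists, testing each byte string it finds
    if isinstance(obj, dict):
        for k, v in obj.items():
            find_undecodable(v, '%s[%r]' % (path, k))
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            find_undecodable(v, '%s[%d]' % (path, i))
    elif isinstance(obj, str):  # a byte string in Python 2
        try:
            obj.decode('utf-8')
        except UnicodeDecodeError as e:
            print '%s: %r (%s)' % (path, obj, e)

find_undecodable(__getdata())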

shiva

I fixed this simply by specifying a different encoding in the read_csv() call:

encoding = 'unicode_escape'

E.g.:

import pandas as pd
data = pd.read_csv(filename, encoding= 'unicode_escape')

Only if you are using pandas.
Sorry, this didn't work; I got the same error again. But when I used pd.read_csv('filename.csv', engine='python'), it worked.
Works for pysrt too: pysrt.open(subfilename, encoding='unicode_escape'). I think this should work for plain text with any library that supports specifying an encoding on file open. 'unicode_escape' will open the file, but if you have non-ASCII text you should give the specific encoding, e.g. encoding='ISO-8859-9' for Turkish.
Helped me today.. Thanks a lot
Thanks! It saved me today!
Felipe Augusto

Try the below code snippet:

with open(path, 'rb') as f:
  text = f.read()
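Reading in binary mode only defers the problem; a sketch of the follow-up step, assuming you know (or are willing to approximate) the source encoding:

# decode explicitly once the bytes are in hand; errors='replace' swaps
# undecodable bytes for the Unicode replacement character U+FFFD
decoded = text.decode('utf-8', errors='replace')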

I had r instead of rb. Thanks for the reminder to add the b!
By default, open() uses 'r', read-only text mode; 'rb' stands for read-binary mode.
Jean-Francois T.

The error occurs because there is some non-ASCII character in the dictionary that can't be encoded/decoded. One simple way to avoid it is to encode such strings with the encode() function, as follows (if a is the string with the non-ASCII character):

a.encode('utf-8').strip()

Since UTF-8 is backward-compatible with old-school 7-bit ASCII, you can just encode everything. For characters in the 7-bit ASCII range, this encoding is an identity mapping.
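A quick illustration of that identity mapping (Python 2 shown):

>>> 'plain ascii'.encode('utf-8')
'plain ascii'
>>> u'¥'.encode('utf-8')  # outside 7-bit ASCII: two bytes, the second being 0xa5
'\xc2\xa5'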
This doesn't seem very clear. When importing a CSV file, how do you use this code?
The same issue appears for me when executing an SQLAlchemy query; how would I encode the query (it has no .encode, since it's not a string)?
shiva

Your string has a non-ASCII character encoded in it.

Failing to decode with utf-8 can happen when the data was actually written in some other encoding. For example:

>>> 'my weird character \x96'.decode('utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x96 in position 19: invalid start byte

In this case, the encoding is windows-1252, so you have to do:

>>> 'my weird character \x96'.decode('windows-1252')
u'my weird character \u2013'

Now that you have Unicode, you can safely encode into utf-8.
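For example, round-tripping the value above back to UTF-8 bytes:

>>> u'my weird character \u2013'.encode('utf-8')
'my weird character \xe2\x80\x93'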


I have created a simple page which may help establish the encoding of some unexpected "mystery bytes"; tripleee.github.io/8bit
shiva

When reading the CSV, I added an encoding argument:

import pandas as pd
dataset = pd.read_csv('sample_data.csv', header=0,
                      encoding='unicode_escape')

shiva

This solution worked for me:

import pandas as pd
data = pd.read_csv("training.csv", encoding='unicode_escape')

shiva

Inspired by @aaronpenne and @Soumyaansh

with open("file.txt", "rb") as f:
    text = f.read().decode(errors='replace')

I got "AttributeError: 'str' object has no attribute 'decode'". Not sure what went wrong?
Did you include the b in "rb"? The b opens the file in byte mode. If you just use r, you get a string and don't need decode.
shiva

As of 2018-05 this is handled directly with decode, at least for Python 3.

I'm using the below snippet for invalid start byte and invalid continuation byte type errors. Adding errors='ignore' fixed it for me.

with open(out_file, 'rb') as f:
    for line in f:
        print(line.decode(errors='ignore'))

Of course, this silently discards information. A much better fix is to figure out what's supposed to be there and fix the original problem.
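One way to investigate instead of ignoring (a sketch, assuming the third-party chardet package is installed):

import chardet

with open(out_file, 'rb') as f:
    raw = f.read()

guess = chardet.detect(raw)  # e.g. {'encoding': 'Windows-1252', 'confidence': 0.73, ...}
print(guess)
if guess['encoding']:        # detect() may return None if it has no idea
    print(raw.decode(guess['encoding']))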
Gil Baggio

Simple solution:

import pandas as pd
df = pd.read_csv('file_name.csv', engine='python')

The only solution that worked for me of all those presented here.
This solution helped me open a CSV file with Korean text in it. Thanks!
This was very helpful, thank you!
HimalayanCoder

Set the default encoding at the top of your code:

import sys
reload(sys)
sys.setdefaultencoding("ISO-8859-1")

I think Python 3 doesn't have setdefaultencoding in the sys module!
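In Python 3 there is no global default-encoding switch; one alternative is to pass the encoding explicitly wherever the file is opened:

# Python 3: specify the encoding per call instead of globally
with open('file.txt', encoding='ISO-8859-1') as f:
    text = f.read()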
shiva

If the above methods are not working for you, you may want to look into changing the encoding of the csv file itself.

Using Excel:

1. Open the csv file in Excel
2. Navigate to the File menu and click Save As
3. Click Browse to select a location to save the file
4. Enter the intended filename
5. Select the CSV (Comma delimited) (*.csv) option
6. Click the Tools drop-down box and click Web Options
7. Under the Encoding tab, select Unicode (UTF-8) from the "Save this document as" drop-down list
8. Save the file

Using Notepad:

1. Open the csv file in Notepad
2. Navigate to File > Save As
3. Select the location for the file
4. Set the "Save as type" option to All Files (*.*)
5. Specify the file name with the .csv extension
6. From the Encoding drop-down list, select UTF-8
7. Click Save

By doing this, you should be able to import csv files without encountering the UnicodeDecodeError.
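The same conversion can also be scripted (a sketch; 'cp1252' is an assumption, substitute whatever encoding the file actually uses):

# read with the file's current encoding, rewrite as UTF-8
with open('input.csv', encoding='cp1252') as src:
    data = src.read()
with open('input_utf8.csv', 'w', encoding='utf-8') as dst:
    dst.write(data)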


This worked for me while none of the above solutions did.
Deepak Ingole

The following line is tripping up the JSON encoder:

now = datetime.datetime.now()
now = datetime.datetime.strftime(now, '%Y-%m-%dT%H:%M:%S.%fZ')
print json.dumps({'current_time': now}) # this is the culprit

I got a temporary fix for it:

print json.dumps( {'old_time': now.encode('ISO-8859-1').strip() })

Marking this as correct as a temporary fix (not sure about it, though).


amit haldar

The following snippet worked for me.

import pandas as pd
df = pd.read_csv(filename, sep=';', encoding='latin1', error_bad_lines=False)  # error_bad_lines=False skips malformed lines instead of raising an error

shiva

You may use any standard encoding appropriate to your specific usage and input.

utf-8 is the default.

iso8859-1 is also popular for Western Europe.

e.g.: bytes_obj.decode('iso8859-1')

see: docs


Blindly guessing the encoding is likely to produce more errors. Selecting iso8859-1 or cp1251, etc., without actually knowing which encoding the file uses will remove the symptom but produce garbage if you guessed wrong. If it's just a few bytes, it could take years before you notice and fix the real error.
shiva

After trying all the aforementioned workarounds, if it still throws the same error, you can try exporting the file as CSV (a second time if you already have). Especially if you're using scikit-learn, it is best to import the dataset as a CSV file.

I spent hours on this, and the solution was that simple: export the file as a CSV to the directory where Anaconda or your classifier tools are installed, and try again.


Rick James

Instead of looking for ways to decode a5 (Yen, ¥) or 96 (en-dash, –), tell MySQL that your client is encoded "latin1", but that you want "utf8" in the database.

See details in Trouble with UTF-8 characters; what I see is not what I stored
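In driver terms that looks roughly like this (a sketch, assuming the MySQLdb driver; check your own driver's documentation for the exact parameter):

import MySQLdb

# charset tells the server how the client's bytes are encoded;
# the tables themselves can still store utf8
conn = MySQLdb.connect(host='localhost', user='me', passwd='secret',
                       db='mydb', charset='latin1')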


Paul

I encountered the same error while trying to import an Excel sheet on SharePoint into a pandas dataframe. My solution was using engine='openpyxl'. I'm also using requests_negotiate_sspi to avoid storing passwords in plain text.

import pandas as pd
import requests
from io import BytesIO
from requests_negotiate_sspi import HttpNegotiateAuth
cert = r'c:\path_to\saved_certificate.cer'
target_file_url = r'https://share.companydomain.com/sites/Sitename/folder/excel_file.xlsx'
response = requests.get(target_file_url, auth=HttpNegotiateAuth(), verify=cert)
df = pd.read_excel(BytesIO(response.content), engine='openpyxl', sheet_name='Sheet1')

shiva

In my case, I had to save the file as UTF-8 with BOM, not just as UTF-8; then this error was gone.


Madat Sardarli
import pandas as pd
from io import BytesIO

df = pd.read_excel(BytesIO(bytes_content), engine='openpyxl')

worked for me


Where does bytes_content come from?
bytes_content is just a sample variable containing a bytes-like object.
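For instance, one hypothetical way to obtain such a bytes object:

with open('workbook.xlsx', 'rb') as f:
    bytes_content = f.read()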
Timus

Simple solution:

import pandas as pd

df = pd.read_csv('file_name.csv', engine='python-fwf')

If that's not working, try changing the engine to 'python' or 'c'.