I am currently using Beautiful Soup to parse an HTML file and calling get_text(), but it seems like I'm left with a lot of \xa0 Unicode characters representing spaces. Is there an efficient way to remove all of them in Python 2.7 and change them into spaces? I guess the more general question would be: is there a way to remove Unicode formatting?
I tried using line = line.replace(u'\xa0', ' '), as suggested by another thread, but that changed the \xa0's to u's, so now I have "u"s everywhere instead. ):
EDIT: The problem seems to be resolved by str.replace(u'\xa0', ' ').encode('utf-8'), but just doing .encode('utf-8') without replace() seems to cause it to spit out even weirder characters, \xc2 for instance. Can anyone explain this?
Those are u''s (unicode literal prefixes) instead of ''s. :-)
Try the u' ' replacement, not the ' '. Is the original string the unicode one?
\xa0 is actually non-breaking space in Latin1 (ISO 8859-1), also chr(160). You should replace it with a space.
string = string.replace(u'\xa0', u' ')
When you call .encode('utf-8'), it encodes the Unicode string to UTF-8, where every code point may be represented by 1 to 4 bytes. In this case, \xa0 is represented by the two bytes \xc2\xa0.
Read up on http://docs.python.org/howto/unicode.html.
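A minimal sketch of the difference (Python 2.7, as in the question; the sample string is made up):
line = u'Dear Parent,\xa0Thanks'
print repr(line.replace(u'\xa0', u' '))   # u'Dear Parent, Thanks' -- replaced on the unicode string
print repr(line.encode('utf-8'))          # 'Dear Parent,\xc2\xa0Thanks' -- \xa0 encodes to two bytes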
Please note: this answer is from 2012. Python has moved on, and you should be able to use unicodedata.normalize now.
There are many useful things in Python's unicodedata library. One of them is the .normalize() function.
Try:
import unicodedata
new_str = unicodedata.normalize("NFKD", unicode_str)
Replace NFKD with any of the other forms listed in the unicodedata documentation if you don't get the results you're after.
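For example (Python 3 shown, per the note above; the string is illustrative):
import unicodedata
print(unicodedata.normalize('NFKD', 'Hello\xa0World'))   # Hello World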
Careful: I expected normalize('NFKD', '1º\xa0dia') to return '1º dia', but it returns '1o dia' (NFKD also decomposes the º into a plain o).
Beware: unicodedata.normalize with "NFKD" converts й to an identically looking sequence of two Unicode characters. The problem here is that strings that used to be equal do not match anymore. Fix: use "NFKC" instead of "NFKD".
On the other hand, NFKD normalizes ﷼ to the four-letter string ریال that it actually is, so it's much easier to replace when needed: you'd normalize and then replace, without having to care which form it was, e.g. normalize("NFKD", "﷼").replace("ریال", '').
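A quick check of that behaviour (Python 3, standard library only):
import unicodedata
s = '\u0439'                                   # 'й' as a single code point
d = unicodedata.normalize('NFKD', s)           # decomposes into 'и' plus a combining breve
print(len(s), len(d), s == d)                  # 1 2 False -- visually identical, no longer equal
print(s == unicodedata.normalize('NFKC', d))   # True -- NFKC recomposes it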
After trying several methods, to summarize, this is how I did it. Following are two ways of avoiding/removing \xa0 characters from a parsed HTML string.
Assume we have our raw HTML as follows:
raw_html = '<p>Dear Parent, </p><p><span style="font-size: 1rem;">This is a test message, </span><span style="font-size: 1rem;">kindly ignore it. </span></p><p><span style="font-size: 1rem;">Thanks</span></p>'
So let's try to clean this HTML string:
from bs4 import BeautifulSoup
raw_html = '<p>Dear Parent, </p><p><span style="font-size: 1rem;">This is a test message, </span><span style="font-size: 1rem;">kindly ignore it. </span></p><p><span style="font-size: 1rem;">Thanks</span></p>'
text_string = BeautifulSoup(raw_html, "lxml").text
print text_string
#u'Dear Parent,\xa0This is a test message,\xa0kindly ignore it.\xa0Thanks'
The above code leaves these \xa0 characters in the string. To remove them properly, we can use one of two ways.
Method # 1 (Recommended): The first one is BeautifulSoup's get_text method with the strip argument set to True. So our code becomes:
clean_text = BeautifulSoup(raw_html, "lxml").get_text(strip=True)
print clean_text
# Dear Parent,This is a test message,kindly ignore it.Thanks
Method # 2: The other option is to use Python's unicodedata library:
import unicodedata
text_string = BeautifulSoup(raw_html, "lxml").text
clean_text = unicodedata.normalize("NFKD",text_string)
print clean_text
# u'Dear Parent,This is a test message,kindly ignore it.Thanks'
I have also detailed these methods on this blog, which you may want to refer to.
Try using .strip() at the end of your line: line.strip() worked well for me.
try this:
string.replace('\\xa0', ' ')
Note that len(b'\\xa0') == 4 but len(b'\xa0') == 1, so '\\xa0' only matches a literal backslash escape sequence. If possible, you should fix the upstream code that generates these escapes; otherwise what you want is string.replace('\xa0', ' ').
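A quick illustration of the difference (Python 2.7, matching the question; the sample text is made up):
print len('\\xa0'), len('\xa0')     # 4 1 -- escape sequence vs. the actual character
line = u'Dear Parent,\xa0Thanks'
print line.replace(u'\xa0', u' ')   # Dear Parent, Thanks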
I ran into this same problem pulling some data from a sqlite3 database with Python. The above answers didn't work for me (not sure why), but this did: line = line.decode('ascii', 'ignore')
However, my goal was deleting the \xa0s, rather than replacing them with spaces.
I got this from this super-helpful unicode tutorial by Ned Batchelder.
Using 'ignore' is like shoving through the shift stick even though you don't understand how the clutch works. str.encode(..., 'ignore') is the Unicode-handling equivalent of try: ... except: .... While it might hide the error message, it rarely solves the problem.
Don't use .decode('ascii', 'ignore'). The line.decode() in your answer suggests that your input is a bytestring (you should not call .decode() on a Unicode string; to enforce this, the method is removed in Python 3). I don't understand how it is possible to read the tutorial that you've linked in your answer and miss the difference between bytes and Unicode (do not mix them).
Try this code:
import re
re.sub(r'[^\x00-\x7F]+', '', 'paste your string here').decode('utf-8', 'ignore').strip()
Python recognizes it as a whitespace character, so you can split without arguments and join using a single ordinary space:
line = ' '.join(line.split())
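This works on unicode strings, where \xa0 counts as whitespace (Python 3 shown; the sample line is made up):
line = 'Dear Parent,\xa0This is a test.\xa0Thanks'
print(' '.join(line.split()))   # Dear Parent, This is a test. Thanks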
I ended up here while googling for the problem with a non-printable character. I use MySQL with UTF-8 general_ci and deal with the Polish language. For problematic strings I have to proceed as follows:
text=text.replace('\xc2\xa0', ' ')
It is just a fast workaround and you probably should try something with the right encoding setup.
Your text is a bytestring that represents text encoded using utf-8. If you are working with text, decode it to Unicode first (.decode('utf-8')) and encode it to a bytestring only at the very end (if the API does not support Unicode directly, e.g., socket). All intermediate operations on the text should be performed on Unicode.
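A minimal sketch of that decode-early, encode-late pattern (Python 2.7; the byte string is a made-up example):
raw = 'foo\xc2\xa0bar'               # utf-8 encoded bytes, e.g. from a database
text = raw.decode('utf-8')           # decode to unicode at the boundary
text = text.replace(u'\xa0', u' ')   # do all text processing on unicode
out = text.encode('utf-8')           # encode back only at the very end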
In Beautiful Soup, you can pass get_text() the strip parameter, which strips whitespace from the beginning and end of the text. This will remove \xa0 or any other whitespace if it occurs at the start or end of the string. Beautiful Soup replaced \xa0 with an empty string, and this solved the problem for me.
mytext = soup.get_text(strip=True)
strip=True works only if the \xa0 is at the beginning or end of each bit of text. It won't remove the space if it is in between other characters in the text.
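Illustrating that limitation (Python 3 with bs4; the markup is a made-up example):
from bs4 import BeautifulSoup
html = '<p>\xa0Dear Parent,\xa0Thanks\xa0</p>'
print(repr(BeautifulSoup(html, 'html.parser').get_text(strip=True)))
# 'Dear Parent,\xa0Thanks' -- the edge \xa0s are stripped, the inner one survives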
It's the equivalent of a space character, so strip it:
print(string.strip())   # no more \xa0
0xA0 (Unicode) is 0xC2A0 in UTF-8. .encode('utf8') will just take your Unicode 0xA0 and replace it with UTF-8's 0xC2A0, hence the appearance of the 0xC2 bytes... Encoding is not replacing, as you've probably realized by now.
0xc2a0 is ambiguous (byte order). Use the b'\xc2\xa0' bytes literal instead.
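Seeing those bytes directly (Python 3):
print('\xa0'.encode('utf-8'))    # b'\xc2\xa0' -- two bytes in UTF-8
print('\xa0'.encode('latin-1'))  # b'\xa0'     -- one byte in Latin-1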
Generic version with a regular expression (it will remove all the control characters):
import re

def remove_control_chars(s):
    # strip C0/C1 control characters, DEL, and the non-breaking space (U+00A0)
    return re.sub(r'[\x00-\x1f\x7f-\xa0]', '', s)
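Hypothetical usage (the sample string is made up):
print(remove_control_chars('Dear Parent,\x1b\xa0Thanks'))   # Dear Parent,Thanks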
You can try string.strip(). It worked for me! :)
This is how I solved the issue when I encountered \xa0 in an HTML-encoded string.
I discovered that a non-breaking space is inserted to ensure that a word and the subsequent HTML markup are not separated when the page is resized.
This presents a problem for parsing code, as it introduces codec encoding issues. What made it hard was that we are not privy to the encoding used. From Windows machines it can be latin-1 or CP1252 (Western ISO), but more recent OSes have standardized on UTF-8. By normalizing the Unicode data, we strip \xa0:
import unicodedata
my_string = unicodedata.normalize('NFKD', my_string).encode('ASCII', 'ignore')
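Note the side effect (Python 3 sketch; the sample string is made up): the encode('ASCII', 'ignore') step also drops accents and any character without an ASCII decomposition.
import unicodedata
s = 'Gda\u0144sk\xa0caf\xe9'
print(unicodedata.normalize('NFKD', s).encode('ASCII', 'ignore'))   # b'Gdansk cafe'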
The non-breaking space is the single byte b'\xa0' in latin1 encoding, and the two bytes b'\xc2\xa0' in utf-8 encoding. It can be represented as &nbsp; in html. It is also the byte behind errors like UnicodeDecodeError: 'ascii' codec can't decode byte 0xa0 in position 397: ordinal not in range(128).