
Converting string to byte array in C#

I'm converting something from VB into C#. Having a problem with the syntax of this statement:

if ((searchResult.Properties["user"].Count > 0))
{
    profile.User = System.Text.Encoding.UTF8.GetString(searchResult.Properties["user"][0]);
}

I then see the following errors:

Argument 1: cannot convert from 'object' to 'byte[]' The best overloaded method match for 'System.Text.Encoding.GetString(byte[])' has some invalid arguments

I tried to fix the code based on this post, but still no success

string User = Encoding.UTF8.GetString("user", 0);

Any suggestions?

What is the type of searchResult.Properties["user"][0]? Try casting it to byte[] first.
mshsayem went where I was going. Are you missing a cast to a (byte[]) on the searchResult?
How would I go about doing that in my case? My knowledge of C# syntax is pretty limited to be honest.
You need to find out what type Properties["user"][0] is. If you're sure it's a byte array then you can cast like this profile.User = System.Text.Encoding.UTF8.GetString((byte[])searchResult.Properties["user"][0]);
Turns out there was no need for all that fuss. The username could be fetched without encoding after all.

Timothy Randall

If you already have a byte array, then you will need to know what encoding was used to turn the string into that byte array.

For example, if the byte array was created like this:

byte[] bytes = Encoding.ASCII.GetBytes(someString);

You will need to turn it back into a string like this:

string someString = Encoding.ASCII.GetString(bytes);

If you can find the encoding that was used to create the byte array in the code you inherited, then you should be set.


Timothy, I've looked through the VB code and I can't seem to find a byte array as you have mentioned.
On your search result, what is the type of the Properties property?
All I can see is that there are a number of items attached to Properties as strings. I'm not sure if that's what you were asking me, though.
@AndiAR try Encoding.UTF8.GetBytes(somestring)
For my situation I found that Encoding.Unicode.GetBytes worked (but ASCII didn't)
Shridhar

First of all, add the System.Text namespace

using System.Text;

Then use this code

string input = "some text"; 
byte[] array = Encoding.ASCII.GetBytes(input);

Hope this fixes it!


Jan Turoň

Encoding.Default should not be used...

Some answers use Encoding.Default, however Microsoft raises a warning against it:

Different computers can use different encodings as the default, and the default encoding can change on a single computer. If you use the Default encoding to encode and decode data streamed between computers or retrieved at different times on the same computer, it may translate that data incorrectly. In addition, the encoding returned by the Default property uses best-fit fallback [i.e. the encoding is totally screwed up, so you can't reencode it back] to map unsupported characters to characters supported by the code page. For these reasons, using the default encoding is not recommended. To ensure that encoded bytes are decoded properly, you should use a Unicode encoding, such as UTF8Encoding or UnicodeEncoding. You could also use a higher-level protocol to ensure that the same format is used for encoding and decoding.

To check what the default encoding is, use Encoding.Default.WindowsCodePage (1250 in my case; sadly, there is no predefined class for the CP1250 encoding, but the object can be retrieved as Encoding.GetEncoding(1250)).

...UTF-8/UTF-16LE encoding should be used instead...

Encoding.ASCII, used in the highest-scoring answer, is 7-bit, so it doesn't work either; in my case:

byte[] pass = Encoding.ASCII.GetBytes("šarže");
Console.WriteLine(Encoding.ASCII.GetString(pass)); // ?ar?e

Following Microsoft's recommendation:

var utf8 = new UTF8Encoding();
byte[] pass = utf8.GetBytes("šarže");
Console.WriteLine(utf8.GetString(pass)); // šarže

Encoding.UTF8, recommended by others, is an instance of the UTF-8 encoding and can also be used directly, or as

var utf8 = Encoding.UTF8 as UTF8Encoding;

Encoding.Unicode is popular for string representation in memory because it uses a fixed 2 bytes per char, so one can jump to the n-th character in constant time at the cost of higher memory usage: it is UTF-16LE. In Visual C#, the *.cs files are UTF-8 with BOM by default, and string constants in them are converted to UTF-16LE at compile time (see @OwnageIsMagic's comment), but UTF-16LE is NOT defined as the default everywhere: many classes, like StreamWriter, use UTF-8 as the default.

...but it is not always used

The default encoding is misleading: .NET uses UTF-8 everywhere (including strings hardcoded in the source code) and UTF-16LE (Encoding.Unicode) to store strings in memory, but Windows actually uses two other non-UTF-8 defaults: the ANSI codepage (for pre-.NET GUI apps) and the OEM codepage (aka the DOS standard). These differ from country to country (for instance, the Czech edition of Windows uses CP1250 and CP852) and are oftentimes hardcoded in Windows API libraries. So if you just set the console to UTF-8 with chcp 65001 (as .NET implicitly does, pretending it is the default) and run some localized command (like ping), it works in the English version, but you get tofu text in the Czech Republic.

Let me share my real-world experience: I created a WinForms application customizing git scripts for teachers. The output is obtained in the background asynchronously, by a process described by Microsoft as (bold text added by me):

The word "shell" in this context (UseShellExecute) refers to a graphical shell (ANSI CP) (similar to the Windows shell) rather than command shells (for example, bash or sh) (OEM CP) and lets users launch graphical applications or open documents (with messed output in non-US environment).

So effectively the GUI defaults to UTF-8, the process defaults to CP1250, and the console defaults to CP852. So the output was in 852, interpreted as UTF-8, interpreted as CP1250. I got tofu text from which I could not deduce the original codepage, due to the double conversion. I was pulling my hair out for a week before figuring out to explicitly set UTF-8 for the process script and to convert the output from CP1250 to UTF-8 in the main thread. Now it works here in Eastern Europe, but Western European Windows uses CP1252. The ANSI codepage is not easy to determine, since many commands like systeminfo are also localized, and other methods differ from version to version: in such an environment, displaying national characters reliably is almost unfeasible.

So until at least the middle of the 21st century, please DO NOT use any "default codepage"; set it explicitly (to UTF-8 or UTF-16LE if possible).


Actually .Net and Windows use UTF-16 internally for strings. Win32 API can also accept strings encoded in Active Code Page (ACP) which it converts to UTF-16. OEM codepages are only used for console I/O.
@OwnageIsMagic UTF-16LE is sometimes used internally for strings, but the .NET interface uses UTF-8 as the default; I added a note about Encoding.Unicode in the answer.
How about some fact checking? github.com/dotnet/runtime/blob/…: it uses the WCHAR type, which means 16 bits per character (UTF-16). Also, sizeof(char) in C# is 2.
I'm quite sure that even the CLR specification enforces the use of UTF-16 for System.String. And the encoding of the source file is completely irrelevant: the compiler converts the source file's encoding to UTF-16 during compilation.
You can specify the source file encoding with the -codepage flag to the Roslyn compiler (docs.microsoft.com/en-us/dotnet/csharp/language-reference/…). The compiler will first attempt to interpret all source files as UTF-8. If your source code files are in an encoding other than UTF-8 and use characters other than 7-bit ASCII characters, use the CodePage option to specify which code page should be used.
Kuganrajh Rajendran
var result = System.Text.Encoding.Unicode.GetBytes(text);

This should be the accepted answer, as the other answers suggest ASCII, but the encoding is either Unicode (which is UTF-16) or UTF-8.
Indeed, @Abel. C# currently uses UTF-16 as the default, and encoding as such makes more sense than ASCII. It depends on the project, of course, but this is the default.
Cristian Ciupitu

You can also use an extension method to add a method to the string type, as below:

static class Helper
{
   public static byte[] ToByteArray(this string str)
   {
      return System.Text.Encoding.ASCII.GetBytes(str);
   }
}

And use it like below:

string foo = "bla bla";
byte[] result = foo.ToByteArray();

I'd rename that method to include the fact that it's using ASCII encoding. Something like ToASCIIByteArray. I hate it when I find out some library I'm using uses ASCII while I'm assuming it's using UTF-8 or something more modern.
JustinStolle
static byte[] GetBytes(string str)
{
     byte[] bytes = new byte[str.Length * sizeof(char)];
     System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
     return bytes;
}

static string GetString(byte[] bytes)
{
     char[] chars = new char[bytes.Length / sizeof(char)];
     System.Buffer.BlockCopy(bytes, 0, chars, 0, bytes.Length);
     return new string(chars);
}

This will fail for characters that fall into the surrogate pair range: GetBytes will produce a byte array that misses one normal char per surrogate pair off the end, and GetString will have empty chars at the end. The only way it would work is if Microsoft's default were UTF-32, or if characters in the surrogate pair range were not allowed. Or is there something I'm not seeing? The proper way is to 'encode' the string into bytes.
Correct; for a wider range you can use something similar to @Timothy Randall's solution:

using System;
using System.Text;

namespace Example
{
    public class Program
    {
        public static void Main(string[] args)
        {
            string s1 = "Hello World";
            string s2 = "שלום עולם";
            string s3 = "你好,世界!";
            Console.WriteLine(Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(s1)));
            Console.WriteLine(Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(s2)));
            Console.WriteLine(Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(s3)));
        }
    }
}
@EranYogev Why should it fail? I have tested it for the whole range of System.Int32 and it was correct. Can you please explain here or in this question: stackoverflow.com/questions/64077979/…
knocte

This is what worked for me:

byte[] bytes = Convert.FromBase64String(textString);

And in reverse:

string str = Convert.ToBase64String(bytes);

That only works when your string contains only a-z, A-Z, 0-9, +, /. No other characters are allowed: de.wikipedia.org/wiki/Base64
The question does not relate to Base64 Strings which have unique character restrictions.
Dan Sinclair

Building off Ali's answer, I would recommend an extension method that allows you to optionally pass in the encoding you want to use:

using System.Text;
public static class StringExtensions
{
    /// <summary>
    /// Creates a byte array from the string, using the 
    /// System.Text.Encoding.Default encoding unless another is specified.
    /// </summary>
    public static byte[] ToByteArray(this string str, Encoding encoding = Encoding.Default)
    {
        return encoding.GetBytes(str);
    }
}

And use it like below:

string foo = "bla bla";

// default encoding
byte[] bytes = foo.ToByteArray();

// custom encoding
byte[] unicode = foo.ToByteArray(Encoding.Unicode);

Note that using Encoding encoding = Encoding.Default results in a compile time error: CS1736 Default parameter value for 'encoding' must be a compile-time constant
alireza amini

use this

byte[] myByte = System.Text.Encoding.ASCII.GetBytes(myString);

Noam M

The following approach will work only if the chars are 1 byte wide. (The default Unicode representation will not work, since it uses 2 bytes per char.)

public static byte[] ToByteArray(string value)
{            
    char[] charArr = value.ToCharArray();
    byte[] bytes = new byte[charArr.Length];
    for (int i = 0; i < charArr.Length; i++)
    {
        byte current = Convert.ToByte(charArr[i]);
        bytes[i] = current;
    }

    return bytes;
}

Keeping it simple


char and string are UTF-16 by definition.
Yes, the default is UTF-16. I am not making any assumptions about the encoding of the input string.
There is no text but encoded text. Your input is of type string and is therefore UTF-16. UTF-16 is not the default; there is no choice about it. You then split into char[], UTF-16 code units. You then call Convert.ToByte(Char), which just happens to map U+0000 through U+00FF to ISO-8859-1 and throws for any other code point.
Makes sense. Thanks for the clarification. Updating my answer.
I think you are still missing several essential points. Focus on char being 16 bits and Convert.ToByte() throwing half of them away.
Pawel Maga

You could use the MemoryMarshal API to perform a very fast and efficient conversion. The string will implicitly be converted to ReadOnlySpan<char>, as MemoryMarshal.Cast accepts either a Span<char> or a ReadOnlySpan<char> as its input parameter.

public static class StringExtensions
{
    public static byte[] ToByteArray(this string s) => s.ToByteSpan().ToArray(); //  heap allocation, use only when you cannot operate on spans
    public static ReadOnlySpan<byte> ToByteSpan(this string s) => MemoryMarshal.Cast<char, byte>(s);
}

The following benchmark shows the difference:

Input: "Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s,"

|                       Method |       Mean |     Error |    StdDev |  Gen 0 | Gen 1 | Gen 2 | Allocated |
|----------------------------- |-----------:|----------:|----------:|-------:|------:|------:|----------:|
| UsingEncodingUnicodeGetBytes | 160.042 ns | 3.2864 ns | 6.4099 ns | 0.0780 |     - |     - |     328 B |
| UsingMemoryMarshalAndToArray |  31.977 ns | 0.7177 ns | 1.5753 ns | 0.0781 |     - |     - |     328 B |
|           UsingMemoryMarshal |   1.027 ns | 0.0565 ns | 0.1630 ns |      - |     - |     - |         - |

Ali

A refinement to JustinStolle's edit (Eran Yogev's use of BlockCopy).

The proposed solution is indeed faster than using Encoding. The problem is that it doesn't work for encoding byte arrays of odd length: as given, it raises an out-of-bounds exception, and increasing the length by 1 leaves a trailing byte when decoding back from the string.

For me, the need came when I wanted to encode from DataTable to JSON. I was looking for a way to encode binary fields into strings and decode from string back to byte[].

I therefore created two classes - one that wraps the above solution (when encoding from strings it's fine, because the lengths are always even), and another that handles byte[] encoding.

I solved the odd-length problem by adding a single character that tells me whether the original length of the binary array was odd ('1') or even ('0').

As follows:

public static class StringEncoder
{
    public static byte[] EncodeToBytes(string str)
    {
        byte[] bytes = new byte[str.Length * sizeof(char)];
        System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
        return bytes;
    }
    public static string DecodeToString(byte[] bytes)
    {
        char[] chars = new char[bytes.Length / sizeof(char)];
        System.Buffer.BlockCopy(bytes, 0, chars, 0, bytes.Length);
        return new string(chars);
    }
}

public static class BytesEncoder
{
    public static string EncodeToString(byte[] bytes)
    {
        bool even = (bytes.Length % 2 == 0);
        char[] chars = new char[1 + bytes.Length / sizeof(char) + (even ? 0 : 1)];
        chars[0] = (even ? '0' : '1');
        System.Buffer.BlockCopy(bytes, 0, chars, 2, bytes.Length);

        return new string(chars);
    }
    public static byte[] DecodeToBytes(string str)
    {
        bool even = str[0] == '0';
        byte[] bytes = new byte[(str.Length - 1) * sizeof(char) + (even ? 0 : -1)];
        char[] chars = str.ToCharArray();
        System.Buffer.BlockCopy(chars, 2, bytes, 0, bytes.Length);

        return bytes;
    }
}

Algemist

This question has been answered many times already, but with C# 7.2 and the introduction of the Span type, there is a faster way to do it in unsafe code:

public static class StringSupport
{
    private static readonly int _charSize = sizeof(char);

    public static unsafe byte[] GetBytes(string str)
    {
        if (str == null) throw new ArgumentNullException(nameof(str));
        if (str.Length == 0) return new byte[0];

        fixed (char* p = str)
        {
            return new Span<byte>(p, str.Length * _charSize).ToArray();
        }
    }

    public static unsafe string GetString(byte[] bytes)
    {
        if (bytes == null) throw new ArgumentNullException(nameof(bytes));
        if (bytes.Length % _charSize != 0) throw new ArgumentException($"Invalid {nameof(bytes)} length");
        if (bytes.Length == 0) return string.Empty;

        fixed (byte* p = bytes)
        {
            return new string(new Span<char>(p, bytes.Length / _charSize));
        }
    }
}

Keep in mind that the bytes represent a UTF-16 encoded string (called "Unicode" in C# land).

Some quick benchmarking shows that the above methods are roughly 5x faster than the equivalent Encoding.Unicode.GetBytes(...)/GetString(...) implementations for medium-sized strings (30-50 chars), and even faster for larger strings. These methods also seem to be faster than using pointers with Marshal.Copy(...) or Buffer.MemoryCopy(...).


shA.t

Does anyone see any reason not to do this?

mystring.Select(Convert.ToByte).ToArray()

Convert.ToByte(char) doesn't work the way you might expect: it returns the character's code-unit value ('2' becomes 50, not 2) and throws an OverflowException for any character above U+00FF. Use mystring.Select(x => (byte)x).ToArray() if you want truncation instead of an exception.
Janus

If the result of searchResult.Properties["user"][0] is a string:

if (searchResult.Properties["user"].Count > 0)
{
    profile.User = System.Text.Encoding.UTF8.GetString(searchResult.Properties["user"][0].ToCharArray().Select(character => (byte)character).ToArray());
}

The key point is that converting a string to a byte[] can be done using LINQ:

.ToCharArray().Select(character => (byte)character).ToArray()

And the inverse:

.Select(b => (char)b).ToArray()

user10863293

This worked for me; afterwards, I could put my picture into a bytea field in my database.

using (MemoryStream s = new MemoryStream(DirEntry.Properties["thumbnailphoto"].Value as byte[]))
{
    return s.ToArray();
}

inno

This has been answered quite a lot, but for me the only working method is this one:

    public static byte[] StringToByteArray(string str)
    {
        byte[] array = Convert.FromBase64String(str);
        return array;
    }

The question does not relate to Base64 Strings which have unique character restrictions.
user16863142

Thank you, Pawel Maga!

Your contribution can be completed like this:

    public static byte[] ToByteArray(this string s) => s.ToByteSpan().ToArray();
    public static string FromByteArray(this byte[] bytes) => ToCharSpan(new ReadOnlySpan<byte>(bytes)).ToString();
    public static ReadOnlySpan<byte> ToByteSpan(this string str) => MemoryMarshal.Cast<char, byte>(str);
    public static ReadOnlySpan<char> ToCharSpan(this ReadOnlySpan<byte> bytes) => MemoryMarshal.Cast<byte, char>(bytes);

Are you sure you're in the right place? Why not comment instead of posting a new answer that just adds precision to another answer?
Please provide additional details in your answer. As it's currently written, it's hard to understand your solution.
This answers it best, in a clean and concise way. It could have been part of the other answer, but there's too much theory in that one.