Convert text to binary and binary back to text, with UTF-8 or ASCII encoding and selectable bit grouping.
Quick reference: common ASCII characters in 8-bit binary
Character    Decimal    Binary (8 bits)
A            65         01000001
Z            90         01011010
a            97         01100001
z            122        01111010
0            48         00110000
9            57         00111001
(space)      32         00100000
!            33         00100001
?            63         00111111
(newline)    10         00001010
Frequently asked questions
How does text-to-binary conversion actually work?
Each character in the text has a numeric code (its code point). For ASCII characters the code fits in 7 bits and is padded to 8 with a leading zero. The character A is code 65, which is 01000001 in binary. The converter reads each character, looks up its code, and writes out the binary representation byte by byte.
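That lookup can be sketched in a few lines of Python. This is an illustrative sketch, not the converter's own code; the helper name char_to_binary is made up for the example:

```python
def char_to_binary(ch: str) -> str:
    """One ASCII character -> its 8-bit binary string."""
    code = ord(ch)                 # look up the code point, e.g. 'A' -> 65
    if code > 127:
        raise ValueError(f"{ch!r} is not ASCII")
    return format(code, "08b")     # base 2, zero-padded to 8 bits

print(char_to_binary("A"))  # 01000001
```

The "08b" format spec does the padding: 65 in binary is 1000001 (7 digits), and the leading zero makes it a full byte.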
What is the difference between ASCII and UTF-8?
ASCII covers 128 characters (English letters, digits, punctuation, and control codes) and encodes each in 7 bits, conventionally stored as one byte. UTF-8 is a variable-length encoding that handles every character in Unicode (Latin, Cyrillic, Chinese, Arabic, emoji) using one to four bytes per character. ASCII characters are encoded identically in UTF-8 (one byte); non-ASCII characters take 2-4 bytes. Use UTF-8 unless you specifically need ASCII compatibility.
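The variable width is easy to see by encoding a few characters and counting bytes. A quick Python sketch (any language with UTF-8 support shows the same lengths):

```python
# Byte length of each character when encoded as UTF-8.
for ch in ["A", "é", "я", "漢", "🙂"]:
    data = ch.encode("utf-8")
    print(f"{ch!r}: {len(data)} byte(s) -> {' '.join(f'{b:08b}' for b in data)}")
```

"A" stays one byte, Latin-with-accent and Cyrillic take two, CJK three, emoji four.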
Why does my binary input fail to decode?
Common causes: (1) the bit groups are not 8 (or 7) bits each — check there are no extra spaces; (2) characters other than 0 and 1 are present; (3) the binary represents an incomplete UTF-8 sequence. The decoder reports the position of the first error so you can fix it. If the input lacks separators, set "Bit grouping" to 8 and "Separator" to None.
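The first two checks can be sketched as a small validator. This is a hypothetical helper (first_error is not the tool's API), shown only to make the error conditions concrete:

```python
def first_error(groups, width=8):
    """Return (index, reason) for the first invalid bit group, or None."""
    for i, g in enumerate(groups):
        if len(g) != width:
            return i, "wrong length"
        if any(c not in "01" for c in g):
            return i, "non-binary character"
    return None

print(first_error("01000001 0100001".split()))  # (1, 'wrong length')
```

The second group has only 7 bits, so the validator points at index 1, mirroring how the decoder reports the position of the first error.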
How many bits per character should I use?
Standard convention is 8 bits per byte. Pure ASCII fits in 7 bits, but stored data and network protocols almost always pad to 8. Use 7-bit grouping only when working with old teletype protocols or academic exercises. Non-ASCII characters (accents, Cyrillic, Chinese, emoji) require multi-byte UTF-8, so 7-bit grouping does not apply to them.
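The difference is just the padding width. A minimal Python comparison of the same text at 7 and 8 bits per character:

```python
text = "Hi"
# Same code points, different padding widths.
print(" ".join(format(ord(c), "07b") for c in text))  # 1001000 1101001
print(" ".join(format(ord(c), "08b") for c in text))  # 01001000 01101001
```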
Can I convert non-English text?
Yes — keep encoding set to UTF-8. The character é is two bytes in UTF-8: 11000011 10101001. Cyrillic я is also two bytes. CJK characters (Chinese, Japanese, Korean) are three bytes. Emoji typically take four bytes. ASCII mode rejects any character outside the 0-127 range with an error.
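The ASCII-mode rejection can be sketched like this; encode_ascii is a hypothetical helper written for this example, not the tool's code:

```python
def encode_ascii(text: str) -> bytes:
    """Encode strictly as ASCII; report the first character outside 0-127."""
    try:
        return text.encode("ascii")
    except UnicodeEncodeError as exc:
        raise ValueError(
            f"character {text[exc.start]!r} at position {exc.start} is outside 0-127"
        ) from None

print(encode_ascii("Hi"))  # b'Hi'
# encode_ascii("é") raises ValueError instead of guessing an encoding
```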
Is binary the same as machine code?
No. Binary is just a numeric base — base 2 — that uses two digits, 0 and 1. Machine code is the binary representation of CPU instructions, which is a specific encoding for a specific processor. The binary you see here is text encoded as bytes, not executable instructions. Both happen to use 0s and 1s, which is why "binary" colloquially means "computer-friendly".
How do I convert binary back to text?
Switch to the Binary → Text tab and paste your binary. The decoder splits the input by separator (default: space) into bytes, converts each group from base 2 to its decimal code point, and assembles the result. For UTF-8 it correctly stitches multi-byte sequences back into one character.
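Those steps can be sketched in Python; this is an illustrative decoder under the same assumptions (8-bit groups, UTF-8 output), not the tool's implementation:

```python
def binary_to_text(bits: str, sep: str = " ") -> str:
    """Split into bit groups, convert each from base 2 to a byte, decode UTF-8."""
    groups = bits.split(sep) if sep else [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return bytes(int(g, 2) for g in groups).decode("utf-8")

print(binary_to_text("01001000 01101001"))  # Hi
print(binary_to_text("11000011 10101001"))  # é (two bytes stitched into one character)
```

The final decode("utf-8") call is what reassembles multi-byte sequences: the two bytes of é come out as a single character.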
All conversion happens in your browser. Nothing is uploaded.
Convert any text into its binary representation and decode binary back into text in a single calculator. Pick the encoding — UTF-8 (default, supports every language and emoji) or strict ASCII (7-bit, English only). Adjust bit grouping (8, 7, or no grouping) and the byte separator (space, none, hyphen, pipe) to match the format you have. The Swap button moves the output back into the input so you can round-trip a value and verify it. Live stats show input characters, UTF-8 byte count, and output length. Examples: the letter A is 01000001, the word Hi is 01001000 01101001, and 0100100001100101011011000110110001101111 decodes to Hello with 8-bit grouping. Multi-byte characters such as é, я, 漢, or 🙂 work in UTF-8 mode but trigger a clear error in ASCII mode. The decoder pinpoints the exact bit group that fails so you can fix invalid input quickly.
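The round-trip the Swap button enables can be checked programmatically. A self-contained Python sketch with both directions (assumed defaults: UTF-8, 8-bit groups, space separator):

```python
def text_to_binary(text: str, sep: str = " ") -> str:
    """Encode as UTF-8 and write each byte as 8 bits."""
    return sep.join(f"{b:08b}" for b in text.encode("utf-8"))

def binary_to_text(bits: str, sep: str = " ") -> str:
    """Inverse: parse each group as a byte and decode as UTF-8."""
    return bytes(int(g, 2) for g in bits.split(sep)).decode("utf-8")

# Round-trip check, including multi-byte characters:
original = "Héllo 🙂"
assert binary_to_text(text_to_binary(original)) == original
print(text_to_binary("A"))  # 01000001
```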