What is ASCII? The complete guide to the character encoding standard
If you've ever typed a letter, sent an email, or written a line of code, you've used ASCII. It's one of those foundational technologies that most people interact with every day without ever thinking about it — like plumbing, but for text.
ASCII stands for American Standard Code for Information Interchange. It's a character encoding standard that assigns a number to every letter, digit, punctuation mark, and control character that computers need to process text. The letter "A" is 65. A space is 32. A newline is 10. Every character has a number, and every computer agrees on what those numbers mean.
That agreement is the whole point. Before ASCII, different computer manufacturers used their own encoding schemes. Text created on one machine was gibberish on another. ASCII fixed that — and in doing so, became the invisible backbone of digital communication.
How ASCII works
ASCII maps 128 characters to the numbers 0 through 127, using 7 bits of data per character. That's it. The entire standard fits on a single page.
The 128 characters break down like this:
| Range | Count | What's in it |
|---|---|---|
| 0–31 | 32 | Control characters (tab, newline, backspace, escape) |
| 32 | 1 | Space |
| 33–47 | 15 | Punctuation and symbols (!, ", #, $, etc.) |
| 48–57 | 10 | Digits 0–9 |
| 58–64 | 7 | More punctuation (:, ;, <, =, >, ?, @) |
| 65–90 | 26 | Uppercase letters A–Z |
| 91–96 | 6 | Brackets and symbols ([, \, ], ^, _, `) |
| 97–122 | 26 | Lowercase letters a–z |
| 123–126 | 4 | Braces and symbols ({, |, }, ~) |
| 127 | 1 | DEL (delete) |
You can explore every one of these characters with descriptions and usage examples on our ASCII character reference.
One elegant detail: the uppercase and lowercase letters are exactly 32 apart. "A" is 65, "a" is 97. This means converting between cases is just flipping a single bit — a trick that programmers still use today.
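Because the two cases differ only in bit 5 (decimal value 32), the conversion is a one-line bit operation. A quick sketch in Python (any language with `ord`/`chr` equivalents works the same way):

```python
# ASCII letters differ only in bit 5 (value 32): 'A' is 65, 'a' is 97.
def toggle_case(ch: str) -> str:
    """Flip the case of a single ASCII letter by toggling bit 5."""
    return chr(ord(ch) ^ 0x20)

print(ord("A"), ord("a"))   # 65 97
print(toggle_case("A"))     # a
print(toggle_case("a"))     # A
```

Note this trick only applies to the letters A–Z and a–z; real case-conversion routines check the character range first.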
A brief history of ASCII
The story of ASCII starts in the early 1960s, when the American computing industry had a serious compatibility problem.
The problem: digital Babel
IBM used EBCDIC. Teletype machines used Baudot code. The US military had FIELDATA. Universities, government agencies, and private companies each spoke their own binary language. Sharing data between systems meant writing custom translation programs every time — expensive, error-prone, and completely unsustainable as computing grew.
The solution: one standard to rule them all
In 1963, the American Standards Association (now ANSI) published the first version of ASCII. The committee behind it, chaired by Robert W. Bemer — sometimes called "the father of ASCII" — had to make hard choices about which characters to include in just 128 slots.
They prioritized English text processing, telecom control codes, and mathematical notation. The result was practical, compact, and immediately useful for the American computing industry.
Key milestones
- 1963 — First ASCII standard published (ASA X3.4-1963). Only uppercase letters.
- 1967 — Major revision adds lowercase letters and refines control characters.
- 1968 — US President Lyndon B. Johnson mandates ASCII for all federal computers, accelerating adoption.
- 1981 — IBM PC launches with "extended ASCII" (codepage 437), adding 128 extra characters including box-drawing symbols and accented letters. This isn't official ASCII, but it becomes wildly popular.
- 1986 — Final revision of the standard (ANSI X3.4-1986). Still the version in use today.
- 1991 — Unicode 1.0 published, designed as ASCII's successor for multilingual computing.
The presidential mandate in 1968 was the tipping point. Once the US federal government required ASCII, every manufacturer that wanted government contracts had to support it. The network effect took over from there.
ASCII vs Unicode vs UTF-8
People often confuse these three, so here's the short version:
ASCII defines 128 characters using 7 bits. It covers English text and basic symbols. That's all it was designed to do.
Unicode is a universal character catalog. It defines over 150,000 characters across 160+ writing systems — from Arabic to emoji to ancient Egyptian hieroglyphs. Unicode doesn't specify how to store the data. It just assigns a code point (a number) to every character.
UTF-8 is the most common way to store Unicode characters as bytes. The clever part: the first 128 UTF-8 characters are identical to ASCII. Every valid ASCII file is automatically a valid UTF-8 file. This backwards compatibility is a big reason UTF-8 won — it didn't break anything that already existed.
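You can check this compatibility directly — a minimal Python sketch:

```python
# Any pure-ASCII string encodes to identical bytes in ASCII and UTF-8.
text = "Hello, ASCII!"
assert text.encode("ascii") == text.encode("utf-8")

# Characters outside ASCII use multi-byte UTF-8 sequences.
print("café".encode("utf-8"))   # b'caf\xc3\xa9' — the é takes two bytes
```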
So when someone asks "is ASCII still used?" — yes, it's literally embedded inside the encoding that powers the modern web. Over 98% of websites use UTF-8, and every one of them is built on ASCII's foundation.
What is extended ASCII?
Standard ASCII only uses 7 bits (0–127). But a byte has 8 bits, leaving room for another 128 characters in the 128–255 range. Various manufacturers used this extra space for their own characters — accented letters, box-drawing symbols, currency signs, and more.
The most famous extension is IBM codepage 437, which shipped with the original IBM PC in 1981. It added characters like ║, ╔, ╗, ░, ▒, ▓ — the box-drawing and block characters that became the visual language of DOS-era computing and ASCII art.
The catch: these extensions were never standardized. Codepage 437 showed one set of characters. Windows-1252 showed a different set. ISO 8859-1 showed yet another. The same byte value could display a completely different character depending on the system — exactly the kind of chaos ASCII was supposed to prevent.
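The chaos is easy to demonstrate: decode the same byte under two different codepages and you get two different characters. A small sketch using Python's built-in codecs:

```python
# One byte, two interpretations — the codepage problem in miniature.
raw = bytes([0xB0])
print(raw.decode("cp437"))     # ░  (light-shade block, IBM PC codepage 437)
print(raw.decode("latin-1"))   # °  (degree sign, ISO 8859-1)
```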
This fragmentation is ultimately why Unicode was created.
Why ASCII still matters
Programming
Every major programming language uses ASCII for its syntax. Keywords, operators, variable names, string delimiters — all ASCII characters. Even languages that support Unicode identifiers still require ASCII for the structural parts of code.
Network protocols
HTTP headers, email (SMTP), URLs, DNS — the foundational protocols of the internet are ASCII-based. When you type a URL, every character that travels over the wire is ASCII. Non-ASCII characters get converted to ASCII-safe sequences: percent-encoding in paths and query strings, Punycode in domain names.
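Python's standard library exposes both mechanisms, so the round trip is easy to see (a sketch, not a full URL handler):

```python
from urllib.parse import quote, unquote

# Percent-encoding: non-ASCII path characters become %XX byte sequences.
encoded = quote("café menu")
print(encoded)             # caf%C3%A9%20menu
print(unquote(encoded))    # café menu

# Punycode (via IDNA): non-ASCII domain labels get an ASCII-safe xn-- form.
print("münchen".encode("idna"))   # b'xn--mnchen-3ya'
```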
Text art and creative expression
ASCII art has been around since the 1960s and is still thriving today. From text art galleries and FIGlet banners to kaomoji emoticons and decorative text styles, ASCII characters remain the building blocks of text-based creativity.
Data interchange
CSV files, JSON, YAML, XML, TOML, INI files — almost every structured data format is ASCII at its core. When systems need to exchange data reliably, they reach for ASCII-based formats because every system in the world can read them.
Common questions about ASCII
How many characters are in ASCII?
128 characters total. 95 are printable (letters, digits, symbols, and space). 33 are control characters (like tab, newline, and escape) that control how text is processed rather than displayed.
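Those counts are easy to verify programmatically — a quick Python sketch:

```python
# Control characters are codes 0-31 plus DEL (127); printable is 32-126.
control = [i for i in range(128) if i < 32 or i == 127]
printable = [i for i in range(128) if 32 <= i <= 126]
print(len(control), len(printable))   # 33 95
```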
Is ASCII the same as Unicode?
No. ASCII defines 128 characters for English text. Unicode defines over 150,000 characters for virtually every writing system. However, the first 128 Unicode code points are identical to ASCII, making ASCII a subset of Unicode.
Is ASCII still used today?
Yes. ASCII is embedded inside UTF-8, which powers over 98% of the web. Every programming language, network protocol, and data format you use daily is built on ASCII. It's one of the most successful and enduring standards in computing history.
Why is ASCII 7-bit?
The original designers used 7 bits because it gave them 128 character slots — enough for English letters, digits, punctuation, and control codes. The 8th bit in a byte was originally reserved for error-checking (parity) in early telecom systems.
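Even parity means the 8th bit is set so the full byte contains an even number of 1 bits, letting the receiver detect any single flipped bit. A hypothetical sketch of how a sender might pack that bit (modern systems no longer do this):

```python
def with_even_parity(code: int) -> int:
    """Pack a 7-bit ASCII code into a byte whose 8th bit gives even parity."""
    ones = bin(code).count("1")           # number of 1 bits in the 7-bit code
    return code | ((ones % 2) << 7)       # set bit 7 only if that count is odd

print(with_even_parity(ord("A")))   # 65  — 'A' = 0b1000001 already has two 1-bits
print(with_even_parity(ord("C")))   # 195 — 'C' = 0b1000011 has three, so bit 7 is set
```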
The lasting impact
ASCII is 60+ years old. It predates the internet, the personal computer, and the smartphone. It was designed for a world of teletypes and mainframes. And it still works — not as a historical curiosity, but as the active foundation of how every computer on Earth handles text.
That's the mark of a well-designed standard. It solved the right problem, at the right level of abstraction, with enough simplicity to survive decades of technological change. The 128 characters in ASCII aren't just a technical specification. They're the alphabet of the digital world.
Want to explore ASCII yourself? Browse the full ASCII character table, try converting an image to ASCII art, or create styled text using the characters that started it all.
Further reading
- RFC 20 — ASCII format for Network Interchange — The original 1969 RFC specifying ASCII for the ARPANET
- The Unicode Standard — ASCII's successor, maintained by the Unicode Consortium
- UTF-8 encoding table — Interactive reference showing how ASCII maps into UTF-8
- Tom Scott: Unicode, in friendly terms — Excellent 10-minute video on why ASCII wasn't enough