Five in-depth guides covering everything from the basics of hex to how it powers color codes, memory addresses, and real programming decisions.
If you've ever peeked inside a program, a color picker, or a network configuration screen, you've almost certainly encountered strings of letters and numbers like 3F, DEADBEEF, or #FF5733. These are hexadecimal numbers — and once you understand them, a huge portion of how computers work suddenly makes sense.
Every number system is defined by how many unique symbols it uses before it has to "carry over" to the next column. The decimal system you use every day is base 10: it uses ten symbols (0 through 9). When you reach 9 and need to count one more, you roll over to 10 — a new column is born. Binary is base 2: only the symbols 0 and 1 exist, so you roll over at 2 (binary: 10).
Hexadecimal is base 16. It uses sixteen symbols before rolling over. Because we only have ten numeric digits (0–9), hexadecimal borrows six letters from the alphabet: A, B, C, D, E, and F. These represent the values 10 through 15 respectively. So counting in hex looks like this: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, 10, 11... When we hit F (which equals 15), the next number is hex 10 — which equals decimal 16.
| Hex | Decimal | Binary |
|---|---|---|
| 0 | 0 | 0000 |
| 1 | 1 | 0001 |
| 5 | 5 | 0101 |
| 9 | 9 | 1001 |
| A | 10 | 1010 |
| B | 11 | 1011 |
| C | 12 | 1100 |
| D | 13 | 1101 |
| E | 14 | 1110 |
| F | 15 | 1111 |
Each position in a hexadecimal number represents a power of 16, just like each position in a decimal number represents a power of 10. The rightmost digit is 16⁰ (= 1), the next is 16¹ (= 16), then 16² (= 256), and so on.
Let's convert 1A3 to decimal step by step:
- 3 × 16⁰ = 3 × 1 = 3
- A (= 10) × 16¹ = 10 × 16 = 160
- 1 × 16² = 1 × 256 = 256

Adding the pieces together: 3 + 160 + 256 = 419, so hex 1A3 equals decimal 419. You can verify this instantly using the HexConverter tool on our main page.
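If you'd rather check the arithmetic in code, here's a minimal Python sketch. The manual loop mirrors the digit-by-digit method above; the string "1A3" is just our running example:

```python
# Convert hex 1A3 to decimal two ways: with Python's built-in int(),
# and by accumulating each digit times its power of 16.
value = int("1A3", 16)
print(value)  # 419

digits = "0123456789ABCDEF"
total = 0
for ch in "1A3":
    total = total * 16 + digits.index(ch)  # shift left one hex place, add the digit
print(total)  # 419
```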
Computers store and process everything in binary — sequences of 0s and 1s. Binary is great for hardware but painful for humans to read. A single byte (8 bits) might look like 10110101. Reading a memory dump of thousands of bytes in binary would be essentially impossible.
Hexadecimal solves this elegantly. Because 16 is a power of 2 (2⁴ = 16), every single hex digit maps perfectly to exactly 4 binary bits. That means one byte (8 bits) is always exactly two hex digits. The binary string 10110101 becomes simply B5 in hex — far more compact and readable, while still encoding exactly the same information with no ambiguity.
In programming, the prefix 0x is used to tell the compiler or interpreter that a number is written in hexadecimal. So 0xFF means "the hex number FF", which equals 255 in decimal. You'll see this convention in C, C++, Java, JavaScript, Python, Rust, and almost every other modern programming language. Some older documentation uses the suffix h instead (e.g., FFh), and assembly language sometimes uses a leading zero followed by the number (e.g., 0FFh).
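As a quick illustration of both the 0x convention and the digit-to-nibble mapping, here's a short Python sketch (Python also accepts 0b for binary literals):

```python
# Hex and binary literals are just alternative spellings of integers.
assert 0xFF == 255
assert 0xB5 == 0b10110101  # one byte: exactly two hex digits, eight bits

# Each hex digit covers exactly 4 bits, so round-trips are mechanical.
print(hex(0b10110101))      # 0xb5
print(format(0xB5, "08b"))  # 10110101
```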
Hexadecimal appears constantly across computing: CSS and design tools use hex color codes like #FFFFFF for white; network equipment displays MAC addresses as hex pairs like A4:B1:C1:3F:00:2D; debuggers show memory addresses and values in hex; file editors (hex editors) display raw binary data as hex pairs; IP version 6 addresses are written in hex; and Unicode character codes are expressed in hex (the emoji 🙂 is U+1F642).
Understanding hex isn't just an academic exercise — it's a practical skill that makes you more effective any time you're debugging software, configuring networking equipment, working with colors, or reading technical documentation.
Every web designer knows hex colors. You've typed #FF0000 for red, #000000 for black, and probably copied dozens of six-character codes from color pickers without thinking twice about what those characters actually mean. Let's decode them fully — because once you understand the structure, you'll be able to read and manipulate hex colors intuitively.
A standard CSS hex color like #FF5733 is made of three pairs of hex digits, each pair representing one color channel in the RGB (Red, Green, Blue) color model. The # symbol is just a marker that tells the browser "this is a color value." After that:
- FF = Red channel intensity
- 57 = Green channel intensity
- 33 = Blue channel intensity

Each pair is a two-digit hex number ranging from 00 (0 in decimal — completely off) to FF (255 in decimal — maximum intensity). This gives each channel 256 possible values, and combined across three channels, a total of 256 × 256 × 256 = 16,777,216 possible colors.
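To see that structure in code, here's a small Python sketch that splits a color string into its three channels. The slicing indices skip the leading #, and the variable names are ours:

```python
# Break a hex color into its red, green, and blue channel values.
color = "#FF5733"
r = int(color[1:3], 16)  # "FF" -> 255
g = int(color[3:5], 16)  # "57" -> 87
b = int(color[5:7], 16)  # "33" -> 51
print(r, g, b)  # 255 87 51
```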
Once you internalize the structure, you can start reading colors without a picker. Pure red is #FF0000 — red channel maxed out, green and blue off. Pure green is #00FF00. Pure blue is #0000FF. White is #FFFFFF (all channels maximum) and black is #000000 (all channels zero).
Shades of grey always have equal values across all three channels: #808080 is medium grey (128, 128, 128), #333333 is a dark charcoal, and #CCCCCC is a light grey. The moment the three pairs differ from each other, the result has a hue.
CSS supports a three-character shorthand when each pair consists of two identical digits. #FF0000 can be written as #F00, #FFFFFF becomes #FFF, and #334455 can be shortened to #345. The browser expands each character by doubling it: F becomes FF, 3 becomes 33, and so on. This only works when both digits in each pair are the same.
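A tiny sketch of that doubling rule in Python (expand_shorthand is a hypothetical helper name of our own, not a CSS API):

```python
# Expand a three-character shorthand color by doubling each digit.
def expand_shorthand(color: str) -> str:
    return "#" + "".join(ch * 2 for ch in color.lstrip("#"))

print(expand_shorthand("#F00"))  # #FF0000
print(expand_shorthand("#345"))  # #334455
```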
Modern browsers support an 8-digit hex color format that includes an alpha (transparency) channel as a fourth pair. The format is #RRGGBBAA. For example, #FF573380 is the same orange as before but at roughly 50% transparency (since 80 in hex = 128 in decimal, which is about half of 255). A value of FF for the alpha channel is fully opaque; 00 is fully transparent.
CSS gives you three main ways to specify the same color. rgb(255, 87, 51), #FF5733, and hsl(11, 100%, 60%) can all produce identical results. Hex is popular because it's compact and widely supported in design tools. RGB is more readable when you're thinking in raw channel values. HSL (Hue, Saturation, Lightness) is often the most intuitive for making colors lighter, darker, or more saturated without a picker. You can convert freely between them — and the hex format always maps directly to RGB values, just expressed in base 16.
To make a color lighter, increase all three channel values proportionally. To make it darker, decrease them. To desaturate (move toward grey), push the three values closer to each other. If you want a color that's exactly halfway between two hex colors, convert both to decimal, average each channel, and convert back. Our Hex Color Converter lets you pick a color visually and see its hex code instantly.
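Here's a minimal Python sketch of the halfway-mix recipe just described. The function name mix is our own, and it assumes full six-digit #RRGGBB inputs:

```python
# Halfway point between two hex colors: average each channel in decimal,
# then format the results back as two-digit hex pairs.
def mix(c1: str, c2: str) -> str:
    pairs = zip(
        (int(c1[i:i + 2], 16) for i in (1, 3, 5)),
        (int(c2[i:i + 2], 16) for i in (1, 3, 5)),
    )
    return "#" + "".join(f"{(a + b) // 2:02X}" for a, b in pairs)

print(mix("#FF0000", "#0000FF"))  # #7F007F, halfway between red and blue
```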
If you've ever wondered why programmers seem to have a love affair with hexadecimal — peppered through code, documentation, debuggers, and protocol specs — you're not alone. To someone coming from a pure mathematics background, it can seem arbitrary. After all, decimal works perfectly well for counting. So why do developers reach for hex so instinctively?
The answer has everything to do with how computers actually store information, and how humans need to read that information without going insane.
At the lowest level, every computer stores data as sequences of bits — individual electrical signals that are either on (1) or off (0). A single byte is 8 bits. A 32-bit integer takes 32 bits. Reading raw binary is exhausting for humans: a 32-bit value like 10111010011011001101111010101101 contains the same information as the hex value BA6CDEAD, but no human can quickly parse the binary form for errors, patterns, or meaning.
Decimal doesn't solve this cleanly either. Decimal and binary don't have a clean mathematical relationship — you can't look at a decimal number and tell at a glance which bits are set. Hex does have that clean relationship: because 16 = 2⁴, every hex digit corresponds to exactly 4 binary bits, no conversion required once you memorize the 16 symbols.
This is the key insight that makes hex so valuable. A byte can hold values from 0 to 255. In decimal, that's one, two, or three characters. In hex, it's always exactly two characters: 00 through FF. This predictable size makes it easy to scan memory dumps, protocol packets, and binary file formats — every two characters is one byte, every four characters is one 16-bit word, every eight characters is a 32-bit integer.
Memory addresses in modern 64-bit computers are typically displayed as 16-digit hex values, like 0x00007FFE9D4A2380. In decimal, that same address is 140731537302400 — harder to read, and harder to see patterns in. When you're debugging a crash and comparing addresses, hex makes it much easier to spot that two addresses differ by exactly 0x100 (256 bytes) rather than having to compute the difference of long decimal numbers.
One of the most common uses of hex in real programming is working with bitmasks — values where individual bits represent independent boolean flags. Consider a Unix file permission value. The permission rwxr-xr-- translates to binary 111101100, which is octal 754 or hex 0x1EC. Programmers often define constants like READ = 0x04, WRITE = 0x02, EXECUTE = 0x01 and combine them with bitwise OR: permissions = READ | WRITE. In hex, these constants make the bit positions obvious. 0x80 immediately tells an experienced programmer that bit 7 is set; the equivalent decimal value 128 carries no such immediate visual information.
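Here's a small Python sketch of that flag pattern. The constant names mirror the ones in the paragraph; they're illustrative, not from any particular library:

```python
# Permission flags as hex constants; each name marks a single bit.
READ, WRITE, EXECUTE = 0x04, 0x02, 0x01

permissions = READ | WRITE     # 0x06: combine flags with bitwise OR
if permissions & READ:         # test a single flag with bitwise AND
    print("readable")
permissions &= ~WRITE          # clear the write bit
print(hex(permissions))        # 0x4
```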
Network packet headers, file format magic numbers, and protocol specifications are universally documented in hex. The PNG image format starts with the bytes 89 50 4E 47 0D 0A 1A 0A — those middle values decode to ASCII characters P, N, G (the format name). IPv6 addresses like 2001:0db8:85a3:0000:0000:8a2e:0370:7334 are written in hex. MAC addresses like A4:B1:C1:3F:00:2D use hex pairs. When you're writing code that parses these formats, matching hex constants in your code to hex values in the spec is far more straightforward than translating everything to decimal first.
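As an illustration, here's a minimal Python sketch that checks a file against the PNG signature bytes quoted above. The constant name and the file path are placeholders of our own:

```python
# Compare a file's first eight bytes against the PNG magic number.
PNG_SIGNATURE = bytes([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])

with open("image.png", "rb") as f:  # placeholder path
    is_png = f.read(8) == PNG_SIGNATURE
print(is_png)
```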
You'll encounter hex literals in code for many practical reasons: Unicode character escapes (\u0041 is the letter A), HTML entity codes, hardware register addresses in embedded systems, encryption keys and hash outputs, color values, and bitfield constants. Once you're comfortable reading hex, you'll find yourself reading other developers' code more fluently, understanding debugger output faster, and catching subtle bugs that would be invisible if everything were expressed in decimal.
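A few of those uses in one short Python sketch (MASK is an illustrative constant of our own):

```python
print("\u0041")            # Unicode escape for the letter A
print(chr(0x1F642))        # the 🙂 emoji, code point U+1F642
MASK = 0xFF00              # bitfield constant: the high byte of a 16-bit value
print(hex(0xABCD & MASK))  # 0xab00
```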
If you've ever used a debugger, read a crash report, or studied operating systems, you've seen memory addresses — long hexadecimal numbers like 0x7FFEE3A20B40. For many developers, these addresses feel like opaque magic: important, but mysterious. This guide demystifies them completely, explaining what a memory address is, why it's expressed in hex, and how to read the information it contains.
Every byte of RAM in your computer has a unique address — a number that identifies its precise location. Think of RAM as a very long street where every house (byte) has its own street number. The CPU uses these addresses to read and write data: "go to address 0x3F00 and read 4 bytes" or "store this value at address 0x7FFF."
Modern 64-bit computers have address spaces that can theoretically reference 2⁶⁴ = 18,446,744,073,709,551,616 bytes — far more RAM than exists in practice. In reality, operating systems and CPU architecture use only a portion of that range, and they partition it into regions for the operating system kernel, the application stack, the heap, loaded libraries, and more.
A 64-bit address in binary would be 64 characters long — impossible to work with mentally. In decimal, the same address can run to 20 digits, with no visual structure. In hex, a 64-bit address is exactly 16 characters (since each hex digit encodes 4 bits, and 64 / 4 = 16). This is compact, predictable in length, and easy to compare. Differences between addresses are often round numbers in hex — 0x1000 (4096 bytes), 0x100 (256 bytes), 0x10 (16 bytes) — which reflect how data is actually aligned in memory.
When you run a program on Linux and look at /proc/<pid>/maps, you see something like:
7f8b3c000000-7f8b3c021000 r--p ... libc.so
The two hex numbers are the start and end addresses of a memory region. The difference (0x21000 = 135,168 bytes) tells you how large that region is. The flags (r--p) tell you the permissions: readable, not writable, not executable, private. This kind of output is immediately interpretable once you're comfortable with hex arithmetic — it would be far more cluttered in decimal.
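The size calculation is a one-liner in Python, using the two addresses from the maps line above:

```python
# Size of the mapped region: end address minus start address.
start = int("7f8b3c000000", 16)
end = int("7f8b3c021000", 16)
print(hex(end - start))  # 0x21000
print(end - start)       # 135168 bytes
```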
In a typical 64-bit Linux process, stack addresses are high (near 0x7FFF...) and heap addresses are lower (often starting around 0x0000...). When a debugger shows you a segmentation fault at address 0x0000000000000000, that's a null pointer dereference — the program tried to access address zero, which is deliberately unmapped to catch this common bug. An address like 0xDEADBEEF or 0xBAADF00D appearing in a crash report often indicates a debug sentinel value — a deliberate marker placed in uninitialized memory to make it obvious when that memory is accessed incorrectly.
Modern CPUs perform best when data is aligned to addresses that are multiples of the data's size. A 4-byte integer should ideally sit at an address divisible by 4; an 8-byte double at an address divisible by 8. In hex, aligned addresses always end in predictable digits: an address aligned to 16 bytes always ends in 0, aligned to 256 bytes ends in 00, and so on. This makes alignment visible at a glance when reading hex addresses — something that's nearly impossible to see in decimal.
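In code, an alignment check is a modulo or a bit test. Here's a minimal Python sketch (note the parentheses around the mask test, since Python's == binds tighter than &):

```python
addr = 0x7FFF1000
print(addr % 16 == 0)       # True: the last hex digit is 0, so 16-byte aligned
print((addr & 0xFFF) == 0)  # True: ends in 000, so 4096-byte (page) aligned
```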
When debugging, being able to mentally add and subtract small hex values is invaluable. If a structure starts at 0x7FFF1000 and you need to find a field that's 24 bytes (0x18 bytes) into it, the field address is 0x7FFF1018. If you need to check whether two pointers are in the same 4KB page, check whether they share the top digits after dropping the last three hex digits (since 4KB = 0x1000). With practice, hex arithmetic becomes as natural as decimal — and far more useful in systems-level work.
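Both of those mental moves translate directly to code. A small Python sketch using the addresses from the paragraph:

```python
base = 0x7FFF1000
field = base + 0x18                   # the field 24 bytes into the structure
print(hex(field))                     # 0x7fff1018
print((base >> 12) == (field >> 12))  # True: dropping the low 12 bits
                                      # (three hex digits) gives the same 4KB page
```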
Computers work with several different number bases, and each one has a specific reason for existing. Binary (base 2), octal (base 8), decimal (base 10), and hexadecimal (base 16) all represent the same underlying quantities — they're just different notations, like saying "twelve" in English, "zwölf" in German, or writing "12" vs "XII." This guide explains what makes each base useful, where you'll encounter them, and how to move between them.
Binary is the fundamental language of digital electronics. Every bit in a computer is physically a voltage: high voltage represents 1, low voltage represents 0. All computation, storage, and communication at the hardware level is binary. Logic gates, transistors, flip-flops — everything works in binary.
Binary is great for hardware but terrible for humans. Even a modest 16-bit number requires 16 characters: 1010110011010011. Reading binary is useful when you need to see exactly which bits are set in a bitmask or when you're studying how specific CPU instructions manipulate individual bits — but for everyday work, we use shorthand.
Octal uses eight symbols (0–7) and was historically important because early computers had word sizes that were multiples of 3 bits (6-bit, 12-bit, 24-bit, 36-bit words), making octal a natural fit. Each octal digit represents exactly 3 binary bits. You still encounter octal today primarily in Unix/Linux file permissions. The permission value 755 is octal: 7 = 111 (read, write, execute for owner), 5 = 101 (read and execute for group), 5 = 101 (read and execute for others). In code, octal literals are written with a leading zero: 0755 in C, or with the 0o prefix in Python (0o755).
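A quick Python sketch showing the octal literal and its bit pattern (the variable name mode is ours):

```python
mode = 0o755                 # rwxr-xr-x as an octal literal
print(format(mode, "09b"))   # 111101101 -> rwx r-x r-x
print(oct(mode), hex(mode))  # 0o755 0x1ed
```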
Octal is much less common than it used to be. Modern systems are built around 8-bit bytes, which don't divide evenly into 3-bit groups — so hex has largely displaced octal except in the file permission niche.
Decimal is what humans use naturally, likely because we have ten fingers. It's the default for user interfaces, business logic, financial calculations, and any value that's meant to be communicated to non-technical people. When you write int age = 25; in code, you're using decimal because 25 is a human-meaningful quantity. Compilers and interpreters convert it to binary internally, but programmers never need to think about that in everyday code.
Decimal's weakness in computing is that it has no clean relationship with binary. Converting between decimal and binary requires actual division and multiplication — there's no shortcut. This is why programmers reach for hex when they need a human-readable representation of binary data.
Hexadecimal is the sweet spot between binary (too verbose) and decimal (no binary relationship). Because 16 = 2⁴, each hex digit is exactly 4 bits. This makes hex a lossless, compact shorthand for binary. A full 32-bit value that would take 32 binary digits takes exactly 8 hex digits. A byte is always 2 hex digits. This size predictability, combined with the direct bit-level correspondence, makes hex the dominant notation for low-level programming, memory inspection, and protocol specifications.
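One value rendered in all four bases, as a quick Python sketch:

```python
# The same quantity in binary, octal, decimal, and hex notation.
n = 255
print(bin(n), oct(n), n, hex(n))  # 0b11111111 0o377 255 0xff
```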
| Decimal | Binary | Octal | Hex |
|---|---|---|---|
| 0 | 00000000 | 0 | 0 |
| 8 | 00001000 | 10 | 8 |
| 10 | 00001010 | 12 | A |
| 15 | 00001111 | 17 | F |
| 16 | 00010000 | 20 | 10 |
| 64 | 01000000 | 100 | 40 |
| 100 | 01100100 | 144 | 64 |
| 255 | 11111111 | 377 | FF |
Use binary when you need to see individual bit states — debugging a bitmask, understanding a CPU flag register, or studying digital logic. Use octal almost exclusively for Unix file permissions. Use decimal for any human-facing value — ages, prices, counts, dimensions. Use hexadecimal for memory addresses, color codes, raw byte values, network addresses, cryptographic hashes, and any context where you need a compact, human-readable encoding of binary data.
In practice, most programmers are fluent in decimal and hex, can read binary when needed, and only think about octal when setting file permissions. HexConverter's main converter tool shows all four representations simultaneously — a useful way to build intuition across all four bases at once.