Fundamentals of Bitwise Logic
March 07, 2026
I’ve always been fascinated by how computers take high-level code and translate it into physical actions at the bit level. Bitwise operators are what make these translations possible. Because they operate on individual bits on a low level, bitwise operations are incredibly efficient and widely used in many different areas of programming.
This post is a detailed breakdown of how these operators work, the logic behind them, and an overview of their practical applications.
Table of Contents
What are Bits?
What are Bitwise Operators?
Bit Operations and Their Features
Common Uses
Interesting Uses
Real-World Applications

What are Bits?
Bits, short for binary digits, are the most basic units of data for computers. Bits use the binary number system (base 2) to represent information using 1s and 0s. Because a computer’s processor is made up of billions of transistors (somewhat like on/off switches), binary is the only language that the hardware of computers can naturally process. As a result, computers treat all data as binary sequences (sequences of bits) and therefore represent ALL data such as numbers, text, images, audio, or video in binary.
Binary sequences can be of any length. For example, shorter sequences can be 8 bits long (AKA one byte) whilst longer sequences can be 256 bits long (for things like encryption keys).
It’s important to understand how a basic binary sequence (using base 2) is decoded (converted to decimal). When given a sequence, you start from the rightmost digit (position 0) and move left (whilst increasing position). The potential value at each bit is 2^position. The rightmost bit therefore has a potential value of 2⁰=1. The next bit (after moving left one) has a potential value of 2¹=2. The next bit has a potential value of 2²=4, then 2³=8, and so on with the position/exponent increasing every time you move left one bit. Then, looking at the given binary sequence, you need to see if the bit corresponding to a certain position is on (1) or off (0). If it’s on, you add the potential value of the bit to the final decimal (base-10) value.
Binary Sequence        0    0    1    1    0    1    1    0
Position (exponent)    7    6    5    4    3    2    1    0
Value (2^position)   128   64   32   16    8    4    2    1

Calculation: 32 + 16 + 4 + 2 = 54 (final decimal value!)

It’s also important to note that for signed values (numbers that could be positive or negative), computers use a system called Two’s Complement, where the leftmost bit, which we call the Most Significant Bit (MSB), is evaluated differently.
In an 8-bit number, for example, the leftmost bit (MSB) would represent -128. If that bit is on (1), you start at -128 and add the remaining positive bits to it as normal. Otherwise, if that bit is off (0), you treat it as you normally would, meaning its value/contribution to the final number would just be zero. This clever math is used to simplify computer arithmetic (but doesn’t apply for unsigned values).
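As a quick sanity check, both conversion rules can be sketched in Python (this snippet is my own illustration, not part of any standard library):

```python
bits = "00110110"

# Unsigned: sum 2**position for every bit that is on (rightmost bit = position 0).
unsigned = sum(2**pos for pos, bit in enumerate(reversed(bits)) if bit == "1")
print(unsigned)  # 54, same as int(bits, 2)

# Signed (Two's Complement): in an 8-bit number, the MSB contributes -128.
def to_signed8(s):
    value = int(s, 2)
    return value - 256 if s[0] == "1" else value

print(to_signed8("10110010"))  # -78, i.e. -128 + 32 + 16 + 2
```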
What are Bitwise Operators?
Bitwise operators are symbols that perform operations on binary sequences. Just like how normal arithmetic operators (ex. addition, subtraction, multiplication, etc.) can perform calculations on numeric data, bitwise operators can perform calculations on the binary sequences.
It might be tempting to think of certain bitwise operators as just another way of writing arithmetic operators. For example, bitwise NOT (~) seems very similar to taking the negative of a number, and bitwise AND (&) seems similar to multiplication (for single bits, it even is: 1 & 1 = 1). However, bitwise operators are actually completely different from typical arithmetic operators! Unlike arithmetic operators, bitwise operators evaluate sequences bit by bit, meaning the final answer they calculate is constructed by comparing one bit at a time (no carry-over). Additionally, bitwise operators are much faster than arithmetic operators; in fact, computers use a series of bitwise operations under the hood to perform arithmetic calculations (if interested, you can see some specific operations here)!
Bit Operations and Their Features
In general, bit operations on their own are very simple to understand. You should be familiar with bitwise AND (&), OR (|), XOR (^), and NOT (~), as well as bit shifts (<< >>).
AND (&)
Bitwise AND keeps a bit on (1) if and only if the bit in that position is already on for both numbers. Otherwise, if either one or both numbers have a bit off (0) in that position, the result of AND would be off for that position.
n1      01001101  (77)
n2      01100101  (101)
n1&n2   01000101  (69)

Bitwise AND is typically used to check if a certain bit is on, such as in bit masking.
OR (|)
Bitwise OR keeps a bit on (1) if the bit in that position is on for at least one number (either one or both). Otherwise, if both numbers have a bit off (0) in that position, the result of OR would be off for that position.
n1      01001101  (77)
n2      01100101  (101)
n1|n2   01101101  (109)

XOR (^)
Bitwise XOR keeps a bit on (1) if and only if the bit in that position is on for exactly one number, or in other words, if the bit in that position is different for both numbers.
n1      01001101  (77)
n2      01100101  (101)
n1^n2   00101000  (40)

One very special property of XOR is that it is its own inverse! This means that for any number x: x^x=0. Logically, this makes sense because XOR only returns 1 if two bits are different, and clearly, x has the exact same bit representation as itself.
Another special property of XOR is that it is associative and commutative. Combined with the self-inverse property above, this means that if a^b=c, then XORing any two numbers among a, b, and c will yield the third: a^c=b and c^b=a.
You can notice this pattern in the example above, and it’s actually the backbone of things like RAID 5 (how servers recover lost data) and even basic cryptography. If a is your data and b is your secret key, then c is the encrypted message. Only someone with the key (b) can XOR it again to get the original data (a) back.
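Here’s a tiny Python sketch of that idea, using the same numbers as the example above (the "data" and "key" labels are just for illustration):

```python
data = 0b01001101  # 77, the "message"
key  = 0b01100101  # 101, the "secret key"

encrypted = data ^ key       # 40: looks nothing like the original
decrypted = encrypted ^ key  # XORing with the key again recovers the data

print(encrypted)           # 40
print(decrypted == data)   # True
```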
NOT (~)
Bitwise NOT inverts all the bits in a number such that all occurrences of 0 become 1 and vice versa.
n1      01001101  (77)
~n1     10110010  (-78)

If you look at the result of ~77, you might be surprised to see that the answer is -78, not -77. This is because the formula for ~x is actually: ~x = -(x+1) = -x-1
In fact, Two’s Complement, which I mentioned earlier, represents a negative number through this formula! It inverts all bits of a positive number (~x) then adds 1 to the result (-x = ~x+1).
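You can verify both identities directly in Python:

```python
x = 77
print(~x)  # -78, since ~x == -(x + 1)

# Two's Complement in action: negation is invert-all-bits, then add one.
assert ~x == -(x + 1)
assert -x == ~x + 1
```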
Bit Shifts
Bit shifts move an entire sequence of bits to the left or right, pushing bits off the edge and filling the gaps with zeros. (One caveat: when right-shifting a signed negative number, many languages copy the sign bit into the gaps instead, known as an arithmetic shift.)
Left Shift (<<)
When you left shift, you push all the bits to the left and fill the new empty spaces on the right with zeros.
n1       00001101  (13)
n1 << 1  00011010  (26)
n1 << 2  00110100  (52)

Notice that each time you left shift, you are multiplying the previous number by 2. Shifting by two positions multiplies it by 4 (2²), shifting by three multiplies it by 8 (2³), and so on.
In other words, x << k corresponds to multiplying x by 2ᵏ.
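A quick check of that correspondence in Python:

```python
x = 13
print(x << 1, x * 2)     # 26 26
print(x << 2, x * 2**2)  # 52 52
print(x << 3, x * 2**3)  # 104 104
```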
However, it’s important to note that this trick only works if your data type has enough room to store the new shifted value. When you shift an integer too far left, you need to be careful about overflow. In many programming languages, integers take up 32 bits. If you push a 1 off the edge, the integer might suddenly become a very small number (because you lost your most significant bits), or even a negative number (because you shifted a 1 into the sign bit slot!).
To combat this issue, you can simply use a larger data type. If a 32-bit integer isn't enough to hold your shifted values, you might move up to a 64-bit integer, which can store numbers up to 2⁶³-1 (which is over 9 quintillion).
Right Shift (>>)
As you might guess, when you right shift, you push all the bits to the right and discard any bits on the far right (that “fall off”).
n1       00001101  (13)
n1 >> 1  00000110  (6)
n1 >> 2  00000011  (3)

This time, notice that right shifting is the equivalent of floor division (integer division) by 2. In other words, x >> k corresponds to dividing x by 2ᵏ, then rounding the result down to an integer.
Common Uses
There are so many common applications of bitwise operations that can make your code faster, cleaner, and, well, a lot cooler.
Checking Even/Odd
Instead of using the modulo operator (x % 2), you can use (x & 1). If the result is 1, the number is odd. If the result is 0, it's even. This works because all odd numbers in binary must end in a 1!
Using (x & 1) is also slightly faster than (x % 2), but the difference is relatively small in modern high-level languages.
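A minimal sketch of this check (the helper name is my own):

```python
def is_odd(x):
    # The lowest bit is the 2^0 column: it's 1 exactly for odd numbers.
    return (x & 1) == 1

print([n for n in range(6) if is_odd(n)])  # [1, 3, 5]
```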
Detecting Powers of Two
If you want to check whether a number (x) is a power of 2, you can check if the expression (x & (x - 1)) equals 0 (assuming x > 0, since zero would otherwise pass the test despite not being a power of two). This works because all powers of 2 have only one bit turned on in their binary representation. As a result, when you subtract 1 from a power of two, that single "on" bit (1) turns off, and every bit to its right (which were all 0s) turns on. When you take the AND of both values, they differ in every bit position, satisfying (x & (x - 1)) == 0.
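In code (again, the helper name is mine):

```python
def is_power_of_two(x):
    # The x > 0 guard matters: without it, 0 would slip through,
    # because 0 & -1 == 0.
    return x > 0 and (x & (x - 1)) == 0

print([n for n in range(1, 20) if is_power_of_two(n)])  # [1, 2, 4, 8, 16]
```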
Bit Masking
A bit mask is a predefined binary pattern (typically an integer value represented in binary) that is used in combination with bitwise operators to selectively read, set, clear, or toggle specific bits within another binary number.
A common bit mask that contains only one bit turned on at the nth position is (1 << n).
x |= (1<<n) sets the nth bit of x to 1.
x &= ~(1<<n) sets the nth bit of x to 0.
x ^= (1<<n) inverts/flips the nth bit of x.
There are many other more complex usages of bitmasks as well. For instance, you don't have to limit your mask to just one bit. You can create a mask that targets an entire group of bits to read or modify them simultaneously.
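Here are the three single-bit operations (plus a read) applied to a concrete value, as a small worked example:

```python
x = 0b01001101  # 77

x |= (1 << 1)   # set bit 1    -> 0b01001111
x &= ~(1 << 0)  # clear bit 0  -> 0b01001110
x ^= (1 << 7)   # flip bit 7   -> 0b11001110

print(bin(x))   # 0b11001110

# Reading a bit: a nonzero AND result means the nth bit is on.
n = 3
print(bool(x & (1 << n)))  # True
```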
Representing Sets
A number’s binary representation can also be used to represent a set. If you had an array {3, 2, 6, 4, 10} and you wanted to select the elements at index 0 and index 2 (the values 3 and 6), you can represent that with the binary number 10100 (note the convention here: the leftmost bit of the mask corresponds to index 0). In this case, the bits at the positions you want to "keep" are turned on, and the others are off. This binary string represents the integer 20.
Array:  3  2  6  4  10
Index:  0  1  2  3   4
Mask:   1  0  1  0   0   = 10100 = 20

A bitset is a data structure that compactly stores a sequence of bits and provides efficient methods for manipulating them. It’s primarily used to represent sets of boolean values much more efficiently than an array of booleans.
Generating Subsets
We can also use bitsets to generate all possible subsets of n items. If a list has n items, there are 2ⁿ possible combinations. You simply count from 0 to 2ⁿ-1 in binary to see all combinations of those items.
Each number in that count acts as a "mask" that tells you which items to include in that specific subset.
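A minimal Python sketch of this counting loop, which reproduces the listing below (same convention: leftmost bit = first item):

```python
items = ["A", "B", "C"]
n = len(items)

for mask in range(2**n):
    # Item i is included when its corresponding bit in the mask is on.
    subset = [items[i] for i in range(n) if mask & (1 << (n - 1 - i))]
    print(mask, format(mask, f"0{n}b"), subset)
```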
n=3 (Items: A, B, C)
0 000 -> { }
1 001 -> {C}
2 010 -> {B}
3 011 -> {B, C}
4 100 -> {A}
5 101 -> {A, C}
6 110 -> {A, B}
7 111 -> {A, B, C}

Interesting Uses
In theory, understanding how each of the bitwise operators work is very simple, but using them in combination with each other can become very complex and unintuitive.
Bitwise Arithmetic
You can actually perform full addition, subtraction, multiplication, and division using only shifts and bitwise operators, which is essentially how a CPU’s arithmetic circuits handle these operations.
Addition, for example, can be rewritten as: a+b = ((a&b)<<1)+(a^b)
This works because the result of a^b (XOR) is what you get when you add the binary forms of a and b except without any carry-over. Since XOR only returns 1 when bits are different, it perfectly calculates the sum for each column while ignoring the carry-over.
The result of a&b identifies exactly where a carry occurs when both bits are 1. We then shift it left to represent the carry propagating to the next column.
Finally, we need to add the result of those operations to each other without actually using the plus sign. To achieve this, you simply repeat the process until there are no carries left, using either a while loop or recursion.
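Putting those three steps together as a loop (a sketch assuming non-negative integers; Python’s unbounded negative ints would loop forever without extra masking):

```python
def add(a, b):
    # Repeat until no carries remain.
    while b != 0:
        carry = (a & b) << 1  # columns where both bits are 1, moved left
        a = a ^ b             # per-column sum, ignoring carries
        b = carry
    return a

print(add(77, 101))  # 178
```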
Gosper’s Hack
Gosper’s Hack is a bitwise formula used to iterate through all n-bit values that have k bits set to 1, from lowest to highest. Instead of checking every single number (which would be slow), it uses a combination of Two's Complement and XOR to skip directly to the next value in lexicographical order.
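For reference, here is the standard formulation of the step function in Python (I’ll leave the full explanation for that future post, so take the comments as rough intuition; the function name is my own):

```python
def next_with_same_popcount(x):
    c = x & -x                        # lowest set bit (a Two's Complement trick)
    r = x + c                         # ripple the lowest run of 1s upward
    return (((r ^ x) >> 2) // c) | r  # move the leftover 1s back to the bottom

x = 0b0111  # smallest 4-bit value with three bits set
for _ in range(3):
    print(format(x, "04b"))  # 0111, 1011, 1101, ...
    x = next_with_same_popcount(x)
```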
I really love this algorithm because it’s incredibly clever, and I’m planning a separate post to break it down in full. Until then, if you’re curious about this algorithm, you can check out these two really helpful resources:
Real-World Applications
Bitwise operators are commonly used in modern software to handle massive amounts of data efficiently. For example:
- Digital Colors (RGB): A single hex color like #FF5733 is actually a 24-bit integer where shifts and masks can be used to isolate specific Red, Green, or Blue values.
- Compression: File formats like .zip or .png use bit-level manipulation to swap long sequences of data for shorter bit-codes, shrinking files without losing information.
- Cryptography: Because XOR is its own inverse, it is the perfect tool for quickly scrambling and unscrambling private data.
- Feature Flags: Bitmasks are used to efficiently manage user permissions or enable/disable software features.
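As one concrete example of the first bullet, here’s how shifts and masks pull the channels out of that hex color:

```python
color = 0xFF5733  # a 24-bit RGB value: RRGGBB

red   = (color >> 16) & 0xFF  # 0xFF = 255
green = (color >> 8) & 0xFF   # 0x57 = 87
blue  = color & 0xFF          # 0x33 = 51

print(red, green, blue)  # 255 87 51
```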
Thanks for reading, and I hope you found this breakdown useful! Let me know in the comments section if you have any questions or if there are other algorithms you’d like to see a deep dive on next.