Converting decimal to binary is a common task in computer programming. Decimal is a base-10 number system that uses ten digits, 0 through 9. Binary, on the other hand, is a base-2 number system that uses only two digits, 0 and 1. Converting decimal to binary involves repeatedly dividing the decimal number by 2 and recording the remainder at each step.
Conversion
Here’s a step-by-step guide on how to convert decimal to binary:
1. Choose the decimal number you want to convert to binary.
2. Divide the decimal number by 2.
3. Record the remainder (0 or 1).
4. Divide the quotient from step 2 by 2.
5. Record the remainder.
6. Repeat steps 4 and 5 until the quotient is 0.
7. Write the remainders in reverse order to get the binary representation of the decimal number.
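The steps above translate directly into a short loop. Here is a minimal Python sketch of the repeated-division method (the function name `decimal_to_binary` is chosen for illustration, not a standard library function):

```python
def decimal_to_binary(n):
    """Convert a non-negative integer to a binary string by repeated division by 2."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # record the remainder (0 or 1)
        n //= 2                        # divide the quotient by 2
    # the remainders come out least-significant first, so reverse them
    return "".join(reversed(remainders))

print(decimal_to_binary(13))  # → 1101
```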
Example
Let’s walk through an example to illustrate the process. Suppose we want to convert the decimal number 13 to binary.

The decimal number we want to convert is 13.

We divide 13 by 2 to get a quotient of 6 and a remainder of 1.

We record the remainder of 1.

We divide 6 by 2 to get a quotient of 3 and a remainder of 0.

We record the remainder of 0.

We divide 3 by 2 to get a quotient of 1 and a remainder of 1.

We record the remainder of 1.

We divide 1 by 2 to get a quotient of 0 and a remainder of 1.

We record the remainder of 1.

We write the remainders in reverse order to get the binary representation of 13: 1101.
| Quotient | Remainder | Binary |
| --- | --- | --- |
| $13/2=6$ | 1 | 1 |
| $6/2=3$ | 0 | 01 |
| $3/2=1$ | 1 | 101 |
| $1/2=0$ | 1 | 1101 |
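The worked example can be double-checked with Python's built-in conversion functions:

```python
# Python's built-in bin() and format() confirm the result for 13
print(bin(13))           # → 0b1101 (with the "0b" binary prefix)
print(format(13, "b"))   # → 1101  (digits only)
```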
Binary-Decimal Table
Here’s a table that shows the 4-bit binary representations of the decimal numbers 0 through 15:
| Decimal | Binary |
| --- | --- |
| 0 | 0000 |
| 1 | 0001 |
| 2 | 0010 |
| 3 | 0011 |
| 4 | 0100 |
| 5 | 0101 |
| 6 | 0110 |
| 7 | 0111 |
| 8 | 1000 |
| 9 | 1001 |
| 10 | 1010 |
| 11 | 1011 |
| 12 | 1100 |
| 13 | 1101 |
| 14 | 1110 |
| 15 | 1111 |
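A table like this can be generated programmatically. The sketch below uses Python's `format` mini-language, where `04b` means "binary, zero-padded to 4 digits":

```python
# Print decimal values 0-15 alongside their 4-bit binary representations
for n in range(16):
    print(f"{n:>2}  {n:04b}")
```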