Each 1 or 0 is called a bit, which is short for binary digit. Adding another bit to the string length doubles the number of possibilities. The formula for the total number of possibilities for a given bit string length is straightforward:

2^(bit string length) = total number of possibilities

These powers of two grow quickly. For example, it would require just sixty-three bits to give every single grain of sand on Earth its own unique bit string!

One problem with data encoding schemes is that they are arbitrary: they cannot be used to communicate with others unless the mappings are shared. It is therefore helpful to create standard data encodings that everybody agrees to use. The first standard binary encoding of the English character set was ASCII (American Standard Code for Information Interchange). It uses binary strings of length seven, which makes available 2⁷ = 128 representations. That may seem like far more than the twenty-six letters of the alphabet require, but every unique symbol needs its own string, including each uppercase and lowercase letter. Encoding the uppercase and lowercase letters along with the ten numerical digits already requires sixty-two combinations. Add the space, the comma, other common symbols, and control signals like backspace, and the need grows well beyond sixty-four combinations. The number of combinations always has to be rounded up to the next power of two, so even if only sixty-five distinct encodings were needed, seven bits would have to be used.

7-bit ASCII was created in the 1960s, and since that time the byte, a unit of eight bits, has emerged as the standard unit of data in computing. The original ASCII encoding is still used, but a leading zero bit is added so that each character fills a byte instead of just seven bits.

Figure 2.2 A shade of orangish brown created with parts 213 red, 111 green, and 56 blue.

In addition to text, binary strings can also be used to encode images. One standard encoding for colors is the RGB (red, green, blue) color model. It is a 24-bit system that uses one byte each for the red, green, and blue parts. By varying the proportions of each color, RGB can represent 2²⁴ different color combinations, more than sixteen million. Using this scheme, computer screens can “read” these binary strings and illuminate thousands of extremely tiny pixels (picture elements) to display detailed color images. To a computer, the orangish brown color in Figure 2.2 looks like this: 110101010110111100111000
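For readers who want to check the powers-of-two arithmetic from the start of this section themselves, the short Python sketch below prints the number of possible bit strings for several lengths. The grain-of-sand comparison relies on a commonly cited rough estimate of about 7.5 × 10¹⁸ grains, used here only for illustration.

```python
# Number of distinct bit strings for several string lengths.
for length in (1, 2, 7, 8, 24, 63):
    print(f"{length:2d} bits -> {2 ** length:,} possibilities")

# A commonly cited rough estimate of the number of grains of sand on
# Earth is about 7.5 * 10**18, so sixty-three bits are already enough
# to give every grain its own unique bit string.
grains_of_sand_estimate = 7_500_000_000_000_000_000  # illustrative estimate only
print(2 ** 63 >= grains_of_sand_estimate)  # prints: True
```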
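The ASCII scheme can be explored the same way. The sketch below is a minimal illustration, not part of any character-encoding library: it uses Python's built-in ord function, whose values for basic English characters match the original ASCII assignments, to show the seven-bit code for a few characters and the same code padded with a leading zero to fill a byte.

```python
# ASCII uses seven bits, so there are 2**7 = 128 possible codes.
print(2 ** 7)  # prints: 128

for character in ("A", "a", "0", " "):
    code = ord(character)            # numeric code, e.g. 65 for "A"
    seven_bit = format(code, "07b")  # original 7-bit ASCII string
    as_byte = format(code, "08b")    # same code with a leading 0 to fill a byte
    print(f"{character!r}: {seven_bit} -> {as_byte}")
```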
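Finally, the 24-bit string shown for the orangish brown in Figure 2.2 can be reconstructed by writing each of the three color values as a byte and joining them together. The helper function rgb_to_bits below is only an illustration of the encoding idea; it is not part of any standard graphics library.

```python
def rgb_to_bits(red: int, green: int, blue: int) -> str:
    """Join the three color bytes into a single 24-bit string."""
    return "".join(format(part, "08b") for part in (red, green, blue))

# 24 bits allow 2**24 color combinations, more than sixteen million.
print(2 ** 24)  # prints: 16777216

# The orangish brown from Figure 2.2: 213 parts red, 111 green, 56 blue.
print(rgb_to_bits(213, 111, 56))  # prints: 110101010110111100111000
```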