Bit

We explain what a bit is, its different uses, and how values are calculated with this unit of information.

A bit is the smallest unit of information used in computing.

What is a bit?

In computing, a bit (a contraction of the English term binary digit) is a value in the binary numbering system. This system is so named because it comprises only two base values, 1 and 0, with which any number of binary conditions can be represented: on and off, true and false, present and absent, and so on.

A bit is, then, the minimum unit of information used in computing, whose systems all rest on this binary code. Each bit represents one of two values, 1 or 0, but by combining several bits many more combinations can be obtained. For example:

2-bit model (4 combinations):

00 - Both off

01 - First off, second on

10 - First on, second off

11 - Both on

With these two bits we can represent four distinct values. Now suppose we have 8 bits (one octet), equivalent in most systems to a byte: we obtain 256 different values (2⁸).
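The counting above can be checked directly. The sketch below, a minimal illustration using Python's standard library, enumerates every combination of n bits and confirms that 2 bits give 4 values and 8 bits (one octet) give 256:

```python
from itertools import product

def bit_combinations(n):
    """Return every string of n binary digits; there are 2**n of them."""
    return ["".join(bits) for bits in product("01", repeat=n)]

print(bit_combinations(2))       # the four 2-bit combinations: 00, 01, 10, 11
print(len(bit_combinations(8)))  # 256 values in one octet (byte)
```

The function name `bit_combinations` is illustrative, not part of any standard API; the point is simply that the number of combinations doubles with each added bit.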

In this way, the binary system works by attending to the value of each bit (1 or 0) and its position in the string: each position carries twice the weight of the position to its right, and half the weight of the position to its left. For example:

To represent the number 20 in binary:

Binary value:               1   0   1   0   0

Numeric value per position: 16  8   4   2   1

Result: 16 + 0 + 4 + 0 + 0 = 20
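The positional sum above can be sketched in a few lines of Python. This is an illustrative implementation, not a standard routine; Python's built-in `int(s, 2)` does the same conversion and is used as a cross-check:

```python
def binary_to_int(bits):
    """Sum the positional weights of the bits that are on (1)."""
    total = 0
    # Walk from the rightmost digit: weights are 1, 2, 4, 8, 16, ...
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position  # each step to the left doubles the weight
    return total

print(binary_to_int("10100"))  # 16 + 4 = 20
print(int("10100", 2))         # Python's built-in conversion agrees: 20
```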

Another example: to represent the number 2.75 in binary, assuming the binary point falls in the middle of the string (010.11):

Binary value:               0   1   0 . 1    1

Numeric value per position: 4   2   1   0.5  0.25

Result: 0 + 2 + 0 + 0.5 + 0.25 = 2.75
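The fractional case follows the same rule: digits to the right of the binary point carry weights 0.5, 0.25, 0.125, and so on. A minimal sketch, with an illustrative (non-standard) function name:

```python
def binary_to_float(bits):
    """Convert a binary string with an optional point, e.g. '010.11', to a number."""
    integer_part, _, fraction_part = bits.partition(".")
    value = int(integer_part, 2) if integer_part else 0
    # Right of the point: weights are 1/2, 1/4, 1/8, ...
    for position, bit in enumerate(fraction_part, start=1):
        if bit == "1":
            value += 2 ** -position  # each step to the right halves the weight
    return value

print(binary_to_float("010.11"))  # 2 + 0.5 + 0.25 = 2.75
```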

Bits set to 0 (off) contribute nothing; only those set to 1 (on) are counted, each according to its numerical weight at its position in the string. This representation mechanism was later applied to alphanumeric characters as well, most notably in the ASCII code.
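The ASCII mapping can be seen directly in Python: each character corresponds to a 7-bit code, commonly stored in an 8-bit byte. A quick illustration:

```python
# Print each character of a word alongside its 8-bit ASCII pattern.
for char in "Bit":
    print(char, format(ord(char), "08b"))
# B 01000010
# i 01101001
# t 01110100
```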

Bits also describe the architecture of computer microprocessors: there are 4-, 8-, 16-, 32- and 64-bit architectures. This means the microprocessor handles internal registers of that width, which determines the calculation capacity of its arithmetic logic unit (ALU).

For example, the first computers of the x86 series (the Intel 8086 and Intel 8088) had 16-bit processors, and the noticeable difference between their speeds had less to do with processing capacity than with their external data buses, which were 16 bits and 8 bits wide, respectively.
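Register width determines the range of values a processor can handle in one operation: an n-bit register holds unsigned values from 0 to 2ⁿ − 1. A short sketch of the common widths:

```python
# Maximum unsigned value an n-bit register can hold is 2**n - 1.
for width in (4, 8, 16, 32, 64):
    print(f"{width}-bit register: 0 .. {2**width - 1}")
```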

Similarly, bits are used to measure the storage capacity of digital memory.
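Storage sizes are usually quoted in bytes rather than bits, at 8 bits per byte. A minimal conversion sketch (the helper name is illustrative, and it assumes the binary prefix where 1 KiB = 1024 bytes):

```python
BITS_PER_BYTE = 8

def kib_to_bits(kib):
    """Convert kibibytes (1 KiB = 1024 bytes) to bits."""
    return kib * 1024 * BITS_PER_BYTE

print(kib_to_bits(1))  # 8192 bits in one kibibyte
```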
