Wiktionary
n. 1. (computing) The number obtained by subtracting a given n-digit binary number from 2^n − 1 (which yields the same result as the logical complement). 2. (computing) The convention by which bit patterns with high bit 0 represent positive numbers from 0 to 2^(n−1) − 1 directly, while bit patterns with high bit 1 represent negative numbers from −0 to −(2^(n−1) − 1), n being the word size of the machine; the numeric complement of a number is its ones' complement.
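As a quick illustration of the first sense, the following Python sketch (with the word size fixed at n = 8 purely for the example) checks that subtracting an n-digit binary number from 2^n − 1 gives the same bit pattern as inverting each bit:

    # Minimal sketch, assuming n = 8: 2**n - 1 - x equals the logical complement of x.
    n = 8
    mask = (1 << n) - 1                 # 2**n - 1, i.e. 1111 1111 for n = 8

    for x in range(mask + 1):
        assert (mask - x) == (~x & mask)   # subtraction vs. bit-by-bit inversion

    print("2**n - 1 - x matches the logical complement for every 8-bit x")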
Wikipedia
Bits        Unsigned value    Ones' complement value
0111 1111   127               127
0111 1110   126               126
0000 0010   2                 2
0000 0001   1                 1
0000 0000   0                 0
1111 1111   255               −0
1111 1110   254               −1
1000 0010   130               −125
1000 0001   129               −126
1000 0000   128               −127
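The rows of this table can be reproduced with a short, illustrative decoder (Python, 8-bit width assumed; the function name is invented for this example): patterns with high bit 0 are read directly, while patterns with high bit 1 encode the negative of their inverted bits.

    def ones_complement_value(bits: str) -> int:
        u = int(bits.replace(" ", ""), 2)   # unsigned interpretation of the pattern
        if u < 0x80:                        # high bit 0: value is read directly
            return u
        return -(~u & 0xFF)                 # high bit 1: negate the inverted bits

    for pattern in ["0111 1111", "0000 0000", "1111 1111", "1111 1110", "1000 0000"]:
        print(pattern, "->", ones_complement_value(pattern))
    # 127, 0, 0, -1, -127; note that Python integers collapse the table's -0 to 0.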
The ones' complement of a binary number is the value obtained by inverting all the bits in its binary representation (swapping 0s for 1s and vice versa). Under binary addition, the ones' complement then behaves like the negative of the original number, to within a constant offset of −1. However, unlike two's complement, ones' complement representation has not seen widespread use, owing to issues such as this −1 offset, the fact that negating zero yields a distinct negative-zero bit pattern, and less straightforward handling of carries and borrows in arithmetic.
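A rough sketch of both behaviours (Python, 8-bit width assumed; the helper name add_ones_complement is an invention of this example): plain binary addition of a value and its bit complement gives all ones, a result that is −1 away from zero, while a ones' complement adder compensates by wrapping the carry out of the top bit back into the sum (the end-around carry).

    N = 8
    MASK = (1 << N) - 1                      # 1111 1111

    def add_ones_complement(a: int, b: int) -> int:
        """Add two raw N-bit ones' complement patterns with an end-around carry."""
        s = a + b
        if s > MASK:                         # a carry out of the top bit...
            s = (s & MASK) + 1               # ...is wrapped around and added back in
        return s & MASK

    x = 0b0000_0101                          # the pattern for +5
    neg_x = ~x & MASK                        # 1111 1010, the pattern for -5

    print(bin(x + neg_x))                    # plain addition: 0b11111111, i.e. -0, off from 0 by -1
    print(bin(add_ones_complement(0b0000_0111, neg_x)))   # 7 + (-5) = 2 -> 0b10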
A ones' complement system or ones' complement arithmetic is a system in which negative numbers are represented by the inverse of the binary representations of their corresponding positive numbers. In such a system, a number is negated (converted from positive to negative or vice versa) by computing its ones' complement. An N-bit ones' complement numeral system can only represent integers in the range −(2^(N−1) − 1) to 2^(N−1) − 1, while two's complement can express −2^(N−1) to 2^(N−1) − 1.
The ones' complement binary numeral system is characterized by the bit complement of any integer value being the arithmetic negative of the value. That is, inverting all of the bits of a number (the logical complement) produces the same result as subtracting the value from 0.
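A minimal self-check of that characterization (again Python with an 8-bit width; the value helper is hypothetical) verifies that complementing the bits of every pattern negates the value it encodes, and that the representable values run from −(2^(N−1) − 1) to 2^(N−1) − 1 with two encodings of zero:

    N = 8
    MASK = (1 << N) - 1

    def value(p: int) -> int:
        """Signed value of an N-bit ones' complement pattern (both zero patterns read as 0)."""
        return p if p < (1 << (N - 1)) else -(~p & MASK)

    for p in range(MASK + 1):
        assert value(~p & MASK) == -value(p)   # bit complement == arithmetic negative

    values = [value(p) for p in range(MASK + 1)]
    print(min(values), max(values))            # -127 127, i.e. -(2**(N-1) - 1) .. 2**(N-1) - 1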
Many early computers, including the CDC 6600, the LINC, the PDP-1, and the UNIVAC 1107, used ones' complement notation; the descendants of the UNIVAC 1107, the UNIVAC 1100/2200 series, continue to use it, but the majority of modern computers use two's complement.