Digital Signals

A digital signal represents information as a series of binary digits. A binary digit (or bit) can only take one of two values - one or zero. For that reason, the signals used to represent digital information are often waveforms that have only two (or sometimes three) discrete states. In the signal waveform shown below, the signal alternates between two discrete states (0 volts and 5 volts) which could be used to represent binary zero and binary one respectively. If it were actually possible for the signal voltage to instantly transition from zero to five volts (or vice versa), the signal could be said to be discontinuous. In reality, such an instantaneous transition is not physically possible, and a small amount of time is required for the voltage to increase from zero to five volts, and again for the signal to drop from five to zero volts. These finite time periods are referred to as the rise time and the fall time respectively.


A simple digital signal
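As a rough illustration of how rise time might be measured, the short Python sketch below estimates it from a set of sampled voltages using the common 10%-to-90% convention; the sample values and sampling interval are invented purely for illustration.

```python
# Estimate the rise time of a digitised signal edge using the common
# 10%-to-90% convention. The sample values and the sampling interval
# are invented purely for illustration.

samples = [0.0, 0.2, 0.9, 2.1, 3.4, 4.3, 4.8, 5.0, 5.0]  # volts
sample_interval = 1e-9                                    # 1 ns per sample

low, high = min(samples), max(samples)
threshold_10 = low + 0.1 * (high - low)   # 0.5 V for a 0-5 V swing
threshold_90 = low + 0.9 * (high - low)   # 4.5 V for a 0-5 V swing

# Index of the first sample at or above each threshold
t10 = next(i for i, v in enumerate(samples) if v >= threshold_10)
t90 = next(i for i, v in enumerate(samples) if v >= threshold_90)

rise_time = (t90 - t10) * sample_interval
print(f"Rise time ~ {rise_time * 1e9:.1f} ns")
```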


In the simple digital signal represented above, alternating binary ones and zeroes are represented by different voltage levels. A binary one would appear on the transmission line as a short voltage pulse, while a binary zero would be represented as an absence of voltage. This rather simplistic signalling scheme has a number of serious flaws, one of which is that a long series of consecutive ones (or a long series of consecutive zeroes) presents the receiver with the problem of determining exactly how many bits are actually being transmitted. For this to be possible, the duration of each bit-time must be known to both the transmitter and the receiver, and the receiver's internal clock must be synchronised exactly with that of the transmitter, so that the receiver can count the correct number of consecutive identical bits. In the example shown below, there are no more than two consecutive bits with the same value, which would not normally present the receiver with too much of a problem. Extended runs of bits with the same value, however, would prove far more of a challenge.


Data representation in a digital signal
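A minimal Python sketch of this kind of representation, using arbitrary voltage levels and an arbitrary bit pattern: each bit is simply held at one of two levels for a fixed number of samples, so a run of identical bits produces a flat stretch of signal that the receiver can only resolve by counting bit-times against an accurate clock.

```python
# Map a bit stream onto two discrete voltage levels (a simple unipolar
# scheme: 5 V for a one, 0 V for a zero). Each bit is represented by
# several consecutive samples at the same level, so runs of identical
# bits appear as long flat stretches in the signal.

HIGH, LOW = 5.0, 0.0         # example voltage levels
SAMPLES_PER_BIT = 4          # arbitrary oversampling factor

def encode(bits):
    signal = []
    for bit in bits:
        level = HIGH if bit else LOW
        signal.extend([level] * SAMPLES_PER_BIT)
    return signal

bits = [1, 0, 1, 1, 0, 0, 0, 0, 1]   # note the run of four zeroes
print(encode(bits))
```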


Our simple example in the first diagram uses a positive voltage to represent a one, and the absence of a voltage to represent a zero (for historical reasons, the terms mark and space are often used to refer to the binary digits one and zero respectively). This prompts the question of how the receiver knows whether the transmitter is transmitting a long stream of zeroes, or has simply ceased to transmit. There are, in fact, many different digital encoding schemes that overcome this problem, as well as the problem of long streams of bits having the same value; we will look at these schemes in more detail elsewhere. For now, it is enough to understand that digital signals convey binary data in the form of ones and zeroes, using different, discrete signal levels to represent the different logical values. If the signalling scheme employs a positive voltage to represent one logic state, and a negative voltage to represent the other, the signal is said to be bipolar.

The number of bits that can be transmitted by the signalling scheme in one second is known as its data rate, and is expressed in bits per second (bps), kilobits per second (kbps) or megabits per second (Mbps). The duration of a bit is the time the transmitter takes to output the bit, and is simply the reciprocal of the data rate. The modulation or signalling rate is the rate at which the signal level is changed, and depends on the digital encoding scheme used (it is also directly related to the data rate). A special case of digital signalling involves the generation of clock signals used to provide synchronisation and timing information for various signal-processing and computing devices. Clock ticks are triggered by either the rising or falling edge (or in some cases both the rising and falling edges) of an alternating digital signal.
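As a concrete illustration of the relationship between data rate and bit duration, the short Python sketch below (using arbitrary example rates) simply takes the reciprocal of each rate:

```python
# Bit duration is the reciprocal of the data rate.
# The rates below are arbitrary example values.

for data_rate_bps in (9_600, 1_000_000, 100_000_000):
    bit_duration = 1.0 / data_rate_bps   # seconds per bit
    print(f"{data_rate_bps:>11} bps -> {bit_duration * 1e6:.3f} microseconds per bit")
```

The signalling rate, by contrast, depends on how many times the signal level can change during each bit-time; in a bi-phase scheme such as Manchester encoding (described below), the level may change twice per bit-time, so the signalling rate is twice the data rate.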

The physical communications channel between two communicating end points will inevitably be subject to external noise (electromagnetic interference), so errors will occasionally occur. The degree to which the receiver will be able to correctly interpret incoming signals depends upon several factors, including its ability to synchronise with the transmitter, the signal-to-noise ratio (SNR), which is a measure of the ratio of the signal strength to the level of background noise, and the data rate. The data rate is significant in this respect because it is directly related to the baseband frequency used. Higher data rates tend to be more susceptible to very short but high-intensity bursts of external noise (impulse noise), because each bit occupies a shorter period of time, so a noise "spike" of a given duration is more likely to corrupt one or more bits in the data stream.
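Since the SNR is usually quoted in decibels, here is a minimal Python sketch of the calculation; the power figures are invented purely for illustration.

```python
import math

# Signal-to-noise ratio expressed in decibels.
# The power figures are made-up values used only to show the calculation.

signal_power = 1.0e-3     # watts (example value)
noise_power  = 1.0e-6     # watts (example value)

snr_db = 10 * math.log10(signal_power / noise_power)
print(f"SNR = {snr_db:.1f} dB")    # 30.0 dB for these example figures
```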

In order for the receiver to correctly interpret an incoming stream of bits, it must be able to determine where each bit starts and ends. To do this, it needs to be synchronised with the transmitter in some way. It will need to sample each bit as it arrives to determine whether the signal level is high (denoting a binary one) or low (denoting a binary zero). In the simple digital encoding schemes considered so far, each bit would be sampled in the middle of the bit-time, and the measured value compared with predetermined threshold values to determine whether it represents a logic high or a logic low (or neither).
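A minimal sketch of this mid-bit sampling and thresholding, assuming an illustrative 0-5 volt scheme; the threshold values and function name are hypothetical choices, not part of any standard.

```python
# Recover bits from a sampled waveform by reading the level at the
# centre of each bit-time and comparing it against two thresholds.
# The voltage levels and thresholds are illustrative assumptions.

SAMPLES_PER_BIT = 4
HIGH_THRESHOLD = 3.5   # at or above this: logic high
LOW_THRESHOLD  = 1.5   # at or below this: logic low

def decode(signal):
    bits = []
    for start in range(0, len(signal), SAMPLES_PER_BIT):
        centre = signal[start + SAMPLES_PER_BIT // 2]   # mid-bit sample
        if centre >= HIGH_THRESHOLD:
            bits.append(1)
        elif centre <= LOW_THRESHOLD:
            bits.append(0)
        else:
            bits.append(None)    # indeterminate level, treated as an error
    return bits

signal = [0.1, 0.0, 0.2, 0.1,  4.9, 5.0, 4.8, 5.1,  4.9, 5.0, 5.1, 4.8]
print(decode(signal))            # [0, 1, 1]
```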

Timing information becomes more critical as data rates increase and the bit duration becomes shorter, especially for data transfers involving large blocks of data consisting of thousands of bits of information. At relatively low data rates, and for asynchronous data transmission involving only a few bits or bytes of data at any one time, the receiver's internal clock signal will normally suffice to maintain synchronisation with the transmitter long enough to sample the incoming bits in each block of data received at (or close to) the centre of each bit-time (synchronous and asynchronous transmission are dealt with in more detail elsewhere). For larger blocks of data, however, the receiver's internal clock cannot be relied upon to remain synchronised with the transmitter. A more reliable timing mechanism is required to maintain synchronisation between receiver and transmitter.

One option would be for the transmitter to transmit a separate timing signal which the receiver could use to synchronise its sampling operations on the incoming data stream. This would significantly increase the overall bandwidth required for data transmission, and make the digital transmission system far more difficult to design and implement. Fortunately this is not necessary, because the required timing signal can be embedded in the data itself. This is achieved by encoding the data in such a way that there is a guaranteed transition in signal level (from high to low or from low to high) at some point during each bit-time. One such encoding scheme, called Manchester encoding, is illustrated below. This scheme guarantees a transition in the middle of each bit-time that serves as both a clocking mechanism and as a method of encoding the data. A low-to-high transition represents a binary one, while a high-to-low transition represents a binary zero. This type of encoding is known as bi-phase digital encoding. Such schemes are said to be self-clocking, and have no net DC component (there are positive and negative voltage components of equal duration during each bit-time).


Manchester encoding is a bi-phase digital encoding scheme
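The Python sketch below illustrates the scheme described above; the function names are purely illustrative. Each bit-time is represented here by two half-bit signal levels, with the guaranteed mid-bit transition carrying both the clock and the data.

```python
# Manchester encoding: each bit-time is split into two halves, and the
# mid-bit transition carries both the data and the clock. Following the
# convention used in the text above: a low-to-high transition encodes a
# one, and a high-to-low transition encodes a zero.

def manchester_encode(bits):
    halves = []
    for bit in bits:
        halves.extend([0, 1] if bit else [1, 0])   # (first half, second half)
    return halves

def manchester_decode(halves):
    bits = []
    for i in range(0, len(halves), 2):
        pair = (halves[i], halves[i + 1])
        bits.append(1 if pair == (0, 1) else 0)    # anything else treated as zero for brevity
    return bits

data = [1, 0, 1, 1, 0]
encoded = manchester_encode(data)
print(encoded)                     # [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
print(manchester_decode(encoded))  # [1, 0, 1, 1, 0]
```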


One of the main advantages of digital communications is that virtually any kind of information can be represented digitally, which means that many different kinds of data may be transmitted over the same physical transmission medium. In fact, a number of different digital data streams may share the same physical transmission medium at the same time, thanks to advanced multiplexing techniques (multiplexing will be discussed in detail elsewhere). The number of bits required to represent each item of data transmitted will depend on the type of information being sent. Alphanumeric characters in the ASCII character set, for example, require seven bits per character (although each character is usually stored in an eight-bit byte). Other character encoding schemes can represent a far greater number of characters, but require more bits to represent each character. Analogue information (for example audio or video data) can be represented digitally by sampling the analogue waveform many hundreds, or even thousands, of times per second, and then encoding each sample using a finite range of discrete values (a process known as quantising). The values derived from the quantisation process are then represented as binary numbers, and as such can be transmitted over a digital communications medium as a bit stream. Together, the sampling, quantisation, and conversion to binary format constitute an analogue-to-digital conversion (ADC).


The sampling process repeatedly measures the instantaneous voltage of the analogue waveform



The quantisation process assigns a discrete numeric value to each sample



The quantised values are encoded as binary numbers
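The three steps illustrated above (sampling, quantisation, and binary encoding) can be sketched in a few lines of Python; the waveform, sample rate and bit depth below are simply example values, and a real converter would do the same job in dedicated hardware.

```python
import math

# Sample a simple analogue waveform (a 1 kHz sine wave, used purely as
# an example), quantise each sample to an 8-bit value, and show the
# resulting binary codes.

SAMPLE_RATE = 8000        # samples per second (example value)
BITS_PER_SAMPLE = 8       # 256 discrete quantisation levels
LEVELS = 2 ** BITS_PER_SAMPLE

def sample_and_quantise(frequency_hz, num_samples):
    codes = []
    for n in range(num_samples):
        t = n / SAMPLE_RATE
        amplitude = math.sin(2 * math.pi * frequency_hz * t)   # -1.0 .. +1.0
        # Map the -1..+1 range onto 0..255 and round to the nearest level.
        code = round((amplitude + 1) / 2 * (LEVELS - 1))
        codes.append(code)
    return codes

for code in sample_and_quantise(1000, 8):
    print(f"{code:3d} -> {code:08b}")
```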


The number of bits used to represent each sample will depend on the number of discrete values required to reproduce the original analogue waveform at the receiver to an acceptable standard. The more samples taken per unit time, and the more bits used to encode each sample, the more closely the reconstructed analogue waveform will reflect the original (in other words, the higher the resolution will be). The cost of higher resolution is that more bits must be transmitted per second, increasing the bandwidth required for transmission. Analogue human voice signals are encoded for transmission over digital circuits in the public switched telephone network (PSTN) using eight bits per sample, giving a range of 256 possible values for each sample. The signals are sampled eight thousand times per second, giving a total requirement of 8 x 8,000 bits per second, or 64 kbps. This is adequate for voice transmission over the telephone network, which has traditionally been restricted to a bandwidth of less than 4 kHz (the significance of this restriction will be discussed elsewhere).
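A quick check of that arithmetic:

```python
# Data rate for PCM-encoded voice on the PSTN: 8 bits per sample
# at 8,000 samples per second.
bits_per_sample = 8
samples_per_second = 8_000
data_rate = bits_per_sample * samples_per_second
print(f"{data_rate} bps = {data_rate // 1000} kbps")   # 64000 bps = 64 kbps
```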

For high-quality real-time video transmission, the data rate (and hence the required transmission bandwidth) will be far higher. Various data compression techniques can be used to make the best use of the available bandwidth, but a significant amount of bandwidth will still be needed to guarantee high-quality real-time video transmission, and the complexity of the signal processing required will be greater.

The ability to interleave video, audio, and other forms of data on the same digital transmission links has already been mentioned. Another important advantage of digital signalling is the fact that, because it employs discrete signalling levels, a receiver need only determine whether the sampled voltage represents a logic high (1) or a logic low (0). Small variations in level can simply be ignored as having no significance, unlike the situation with continuously varying analogue signals, where even a small variation in amplitude may convey information (or represent a fluctuation due to noise). Digital signals suffer from attenuation, of course, in the same way that analogue signals do. Unlike analogue signals, however, as long as a receiver can still distinguish between logic high and logic low, the incoming signal can be regenerated and retransmitted with no loss of data whatsoever. The regenerated signal that leaves a digital repeater is identical to the digital signal originally transmitted by the source.
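A minimal sketch of why regeneration works, with arbitrary attenuation and noise figures: as long as the noise does not push a sample across the decision threshold, thresholding the received levels recovers the transmitted levels exactly.

```python
import random

# A minimal sketch of digital regeneration: a repeater samples the
# incoming (attenuated, noisy) signal, decides whether each bit is a
# one or a zero, and retransmits clean, full-amplitude levels. The
# attenuation factor and noise range are arbitrary illustrations.

random.seed(1)   # for a reproducible example

def regenerate(received_levels, threshold=2.5, high=5.0, low=0.0):
    return [high if v >= threshold else low for v in received_levels]

original = [5.0, 0.0, 5.0, 5.0, 0.0]                                  # transmitted levels
received = [0.7 * v + random.uniform(-0.5, 0.5) for v in original]    # attenuated + noise

print([round(v, 2) for v in received])   # distorted levels arriving at the repeater
print(regenerate(received))              # [5.0, 0.0, 5.0, 5.0, 0.0] - identical to the original
```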