All video and audio signals start and end in the analogue world... because our eyes and ears are analogue. Digital signals are used for processing, transport and storage.
Analogue signals are continuous, with no defined limits in level or time... so they could be considered infinite. This makes them very hard to handle accurately and effectively, which is why we use digital technology.
When a signal is turned into digits it is sampled and quantised. The sample rate sets the frequency bandwidth, and the quantising (the number of bits) sets the dynamic range, or contrast in video.
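As a toy illustration of those two limits (plain Python, not any real converter), using CD audio's well-known 44.1kHz / 16-bit figures:

```python
def nyquist_bandwidth(sample_rate_hz):
    # A sampled system can only represent frequencies up to half its sample rate.
    return sample_rate_hz / 2

def quantisation_levels(bits):
    # Each bit doubles the number of discrete levels available.
    return 2 ** bits

# CD audio: 44.1 kHz sampling, 16-bit quantising.
print(nyquist_bandwidth(44_100))   # 22050.0 Hz of audio bandwidth
print(quantisation_levels(16))     # 65536 levels
```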
Each extra binary bit doubles the number of permutations and therefore increases the dynamic range by about 6dB. So the number of bits multiplied by six gives a good indication of the system's dynamic range.
Digital signals are protected against damage by error correction and concealment. Correction uses mathematical rules to replace missing data exactly (a bit like a game of Sudoku), whereas concealment hides errors by making a best guess. When both fail, the output will mute... ever had a CD which skips part of a track?
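Real systems use heavyweight codes such as Reed–Solomon for the correction stage, but the "best guess" idea behind concealment can be sketched in a few lines. This is a toy example only: a missing sample (marked `None`) is patched with the average of its neighbours.

```python
def conceal(samples):
    # Concealment: replace a missing sample (None) with the average of its
    # neighbours -- a plausible best guess, not the original data.
    out = list(samples)
    for i, s in enumerate(out):
        if s is None:
            prev = out[i - 1] if i > 0 else 0
            nxt = out[i + 1] if i + 1 < len(out) and out[i + 1] is not None else prev
            out[i] = (prev + nxt) / 2
    return out

print(conceal([10, 12, None, 16, 18]))  # [10, 12, 14.0, 16, 18]
```

The listener never knows a sample was lost... which is exactly the point of concealment.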
4:2:2 is the sampling structure used for most broadcast digital video. It describes the relative sample rates of the component video signals and therefore their quality: luma is sampled at full resolution, while the two chroma signals are sampled at half the horizontal resolution. There are several other sampling structures in use.
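You can see where the familiar 270Mbit/s SD-SDI rate comes from by adding up the 4:2:2 sample rates from ITU-R BT.601 (13.5MHz luma, 6.75MHz per chroma channel, 10 bits per sample):

```python
def data_rate_mbps(luma_mhz, chroma_mhz, bits):
    # 4:2:2: luma at the full sample rate, plus two chroma signals
    # each sampled at half that rate.
    total_msamples = luma_mhz + 2 * chroma_mhz
    return total_msamples * bits

# Standard-definition 4:2:2 per ITU-R BT.601, 10-bit samples.
print(data_rate_mbps(13.5, 6.75, 10))  # 270.0 Mbit/s -- the SD-SDI rate
```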
When moving digital data we talk about kilobits (Kb), megabits (Mb) and gigabits (Gb) per second... but remember bits are different from the bytes (KB, MB, GB) you use to measure storage. A byte is eight times bigger... so a file transfer of 100Mb per second is only writing 12.5MB per second to disk.
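The conversion is just a divide-by-eight, but it trips people up constantly, so here it is spelled out:

```python
def megabits_to_megabytes(mbps):
    # A byte is eight bits, so a transfer rate in Mb/s is an
    # eighth of that figure in MB/s on the disk.
    return mbps / 8

print(megabits_to_megabytes(100))   # 12.5 MB/s from a 100 Mb/s link
print(megabits_to_megabytes(1000))  # 125.0 MB/s from a gigabit link
```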