About character encoding:
Computers store information as a series of bits, that is, zeros and ones (1001001110100...). (Tangent: they're usually called zero and one, but you can also think of them as off/on, false/true, etc., anything with two states, really.)
"how to represent strings of characters (that is, plain text) with a series of zeros and ones" is a question with no single correct answer. A character encoding is something that gives one answer.
Using one encoding to encode (convert to zeros and ones) and a different encoding to decode (convert from zeros and ones) produces gibberish:
Let's say that encoding A says "000 = a, 001 = b, 010 = c", but encoding B says "000 = x, 001 = y, 010 = z" (real encodings use more than 3 bits per character; this is just an example). If you encode "abc" with A you get 000001010, but if you decode that with B you get "xyz".
The process is reversible: if you encode "xyz" using B, you get the same zeros and ones as above, which you can then decode with A to get "abc".
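If you want to play with this, here's a minimal Python sketch of the toy example above (the names ENCODING_A, ENCODING_B, encode and decode are made up for the illustration, not a real library):

    # Toy encodings from the example above: 3 bits per character.
    ENCODING_A = {"a": "000", "b": "001", "c": "010"}
    ENCODING_B = {"x": "000", "y": "001", "z": "010"}

    def encode(text, encoding):
        # Look up each character's bit pattern and join them together.
        return "".join(encoding[ch] for ch in text)

    def decode(bits, encoding):
        # Reverse the table (bits -> character) and read 3 bits at a time.
        reverse = {v: k for k, v in encoding.items()}
        return "".join(reverse[bits[i:i+3]] for i in range(0, len(bits), 3))

    bits = encode("abc", ENCODING_A)
    print(bits)                      # 000001010
    print(decode(bits, ENCODING_B))  # xyz  (gibberish: decoded with the wrong encoding)

    # And the reverse trip:
    bits = encode("xyz", ENCODING_B)
    print(bits)                      # 000001010 (same bits as before)
    print(decode(bits, ENCODING_A))  # abc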
This is basically how the gibberish in the previous chapter was made and then deciphered.