About the Chinese/English/kanji.
Even in Japan they use keyboards with our alphabet, and they write words by typing the pronunciation (romaji) of each character, which an IME then converts into kana and kanji. You can test how it works in Google Translate, which has a pseudo-IME built into its interface.
https://translate.google.com/?sl=ja&tl=en&text=うあwlcなすwhれいくlんrうぇbfhsdvfb&op=translate
Now try mashing the keyboard and notice how not all letters get turned into Japanese.
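For the curious, here is a toy sketch (in Python, with a tiny made-up syllable table, not a real IME) of the idea: known romaji syllables become kana, and anything unrecognized stays as plain letters, which is exactly why keyboard mashing leaves some letters untouched.

```python
# Toy sketch of what an IME does under the hood: map typed romaji syllables to kana.
# Real IMEs are far more complex (kanji conversion, dictionaries, context);
# this tiny table is only an illustration.
ROMAJI_TO_HIRAGANA = {
    "ka": "か", "ki": "き", "ku": "く", "ke": "け", "ko": "こ",
    "na": "な", "ni": "に", "nu": "ぬ", "ne": "ね", "no": "の",
    "a": "あ", "i": "い", "u": "う", "e": "え", "o": "お",
}

def to_kana(text: str) -> str:
    """Greedily convert romaji to hiragana, leaving unknown letters as-is."""
    out, i = [], 0
    while i < len(text):
        # Try the longest syllable first (2 letters), then a single letter.
        for length in (2, 1):
            chunk = text[i:i + length]
            if chunk in ROMAJI_TO_HIRAGANA:
                out.append(ROMAJI_TO_HIRAGANA[chunk])
                i += length
                break
        else:
            # No match: keep the raw letter, like mashing the keyboard
            # in the Google Translate box.
            out.append(text[i])
            i += 1
    return "".join(out)

# Syllables the table knows become kana, the rest stay as Latin letters.
print(to_kana("konnichiwa"))
```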
The kanji used in Japan are originally Chinese characters imported into the Japanese language. The reason Chinese was detected is that Japanese usually mixes katakana/hiragana in with the kanji (Chinese doesn't do that). But for the most part the meanings are the same, so even if you translate it as Chinese to English, the result should be fine.
Since that part had no katakana/hiragana, it thought it was Chinese.
Those are most likely related to "corrupted text". In computers, everything you write is stored as bits (the 010101...), and a byte is 8 bits, so a single byte can represent at most 256 different characters. But as technology improved and new languages and symbols were needed, alternatives were created (today we commonly use Unicode, usually encoded as UTF-8, for example).
UTF-8 is popular because of its compatibility with simple Western charsets: basic ASCII characters use only 7 bits (up to 01111111), and when something requires more, the 8th bit becomes a marker telling the decoder that the following byte(s) belong to the same character. This means some characters take 2, 3, or even 4 bytes (most Japanese characters take 3). It was a way of saving space while still supporting multiple languages.
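You can see this variable width directly in Python; the byte values in the comments are standard UTF-8, the snippet itself is just an illustration:

```python
# ASCII letters fit in 1 byte; accented Latin letters take 2; most Japanese
# characters take 3. The leading byte of a multi-byte sequence has its high
# bits set, which is how the decoder knows more bytes belong to the same character.
for ch in ("a", "é", "あ", "漢"):
    data = ch.encode("utf-8")
    bits = " ".join(f"{b:08b}" for b in data)
    print(f"{ch!r}: {len(data)} byte(s) -> {bits}")

# 'a': 1 byte(s) -> 01100001
# 'é': 2 byte(s) -> 11000011 10101001
# 'あ': 3 byte(s) -> 11100011 10000001 10000010
# '漢': 3 byte(s) -> 11100110 10111100 10100010
```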
Japanese also has its own legacy charsets (Shift-JIS, for example), and if I'm not wrong they use up to 16 bits / 2 bytes per character (they do need far more than 256 characters).
But what happens when you take a Japanese text, which is just a long string of 1s and 0s meant to be read with a specific decoder, and read it with the wrong one? You get a bunch of garbled text (often called mojibake). The opposite is also true.
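Here is a minimal sketch of that in Python, using Shift-JIS for the Japanese text and cp1252 (a Western charset) as the "wrong" decoder; the exact garbled output can vary, but the idea is the same:

```python
# Encode Japanese text with one charset and decode the same bytes with another:
# the result is mojibake, not the original text.
original = "こんにちは"                  # "hello" in hiragana
raw_bytes = original.encode("shift_jis")  # bytes as a legacy Japanese charset

garbled = raw_bytes.decode("cp1252")      # wrong decoder -> garbage
print(garbled)                            # something like '‚±‚ñ‚É‚¿‚Í'

# Reading the bytes with the decoder they were written for restores the text.
print(raw_bytes.decode("shift_jis"))      # こんにちは
```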
That should explain the author/mangaka's choice in using them.
In short, there is no translation for those.