Encoding systems: ASCII, PETSCII, Unicode

Encoding systems

The process of converting information from one form of representation into another is called encoding. You can think of it as similar to Morse code, where longer and shorter beeps represent letters.

(Exercise: try to use a Morse code table to decode this: .... . .-.. .-.. --- .-- --- .-. -..)
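Once you have tried it by hand, a few lines of Python can check the answer. Here is a sketch that uses a small lookup table with just the letters needed (a full Morse table would work the same way):

MORSE = {".": "E", "-..": "D", "....": "H", ".-..": "L",
         "---": "O", ".-.": "R", ".--": "W"}   # a partial Morse code table

message = ".... . .-.. .-.. --- .-- --- .-. -.."
# split the message on spaces and look every code up in the table
print("".join(MORSE[code] for code in message.split()))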

Or think of dialling a phone number, where each key press generates a specific tone that tells the computers at the telephone exchange whom to connect you to.

Using the technique of encoding and decoding, binary numbers can be made to represent everything you see on the computer. There are standards, conventions that describe how something should be encoded and decoded, so that the same information can be interpreted in the same way in different places.

N.B. encoding is not the same as encrypting! In encoding, the mapping between the code and the desired result is known and openly communicated. In encryption, that mapping is purposely kept secret.

Text encodings

As a computer can only work with numbers, it cannot process letters or text directly. In order to work with text, textual characters need to be translated into numbers and vice versa. This is done via text encodings.

Your first reaction might be that this shouldn't be so difficult: we could simply represent the letters in binary code, an a encoded as 0, b as 1, c as 10 and so on. In fact this is more or less how text encodings work. However, in the early days of computing many different encodings emerged.

ASCII encoding

ASCII table

The encoding that became dominant at the time was ASCII (American Standard Code for Information Interchange), created on behalf of the U.S. government in 1963 to allow for information interchange between its different computing systems.

The encoding uses a 7-bit system, which means it can only distinguish 128 (2^7 = 128) numbers (0000 0000 up to 0111 1111). The resulting encoding scheme assigned a character, a letter, digit, punctuation mark or control code, to each of these 128 numbers.
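If you have Python 3 at hand you can peek at this mapping yourself; for the first 128 numbers Python uses the same values as ASCII:

# character, decimal number, and the 7-bit binary pattern behind it
for ch in "Abz!":
    print(ch, ord(ch), format(ord(ch), "07b"))

print(chr(65))   # and back again: the number 65 decodes to the letter A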

Thanks to the simplicity of the encoding it quickly became a standard for the American computing industry.

ASCII imperialism

Thanks to the power of the US military and US corporations, the American computing industry became the global computing industry. The computers we use today are rooted in American networking history, and so is the ASCII standard. However, ASCII can only represent the 26 Latin letters of the English alphabet, while computers are used all over the world by people speaking different languages. They would often end up with American computers that could not represent their language in ASCII. Think for example of scripts like Greek, Cyrillic and Arabic, or even Latin-based alphabets that use letters such as ü or ø. Although 128 might sound like a lot of characters, it is not enough to represent all the different languages.

Exercise - the limits of ASCII

Not all characters fit within 128 or even 256 available numbers.

Try this for example:
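Ask Python 3 to squeeze an accented character into ASCII and watch it refuse (a minimal sketch):

print("cafe".encode("ascii"))   # works fine: b'cafe'
print("café".encode("ascii"))   # UnicodeEncodeError: 'ascii' codec can't encode character '\xe9'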

For a time people used the 8th bit, the numbers 1000 0000 to 1111 1111 (or 128 to 255), to encode the characters specific to their own language. That way the encoding overlapped with ASCII, but their own language could also be represented.
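This is also why the same file could look completely different on two computers. A sketch (Python 3) showing how one and the same byte is read by three of these 8-bit code pages:

# the byte 0xE9 means something different in each legacy code page
byte = bytes([0xE9])
print(byte.decode("latin-1"))   # é  (Western European)
print(byte.decode("cp1253"))    # ι  (Greek)
print(byte.decode("cp1251"))    # й  (Cyrillic)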

A list of all 128 ASCII characters and their corresponding numbers can be seen in this table:

Exercise - decode binary into ASCII

Use an ASCII code table and decode the following binary code:

0100 1000
0110 0101
0110 1100
0110 1100
0110 1111
0010 0000
0101 0111
0110 1111
0111 0010
0110 1100
0110 0100
0010 0001
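Once you have decoded it by hand, this short Python 3 sketch can check your answer. It converts each binary string to a number and then to its ASCII character:

codes = """01001000 01100101 01101100 01101100 01101111 00100000
           01010111 01101111 01110010 01101100 01100100 00100001"""

# int(b, 2) reads a string as a base-2 number, chr() turns that number into a character
print("".join(chr(int(b, 2)) for b in codes.split()))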

ASCII flavours: PETSCII

Commodore 64 (1982)

The Commodore 64, also known as the C64 or the CBM 64, is an 8-bit home computer introduced in January 1982 by Commodore International (first shown at the Consumer Electronics Show, January 7–10, 1982, in Las Vegas). It has been listed in the Guinness World Records as the highest-selling single computer model of all time, with independent estimates placing the number sold between 12.5 and 17 million units.

Preceded by the Commodore VIC-20 and Commodore PET, the C64 took its name from its 64 kilobytes (65,536 bytes) of RAM. With support for multicolor sprites and a custom chip for waveform generation, the C64 could create superior visuals and audio compared to systems without such custom hardware.

https://en.wikipedia.org/wiki/Commodore_64

Part of the Commodore 64's success was its sale in regular retail stores instead of only electronics or computer hobbyist specialty stores. Commodore produced many of its parts in-house to control costs, including custom integrated circuit chips from MOS Technology. In the United States, it has been compared to the Ford Model T automobile for its role in bringing a new technology to middle-class households via creative and affordable mass-production.

Kahney, Leander (September 9, 2003). "Grandiose Price for a Modest PC". CondéNet, Inc. Archived from the original on September 14, 2008. Retrieved September 13, 2008.

PETSCII

The Commodore PET's lack of a programmable bitmap-mode for computer graphics, as well as it having no redefinable character set capability, may be one of the reasons PETSCII was developed; by creatively using the well-thought-out block graphics, a higher degree of sophistication in screen graphics is attainable than by using plain ASCII's letter/digit/punctuation characters. In addition to the relatively diverse set of geometrical shapes that can thus be produced, PETSCII allows for several grayscale levels by its provision of differently hatched checkerboard squares/half-squares. Finally, the reverse-video mode (see below) is used to complete the range of graphics characters, in that it provides mirrored half-square blocks.

https://en.wikipedia.org/wiki/PETSCII

Draw PETSCII art in the browser:

Use PETSCII as a font!

PETSCII bots!

Unicode universalism

'As electronic text was increasingly being exchanged online and between language areas, issues emerged when text encoded in one language was shared and read on systems assuming an encoding in another language. Unicode was a response to the incompatible text encoding standards that were proliferating.

When different encodings assign the same binary numbers to different characters, this results in illegible documents. The solution, partly made possible by increased computing capacity, was to strive for a single universal encoding which would encompass all writing systems'

(Roel Roscam Abbing, Peggy Pierrot, Femke Snelting (2016), Modifying the Universal)

You can experience this by following the "break Yahoo" exercise below.

So, in order to overcome the limitations of ASCII, people founded the Unicode Consortium to create a single universal character encoding:

'The Unicode standards are designed to normalise the encoding of characters, to efficiently manage the way they are stored, referred to and displayed in order to facilitate cross-platform, multilingual and international text exchange. The Unicode Standard is mammoth in size and covers well over 110,000 characters, of which [..] 1,000 are [..] emoji.'

(Roel Roscam Abbing, Peggy Pierrot, Femke Snelting (2016), Modifying the Universal)

In effect the Unicode Standard combined all the different national character encodings into a single large ledger, in order to try to represent all languages. It is divided into so-called blocks, which are basically number tables that describe which number is connected to which character. The table starts counting at 0x0 and continues all the way up to 0x10FFFF.

The first 128 codepoints actually correspond to ASCII; they sit at the start of the first block: https://en.wikibooks.org/wiki/Unicode/Character_reference/0000-0FFF

The table contains many different scripts, supporting both larger and smaller language groups, including for example Ethiopic and Cherokee: https://en.wikibooks.org/wiki/Unicode/Character_reference/1000-1FFF

However there are also blocks that describe Arrows and other symbols: https://en.wikibooks.org/wiki/Unicode/Character_reference/2000-2FFF

Emoji are also part of the Unicode table.
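With Python 3 you can pull a character out of several of these blocks by its codepoint (whether you actually see the character depends on the fonts installed on your system):

# one codepoint from a few different Unicode blocks:
# Latin, Greek, Arabic, Ethiopic, Cherokee, an arrow and an emoji
for codepoint in (0x41, 0x39E, 0x62D, 0x12A0, 0x13A0, 0x2192, 0x1F600):
    print(hex(codepoint), chr(codepoint))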

From Unicode codepoint, to character, to glyph

Image representing a Unicode codepoint, a character and various glyphs

In the Unicode standard every text character has a representation as a number, or codepoint. The standard defines which codepoint is connected to which character, but not what the glyph should look like. That is left to individual font designers.
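In Python 3, ord() gives you the codepoint behind a character and the unicodedata module gives you its official name (what the glyph actually looks like is still up to your font):

import unicodedata

for ch in "Aä☺":
    # codepoint in the usual U+ notation, the character itself, and its Unicode name
    print(f"U+{ord(ch):04X}", ch, unicodedata.name(ch))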

Next to the letters there are many control characters, which are used for example to switch the direction of text between left-to-right and right-to-left, or to join two separate characters together into one (as happens, for example, in Arabic and Indic scripts).
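A small Python 3 sketch of this invisible machinery at work, using a combining accent and a direction-control character (how the last line renders depends on your terminal):

# "e" followed by U+0301 COMBINING ACUTE ACCENT looks like one character but is two codepoints
combined = "e\u0301"
precomposed = "\u00e9"                   # the single-codepoint é
print(combined, len(combined))           # looks like é, length 2
print(precomposed, len(precomposed))     # looks like é, length 1

# U+202E RIGHT-TO-LEFT OVERRIDE is invisible but flips the display direction
print("abc" + "\u202e" + "def")          # bidi-aware renderers may show abcfed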

One of the curious things about Unicode is that it contains many homographs (characters that look identical but have different codepoints).

For example: Greek Ο, Latin O, and Cyrillic О are identical to the eye, but different to the computer.

You can tell when you take a closer look:

Ο = 0x39f for Greek
O = 0x4f for Latin
О = 0x41e for Cyrillic

Try to copy the Greek or Cyrillic О and search for it in the document by pressing ctrl + f and entering it in the search bar. Then do the same with the Latin O.
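You can also ask Python 3 to reveal the difference:

for ch in "ΟOО":                 # Greek, Latin, Cyrillic
    print(ch, hex(ord(ch)))      # prints 0x39f, 0x4f and 0x41e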

Exercise - generating "Unicode-codepoint-art"

Here is a piece of Python 2 code that will try to print out all Unicode characters by counting from 0x0 to 0x10FFFF and converting each number into a printable character:

python -c 'exec """\nimport time\nwhile True:\n    for i in range(0x0, 0x10FFFF):\n        print unichr(i)*80\n        time.sleep(0.05)\n"""'

With this technique it is also possible to make animations:

python -c 'exec """\nimport time\nwhile True:\n    for i in range(127761, 127768):\n        print unichr(i)*10\n        time.sleep(0.05)\n"""'
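Both one-liners above use Python 2 syntax (the print statement and unichr()). On a machine that only has Python 3, a roughly equivalent sketch, assuming a UTF-8 terminal, would be:

python3 -c '
import time
while True:
    for i in range(127761, 127768):   # the moon phase emoji
        print(chr(i) * 10)            # chr() replaces unichr() from Python 2
        time.sleep(0.05)
'

For the full codepoint parade, swap the range for range(0x20, 0x10FFFF) and skip the surrogate codepoints between 0xD800 and 0xDFFF, which cannot be printed.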

Exercise - what does a Japanese website look like in ASCII?

  1. Go to https://www.yahoo.co.jp/ in your browser.
  2. Save the page to your computer. You could use CMD+S or CTRL+S for this.
  3. Open the page in a text editor.
  4. Edit line 3 of the file: <meta http-equiv="content-type" content="text/html; charset=UTF-8"> and change the declared encoding of the document from UTF-8 to ASCII.
  5. Open the edited page in your browser again.

You have now Mojibaked the page!
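The same collision can be reproduced in a few lines of Python 3, by encoding Japanese text to UTF-8 bytes and then reading those bytes back with a legacy single-byte encoding (a sketch; the exact garbage depends on which legacy encoding you pick):

text = "日本語のテキスト"                  # some Japanese text
utf8_bytes = text.encode("utf-8")          # how the text travels over the network
print(utf8_bytes.decode("latin-1"))        # read back with the wrong encoding: mojibake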

(Mojibake article on Wikipedia)