Monthly Archives: February 2018

On automatic correction of OCR output

Although this project began because I found many historical questions led to statutory source material, it has taken a technical turn into creating reliable and useful texts of the laws. Whilst I wasn’t surprised to find that the raw OCR of the eighteenth and early nineteenth century publications was foul, I had hoped it could be knocked into reasonable shape simply by correcting obvious, predictable errors, such as the long s being interpreted as an f.

This turned out to be true to a certain extent. I’m running a fairly simple bash script that takes a list of errors and their corrections, and one by one works through each word of the OCR’d text of circa 90 volumes published before 1820, and the results are promising. The errors are much more varied than I presumed, but each type recurs fairly uniformly. For example, the combination of long s followed by h, as in parish, is often read as lh, lii, jh, and so on.
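The word-by-word pass can be sketched roughly as follows. The author’s actual script is bash; this is the same idea in Python, and the pair table and function name are mine, with illustrative long-s misreadings:

```python
import re

# A few illustrative error/correction pairs for the long s; the real
# lists are far longer.
corrections = {
    "parifh": "parish",
    "fhall": "shall",
    "paffed": "passed",
}

def correct_words(text, pairs):
    """Replace each whole word found in the pairs table, leaving
    punctuation and unknown words untouched."""
    def fix(match):
        word = match.group(0)
        return pairs.get(word, word)
    return re.sub(r"[A-Za-z]+", fix, text)

print(correct_words("the parifh fhall", corrections))
# → "the parish shall"
```

Because the substitution runs per whole word, a pair only fires on an exact misreading, which is what makes unambiguous errors like ‘parifh’ safe to correct in bulk.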

A bigger problem arises when the s interpreted as f produces another English word, such as ‘lame’ or ‘fame’ for ‘same’. For this I have used the same script to check for phrases. ‘Day’ makes sense preceded by ‘same’, so correcting nonsense phrases like ‘lame day’ and ‘fame day’ is quite safe. And as the statutes are quite formulaic, with many repeated phrases, this approach is well suited to them. Even better, the more words are corrected, the more apparent these phrases become. With the word ‘act’ corrected from its very many misreadings, one can start correcting the phrase ‘act parted’ into ‘act passed’.
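The phrase-level pass looks much the same, except that the keys are multi-word strings. The phrases below come from the post itself; the function name is mine, and a real run would apply this after (or alongside) the word-level pass:

```python
# 'lame' is a real English word, so it can only be corrected safely
# in context: 'lame day' is nonsense, 'same day' is not.
phrase_pairs = {
    "lame day": "same day",
    "fame day": "same day",
    "act parted": "act passed",
}

def correct_phrases(text, pairs):
    """Apply multi-word corrections by straightforward substring replacement."""
    for bad, good in pairs.items():
        text = text.replace(bad, good)
    return text

print(correct_phrases("on the lame day the act parted", phrase_pairs))
# → "on the same day the act passed"
```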

Another approach is to think in terms of parts of words. Given that the verb ‘establish’, often rendered as eftablifh, has a number of derivatives – established, establishing, disestablishment and so on – it makes sense to correct the stem of the word, rather than check for each variant.
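The economy of stem correction is easy to show: one rule covers every derivative. A minimal sketch (the function name and word list are mine):

```python
# Correcting the stem 'eftablifh' to 'establish' fixes established,
# establishing, establishment and so on, without listing each variant.
def correct_stem(word, bad="eftablifh", good="establish"):
    return word.replace(bad, good)

for w in ["eftablifhed", "eftablifhing", "eftablifhment"]:
    print(correct_stem(w))
# → established, establishing, establishment
```

Note that a stem rule only reaches as far as the stem: a derivative whose prefix is itself mangled (‘difeftablifhment’ for disestablishment) still needs a second rule for the prefix.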

All to the good, but this is a big body of text. There are something like 14 million words in Pickering’s collection of the statutes alone. And that means there are going to be a lot of mistakes, and more importantly, a lot of types of mistake. The long s alone has at least three common misreadings, as f, j, and l, and still more when it is taken in conjunction with the following letter.

Working out how to tackle this has been gratifyingly interesting. There are all sorts of technical ways of doing this: looking at the texts as individual words, as stems (or lemmas) of words, as a collection of phrases, or as strings of characters. There are also some deeper, mathematical ways of thinking about this that would avoid having to compile a near-infinite list of possible errors that does not run afoul of false positives in any eighteenth-century text. For example, ‘the lame king’ is not to be found in the statutes, but no doubt turns up in some novel of the time.

It should, for example, be possible to search the statutes for every string close to, but not identical with, the phrase ‘the authority aforesaid’ and correct it, without having to produce a list of every possible variant. Such a subtler process should also be quicker than the ‘brute force’ method I am currently using.
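One way to realise this is fuzzy string matching over a sliding three-word window, which the Python standard library’s difflib makes easy to sketch. The threshold, window size, and function names here are my choices, not the author’s method:

```python
import difflib

TARGET = "the authority aforesaid"

def close_to_target(phrase, threshold=0.85):
    """True if the phrase is close to, but not identical with, TARGET."""
    ratio = difflib.SequenceMatcher(None, phrase, TARGET).ratio()
    return ratio >= threshold and phrase != TARGET

def correct_near_phrases(text):
    """Scan three-word windows and replace near-misses with TARGET.
    (A sketch only: capitalisation and punctuation are ignored.)"""
    words = text.split()
    out, i = [], 0
    while i < len(words):
        window = " ".join(words[i:i + 3])
        if close_to_target(window):
            out.append(TARGET)
            i += 3
        else:
            out.append(words[i])
            i += 1
    return " ".join(out)

print(correct_near_phrases("by the aufhority aforefaid it is enacted"))
# → "by the authority aforesaid it is enacted"
```

Here ‘the aufhority aforefaid’ differs from the target by only two characters, so its similarity ratio clears the threshold, while ordinary running text falls well below it.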

This is leaving aside the other causes of errors: the quality of the digitization, the quality of the printing and the markings of readers in the volumes digitized, and, most problematic for this project, the mis-recognition of the layout of the pages. The convention of annotating laws with marginal notes – and these notes are not part of the statute itself – complicates the page design, and the raw OCR often integrates the notes into the main body of the text. On reflection, I should have taken more care over this when putting the books through the OCR machine, but that comes at a considerable cost in time. There may be ways of automating the detection of such errors.

Work on error correction continues, with the pleasant collateral that it is a fascinating problem, and not mere drudgery. In the meantime, I have a growing set of lists of automatic correction pairs on GitHub. These have been split into certain categories: place names, Latin, and phrases, as well as English words. Depending on the text being corrected, some will be relevant and others not. Note that because of the script I am using (which I hope to publish soon), spaces in phrases and split words are escaped with a backslash, as in ‘authority\ aforesaid’.
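If the lists are read outside the original bash script, the escaped spaces still parse cleanly with standard shell-style tokenising; Python’s shlex handles the backslash convention directly. The assumption here, which is mine, is that each line holds an error and its correction separated by an unescaped space:

```python
import shlex

# A line from a phrase list: backslash-escaped spaces keep each
# multi-word phrase together as a single field.
line = r"aufhority\ aforefaid authority\ aforesaid"

bad, good = shlex.split(line)
print(bad, "->", good)
# → aufhority aforefaid -> authority aforesaid
```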