About Jean-Baptiste Camps

A graduate of the École nationale des chartes and doctor in medieval studies of Université Paris-Sorbonne (la Chanson d'Otinel…, supervised by Dominique Boutet), Jean-Baptiste Camps is the academic coordinator of the master's programme «Humanités numériques» at the École nationale des chartes (PSL). [For a CV and list of publications, see: http://paris-sorbonne.academia.edu/JeanBaptisteCamps]

Roadmap for an R Stemmatology Package

Quite some time ago, we wrote a post about an R stemmatology package1 and started its development (it is available on GitHub2). We now come back to this long overdue subject to draft a roadmap for future developments.

NB: this is a working post that will evolve in the course of the package's development.

Main tasks

1. Documentation

  • Document all existing functions and datasets (necessary for task 2);
  • Document new functions when they are added.

2. R packaging

  • Finish documentation;
  • Other packaging tasks?

3. Implementation of new exploratory methods / methods to detect contamination

  • Finish implementation of the methods presented in Camps 2013a and 2013b3;
  • Implement the cardiograms of Wattel & Van Mulken 1996 and Den Hollander 20044;
  • Find and implement new methods.

4. Implementation of new tree building algorithms

  • Implementation of tree-building algorithms proposed by other researchers, different from the PCC method5;

5. Implementation of algorithms for theoretical stemmatology

  • Implement a function for bifidity calculations.

  1. J.-B. Camps, «An R Stemmatology Package?», Sacré Gr@@l, 14 May 2013, online: https://graal.hypotheses.org/665
  2. Jean-Baptiste Camps and Florian Cafiero, Stemmatology: an R Stemmatology package, Paris, 2013-…, online: https://github.com/Jean-Baptiste-Camps/stemmatology
  3. J.-B. Camps, «Detecting Contaminations in Textual Traditions. Computer Assisted and Traditional Methods», unpublished paper presented at the International Medieval Congress, Leeds, 2013; Id., «Sélection des lieux variants et construction d'un stemma: nouvelles expérimentations», unpublished paper presented at the XXVIIe Congrès International de Linguistique et de Philologie Romanes, Nancy, 2013.
  4. See E. Wattel and M.J.P. Van Mulken, «Shock Waves in Text Traditions, Cardiograms of the Medieval Literature», in Studies in Stemmatology, ed. P. Van Reenen and M.J.P. Van Mulken, Amsterdam, 1996, p. 105-121; A.A. Den Hollander, «How shock waves revealed successive contamination. A cardiogram of early sixteenth-century Dutch Bibles», in Studies in Stemmatology 2, ed. P. Van Reenen, A.A. Den Hollander and M.J.P. Van Mulken, Amsterdam, 2004, p. 99-112. You can also have a look at our post, Florian Cafiero, «Le scandale du stemma contaminé. Note sur l'usage des cardiogrammes en philologie», Sacré Gr@@l, 18 December 2011, online: https://graal.hypotheses.org/517.
  5. Eric Poole, «The Computer in Determining Stemmatic Relationships», Computers and the Humanities, 8-4 (1974), p. 207-216; Id., «L'analyse stemmatique des textes documentaires», in La pratique des ordinateurs dans la critique des textes, Paris, 1979, p. 151-161; Jean-Baptiste Camps and Florian Cafiero, «Genealogical variant locations and simplified stemma: a test case», in Analysis of Ancient and Medieval Texts and Manuscripts: Digital Approaches, ed. Tara Andrews and Caroline Macé, Turnhout, 2015, p. 69-93 (Lectio, 1).


Homemade manuscript OCR (1): OCRopy

This post is the first of a small series centred on optical text recognition applied to manuscripts. There are many interesting research projects dealing with this question, but my purpose here is quite different: I wish to demonstrate how it is now possible for a single researcher to quickly get usable results with some of the open source tools that are out there, starting with OCRopy and CLSTM.

The field of optical character recognition (OCR) applied to manuscripts (handwritten text recognition) is rapidly evolving, especially now that artificial intelligence methods, such as neural networks, are getting more widespread. For the scholar of medieval manuscripts, it has interesting applications, and it can help in the constitution of textual databases or in the collation of witnesses.

OCRopy is a « collection of document analysis programs »1 that uses a Long Short-Term Memory (LSTM) architecture for recurrent neural networks2. It has been successfully applied to a variety of printed scripts, including old prints3; in particular, it is already being used to acquire the texts of incunabula. There is already some very good documentation on this subject, particularly Uwe Springmann's slides from a workshop in Munich in 20154 and a paper about the Ocrocis environment5.

My interest in OCRopy started with the modest goal of digitising the text of XIXth century editions of Old French texts (namely, the collection of the Anciens poëtes de la France). With it, I got close to 1% error after some time, but soon, motivated by the successful use of OCRopy with incunabula and news I heard from a colleague6, I turned to trying it on manuscripts.

Setting up OCRopy

For Ubuntu users (and probably other Linux distributions as well), OCRopus is very easy to install by following the four simple steps described on the repository (it may prove harder on Mac OS, though). Once it is installed, the next step is getting the image files for the text you want to recognise, preferably in TIFF format and in good resolution (I used 600 DPI images, but the more common recommendation is 300 DPI). Some digital libraries offer such files under an open licence, for instance the E-Codices platform7.

Fig. 1: screenshot of Scan Tailor. Here, splitting the pages.

Depending on the case, you might have to preprocess the images to rectify their orientation, clean them, crop them, etc. There are many tools for that, for instance ScanTailor (fig. 1). Once your images are ready and placed in a ./tif folder, you are good to go.

Preparing data: from layout analysis to ground truth production

Binarisation and layout analysis: column and line segmentation

Before anything else, you will need (if you have not done it already) to binarise your images and to try to detect columns and lines8. The first command,

$ ocropus-nlbin tif/* -o book

will binarise your images and place them in a book folder, and,

$ ocropus-gpageseg book/*.bin.png

will try to identify columns and lines. For both of these commands, you might have to use the -n option to deactivate error checking and prevent pages or lines in your data from being skipped. This part of the toolchain, layout analysis, is not (yet) based on training and can only be configured through a few options that are not always well documented. Some relate to scale, others to noise or baseline thresholds; I have not had much experience playing with those, so any feedback in the comments to this post will be appreciated. Indeed, with manuscripts this phase is often quite problematic, since lines or columns are not necessarily very regular…

In my experiments on ms. Bodmer 68, using the uniform and gaussian modes, I had the following error rates in column and line segmentation on fol. 134r, 136v and 142r:

fol.    Column errors   %     Line errors   %
Default mode
134r    25              60    6             7
136v    13              31    2             2
142r    22              52    1             1
Gaussian mode
134r    27              64    4             5
136v    16              38    4             5
142r    22              52    1             1

I do not count here errors related to lines amputated at the beginning or the end, nor false positives (noise lines). In my experience, this is the part where OCRopy (as well as the other tools I tried) is least effective. In the end, I did the column segmentation myself with an image-processing tool, and corrected the line segmentation manually with Gimp. One can hope there will be improvements in this area in the near future.9

Fig. 2a: a successful segmentation

Fig. 2b: an extreme segmentation failure

Ground truth

The first step in training a model to OCR a manuscript is then to create some ground truth data, that is, a correct (or as correct as possible) transcription of a sample of the manuscript on which the model will be trained, so that it learns to recognise the handwriting. This can easily be done with OCRopy, by creating an HTML file with line images and text boxes:

$ ocropus-gtedit html -H 35 book/*/*.bin.png -o gt.html

and then,… well, then you have to transcribe a part of the text, to have something to train with (fig. 3).

Fig. 3: ground truth production

The question is: how many lines do you need to get an effective model? Usually, with deep learning, the answer is: the more data, the better. For my part, I have had good results with as few as 400 lines and have not had the chance to train with more than 2000 yet. According to Uwe Springmann and David Kaumanns10, good results on incunabula were obtained with between 1000 and 5000 training lines, the latter for harder cases. In any case, I would advise you to start small (for instance, 400 lines), then train, use the model to annotate 400 more, correct, re-train, and so on until you get a satisfying model.

Training on a medieval manuscript: the ms. Bodmer 68 of the Chanson d’Otinel

Training

Once you have some ground truth data, you can begin training (which can take some time). First, you will have to extract the ground truth from the HTML file used for transcription:

$ ocropus-gtedit extract gt.html

The lines you transcribed will go into the book folder, into the subfolders created for each page, and will be labelled <image-name>.gt.txt.

Be careful: all the lines of the gt.html file that contain text will be extracted (empty lines will be skipped), not only the ones you edited. So, if you have some uncorrected lines containing text, you must take care not to include them in the training or testing data.

In order to train, place 90% of your data in a train folder and 10% in a test folder, which will be used to estimate the success of each model. The last thing to do, if you have special characters, is to modify the file chars.py located in your ocrolib installation (on Ubuntu, it will be in /usr/local/lib/python2.7/dist-packages/ocrolib/). The syntax is simple enough (fig. 4).

Fig. 4: the beginning of my chars.py
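
In case the figure is hard to read here, the following lines sketch the kind of addition involved; this is not the actual content of chars.py, and the variable name default as well as the exact characters are only an illustration of my own additions, to be adapted to your installation and your transcription conventions:

# -*- coding: utf-8 -*-
# Illustrative sketch of an addition to ocrolib/chars.py (not the real file):
# the point is simply to extend the string of characters that ocropus-rtrain accepts.
extra = u"\u017f\ua75b\ua76f\u0131\u0363\u0303"  # long s, r rotunda, con-sign, dotless i, combining a, combining tilde
# in chars.py itself, concatenate it to the existing charset, for instance:
# default = default + extra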

Once it is done, you can launch training with:

$ ocropus-rtrain -o myModel -d 1 train/*/*.bin.png

The -d 1 option allows you to visualise the training steps (fig. 5); you can safely remove it if you want to save time. In each training iteration, a line is read and the guess of the neural network (OUT) is compared to the ground truth (TRU) and to the aligned text (ALN).

54009 43.77 (616, 48) train/0017/010014.bin.png
TRU: u'D efo\ua75b\u017f hat\u0131ll\u0131e a une l\u0131ue g\u0363nt'
ALN: u'D efo~~ hat~ll~e a une l~ue g~nt'
OUT: u'D efo~\u203a hat\u2020ll\u2020e a une l\u2020ue gnt'

Fig. 5: OCRopy training

The visualisation shows you the ground truth and the training image, the predicted and aligned results, the probabilities of characters (green for space, blue for the character with the highest probability and yellow for the absence of a character) and the evolution of the error11.

Every n iterations, the model is saved (default is 1000, but you can modify it with the -F option). Other useful options are:

-r LRATE, --lrate LRATE
LSTM learning rate, default: 0.0001
-S HIDDENSIZE, --hiddensize HIDDENSIZE
# LSTM state units, default: 100
-N NTRAIN, --ntrain NTRAIN
# lines to train before stopping, default: 1000000
--load LOAD start training with a previously trained model

The --load option allows you to start from a previously trained model (or, if you wish, to split training into several sessions), and -N lets you decide after how many iterations training must stop. But let's have a look at the two other parameters, which have an important effect on the training results (and training time). -r is the learning rate; in my experience, the default value is quite good, but I would be happy to have feedback on it. -S is the number of state units of the model; increasing it to 200 or even 400 did wonders: my models learned faster and/or achieved lower error rates. On the other hand, training was more computing-intensive, and each iteration required considerably more time/computing power. I will develop this point in my next post, about CLSTM.

The other question is how much training is needed to get good results. In Uwe Springmann and David Kaumanns's experiments on incunabula, the best results were obtained with 30,000 to 200,000 iterations. Expressed as a number of epochs (an epoch is a number of iterations equal to the number of training lines, so with 400 training (ground truth) lines an epoch is 400 iterations), they advise around 100 epochs12. As for me, the best results were obtained, for each model:

  • on modern print: 6 epochs (6287 training lines, 39 000 iterations, error at 1.75%, 200 state units);
  • on manuscript Bodmer 68: training 1, after 27 epochs (1722 training lines, 46 000 iterations, error at 17%, 100 state units); training 2, 31 epochs (1722 training lines, 54 000 iterations, error at 9.7%, 100 state units);
  • on manuscript Digby 23, with CLSTM: 74 epochs (403 training lines, 30 000 iterations, error at 9%, 400 state units).

For now, it seems that the raw number of iterations is a better indicator than the number of epochs, and that 30 000 iterations are a good start, but this remains largely to be explored.
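
As a quick sanity check on these figures, the relation between training lines, iterations and epochs is a simple division; here is a small sketch using the numbers reported in the list above:

def epochs(iterations, training_lines):
    # number of epochs corresponding to a number of training iterations
    return float(iterations) / training_lines

def iterations_for(n_epochs, training_lines):
    # iterations to request (e.g. via the -N option) for a given number of epochs
    return n_epochs * training_lines

print(round(epochs(54000, 1722)))   # ~31 epochs (Bodmer 68, training 2)
print(round(epochs(30000, 403)))    # ~74 epochs (Digby 23, with CLSTM)
print(iterations_for(100, 400))     # 40000 iterations for the advised 100 epochs on 400 lines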

If you have such feedback to share, please do not hesitate to put it in a comment or send it to me. If I get enough of it, I will try to statistically estimate better configurations and make a post about it.

Evaluating the results

Once you have trained and produced a few models, the next step is to comparatively assess their performance and error rate. To do that, you can use the following bash code:
for i in *.pyrnn.gz; do
echo "$i" >> modeltest
ocropus-rpred -m $i test/*/*.bin.png
ocropus-errs test/*/*.gt.txt 2>>modeltest
done

This way, you will get a modeltest file, with the error rate of all models. For each model, you will have something like this:
bodmer-00054000.pyrnn.gz
errors 615
missing 0
total 6340
err 9.700 %
errnomiss 9.700 %

that is, the raw number of errors, the number of missing characters, the total number of characters, and the error percentage.
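
If you accumulate many models, it can help to rank them programmatically. Here is a minimal sketch in Python, assuming modeltest has exactly the layout shown above (a model name ending in .pyrnn.gz, followed a few lines later by an "err … %" line):

# rank the models listed in the modeltest file produced by the bash loop above
results = []
current = None
with open("modeltest") as f:
    for line in f:
        line = line.strip()
        if line.endswith(".pyrnn.gz"):
            current = line
        elif line.startswith("err ") and current is not None:
            results.append((float(line.split()[1]), current))
            current = None

# best model (lowest error rate) first
for err, model in sorted(results):
    print("%6.3f %%  %s" % (err, model))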

On manuscript Bodmer 68, I started a first training, with 1722 lines of my transcription of the Chanson d’Otinel, and the error followed the evolution presented in fig. 6.

Fig. 6: first training on manuscript Bodmer 68

As you can see, the model soon arrived at around 20% error; at iteration 46 000, it was at 16.32% error, and it never decreased afterwards.

I wasn't quite satisfied with these results, which were not really usable. So I went through a phase of checking the correctness of the training data and its alignment with the segmented lines, editing chars.py to include all the characters I needed (such as long ſ), etc. Then I trained again: you can see how effective the improvement in the quality of the input data was on this new training (fig. 7).

Fig. 7: second training on manuscript Bodmer 68

This time, the model got to 9.7% error at iteration 54 000, and the error never decreased afterwards. You can also observe some occasional spikes in error, at iterations 23 000 and 57 000.

To understand the source of the errors, and possibly do some post-processing to lower the error rate, you can have a look at the character confusions, using the command:

$ ocropus-econf test/*/*.gt.txt

This will give you the most frequent confusions, in a form similar to:

32 _
29 _
21 _
13 _ ı
10 _
8 z _
7 n m
6 _
6 _ t
6 _ u

As you can see, in my case most errors were related to whitespace, to confusions between whitespace and characters or abbreviations, or to the succession of strokes constituting n, m or ı. If you also want to see whether the errors depend on a certain context, you can try the -C option:

$ ocropus-econf -C2 test/*/*.gt.txt

In my case, the most frequent confusions with context were more explicit:

5 Q _nt Q ͣnt
4 ml_t ml̃t
2 t_re t͛re
2 aı_n aı́n
1 cuıne cum_e
1 e ot e_ot
1 m_nt munt
1 tr_uſ trıuſ

As you can see, most errors were related to abbreviations (superscript a, tildes, etc.) or to successions of minim strokes (ı, n, m, u and their various combinations), which are also hard for humans when they cannot perceive the full word.

On the other hand, the results obtained were now quite usable. To give you an idea, here is a sample of the recognition, compared with the binarised folio of the manuscript (fig. 8). Errors are underlined and missing characters are indicated with ø.

v erent leſ noz uenør tut eſfreez
p aſſent auant ſıſ ont returnez
p ar gͣnt eſfoeꝛ ont paıenſ reculez
Q uatte aıpenz δe terre meſurez
D eſ abatuz e δeſaceruelez
e ſt tut lı champ pleıø e ẽꝯbrez
L ez vn pareı ſa reſteıt coꝛſabrez
Ø enſeınıe eſcrıe paıen a meı eſtez
L eſcu enbrace verſ leſ noø eſt alez
p ar gͣnt u͛tu eſt aſ eſtrøuſ fermez
Ø a euſt leſ noz Gtaanẽt δeſturbez
Q nt en leſcu la feru amırez [missing accent]
p ar teu u͛tu ken ſun frũt lar entez
Deſuz le halme a lun δeſ oılz quaſøeø
L ı paıen eſt δel colp eſpontez
N en a ſucurſ tut eſt abanδunez
J gnelemẽt le laıſıſt amørez
ø reıſ bonſ uaſſalſ a lenfeſ apelez
c o eſt Galδı́n e fauchet lı haſtez
e δaıgremũt balδeδeı́n lafıez .

Fig. 8: ms. Bodmer 68, col. 221b, binarised.

In some cases, it might be useful to do some automatic post-processing to correct the most frequent errors, for instance with a database of known/existing words (see the sketch below). In any case, once you are satisfied with your model, it is time to OCR the rest of the manuscript.
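
To illustrate the kind of post-processing just mentioned, here is a minimal sketch (not the procedure I actually used). It assumes a plain-text lexicon of known forms, one per line, in a hypothetical lexicon.txt file, replaces out-of-lexicon words by their closest known form when a sufficiently close match exists, and reads the correct.txt file whose extraction is described below:

# -*- coding: utf-8 -*-
# crude post-correction sketch: replace out-of-lexicon words by their closest known form
import codecs
import difflib

with codecs.open("lexicon.txt", encoding="utf-8") as f:
    lexicon = set(w.strip() for w in f if w.strip())

def correct_word(word, cutoff=0.8):
    if word in lexicon:
        return word
    candidates = difflib.get_close_matches(word, lexicon, n=1, cutoff=cutoff)
    return candidates[0] if candidates else word

with codecs.open("correct.txt", encoding="utf-8") as f:
    for line in f:
        print(u" ".join(correct_word(w) for w in line.split()))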

Acquiring the rest of the text and extracting it

To apply recognition to the full document, you can do:

$ ocropus-rpred -m myModel.pyrnn.gz book/*/*.bin.png

This will predict the text for the whole book folder. You can then extract it in several formats; for instance, you can get it as HOCR, using:

$ ocropus-hocr book/*.bin.png

The HOCR format is interesting, as it gives you, for each text line, its alignment with the original document (which you can transform, with XSLT, into TEI facsimile elements, if you so wish):

<div class='ocr_page' title='image book2/0001.bin.png; bbox 0 0 2140 2836'>
<span class='ocr_line' title='bbox 375 173 1813 296'>Q uatre cenz mılıe cheualerſ puıſ aueır.</span><br />
<span class='ocr_line' title='bbox 374 264 1492 354'>p uıſ men cũbatre a carll̃ ⁊ a franceıſ.</span><br />
<span class='ocr_line' title='bbox 322 342 1382 437'>G ueneſ reſpunt ne uuſ a ceſte feız.</span><br />
<span class='ocr_line' title='bbox 361 429 1586 520'>D e uoz paıenſ mult grant ꝑte ı au͛reız.</span><br />

The only problem with this format concerns manually corrected lines, which I fear may be omitted by the export procedure. You can also extract the text with:

$ ocropus-gtedit text book/*/*.bin.png

This will give you a correct.txt file, with the text, and a separate reference.html file, with the indexed line images.

If you want to correct the results of the prediction before extracting them, you can do instead:

$ ocropus-gtedit html -H30 book/*/*.bin.png

This will give you a correction.html file that you can again edit and correct. Here, we have to be cautious because, when extracting text, the predicted lines (.txt) are used by default instead of the ground-truth ones (.gt.txt). So, once we have corrected, we will need to remove all existing text files, extract, and rename the files from .gt.txt to .txt (yes, it is a bit of a bother):

rm book/*/*.txt
ocropus-gtedit extract correction.html
rename "s/.gt.txt/.txt/g" book/*/*.gt.txt
ocropus-gtedit text book/*/*.bin.png
ocropus-hocr book/*.bin.png

And now you have your text! You can convert it to TEI or some other format, and start editing.
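
As an illustration of that last step, here is a minimal sketch of such a conversion, under simple assumptions: it reads the correct.txt file obtained with ocropus-gtedit text (one manuscript line per text line), wraps each line in an <l> element and writes a deliberately bare-bones TEI skeleton (the output name otinel.xml and the header content are mere placeholders):

# -*- coding: utf-8 -*-
# minimal sketch: wrap the extracted lines in a bare-bones TEI skeleton, one <l> per line
import codecs
from xml.sax.saxutils import escape

header = u"""<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>OCR output (draft)</title></titleStmt>
      <publicationStmt><p>Working file</p></publicationStmt>
      <sourceDesc><p>OCR of a manuscript</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text><body><div>
"""
footer = u"""  </div></body></text>
</TEI>
"""

with codecs.open("correct.txt", encoding="utf-8") as src, \
     codecs.open("otinel.xml", "w", encoding="utf-8") as out:
    out.write(header)
    for line in src:
        line = line.strip()
        if line:
            out.write(u"    <l>%s</l>\n" % escape(line))
    out.write(footer)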

From one manuscript to the next: trying to reuse a model on another manuscript

From the beginning of this post, I have talked about training a model for a specific manuscript. One question remains: can this model be applied to another, similar manuscript, or be retrained on another manuscript?

I tried to apply the model created for Bodmer 68 (last third of the XIIIth century, Gothic Textualis libraria, Anglo-Norman) to another, quite different manuscript I wanted to OCR, Digby 23 (first half of the XIIth century, Praegothica, Anglo-Norman as well). I will talk more about the model developed for this manuscript with CLSTM in my next post, but the outline is that, without retraining, I had an error rate of 33% (instead of 9.7%). After a few epochs of retraining (with the --load option), it went down to 15%, but never below. At the same time, parts of the model started “exploding”, and lowering the learning rate13 did not solve the problem:

# oops, got FloatingPointError overflow encountered in exp
 Traceback (most recent call last):
 File "/usr/local/bin/ocropus-rtrain", line 286, in
 pcs = network.trainSequence(line,cs,update=do_update,key=fname)
 File "/usr/local/lib/python2.7/dist-packages/ocrolib/lstm.py", line 890, in trainSequence
 self.outputs = array(self.lstm.forward(xs))
 File "/usr/local/lib/python2.7/dist-packages/ocrolib/lstm.py", line 605, in forward
 xs = net.forward(xs)
 File "/usr/local/lib/python2.7/dist-packages/ocrolib/lstm.py", line 661, in forward
 outputs = [net.forward(xs) for net in self.nets]
 File "/usr/local/lib/python2.7/dist-packages/ocrolib/lstm.py", line 559, in forward
 self.WIP,self.WFP,self.WOP)
 File "/usr/local/lib/python2.7/dist-packages/ocrolib/lstm.py", line 428, in forward_py
 go[t] = ffunc(gox[t])
 File "/usr/local/lib/python2.7/dist-packages/ocrolib/lstm.py", line 376, in ffunc
 return 1.0/(1.0+exp(-x))

It soon turned out to be a lot more effective to train a new model from scratch. Actually, in my experience, retraining an existing model, be it for print or manuscript, though it might be faster, has every time proved less effective in the end.

That's it for now! Do not hesitate to leave observations, remarks or feedback in the comments.

  1. Thomas M. Breuel, Ocropy: Python-based tools for document analysis and OCR, 2014, https://github.com/tmbdev/ocropy.
  2. Thomas M. Breuel, Adnan Ul-Hasan, Mayce Ali Al-Azawi and Faisal Shafait, «High-Performance OCR for Printed English and Fraktur Using LSTM Networks», 2013 12th International Conference on Document Analysis and Recognition, 2013, DOI: 10.1109/icdar.2013.140, URL: http://staffhome.ecm.uwa.edu.au/%7E00082689/papers/Breuel-LSTM-OCR-ICDAR13.pdf.
  3. You can find a list of publications related to Ocropy on the project wiki, http://github.com/tmbdev/ocropy/wiki/Publications. For an example of application to modern print, see the blog posts by Dan Vanderkam, «Extracting text from an image using Ocropus», danvk.org, http://www.danvk.org/2015/01/09/extracting-text-from-an-image-using-ocropus.html; Id., «Training an Ocropus OCR model», danvk.org, http://www.danvk.org/2015/01/09/extracting-text-from-an-image-using-ocropus.html.
  4. OCR und Nachkorrektur alter Drucke für die Geisteswissenschaften / OCR and postcorrection of early printings for digital humanities, Centrum für Informations- und Sprachverarbeitung (CIS), Ludwig-Maximilians-Universität München, 2015, http://www.cis.uni-muenchen.de/ocrworkshop/program.html.
  5. Uwe Springmann and David Kaumanns, Ocrocis: a high accuracy OCR method to convert early printings into digital text, 2015, http://cistern.cis.lmu.de/ocrocis/tutorial.pdf.
  6. Thanks to Thibault Clérice for the ideas and discussions, and for initially pointing me to the documentation for using OCRopy.
  7. E-Codices: Virtual Manuscript Library of Switzerland, http://www.e-codices.ch/.
  8. I used the same folder architecture as the one in the documentation by U. Springmann for the workshop cited above, OCR and postcorrection.
  9. From what I understand from the Ocropy GitHub page, the original developer of Ocropy, T. Breuel, is planning to work on deep learning for layout analysis.
  10. U. Springmann and D. Kaumanns, Ocrocis…, p. 13.
  11. For more information, have another look at U. Springmann's slides.
  12. U. Springmann and D. Kaumanns, Ocrocis…, p. 13.
  13. As advised in a GitHub issue of Ocropy, http://github.com/tmbdev/ocropy/issues/5.


Digital philology and quantitative methods

This short post is simply to let you know that, as part of the E-Philologie programme, the two main contributors of this blog will be giving a course on digital philology and quantitative methods («Philologie numérique et méthodes quantitatives»), which starts tomorrow and will continue on a weekly basis, every Monday in April.

Most of the course will be devoted to stylometric methods and to questions of attribution, dating and localisation of ancient and modern texts (among the corpora we will experiment on, in no particular order: the works of Corneille and Molière, the chansons de geste, the works of Chrétien de Troyes, …). As a bonus, we may also do a little stemmatology.

You will find more information on the E-Philologie blog (https://ephilolog.hypotheses.org/).


« Médiévistique numérique »

For the medievalists among our readers, allow me to relay here the announcement of a methodological session of the young researchers' seminar «Questes» devoted to «Médiévistique numérique» (digital medieval studies), co-organised by yours truly. Young researchers among you, do not hesitate to come…

Genealogical Variant Locations & Simplified Stemma: a Test Case

Sources for the article, including the datasets used and functions that can reproduce the results, are available as part of the stemmatology package. It can be found on GitHub: https://github.com/Jean-Baptiste-Camps/stemmatology. Be aware that this is not the original prototype, used…

Setting bounds in a homogeneous corpus

We inaugurate here a new category, dedicated to sources and research documents. In this category we will mostly publish scripts, databases and documents related to articles we have published. The main purpose is, by making our sources available, to allow…

MASHS 2012 – Modèles et Apprentissage en Sciences Humaines et Sociales – Paris, 4 and 6 June 2012

The call for papers for the 2012 edition1 of the conference Modèles et Apprentissage en Sciences Humaines et Sociales, which will be held in Paris, at Université Paris 1 Panthéon-Sorbonne, on 4 and 5 June 2012, has been published. You will find it on the…

Louis Havet, Cesare Segre, verbal criticism and the diasystem

Stating clearly, and providing the tools to think through, what everyone believed they already knew is perhaps what characterises certain scientific advances. In two articles published in 1976 and 1978, Cesare Segre defined what he called the «diasystem» of…

TEI, LaTeX and critical editions on paper: I. The various packages

When it comes to digital critical editions, the standard that seems to prevail today involves the use of TEI XML. But what about the paper version of the same edition? One may, after all, also wish to have a paper version, if only because…

Zotero and BibLaTeX: creating a bibliography organised by themes

Among the somewhat more advanced uses that can be made of the Zotero-BibLaTeX pair, we will see today how to create a thematic bibliography. To state my goal briefly: I wish to use a Zotero collection to create the bibliography of my dissertation, …

N-grams and author identification

Lately, studies in the field of authorship attribution and text classification have gained new momentum through the use of n-grams1, opening new perspectives for the creation of models independent of…

The Modèles et apprentissages en Sciences Humaines et Sociales conference (MASHS 2011)

On 23 and 24 June, the fifth edition of the Modèles et apprentissages en Sciences Humaines et Sociales conference (MASHS 2011) will be held in Marseille, at the Centre de la Vieille Charité, devoted to mathematical and computational modelling applied to…

A new journal devoted to digital philology, Digital Philology: A Journal of Medieval Cultures

A new journal devoted to digital philology has just been born. Founded by the Romance scholars Stephen G. Nichols (a historical proponent of the «New Philology»1) and Nadia R. Altschul, Digital Philology: A Journal of Medieval Cultures will be published twice a year by the Johns Hopkins University Press (Baltimore, Maryland). Each year, one of the two issues will be built around a specific theme, while the other will be open to submissions. Here is the call for contributions for the «open» issues of 2012 and 2013.

Digital Philology: A Journal of Medieval Cultures

Call for Submissions

Digital Philology is a new peer-reviewed journal devoted to the study of medieval vernacular texts and cultures. Founded by Stephen G. Nichols and Nadia R. Altschul, the journal aims to foster scholarship that crosses disciplines upsetting traditional fields of study, national boundaries, and periodizations. Digital Philology also encourages both applied and theoretical research that engages with the digital humanities and shows why and how digital resources require new questions, new approaches, and yield radical results.

Digital Philology will have two issues per year, published by the Johns Hopkins University Press. One of the issues will be open to all submissions, while the other one will be guest-edited and revolve around a thematic axis.

Contributions may take the form of a scholarly essay or focus on the study of a particular manuscript. Articles must be written in English, follow the 3rd edition (2008) of the MLA style manual, and be between 5,000 and 9,000 words in length, including footnotes and list of works cited. Quotations in the main text in languages other than English should appear along with their English translation.

Digital Philology welcomes submissions for the 2012 and 2013 open issues. Inquiries and submissions (as a Word document attachment) should be sent to dph@jhu.edu, addressed to the Editor (Albert Lloret) and Managing Editor (Jeanette Patterson). Digital Philology will also publish reviews of books and digital projects. Correspondence regarding digital projects and publications for review may be addressed to Timothy Stinson at tlstinson@gmail.com.

Editorial Board

Tracy Adams (Auckland University)

Benjamin Albritton (Stanford University)

Nadia R. Altschul (Johns Hopkins University)

R. Howard Bloch (Yale University)

Kevin Brownlee (University of Pennsylvania)

Jacqueline Cerquiglini-Toulet (Université Paris Sorbonne – Paris IV)

Suzanne Conklin Akbari (University of Toronto)

Lucie Dolezalova (Charles University, Prague)

Alexandra Gillespie (University of Toronto)

Jeffrey Hamburger (Harvard University)

Daniel Heller-Roazen (Princeton University)

Sharon Kinoshita (University of California, Santa Cruz)

Joachim Küpper (Freie University of Berlin)

Deborah McGrady (University of Virginia)

Christine McWebb (University of Waterloo)

Stephen G. Nichols (Johns Hopkins University)

Timothy Stinson (North Carolina State University)

Lori Walters (Florida State University)

  1. See in particular Stephen G. Nichols, «Introduction: Philology in a Manuscript Culture», Speculum, 65-1 (Jan. 1990), p. 1-10.


A bibliography solution for LaTeX and the humanities: BibLaTeX

For a very long time, LaTeX users have been tearing their hair out over the only real bibliography-management solution on offer to them, namely BibTeX. Let us not be too hard on BibTeX: created in 1985, shortly after LaTeX, by Oren Patashnik and Leslie Lamport (the creator of LaTeX himself), BibTeX answered the need for a semi-automated way of managing bibliographic references, based on a database containing the references (a .bib file) and a style file (.bst). This solution was particularly innovative for its time, when nobody had yet heard of Zotero, and it still is in some respects. Let us give its authors their due.

Now, let us admit it: BibTeX has one (very) big flaw, its style files. These are written in a very peculiar code (known as reverse Polish notation1; someone had to think of it). As an example, here is a piece of good old BibTeX .bst code (memories, memories…):

INTEGERS { nameptr namesleft numnames }

FUNCTION {format.names}
{ 's :=
#1 'nameptr :=
s num.names$ 'numnames :=
numnames 'namesleft :=
{ namesleft #0 > }
{ s nameptr "\bgroup\sc {vv~}{ll~}\egroup{{}}{(ff)}{, jj}" format.name$ 't :=
nameptr #1 >
{ namesleft #1 >
{ ", " * t * }
{ numnames #2 >
{ "" * }
'skip$
if$
t "others" =
{ " et~al." * }
{ " et " * t * }
if$
}
if$
}
't
if$
nameptr #1 + 'nameptr :=
namesleft #1 - 'namesleft :=
}
while$
}

If you have not fled screaming, you will be delighted to learn that there is now an alternative to BibTeX that is comparatively simpler, more flexible and richer, called BibLaTeX. Developed and maintained by Philipp Lehman, BibLaTeX brings together a large number of tools that used to be scattered across various packages. BibLaTeX works with the same .bib databases as BibTeX and, while it originally relied in part on the BibTeX engine, it can now do without it entirely thanks to Biber, and thus handles UTF-8 natively. Moreover, and this is its greatest comparative advantage, it uses a style language that is much more approachable than BibTeX's:

\renewcommand*{\mkbibnamefirst}[1]{\textit{#1}}
\renewcommand*{\mkbibnamelast}[1]{\textit{#1}}
\renewcommand*{\mkbibnameprefix}[1]{\textit{#1}}
\renewcommand*{\mkbibnameaffix}[1]{\textit{#1}}
\DeclareFieldFormat{booktitle}{#1\isdot}
\DeclareFieldFormat{journaltitle}{#1\isdot}
\DeclareFieldFormat{issuetitle}{#1\isdot}
\DeclareFieldFormat{maintitle}{#1\isdot}
\DeclareFieldFormat{pages}{#1}
\DeclareFieldFormat{title}{#1\isdot}
\DeclareFieldFormat[article]{title}{#1}
\DeclareFieldFormat[inbook]{title}{#1}
\DeclareFieldFormat[incollection]{title}{#1}
\DeclareFieldFormat[inproceedings]{title}{#1}
\DeclareFieldFormat[patent]{title}{#1}
\DeclareFieldFormat[thesis]{title}{#1}
\DeclareFieldFormat[unpublished]{title}{#1}

For someone who knows a bit of LaTeX, this is immediately much more readable. What is more, besides being relatively accessible, these styles are configurable in the extreme, far more so than just about any other bibliographic style format I have come across. I set about creating a style of my own, and I was pleasantly surprised both by the level of detail one can reach and by the elegance of the code.

Moreover, for those who would still rather not create a style of their own, there are ready-made styles that correspond much better to the norms of the French humanities (there are, in particular, basic styles that work in Author, Title form and handle short titles, ibid., etc.). For historians, the style of the Historische Zeitschrift can be an interesting starting point.

Among the many other features I have yet to explore are those formerly scattered across the packages babelbib, bibtopic, bibunits, chapterbib, cite, inlinebib, mcite and mciteplus, mlbib, multibib and splitbib, plus others specific to BibLaTeX. The features listed in the package description are enough to make users of other bibliography managers green with envy: the possibility of dividing a bibliography into sections, of producing several bibliographies in the same document, of splitting the bibliography according to where references are used in the chapters, or by themes and keywords, or by types of documents…

Concretely, for the impatient reader who wants to get started with BibLaTeX right away, it is enough to add the following commands to the preamble:

\usepackage[babel]{csquotes}% if you do not have it already.
\usepackage[backend=biber,% biber if you use it, bibtex otherwise.
sorting=nyt,% to sort the bibliography entries by name/date/title.
style=style_choisi% e.g. historische_zeitschrift.
]{biblatex}
\addbibresource{Mon_fichier.bib}

To cite a reference, simply insert the following command at the appropriate place:

\cite{clé}

Then, to display the bibliography:

\printbibliography

To finish, here are a few examples obtained with my home-made citation style (still in beta). First, a few citations in footnotes:

Citations in footnotes

And now an excerpt from the bibliography:

An excerpt from the bibliography

  1. To understand the principle of this notation, imagine that the numbers and operations you write are pushed onto a stack, and that you always pop from the top of the stack: thus, if you write 10 3 2 + *, the result is 50, since you take the two elements at the top of the stack (3 and 2) for the first operation, 3+2 = 5, then 5 is pushed back onto the top of the stack, and you again take the two top elements for the second operation, 5*10 = 50. Clear, isn't it?
