What happens when you train a neural network to write investment treaties? Below are the results (paper).

Note: This application uses the entirety of a country’s treaty practice as its benchmark. Predicted output may therefore not correspond to a country’s most recent treaty design.

Background

We employed a 2-layer LSTM with 512 nodes per layer (implemented with torch-rnn), a sequence length of 200 characters, and a dropout factor of 0.5, and trained it separately on each issue-area corpus. The underlying data are 1626 English-language bilateral investment treaties delineated into articles; 80% of the data was used for training and 10% each for the validation and test sets. Corpus texts were constructed by concatenating the split article texts back into one large text file, preserving the article numbers and names in headers that precede each article text within each treaty (a sketch of this step is given below).

We trained each model for 10 epochs (in the case with priors) or 50 epochs (in the case without priors). Validation-set loss was typically lower than training-set loss, which is expected with a high dropout factor, since dropout is active only during training, and signals little overfitting.

We then set the starting sequence to “#Article” (the delimiter that marks a new article in the training data) and, for each issue area, generated 150 strings of 100,000 characters from the trained model at a temperature of 0.5. The temperature is a factor by which the predicted character scores are divided before sampling: values below 1 make the output more conservative, while values above 1 yield more surprising, innovative text.
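As a concrete illustration of the corpus-construction step, here is a minimal Python sketch. The directory layout, file naming, and exact header format are assumptions for illustration, not the authors' actual pipeline.

```python
import os

# Hypothetical layout (an assumption): one file per split article, named
# articles/<issue_area>/<treaty_id>/<article_number>_<article_name>.txt
ARTICLES_DIR = "articles/expropriation"
CORPUS_FILE = "corpus/expropriation.txt"

os.makedirs(os.path.dirname(CORPUS_FILE), exist_ok=True)
with open(CORPUS_FILE, "w", encoding="utf-8") as corpus:
    for treaty_id in sorted(os.listdir(ARTICLES_DIR)):
        treaty_dir = os.path.join(ARTICLES_DIR, treaty_id)
        for fname in sorted(os.listdir(treaty_dir)):
            number, name = os.path.splitext(fname)[0].split("_", 1)
            # "#Article" headers delimit articles in the training data;
            # the same token is used later as the sampling start sequence.
            corpus.write(f"#Article {number} ({name})\n")
            with open(os.path.join(treaty_dir, fname), encoding="utf-8") as f:
                corpus.write(f.read().strip() + "\n\n")
```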
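The training and sampling runs themselves go through torch-rnn's command-line scripts. The sketch below drives them from Python with subprocess, using the hyperparameters reported above; the paths and checkpoint filename are placeholders, and it assumes torch-rnn and its bundled preprocess.py are installed as documented.

```python
import subprocess

CORPUS = "corpus/expropriation"  # placeholder path for one issue-area corpus

# Preprocess: torch-rnn's preprocess.py converts raw text into the
# HDF5/JSON pair expected by train.lua.
subprocess.run([
    "python", "scripts/preprocess.py",
    "--input_txt", f"{CORPUS}.txt",
    "--output_h5", f"{CORPUS}.h5",
    "--output_json", f"{CORPUS}.json",
], check=True)

# Train: 2-layer LSTM, 512 nodes, seq_length 200, dropout 0.5;
# 10 epochs for the with-priors case (50 without priors).
subprocess.run([
    "th", "train.lua",
    "-input_h5", f"{CORPUS}.h5",
    "-input_json", f"{CORPUS}.json",
    "-model_type", "lstm",
    "-num_layers", "2",
    "-rnn_size", "512",
    "-seq_length", "200",
    "-dropout", "0.5",
    "-max_epochs", "10",
], check=True)

# Sample: seed with "#Article" and draw 100,000 characters at temperature 0.5.
subprocess.run([
    "th", "sample.lua",
    "-checkpoint", "cv/checkpoint_final.t7",  # placeholder checkpoint name
    "-start_text", "#Article",
    "-length", "100000",
    "-temperature", "0.5",
], check=True)
```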
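To make the role of temperature concrete, here is a generic NumPy illustration of temperature-scaled sampling (not code from the paper):

```python
import numpy as np

def sample_char(logits, temperature=0.5, rng=None):
    """Sample one character index from temperature-scaled logits."""
    rng = rng or np.random.default_rng()
    # Dividing the log-scores by the temperature before the softmax
    # sharpens the distribution for T < 1 (more conservative output)
    # and flattens it for T > 1 (more surprising output).
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy vocabulary of three characters with raw scores 2.0, 1.0 and 0.1:
logits = [2.0, 1.0, 0.1]
print(sample_char(logits, temperature=0.5))  # strongly favours index 0
print(sample_char(logits, temperature=1.5))  # spreads probability out
```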

Attribution