Machine Analysis of Legislative Texts – New Opportunities for the Theory of Legislation?

Colloquium «Rechtsetzungslehre»

 

Presentations

Bernhard Waltl, Technische Universität München:
Lexia – A Collaborative Legal Data Science Environment

 

André Ourednik, Schweizerisches Bundesarchiv:
The Swiss Federal Official Compilation 1947–1998 as a Structured Dataset for Legal and Historical Analysis

Date

Monday, 19 June 2017

Time

16.15–18.00

Venue

Universität Zürich
Rechtswissenschaftliches Institut
Rämistrasse 74
8001 Zürich

Location map

Room

RAI-F-041

Languages

German / English

Contact

Stefan Höfler

Lexia – A Collaborative Legal Data Science Environment

Bernhard Waltl, Technische Universität München

For a number of reasons, data science is becoming increasingly attractive to legal practitioners and legal scholars. Computer scientists from the Technical University of Munich and legal scholars from the Ludwig-Maximilians-Universität München have developed a generic legal data science environment that supports various state-of-the-art techniques to extract, classify, quantify, model and visualize linguistic and semantic properties of legal texts.

Based on open-source software and following the principles of modular software design, a web application has been implemented that allows the semantic analysis of legal documents, e.g., laws, judgments and contracts. The system’s modularity and language independence foster the re-use of software components, allow faster prototyping and lower the barrier to adapting the system to new domains or legislations.

The Swiss Federal Official Compilation 1947–1998 as a Structured Dataset for Legal and Historical Analysis

André Ourednik, Schweizerisches Bundesarchiv

The legally binding Official Compilation (OC) records changes to federal law in chronological order (in contrast to the Systematic Compilation, which is ordered thematically and synthesizes the current state of federal law). Through scanning, OCR and field recognition, the Swiss Federal Archives have retrieved over 400,000 individual legal decrees of the OC, from the revised 1947 edition up to 1998, and published them online in the Akoma Ntoso format.

Unlike most literary texts, legal texts are historicized and deeply nested into titles, chapters, etc. The international Akoma Ntoso standard provides an XML publication framework for this structure. If fully implemented, it can serve to reconstruct the complete history of a legal text, i.e., all its amendments; software yet to be developed could use it as a source for reconstructing any given decree as it was in force at any given date. In a quantitative, distant-reading approach, we have also used it to retrieve the amount of law formulated per legal domain, which in turn could serve as a proxy for detecting historical events and shifts in societal paradigms. However, our current Akoma Ntoso version of the OC needs consolidation and further development to serve as a more versatile and reliable basis for analysis. Important options for this include visual tools, crowdsourcing and content comparisons with other datasets, such as the terminology database of the Swiss Federal Administration. We also hypothesize that methods of semantic analysis could contribute greatly to this task.
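The nesting described in the abstract can be illustrated with a minimal sketch. The element names below (`title`, `chapter`, `article`, `eId`) follow the hierarchical-container convention of Akoma Ntoso, but the XML snippet itself is invented for illustration and omits the standard’s namespace and most of its element inventory; the quantitative retrieval in the abstract would of course operate on far richer real documents.

```python
# Sketch: traversing an Akoma Ntoso-style nested hierarchy with the
# Python standard library. The snippet is a hypothetical, simplified
# fragment, not a real decree from the Official Compilation.
import xml.etree.ElementTree as ET

SNIPPET = """
<act>
  <body>
    <title eId="tit_1">
      <chapter eId="chp_1">
        <article eId="art_1"><num>Art. 1</num></article>
        <article eId="art_2"><num>Art. 2</num></article>
      </chapter>
      <chapter eId="chp_2">
        <article eId="art_3"><num>Art. 3</num></article>
      </chapter>
    </title>
  </body>
</act>
"""

def articles_per_chapter(xml_text):
    """Count <article> descendants, grouped by the enclosing chapter's eId."""
    root = ET.fromstring(xml_text)
    counts = {}
    for chapter in root.iter("chapter"):
        counts[chapter.get("eId")] = len(chapter.findall(".//article"))
    return counts

print(articles_per_chapter(SNIPPET))  # {'chp_1': 2, 'chp_2': 1}
```

The same traversal pattern, applied to structural units tagged with dates and domain identifiers, is what would underlie both the per-domain counts and the reconstruction of a decree’s state at a given date mentioned above.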