Digitization and Computational Analytics (DCA)
The Campuslab "Digitization and Computational Analytics" (German: "Digitalisierung und computergestützte Analytik") is a cross-institutional and inter-departmental unit dedicated to the development of innovative digital and data-analytic methods in the Humanities, Social Sciences and beyond. It aims to bridge the gap between the Humanities and Social Sciences on the one hand and Computer Science and the Natural Sciences on the other hand. The Campuslab is coordinated by the GCDH.
Speaker: Prof. Caroline Sporleder
Administrative Support: Bettina Brandt
Joint Postdoc SUB/Campuslab: Dr. Piroska Lendvai
Pilot Phase (2016-2017)
In its pilot phase, the Campuslab helped to kick-start digital methods research across a broad spectrum of disciplines in the Humanities and Social Sciences through 10 pilot projects, ranging from sentiment analysis of political speeches and election manifestos, through geographical information systems for tracing the cultural development of historical spaces, to mining 3D models of archaeological artefacts. Four lab areas were established, covering Text, 3D, Space, and Visualisation.
Consolidation Phase (from 2018)
While the pilot phase was dedicated to establishing a broad basis of computational methods research in the Humanities and Social Sciences, the consolidation phase focuses on further strengthening cross-disciplinary collaboration within the Humanities and Social Sciences as well as on reaching out to the Natural and Computer Sciences. This is achieved through a variety of measures, including a fellowship programme that enables early-career researchers to spend some time in Göttingen in close collaboration with an interdisciplinary group of local scholars. In addition to the lab areas established in the pilot phase, the Campuslab will branch out into areas such as audio-video processing, digital image analysis, and network analysis, with methods ranging from speech recognition through computer vision to multi-spectral imaging.
The Campuslab organises a regular Brown Bag Lunch Meeting and occasional talks and workshops. To stay informed about the activities of the Campuslab, please subscribe to the mailing list at:
List of supported projects (lab number, project title, institution, people, abstract):
Project 1: Digitales Textlabor (Digital Text Lab)
Institution: Seminar für deutsche Philologie, FB Neuere deutsche Literatur
People: Prof. Dr. Heike Sahm, Dr. Berenike Herrmann
Abstract: The Text Lab implements methods for literary-scholarly markup of text corpora and digital historical editions. It uses gamification for the acquisition and quality control of literary markup in research-oriented university teaching. It conducts grammatical and fine-grained narratological and figurative analysis, identifies types of speech and thought representation, and investigates metaphor.
Project 2: IT-AFK ("IT-gestützte Analyse von Freitextaufgaben für die Lehre" / IT-supported analysis of free-text tasks for teaching)
Institution: Professur für Anwendungssysteme
People: Prof. Dr. Matthias Schumann, Janne Kleinhans (M.A.)
Abstract: The project automates essay scoring in the Business Education domain via text classification. It targeted corpus collection, interfacing with existing tools, and reusing labelled data for machine learning experiments.
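Essay scoring by text classification can be illustrated with a minimal sketch: a multinomial naive Bayes classifier trained on a handful of hypothetical labelled answers. The project's actual tools, corpora, and label set are not specified here, so the data and names below are illustrative assumptions, not the project's implementation.

```python
import math
from collections import Counter, defaultdict

def train_nb(labeled_answers):
    """Train a multinomial naive Bayes scorer from (text, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    vocab = set()
    for text, label in labeled_answers:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def score(text, model):
    """Return the most probable label for a free-text answer."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_logp = None, float("-inf")
    for label in label_counts:
        logp = math.log(label_counts[label] / total)
        n = sum(word_counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero out the score
            logp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

# Invented training answers for a hypothetical economics question.
train = [
    ("supply and demand determine the market price", "pass"),
    ("the equilibrium price balances supply and demand", "pass"),
    ("i do not know the answer", "fail"),
    ("no idea about this topic", "fail"),
]
model = train_nb(train)
```

In practice the project would replace the toy word counts with features from its collected corpus, but the scoring logic stays the same.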
Project 3: 3D-Digitalisierungslabor (3D Digitization Lab)
Institution: Archäologisches Institut
People: Prof. Dr. Martin Langner
Abstract: The lab addresses the digitization of physical objects and the digital reconstruction of collection items. Its method field is visualization; the targeted DH domains are Cultural Heritage and Image Science. Its work comprises:
- methods for pattern recognition
- shape comparison of artefacts (ancient portraiture, Greek terracotta figurines, medieval seals, plaster casts)
- shape analysis of artefacts
- an envisioned method for object mining
- creating a 3D repository for all collections on campus
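Shape comparison of 3D artefacts is often done by reducing each model to a compact descriptor and comparing descriptors instead of raw meshes. One classic, simple choice is the D2 shape distribution: a histogram of distances between randomly sampled surface points. The sketch below uses synthetic point clouds in place of real scan data; the lab's actual pipeline is not specified in the source, so every name and parameter here is illustrative.

```python
import math
import random

def d2_histogram(points, n_pairs=2000, n_bins=10, max_dist=2.0, seed=0):
    """D2 shape descriptor: normalised histogram of distances
    between randomly sampled point pairs."""
    rng = random.Random(seed)
    hist = [0] * n_bins
    for _ in range(n_pairs):
        p, q = rng.sample(points, 2)
        d = math.dist(p, q)
        hist[min(int(d / max_dist * n_bins), n_bins - 1)] += 1
    return [h / n_pairs for h in hist]

def histogram_distance(h1, h2):
    """L1 distance between two descriptors; smaller = more similar shapes."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Synthetic 'artefacts': two samples of a unit cube and one flat slab.
rng = random.Random(42)
cube_a = [(rng.random(), rng.random(), rng.random()) for _ in range(300)]
cube_b = [(rng.random(), rng.random(), rng.random()) for _ in range(300)]
slab = [(rng.random(), rng.random(), 0.05 * rng.random()) for _ in range(300)]

same = histogram_distance(d2_histogram(cube_a), d2_histogram(cube_b))
diff = histogram_distance(d2_histogram(cube_a), d2_histogram(slab))
# Two samples of the same shape end up closer than cube vs. slab.
```

Because the descriptor is pose-invariant by construction (it only uses point-to-point distances), it tolerates the arbitrary orientations typical of scanned collection items.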
Project 4: TrAIN
Institution: Institut für Informatik
People: Dr. Marco Büchler
Abstract: TrAIN conducts research on two essential digital-transformation processes, Optical Character Recognition (OCR) and Handwritten Text Recognition (HTR), applied to historical data. The project focuses on the letter collection of the Grimm Brothers. TrAIN compares the output of HTR on the original letters with that of OCR on the printed edition and investigates two common scholarly tasks: text reuse detection and authorship attribution. Text reuse algorithms are employed to align the OCR output with the HTR output, and authorship attribution techniques to identify the stylistic markers of the Grimm Brothers.
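Aligning the HTR and OCR outputs is at heart a sequence-alignment problem. A minimal sketch with Python's standard difflib shows the idea; the letter text and the recognition errors below are invented for illustration and are not taken from the Grimm corpus.

```python
import difflib

# Invented noisy recognition results for the same hypothetical passage.
htr_text = "Lieber Bruder ich habe deinen Brief gestern erhalten"  # manuscript
ocr_text = "Lieber Brudcr ich habe deinen Bricf gestern erhalten"  # printed edition

matcher = difflib.SequenceMatcher(None, htr_text.split(), ocr_text.split())
similarity = matcher.ratio()  # token-level similarity in [0, 1]

# Matching blocks give the aligned stretches shared by both outputs;
# the gaps between them localise the recognition disagreements.
aligned = [" ".join(htr_text.split()[b.a:b.a + b.size])
           for b in matcher.get_matching_blocks() if b.size > 0]
```

A real text reuse pipeline would work on longer n-gram fingerprints rather than single tokens, but the block structure returned here is exactly what lets the two noisy transcriptions be compared line by line.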
Project 5: KOLIMO
Institution: Seminar für deutsche Philologie, FB Neuere deutsche Literatur
People: Prof. Dr. Gerhard Lauer, Dr. Berenike Herrmann
Abstract: KOLIMO is developing a large literary collection to verify hypotheses of textual scholarship, identify and assess quantitative indicators of style, and enable synchronic and diachronic comparison of epochs, movements, and authorship. The main tasks in creating this new benchmark resource are corpus cleanup, preprocessing, grammatical analysis and labelling, standardized metadata creation, acquisition, mapping and labelling, standardized markup, quality control, and online presentation.
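Quantitative indicators of style can be as simple as surface statistics computed per text. A toy sketch of two common ones, type-token ratio and mean sentence length, is shown below over an arbitrary snippet; KOLIMO's actual feature set is not specified in the source, so this is only an assumed example of the kind of indicator meant.

```python
import re

def style_indicators(text: str) -> dict:
    """Two simple surface-level style indicators used in stylometry."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"\w+", text.lower())
    return {
        # lexical diversity: distinct words / total words
        "type_token_ratio": len(set(tokens)) / len(tokens),
        # rough proxy for syntactic complexity
        "mean_sentence_length": len(tokens) / len(sentences),
    }

res = style_indicators("Der Hund lief. Der Hund sprang. Der Hund schlief.")
```

Computed corpus-wide, such indicators make epochs, movements, and authors directly comparable on the same numeric scale.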
Project 6: Datarama
Institution: Max-Planck-Institut zur Erforschung multireligiöser und multiethnischer Gesellschaften
People: Dr. Norbert Winnige, Prof. Dr. Steven Vertovec
Abstract: The DATARAMA is a research and presentation tool that offers a novel solution to presentation challenges across a wide range of disciplines. It is an immersive projection environment with interactive selection, management, and handling of multiple types and sources of data. Two introductory videos present the DATARAMA and its set of distinctive visual data solutions for a range of scientific applications, addressing the complexities and challenges of modern data workflows.
Project 7: Digital Publishing of the Liber Ordinarius
Institution: Musikwissenschaftliches Seminar
People: Prof. Dr. Andreas Waczkat
Abstract: The project explores digital means for the transcription, presentation, and evaluation of the text and its sources. Its focus is on testing the usability of existing software for these purposes. The project experiments with a combination of specialized tools for the different stages of preparing a source text edition and also tries to adapt tools designed for non-research purposes, especially plagiarism detectors, alongside highly specialist software such as Transkribus.
Project 8: PoliLab
Institution: Institut für Politikwissenschaft
People: Prof. Dr. Andreas Busch
Abstract: PoliLab targets the computer-assisted analysis of political discourse such as Chancellor's speeches and party programmes. Its methods include text categorization, topic and sentiment analysis, and the tracking of content changes over time.
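Lexicon-based scoring is one of the simplest forms of sentiment analysis applied to political texts and can be sketched in a few lines. The word lists below are illustrative placeholders, not a validated political-sentiment lexicon, and the function name is an assumption for this sketch.

```python
# Illustrative placeholder lexicons; a real study would use a curated,
# domain-validated sentiment dictionary.
POSITIVE = {"growth", "success", "strong", "progress", "secure"}
NEGATIVE = {"crisis", "failure", "weak", "decline", "threat"}

def sentiment_score(speech: str) -> float:
    """Return (positive - negative) hits per token; > 0 leans positive."""
    tokens = [t.strip(".,;:!?").lower() for t in speech.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(len(tokens), 1)
```

Scoring each speech or manifesto in a dated corpus this way yields a time series, which is one straightforward route to the tracking of content changes over time that the project describes.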
Project 9: Die Erschließung des Staatsgebietes
Institution: Institut für historische Landesforschung
People: Prof. Dr. Arndt Reitemeier, Dr. Niels Petersen
Abstract: Die Erschließung des Staatsgebiets: Chausseebau in Nordwestdeutschland 1764-1843 ("Opening up the state territory: chaussee construction in north-west Germany, 1764-1843").
Project 10: LingLab
Institution: Seminar für deutsche Philologie, with its linguistically oriented departments
People: Prof. Dr. Anke Holler, Prof. Dr. Marco Coniglio, Dr. Annika Herrmann
Abstract: LingLab aims to create an innovative collaboration platform that concentrates the workflow of empirically working linguists in one system and provides a tool to ease research data management and data publication. With LingLab, research projects can be publicly accessed at a very early stage of the research process. By giving access to an automatically created data paper and the relevant material, linguistic data can easily be replicated and reused by other interested researchers. This ensures the sustainability of research materials and invites researchers to intensify networking and collaboration.