Workshop on the results of the SPARK project of the Swiss National Science Foundation (SNSF): Dynamic Data Ingestion (DDI): Server-side data harmonization in historical research. A centralized approach to networking and providing interoperable research data to answer specific scientific questions.

The workshop will take place in four sessions via Zoom. In sessions 1 and 2, participants will be introduced to the functions of the virtual research environment Nodegoat (VRE), create a data model, and import a data sample, which they will use for the data ingestion exercises in sessions 3 and 4. At the end, each participant will have a working VRE that can be used for further research or in teaching. It is highly recommended to attend all four sessions.

The workshop is primarily aimed at members of the Phil.-Hist. faculty of the University of Bern, but is generally open to other interested parties on planet earth. The Zoom link to the workshop will be sent to participants after registration. The workshop will be led by the Nodegoat developers Pim van Bree and Geert Kessels (LAB1100), together with Kaspar Gubler, Institute of History, University of Bern.
Members of the Phil.-Hist. faculty of the University of Bern can apply for a VRE free of charge at the following link: https://www.dh.unibe.ch/dienstleistungen/nodegoat_go/index_ger.html Other participants can obtain a VRE at nodegoat.net, or get the Nodegoat open-source version on GitHub: https://github.com/nodegoat/nodegoat
The workshops always take place on Wednesdays from 2 – 5 pm. The workshops are recorded and can therefore be re-watched if a session cannot be attended.
Dates: 28.04.2021 / 05.05.2021 / 12.05.2021 / 26.05.2021
Registration for the workshop by 25.04.2021 to: firstname.lastname@example.org
Session 1: Data Modelling (people and books)
In session 1 we get to know the central functions of Nodegoat (NG). Since NG is managed via the web browser, no additional software needs to be installed, and it can be used from any location. With NG, researchers can create research projects and manage, analyze, visualize, and publish research data on the Internet and share it with other researchers, all without special programming skills. We will create our first data model, which we will fill with data in the next sessions. As we will see, NG is not a rigid "boutique solution" that fits only a specific question or data model: students and researchers can use NG to create custom data models based on their specific questions.
Session 2: Importing Data (including a VIAF id for each person)
In session 2, we will import our first data sample together with an identifier (VIAF) for each person. We will thus work with a prosopographically oriented data model, but one that can easily be extended for other research questions.
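To give an idea of what such a data sample might look like, the following sketch builds a minimal CSV with one row per person and a VIAF identifier column. The column names and values are illustrative assumptions, not a prescribed Nodegoat import format.

```python
import csv
import io

# Illustrative sample: one row per person, with a VIAF identifier.
# Names and identifiers below are placeholders, not real records.
rows = [
    {"name": "Person A", "viaf_id": "123456789"},
    {"name": "Person B", "viaf_id": "987654321"},
]

# Write the sample to an in-memory CSV, as one might prepare a file
# for import into the VRE.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "viaf_id"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```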
Session 3: Ingesting Biographical Data (like other IDs, or birth/death)
In session 3, we will learn about the principles of "Dynamic Data Ingestion". There are numerous data sources on the Internet, whether for research or for the interested public. What types of data sources are there? And what about data quality? We will first explore these questions before connecting our Nodegoat environment to a typical data source via an interface (API) and importing the first test data. These data can be further identifiers for the persons or biographical information such as dates of birth and death.
Session 4: Ingesting Related Data (like published books of people)
In session 4, we will enrich the data on the persons and check what other data is available on the Internet, for example publications, which we can add and, if full texts are available, also analyze in Nodegoat. Finally, we will look at the data harmonization capabilities of Nodegoat.