It is more challenging than ever for researchers, regardless of career stage or field, to stay on top of the latest publications and other developments in their areas of research. One of the main challenges is the exponential growth of publications: in nearly all fields, more work is published every year than any one researcher, or research team, can possibly read and synthesize. This is especially difficult for interdisciplinary researchers, junior researchers, and researchers moving into new areas, because it takes a long time to get the lay of the land in an unfamiliar literature. As a result, research and discovery are harder than they need to be for teams and individual researchers, and the development and communication of collective knowledge are held back.
Talking to colleagues and mentors, reading the latest articles in the top-ranked journals, going to conferences, and building diverse research teams are all indispensable strategies for keeping on top of the literature and for discovering and synthesizing new knowledge. However, these strategies are often costly and slow, and they are generally biased, though not always in negative ways.
This workshop will cover another set of tools, drawn from network science, text mining, and scientometrics, that can help us rapidly get up to speed on the state of knowledge in a field and mine existing knowledge to identify promising areas for discovery and further research.
This one-day workshop offers a practical introduction to fundamentals and recent developments in automated content analysis. The workshop is designed with social scientists in mind, but participants from other fields (including digital humanities) are also welcome. We assume that participants have little to no prior experience with methods for automated content analysis.
In this two-day workshop, you will learn a variety of tools for analyzing social and information network data using the programming language R. The first day will introduce R and RStudio, followed by classic topics in network analysis such as centrality analysis and community detection. The second day will cover visualization, handling extremely dense networks, and developing statistical models for network data.
The workshop sessions will combine short lectures and demonstrations with hands-on time analyzing your own (or our) network data.
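To give a flavor of the centrality analysis mentioned above: degree centrality, one of the simplest measures, is a node's number of connections divided by the maximum possible number of connections. The workshop itself uses R, so the following is only an illustrative, dependency-free Python sketch on a hypothetical five-node network (the node names and edges are made up for this example):

```python
from collections import defaultdict

# Hypothetical toy network: an undirected edge list (invented for illustration).
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]

# Build an adjacency list, adding each edge in both directions.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Degree centrality: a node's degree divided by (n - 1), the largest degree possible.
n = len(adj)
centrality = {node: len(neighbors) / (n - 1) for node, neighbors in adj.items()}

print(max(centrality, key=centrality.get))  # prints "C"
```

Here node C scores highest (0.75) because it bridges the A-B-C cluster and the D-E pair, which is exactly the kind of structural insight centrality measures are designed to surface.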