Resources by Stefan

Goodreads-books

Creators: Zając, Zygmunt
Publication Date: 2019

This dataset was created to meet the need for a good, clean dataset of books. It contains important features such as book titles, authors, average ratings, ISBN identifiers, language codes, number of pages, ratings count, text reviews count, publication dates, and publishers. A distinctive aspect of this dataset is its ability to support a wide range of book-related analyses, such as trends in book popularity, author influence, and reader preferences. The dataset is 1.56 MB in size and was scraped via the Goodreads API. It encompasses over 10,000 observations, each representing a unique book entry with multiple attributes. The structure is straightforward: a single CSV file with the following key columns (a short loading sketch follows the list):

  • bookID: A unique identification number for each book.
  • title: The official title of the book.
  • authors: Names of the authors, with multiple authors separated by a delimiter.
  • average_rating: The average user rating for the book.
  • isbn & isbn13: The 10-digit and 13-digit International Standard Book Numbers, respectively.
  • language_code: The primary language in which the book is published (e.g., ‘eng’ for English).
  • num_pages: The total number of pages in the book.
  • ratings_count: The total number of ratings the book has received from users.
  • text_reviews_count: The total number of text reviews written by users.
  • publication_date: The original publication date of the book.
  • publisher: The name of the publishing house.
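
A minimal loading sketch in Python, assuming the CSV is named books.csv (the file name in the actual download may differ):

```python
import pandas as pd

# Load the Goodreads books CSV; "books.csv" is an assumed file name.
# on_bad_lines="skip" guards against the few malformed rows (stray commas)
# known to appear in some copies of this dataset.
books = pd.read_csv("books.csv", on_bad_lines="skip")

# Quick sanity check on the key columns described above.
print(books.shape)
print(books[["title", "authors", "average_rating", "ratings_count"]].head())

# Example analysis: the ten most-rated books and their average ratings.
top = books.sort_values("ratings_count", ascending=False).head(10)
print(top[["title", "authors", "average_rating", "ratings_count"]])
```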

COVID-19 Twitter Chatter Dataset

Creators: Banda, Juan M.; Tekumalla, Ramya; Wang, Guanyu; Yu, Jingyuan; Liu, Tuo; Ding, Yuning; Artemova, Katya; Tutubalina, Elena; Chowell, Gerardo
Publication Date: 2024

Due to the relevance of the COVID-19 global pandemic, we are releasing our dataset of tweets acquired from the Twitter Stream related to COVID-19 chatter. Since our first release we have received additional data from our new collaborators, allowing this resource to grow to its current size. Dedicated data gathering started on March 11th, yielding over 4 million tweets a day. We have added additional data provided by our new collaborators from January 27th to March 27th to provide extra longitudinal coverage. Version 10 added ~1.5 million tweets in the Russian language collected between January 1st and May 8th, graciously provided to us by Katya Artemova (NRU HSE) and Elena Tutubalina (KFU). From version 12 we have included daily hashtags, mentions, and emojis with their frequencies in the respective zip files. From version 14 we have included the tweet identifiers and their respective language for the clean version of the dataset. Since version 20 we have included language and place location for all tweets. The dataset is 14.2 GB in size.
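
A minimal sketch for working with the clean tweet identifiers described above; the file name (full_dataset_clean.tsv) and column names (tweet_id, lang) are assumptions, so check them against the release you download:

```python
import pandas as pd

# Read the clean tweet identifiers and their language tags.
# File and column names are assumptions; adjust to the actual release.
ids = pd.read_csv(
    "full_dataset_clean.tsv",
    sep="\t",
    usecols=["tweet_id", "lang"],
    dtype={"tweet_id": str},
)

# Distribution of tweet languages in the clean subset.
print(ids["lang"].value_counts().head(10))

# Twitter's terms require re-hydrating identifiers before analyzing tweet
# text; write one ID per line for a hydration tool such as twarc2.
ids["tweet_id"].to_csv("tweet_ids.txt", index=False, header=False)
```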

Social capital I: measurement and associations with economic mobility

Creators: Chetty, Raj; Jackson, Matthew O.; Kuchler, Theresa; Stroebel, Johannes; Hiller, Abigail; Oppenheimer, Sarah
Publication Date: 2022

Social capital – the strength of our relationships and communities – has been shown to play an important role in outcomes ranging from income to health. This dataset provides a detailed analysis of social capital across various U.S. communities, focusing on its impact on economic mobility. Using privacy-protected data on 21 billion friendships from Facebook, we measure three types of social capital in each neighborhood, high school, and college in the United States:

  • Cohesiveness: the degree to which social networks are fragmented into cliques
  • Economic connectedness: the degree to which low-income and high-income people are friends with each other
  • Civic engagement: rates of volunteering and participation in community organizations

The dataset is approximately 8 MB in size and structured into different geographical levels, including ZIP codes, high schools, and colleges across the United States. Each entry details the three key measures of social capital—economic connectedness, cohesiveness, and civic engagement—allowing for targeted analysis at various community levels.
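
A minimal analysis sketch at the ZIP-code level; the file name (social_capital_zip.csv) and column names (ec_zip, clustering_zip, volunteering_rate_zip) are assumptions about the released CSVs and may differ in the actual download:

```python
import pandas as pd

# ZIP-code-level table; file and column names below are assumptions.
zips = pd.read_csv("social_capital_zip.csv")

measures = [
    "ec_zip",                  # economic connectedness
    "clustering_zip",          # cohesiveness (how cliquish networks are)
    "volunteering_rate_zip",   # civic engagement proxy
]

# How the three social-capital measures relate across ZIP codes.
print(zips[measures].corr())

# ZIP codes with the highest economic connectedness.
print(zips.sort_values("ec_zip", ascending=False).head(10))
```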

 

Facebook Privacy-Protected Full URLs Data Set

Creators: Messing, Solomon; DeGregorio, Christina; Hillenbrand, Bennett; King, Gary; Mahanti, Saurav; Mukerjee, Zagreb; Nayak, Chaya; Persily, Nate; State, Bogdan; Wilkins, Arjun
Publication Date: 2020

This is a codebook for data on the demographics of people who viewed, shared, and otherwise interacted with web pages (URLs) shared on Facebook between January 1, 2017 and October 31, 2022. The data cover about 68 million URLs, over 3.1 trillion rows, and over 71 trillion cell values. The dataset results from a collaboration between Facebook and Social Science One (at IQSS at Harvard); the codebook was originally prepared for Social Science One grantees and describes the scope, structure, and fields of the “full” URLs dataset. This is version 10 of the codebook and data (released 4/13/2023), first described by Gary King and Nathaniel Persily at https://socialscience.one/blog/update-social-science-one. The dataset’s structure is organized to facilitate detailed analysis: each entry corresponds to a unique URL and includes aggregated user interaction metrics, broken down by demographic dimensions such as age, gender, and country. For users in the United States, an additional political page affinity categorization offers insight into how different political leanings may influence content engagement.
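
To illustrate the long-format structure described in the codebook, here is a hedged sketch of how such aggregated rows might be summarized; every file and column name below is hypothetical, not an actual field name from the codebook:

```python
import pandas as pd

# Hypothetical extract of the aggregated URL-level data; all names here
# are illustrative placeholders, not the codebook's actual field names.
urls = pd.read_csv("urls_extract.csv")

# Total shares per URL across all demographic cells.
shares_per_url = urls.groupby("url_id")["share_count"].sum()
print(shares_per_url.sort_values(ascending=False).head(10))

# For US users, compare views across political page-affinity bins.
us = urls[urls["country"] == "US"]
print(us.groupby("political_page_affinity")["view_count"].sum())
```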

Video Game Sales

Creators: Smith, Gregory
Publication Date: 2016

This dataset contains a list of video games with sales greater than 100,000 copies. It was generated by a scrape of vgchartz.com. The dataset has a size of 1.36 MB and includes games released up to the year 2016, offering a historical perspective on video game sales over several decades. It allows for in-depth analysis of sales trends across different regions, platforms, and genres, making it a valuable resource for market analysis and strategic planning within the video game industry. Each entry in the dataset includes the following attributes (a short analysis sketch follows the list):

  • Rank: Overall sales ranking of the game.
  • Name: Title of the game.
  • Platform: The platform on which the game was released (e.g., PC, PS4).
  • Year: Year of the game’s release.
  • Genre: Genre classification of the game.
  • Publisher: Company that published the game.
  • NA_Sales: Sales figures in North America (in millions).
  • EU_Sales: Sales figures in Europe (in millions).
  • JP_Sales: Sales figures in Japan (in millions).
  • Other_Sales: Sales figures in the rest of the world (in millions).
  • Global_Sales: Total worldwide sales (in millions).
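
A minimal analysis sketch in Python, assuming the commonly used file name vgsales.csv:

```python
import pandas as pd

# Load the scraped sales table; "vgsales.csv" is an assumed file name.
sales = pd.read_csv("vgsales.csv")

# Total global sales (in millions of copies) per platform.
by_platform = (sales.groupby("Platform")["Global_Sales"]
                    .sum()
                    .sort_values(ascending=False))
print(by_platform.head(10))

# Regional breakdown for a single genre, e.g. role-playing games.
rpg = sales[sales["Genre"] == "Role-Playing"]
print(rpg[["NA_Sales", "EU_Sales", "JP_Sales", "Other_Sales"]].sum())
```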

World Happiness Report

Creators: Helliwell, John F.; Layard, Richard; Sachs, Jeffrey D.; De Neve, Jan-Emmanuel; Aknin, Lara B.; Wang, Shun
Publication Date: 2012

The happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors – economic production, social support, life expectancy, freedom, absence of corruption, and generosity – contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the world’s lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others. The dataset has a size of 80.86 kB.
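
The decomposition into factor contributions can be checked numerically. A minimal sketch, assuming column names from one common release of the report (e.g. “Happiness Score”, “Economy (GDP per Capita)”, “Dystopia Residual”); the exact labels vary by year:

```python
import pandas as pd

# Column and file names follow one common release of the report and are
# assumptions; the exact labels differ between report years.
whr = pd.read_csv("world_happiness.csv")

factors = [
    "Economy (GDP per Capita)",
    "Family",
    "Health (Life Expectancy)",
    "Freedom",
    "Trust (Government Corruption)",
    "Generosity",
    "Dystopia Residual",
]

# The factor columns explain, rather than determine, the score: their sum
# should roughly reproduce the reported happiness score for each country.
reconstructed = whr[factors].sum(axis=1)
print((whr["Happiness Score"] - reconstructed).abs().max())
```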

Popular Movies of TMDb

Creators: Mondal, Sankha Subhra
Publication Date: 2020

This dataset of the 10,000 most popular movies across the world was fetched through the TMDb API. TMDb’s free API lets developers programmatically fetch and use TMDb’s data. The API is free to use as long as you attribute TMDb as the source of the data and/or images; note that TMDb updates its API from time to time. The dataset is 3.2 MB in size and offers valuable insights into global cinematic trends and preferences.

Each movie entry in the dataset includes the following attributes (a short fetching sketch follows the list):

  • title: The name of the movie.
  • overview: A brief summary of the movie’s plot.
  • original_language: The language in which the movie was originally produced.
  • vote_average: The average user rating of the movie on TMDb.

goodbooks-10k

Creators: Zając, Zygmunt
Publication Date: 2017

The dataset contains six million ratings for the ten thousand most popular books (those with the most ratings). It offers a rich resource for analyzing reading habits, book popularity, and user engagement within the literary community. There are also books marked “to read” by the users, book metadata (author, year, etc.), and tags/shelves/genres.

ratings contains ratings sorted by time. Ratings go from one to five. Both book IDs and user IDs are contiguous. For books, they are 1-10000, for users, 1-53424.

to_read provides IDs of the books marked “to read” by each user, as user_id,book_id pairs, sorted by time. There are close to a million pairs.

books has metadata for each book (goodreads IDs, authors, title, average rating, etc.). The metadata have been extracted from goodreads XML files.

book_tags contains tags/shelves/genres assigned by users to books. Tags in this file are represented by their IDs. They are sorted by goodreads_book_id ascending and count descending.

The dataset is 68.8 MB in size.
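
A minimal joining sketch; the file names ratings.csv and books.csv follow the published goodbooks-10k layout, but verify them against your download:

```python
import pandas as pd

# File names follow the published goodbooks-10k layout.
ratings = pd.read_csv("ratings.csv")   # user_id, book_id, rating
books = pd.read_csv("books.csv")       # per-book metadata keyed by book_id

# Number of ratings and mean rating per book, joined with titles.
stats = (ratings.groupby("book_id")["rating"]
                .agg(n_ratings="count", mean_rating="mean")
                .reset_index()
                .merge(books[["book_id", "title", "authors"]], on="book_id"))

# The most-rated books among the six million ratings.
print(stats.sort_values("n_ratings", ascending=False).head(10))
```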
