CO2 emissions and ancillary data for 343 cities from diverse sources

Creators: Nangini, Cathy; Peregon, Anna; Ciais, Philippe; Weddige, Ulf; Vogel, Felix; Wang, Jun; Bréon, François-Marie; Bachra, Simeran; Wang, Yilong; Gurney, Kevin; Yamagata, Yoshiki; Appleby, Kyra; Telahoun, Sara; Canadell, Josep G; Grübler, Arnulf; Dhakal, Shobhakar; Creutzig, Felix
Publication Date: 2019

This dataset collects anthropogenic carbon dioxide emissions data, supplemented with various socio-economic and environmental factors, for 343 cities worldwide. The core table, of dimensions 343 × 179, combines CO2 emissions reported to CDP (187 cities, few in developing countries), to the Bonn Center for Local Climate Action and Reporting (73 cities, mainly in developing countries), and data collected by Peking University (83 cities in China). In addition, a set of socio-economic variables, called ancillary data, was collected from other datasets (e.g. socio-economic and traffic indices) or calculated (climate indices, urban area expansion), then combined with the emission data. The remaining attributes are descriptive (e.g. city name, country) or related to quality assurance/control checks. The file size is 1.8 MB, and the majority (88%) of the cities reported emissions between 2010 and 2015. Structurally, the dataset contains:

  • City Identification: Each entry includes city name, country, and other descriptive attributes.
  • CO₂ Emissions Data: Reported emissions with quality assurance/control checks.
  • Ancillary Variables: Socio-economic data, traffic indices, climate indices, urban area expansion metrics, and more.

Please open the file using Tab as the separator and " as the text delimiter.
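
For example, a minimal loading sketch in Python with pandas, assuming the downloaded file is named co2_cities.tsv (the actual file name may differ):

    import pandas as pd

    # Tab-separated values with double quotes as the text delimiter, as noted above.
    # "co2_cities.tsv" is a placeholder name for the downloaded file.
    df = pd.read_csv("co2_cities.tsv", sep="\t", quotechar='"', encoding="utf-8")

    print(df.shape)          # expected to be roughly (343, 179)
    print(df.columns[:10])   # first few attribute names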

Third Eye Data: TV News Archive chyrons

Creators: TV News Archive
Publication Date: 2017

The Third Eye: TV News Archive Chyrons dataset captures and analyzes the “lower third” text, known as chyrons, displayed during live TV news broadcasts. This dataset provides a unique look into the real-time editorial choices of major news networks, offering insights into how different media outlets frame news stories. Using Optical Character Recognition (OCR) technology, chyrons are extracted and archived continuously, making it possible to track how key topics are covered over time.

At its inception in September 2017, the dataset collected chyrons from four major news networks: BBC News, CNN, Fox News, and MSNBC. Within just two weeks of its launch, over four million chyrons had already been captured, highlighting the vast amount of real-time data available. The dataset has been continuously updated since, allowing for longitudinal studies of media framing and news presentation trends. Its size is approximately 12.5 kB in TSV format.

The dataset is structured into several key components. Each chyron entry includes the following fields (a brief analysis sketch follows the list):

  • The exact chyron text, showing the wording used by the network.
  • Timestamps, allowing analysis of how frequently specific topics appear.
  • Channel identifiers, enabling comparisons between different networks.
  • Duration data, indicating how long a chyron remained on screen, which can suggest emphasis or prioritization of certain stories.
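
As a rough illustration, these fields can be used to compare how much screen time a topic receives on each network. The column names below (text, channel, duration) and the file name are assumptions; check the actual TSV header before running:

    import pandas as pd

    # Hypothetical file and column names; verify against the real TSV header.
    chyrons = pd.read_csv("chyrons.tsv", sep="\t")

    # Count chyrons mentioning a keyword, broken down by channel.
    keyword = "hurricane"
    mask = chyrons["text"].str.contains(keyword, case=False, na=False)
    print(chyrons[mask].groupby("channel").size())

    # Total on-screen duration the keyword received per channel.
    print(chyrons[mask].groupby("channel")["duration"].sum())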

By leveraging this dataset, researchers, journalists, and media analysts can examine bias in news presentation, media influence on public perception, and breaking news coverage trends. It serves as a powerful tool for studying news framing, editorial strategies, and the evolution of televised news narratives across competing networks.

One million comic book panels

Creators: Iyyer, Mohit; Manjunatha, Varun; Guha, Anupam; Vyas, Yogarshi; Boyd-Graber, Jordan; Daumé III, Hal; Davis, Larry
Publication Date: 2016

Visual narrative is often a combination of explicit information and judicious omissions, relying on the viewer to supply missing details. In comics, most movements in time and space are hidden in the “gutters” between panels. To follow the story, readers logically connect panels together by inferring unseen actions through a process called “closure”. While computers can now describe what is explicitly depicted in natural images, in this paper we examine whether they can understand the closure-driven narratives conveyed by stylized artwork and dialogue in comic book panels. We construct a dataset, COMICS, that consists of over 1.2 million panels (120 GB) paired with automatic textbox transcriptions. An in-depth analysis of COMICS demonstrates that neither text nor image alone can tell a comic book story, so a computer must understand both modalities to keep up with the plot. We introduce three cloze-style tasks that ask models to predict narrative and character-centric aspects of a panel given n preceding panels as context. Various deep neural architectures underperform human baselines on these tasks, suggesting that COMICS contains fundamental challenges for both vision and language.

Overall, the dataset is organized into three components (a small text-cloze sketch follows this list):
  • Panel Images: Each panel is stored as an image file, capturing the visual content of the comic scenes.

  • Textbox Transcriptions: Textual content from each panel is extracted using OCR, allowing for analysis of dialogues, narratives, and other textual elements.

  • Metadata: Additional information such as panel dimensions, position within the page, and associated comic book identifiers is included to facilitate detailed analyses.
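
A minimal sketch of how a text-cloze instance could be assembled from the transcriptions, assuming panels are available as an ordered list of OCR'd texts (the function and variable names are illustrative, not the dataset's actual file format):

    import random

    def make_text_cloze(panel_texts, context_size=3, num_distractors=2):
        """Build one text-cloze example: given the text of the preceding
        panels, pick the correct next-panel text among distractors."""
        context = panel_texts[:context_size]
        answer = panel_texts[context_size]

        # Distractors drawn from elsewhere in the same sequence (illustrative only).
        pool = [t for i, t in enumerate(panel_texts) if i != context_size]
        candidates = [answer] + random.sample(pool, num_distractors)
        random.shuffle(candidates)
        return {"context": context, "candidates": candidates, "answer": answer}

    # Toy example with made-up panel transcriptions.
    panels = ["Meanwhile, at the lab...", "We have to warn them!",
              "Too late. They're already here.", "Who's there?!",
              "It's only the wind."]
    print(make_text_cloze(panels))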

Twitch Livestreaming Interactions

Creators: Rappaz, Jérémie; McAuley, Julian; Aberer, Karl
Publication Date: 2021

This is a dataset of users consuming streaming content on Twitch. All streamers, and all users connected to their respective chats, were retrieved every 10 minutes over a 43-day period in July 2019, resulting in 6,148 time steps. The dataset is unique because it captures real-time interactions between users and streamers at high temporal resolution, allowing for detailed analysis of how audiences engage with live content. It is 6.47 GB in size.

Overall, it includes:

  • Users: 100k
  • Streamers (items): 162.6k
  • Interactions: 3M
  • Time steps: 6,148

Structurally, the dataset encompasses the following information (a loading sketch follows the list):

  • User ID: Anonymized identifier for each user.

  • Stream ID: Identifier for each streaming session.

  • Streamer Username: Name of the channel or streamer.

  • Time Start: The initial time step (in 10-minute intervals) when the user was observed in the chat.

  • Time Stop: The final time step (in 10-minute intervals) when the user was observed in the chat.
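
The 10-minute time steps can be turned into approximate watch durations. A minimal sketch, assuming the interactions are stored as a headerless CSV with the five fields above in that order (the file name is a placeholder):

    import pandas as pd

    # Assumed layout: one interaction per row, columns in the order listed above.
    cols = ["user_id", "stream_id", "streamer", "time_start", "time_stop"]
    df = pd.read_csv("twitch_interactions.csv", names=cols)  # placeholder file name

    # Each time step is a 10-minute interval, so the watch duration in minutes
    # is the number of intervals spanned times 10.
    df["minutes_watched"] = (df["time_stop"] - df["time_start"] + 1) * 10

    # Total minutes watched per streamer, top 10.
    print(df.groupby("streamer")["minutes_watched"].sum().nlargest(10))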

Social Recommendation Data

Creators: Cai, Chenwei; He, Ruining; McAuley, Julian; Zhao, Tong; King, Irwin
Publication Date: 2017

These datasets include ratings as well as social (or trust) relationships between users. The data come from LibraryThing (a book review website) and Epinions (a general consumer review site). The per-user ratings allow for detailed analysis of user preferences, and by capturing the social (or trust) relationships between users, the datasets enable the study of how social connections influence user behavior and recommendations. The collection is approximately 660 MB in size and includes:

Number of Observations:

  • LibraryThing:

    • Users: 73,882
    • Items: 337,561
    • Ratings: 979,053
    • Social Relations: 120,536
  • Epinions:

    • Users: 116,260
    • Items: 41,269
    • Ratings/Feedback: 181,394
    • Social Relations: 181,304

The dataset is structured into the following components (a usage sketch follows the list):

  • User Information: Anonymized user identifiers.

  • Item Information: Identifiers for items such as books or products.

  • Ratings/Feedback: User-provided ratings or feedback scores for items.

  • Social Relations: Mappings of social or trust relationships between users.
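
A small sketch of combining the two parts, assuming the ratings and trust edges have already been loaded as simple tuples (the loading step is omitted because the on-disk format differs between the two sources):

    from collections import defaultdict

    # Assumed in-memory form: (user, item, rating) and (user, trusted_user) tuples.
    ratings = [("u1", "book42", 4.0), ("u2", "book42", 5.0), ("u2", "book7", 3.0)]
    trust_edges = [("u1", "u2")]

    # Index ratings by user and build an adjacency list of trusted users.
    ratings_by_user = defaultdict(dict)
    for user, item, score in ratings:
        ratings_by_user[user][item] = score

    trusted = defaultdict(set)
    for user, friend in trust_edges:
        trusted[user].add(friend)

    def social_score(user, item):
        """Average rating of an item among the users this user trusts."""
        scores = [ratings_by_user[f][item] for f in trusted[user]
                  if item in ratings_by_user[f]]
        return sum(scores) / len(scores) if scores else None

    print(social_score("u1", "book42"))  # 5.0 with the toy data above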


Steam Video Game and Bundle Data

Creators: Kang, Wang-Cheng; McAuley, Julian; Pathak, Apurva; Gupta, Kshitiz
Publication Date: 2018

These datasets collect user interactions and metadata from the Steam platform, aimed at facilitating research in recommendation systems and user behavior analysis. They encompass a vast number of user reviews, capturing detailed feedback and engagement levels, and provide insights into game bundles, detailing which games are frequently purchased together, which aids the analysis of bundling strategies and their effectiveness. The data covers user reviews and interactions from October 2010 to January 2018, is approximately 1.4 GB in size, and includes:
  • Reviews: 7,793,069
  • Users: 2,567,538
  • Items: 15,474
  • Bundles: 615

Structurally, the dataset comprises several key components (a parsing sketch follows the list):

  • User Reviews: Each review entry includes the user ID, game ID, review text, and associated metadata such as timestamps and ratings.

  • Game Metadata: Information about each game, including game ID, title, genre, developer, and pricing details.

  • Bundle Details: Descriptions of game bundles, specifying the bundle ID, included games, and pricing information.
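
A hedged parsing sketch: collections like this are often distributed as gzipped text with one record per line, and depending on the file the records may be strict JSON or Python-style dict literals, so the helper below tries both. The file name and the product_id field are placeholders, not confirmed names from this dataset:

    import ast
    import gzip
    import json
    from collections import Counter

    def iter_records(path):
        """Yield one record per line, accepting strict JSON or Python-style
        dict literals; verify which format the actual files use."""
        with gzip.open(path, "rt", encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                try:
                    yield json.loads(line)
                except json.JSONDecodeError:
                    yield ast.literal_eval(line)

    # Count reviews per item as a simple example (placeholder file/field names).
    review_counts = Counter()
    for record in iter_records("steam_reviews.json.gz"):
        review_counts[record.get("product_id", "unknown")] += 1
    print(review_counts.most_common(5))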


EndoMondo Fitness Tracking Data

Creators: Ni, Jianmo; Muhlstein, Larry; McAuley, Julian
Publication Date: 2019

This is a collection of workout logs from users of EndoMondo. It contains sequential sensor data such as GPS coordinates (latitude, longitude, altitude), heart rate measurements, speed, and distance, making it valuable for studying workout patterns, performance tracking, and personalized fitness recommendations. Additionally, it includes user metadata such as anonymized user IDs, gender, and sport type, along with contextual factors like weather conditions. The dataset has a size of approximately 2.9 GB and consists of 1,104 users with 253,020 recorded workouts.

The dataset comprises the following components (a summarization sketch follows the list):

  • User Information: Anonymized user identifiers and gender.

  • Workout Details: Each workout log includes sport type, sequential data for GPS coordinates (latitude, longitude, altitude) with timestamps, heart rate measurements, and derived metrics such as speed and distance.
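
A small sketch of summarizing one workout record, assuming each workout is stored as a dictionary with parallel lists for the sensor streams (the field names such as heart_rate and speed are assumptions about the layout):

    # Toy workout record; the real data holds much longer sequences.
    workout = {
        "userId": 1001,
        "sport": "run",
        "gender": "male",
        "timestamp": [0, 10, 20, 30],           # seconds since start (illustrative)
        "heart_rate": [110, 132, 140, 138],     # beats per minute
        "speed": [9.5, 10.2, 10.8, 10.4],       # km/h
        "distance": [0.0, 0.03, 0.06, 0.09],    # cumulative km
    }

    def summarize(w):
        """Compute simple per-workout statistics from the sequential streams."""
        avg_hr = sum(w["heart_rate"]) / len(w["heart_rate"])
        avg_speed = sum(w["speed"]) / len(w["speed"])
        total_km = w["distance"][-1] - w["distance"][0]
        return {"sport": w["sport"], "avg_hr": round(avg_hr, 1),
                "avg_speed_kmh": round(avg_speed, 1), "distance_km": total_km}

    print(summarize(workout))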

Pinterest Fashion Compatibility

Creators: Kang, Wang-Cheng; Kim, Eric; Leskovec, Jure; Rosenberg, Charles; McAuley, Julian
Publication Date: 2019

This dataset is a structured collection of images and metadata designed to study the compatibility of fashion products within real-world scenes. It enables detailed analysis of how fashion items appear in different settings and supports applications in machine learning, recommendation systems, and virtual styling tools. One of its key features is the scene-product pairing, where fashion items in real-world images are annotated with bounding boxes and linked to corresponding product images. In total, the dataset includes 47,739 scene images, 38,111 product images, and 93,274 scene-product pairs, making it a comprehensive resource for fashion compatibility research.

The dataset is approximately 29 MB in size and includes the following (a small usage sketch follows the list):

  • Scenes: 47,739
  • Products: 38,111
  • Scene-Product Pairs: 93,274
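
A minimal sketch of using one scene-product pair with its bounding box to crop the annotated product region out of a scene image. The file paths, field names, and (left, top, right, bottom) box format are assumptions; Pillow is used for image handling:

    from PIL import Image

    # Assumed pair record: scene image path, product image path, and the
    # bounding box of the product inside the scene as (left, top, right, bottom).
    pair = {
        "scene": "scenes/000123.jpg",          # placeholder path
        "product": "products/987654.jpg",      # placeholder path
        "bbox": (120, 80, 310, 400),
    }

    scene = Image.open(pair["scene"])
    crop = scene.crop(pair["bbox"])            # region containing the fashion item
    product = Image.open(pair["product"])

    # The cropped region and the catalog product image can now be compared,
    # e.g. by an embedding model, to study scene-product compatibility.
    crop.save("scene_crop.jpg")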
