---
license: mit
task_categories:
  - question-answering
  - visual-question-answering
size_categories:
  - 1K<n<10K
dataset_info:
  features:
    - name: images
      sequence: image
    - name: question
      dtype: string
    - name: choices
      sequence: string
    - name: answer_idx
      dtype: int32
    - name: datatype
      dtype: string
    - name: house_ind
      dtype: int32
    - name: cam_position
      sequence:
        sequence: float32
    - name: cam_rotation
      sequence: float32
    - name: image_reason
      sequence: image
  splits:
    - name: val
      num_bytes: 11647657977.101
      num_examples: 6527
  download_size: 343936818
  dataset_size: 11647657977.101
configs:
  - config_name: default
    data_files:
      - split: val
        path: data/val-*
---

# SAT_perspective Dataset

## Paper

[SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models](https://arxiv.org/abs/2412.07755)

This dataset is part of the SAT (Spatial Aptitude Training) project, which introduces a dynamic benchmark for evaluating and improving spatial reasoning capabilities in multimodal language models.

## Dataset Description

The SAT_perspective dataset contains 6,527 spatial reasoning questions that test perspective-taking abilities. Each question presents a scene and asks about spatial relationships from a new viewpoint, requiring models to reason about how objects would appear from different camera positions.

## Loading the Dataset

```python
from datasets import load_dataset

# Load the validation split
dataset = load_dataset("array/SAT_perspective", split="val")

# Access a sample
sample = dataset[0]
print(sample["question"])
print(sample["choices"])
```

## Dataset Structure

Each example in the dataset contains the following fields:

- `images`: List of input images showing the original scene (PIL Image objects)
- `question`: Text question asking about spatial relationships from a new perspective
- `choices`: List of possible answers (typically 2 options)
- `answer_idx`: Index of the correct answer in the `choices` list (integer)
- `datatype`: Type of spatial reasoning task (value: `"perspective"`)
- `house_ind`: House/scene identifier (integer)
- `cam_position`: Camera position coordinates as 3D float arrays
- `cam_rotation`: Camera rotation values as a float array
- `image_reason`: Rendered image from the new perspective that the question asks about. This provides the ground-truth visualization of what the scene looks like from the target viewpoint.
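
As a quick illustration of how these fields fit together, the sketch below loads one example, assembles a simple multiple-choice prompt from `question` and `choices`, and looks up the correct answer via `answer_idx`. The prompt layout is only an illustrative assumption, not the official SAT evaluation format:

```python
from datasets import load_dataset

dataset = load_dataset("array/SAT_perspective", split="val")
sample = dataset[0]

# Assemble a simple multiple-choice prompt (layout is illustrative only)
prompt = sample["question"] + "\n" + "\n".join(
    f"({i}) {choice}" for i, choice in enumerate(sample["choices"])
)
correct = sample["choices"][sample["answer_idx"]]

print(prompt)
print("Correct answer:", correct)
print("Scene:", sample["house_ind"], "| camera positions:", sample["cam_position"])
```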

## Example

```python
{
    "images": [<PIL.Image.Image>],  # Original view
    "question": "If I go to the 'X' marked point in the image and turned left by 90 degrees, will the Chair get closer or further away?",
    "choices": ["Closer", "Further"],
    "answer_idx": 0,
    "datatype": "perspective",
    "house_ind": 0,
    "cam_position": [[2.75, 0.9009997844696045, 6.25], [3.75, 0.9009997844696045, 6.75]],
    "cam_rotation": [96.0, 6.0],
    "image_reason": [<PIL.Image.Image>]  # View from new perspective
}
```
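
To compare what the model is given with what the target viewpoint actually looks like, you can save the input view(s) alongside the ground-truth render in `image_reason`. A small sketch reusing the `dataset` object loaded above; the output filenames are arbitrary:

```python
sample = dataset[0]

# Save the original view(s) and the ground-truth render from the new
# viewpoint for side-by-side inspection (filenames are arbitrary)
for i, img in enumerate(sample["images"]):
    img.save(f"original_view_{i}.png")

for i, img in enumerate(sample["image_reason"]):
    img.save(f"target_view_{i}.png")
```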

## Citation

If you use this dataset, please cite:

```bibtex
@misc{ray2025satdynamicspatialaptitude,
      title={SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models},
      author={Arijit Ray and Jiafei Duan and Ellis Brown and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko},
      year={2025},
      eprint={2412.07755},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.07755},
}
```