---
dataset_name: transformers_code_embeddings
license: apache-2.0
language: code
tags:
- embeddings
- transformers-internal
- similarity-search
---

# Transformers Code Embeddings

Compact index of function/class definitions from `src/transformers/models/**/modeling_*.py` for cross-model similarity. Built to help surface reusable code when modularizing models.

## Contents

- `embeddings.safetensors` — float32, L2-normalized embeddings shaped `[N, D]`.
- `code_index_map.json` — `{int_id: "relative/path/to/modeling_*.py:SymbolName"}`.
- `code_index_tokens.json` — `{identifier: [sorted_unique_tokens]}` for Jaccard similarity (see the sketch below).
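
The token lists provide a cheap lexical signal alongside the embeddings. A minimal sketch (`jaccard` is an illustrative helper, not shipped with the artifacts):

```python
def jaccard(a, b):
    """Jaccard similarity between two token lists from code_index_tokens.json."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0
```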

## How these were built

- Source: 🤗 Transformers repository, under `src/transformers/models`.
- Units: top-level `class`/`def` definitions.
- Preprocessing:
  - Strip docstrings, comments, and import lines.
  - Replace occurrences of model names and symbol prefixes with `Model`.
- Encoder: `Qwen/Qwen3-Embedding-4B` via `transformers` (mean pooling over tokens, then L2 normalization); see the sketch below.
- Output dtype: float32.
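
For reference, the encoding step is conceptually equivalent to the sketch below; the exact tokenization settings and batching live in the build script, and `encode` is a hypothetical helper:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-Embedding-4B")
model = AutoModel.from_pretrained("Qwen/Qwen3-Embedding-4B")

@torch.no_grad()
def encode(code: str) -> torch.Tensor:
    """Mean-pool the last hidden state over non-padding tokens, then L2-normalize."""
    inputs = tok(code, return_tensors="pt", truncation=True)
    hidden = model(**inputs).last_hidden_state             # (1, T, D)
    mask = inputs["attention_mask"].unsqueeze(-1)          # (1, T, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean over real tokens
    return torch.nn.functional.normalize(pooled, dim=-1)   # unit norm: dot = cosine
```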

> Note: Results are tied to a specific Transformers commit. Regenerate when the repo changes.

## Quick usage

```python
from huggingface_hub import hf_hub_download
from safetensors.numpy import load_file
import json, numpy as np

repo_id = "hf-internal-testing/transformers_code_embeddings"

emb_path = hf_hub_download(repo_id, "embeddings.safetensors", repo_type="dataset")
map_path = hf_hub_download(repo_id, "code_index_map.json", repo_type="dataset")
tok_path = hf_hub_download(repo_id, "code_index_tokens.json", repo_type="dataset")

emb = load_file(emb_path)["embeddings"]  # (N, D) float32, L2-normalized
id_map = {int(k): v for k, v in json.load(open(map_path)).items()}
tokens = json.load(open(tok_path))

# Rows are L2-normalized, so cosine similarity is a plain dot product.
def topk(vec, k=10):
    sims = vec @ emb.T
    idx = np.argpartition(-sims, k)[:k]  # top-k candidates, unordered
    idx = idx[np.argsort(-sims[idx])]    # order by descending similarity
    return [(id_map[int(i)], float(sims[i])) for i in idx]
```
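
For example, reuse an indexed row as the query to find its nearest neighbors (index 0 is arbitrary):

```python
for name, score in topk(emb[0], k=5):
    print(f"{score:.3f}  {name}")
```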

## Intended use

* Identify similar symbols across models (embedding similarity plus Jaccard over tokens; a blended score is sketched below).
* Assist refactors and modularization efforts.
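
A rough way to blend the two signals, assuming `code_index_tokens.json` is keyed by the same `path:Symbol` identifiers as the id map (`combined_score` and `alpha` are illustrative):

```python
def combined_score(i: int, j: int, alpha: float = 0.5) -> float:
    """Blend embedding cosine (rows are unit-norm) with token Jaccard."""
    cos = float(emb[i] @ emb[j])
    # assumes tokens is keyed by the same identifiers that id_map stores
    jac = jaccard(tokens[id_map[i]], tokens[id_map[j]])
    return alpha * cos + (1.0 - alpha) * jac
```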

## Limitations

* Embeddings reflect preprocessing choices and the specific encoder.
* Symbols from the same file are present in the index, so top matches can come from the query's own model; filter by model name if you only want cross-model results (see below).
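
A minimal cross-model filter, assuming model names appear in the `path:Symbol` identifiers (`topk_cross_model` is an illustrative helper):

```python
def topk_cross_model(vec, own_model: str, k=10, pool=100):
    """Rank as usual, then drop hits whose identifier mentions the query's model."""
    hits = topk(vec, k=min(pool, len(emb) - 1))
    return [(name, s) for name, s in hits if own_model not in name][:k]
```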

## Repro/build

See `utils/modular_model_detector.py` in the `transformers` repo for the exact build & push commands.

## License

Apache-2.0 for this dataset card and the produced artifacts. Source code remains under its original license in the upstream repo.