LateOn-Code
The LateOn-Code collection is composed of PyLate models optimized for code retrieval. These late interaction models are first pre-trained following the CoRNStack methodology. They are then further fine-tuned on the CoIR training sets, using the NV-Retriever methodology to mine hard negatives while filtering out false negatives.
We started from the two best ColBERT models on the BEIR benchmark for their respective sizes. The first one, LateOn-Code, is based on LateOn, our in-house model: a new version of GTE-ModernColBERT-v1 built on ModernBERT-base (also developed at LightOn). This version underwent significantly deeper training, crossing the 57 mark on BEIR, an improvement of almost 2.5 points that makes it SOTA by a large margin. We will release this base model along with its training data and boilerplates in the near future, so stay tuned! The second, LateOn-Code-edge, is a smaller model based on the edge-colbert model family from mixedbread, using the smallest variant (Ettin-17M) for maximum efficiency. For more details on the training setup, please refer to our blog post.
The original CoRNStack data in a format compatible with PyLate can be found here, while the fine-tuning data can be found here. Training boilerplates can be found here in the PyLate repository.
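To make the mining step concrete, here is a minimal sketch of positive-aware hard-negative filtering in the spirit of NV-Retriever (the function name, threshold, and embedding shapes are illustrative assumptions, not our actual training code):
import numpy as np

def mine_hard_negatives(query_emb, positive_emb, candidate_embs, k=50, perc_pos=0.95):
    """Keep the top-k hardest candidates while dropping likely false negatives.

    All embeddings are assumed to be L2-normalized numpy arrays, so a dot product
    is a cosine similarity. Candidates scoring above `perc_pos` times the positive's
    score are treated as probable false negatives and discarded.
    """
    positive_score = float(query_emb @ positive_emb)
    scores = candidate_embs @ query_emb                       # similarity of each candidate to the query
    kept = np.where(scores < perc_pos * positive_score)[0]    # filter out probable false negatives
    hardest_first = kept[np.argsort(-scores[kept])]           # most similar remaining candidates first
    return hardest_first[:k]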
MTEB (Code, v1) benchmark results
The pre-trained models achieve very competitive results: the 17M model outperforms the very strong granite-embedding-small-english-r2 by 1.7 points on average. This is truly impressive, as the granite model is almost three times bigger (17M vs 47M) and is itself a beast in the <100M-parameter range. It also outperforms the larger granite variant (149M). The larger LateOn-Code-pretrain scales nicely, improving on its smaller sibling by 6.3 points on average.
Although the pre-training results are already very impressive given that they are mostly out-of-domain, a proper fine-tuning on the CoIR training data significantly boosts the models' performance. Notably, the 17M model improves from 57.50 to 66.64 (+9.14), getting very close to EmbeddingGemma-300M while being 17 times smaller. The larger model improves from 63.77 to 74.12 (+10.35), strongly outperforming EmbeddingGemma-300M and closing in on strong LLM-based models such as Qwen3-Embedding-0.6B and C2LLM-0.5B while being much smaller.
| Model | Params | Type | Avg | Apps | COIR CSNet | CodeEdit | CodeFB MT | CodeFB ST | CSNet CC | CSNet | CodeTrans Contest | CodeTrans DL | CosQA | StackOF QA | Synth T2SQL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline | |||||||||||||||
| BM25 | - | Lexical | 44.41 | 4.76 | 40.86 | 49.85 | 59.19 | 68.15 | 53.97 | 60.01 | 47.78 | 34.42 | 18.75 | 70.26 | 24.94 |
| Small (≤50M) | |||||||||||||||
| granite-embedding-small-english-r2 | 47M | Single vector | 55.84 | 13.54 | 60.46 | 57.16 | 52.19 | 76.85 | 48.42 | 78.28 | 77.63 | 33.63 | 35.58 | 90.04 | 46.33 |
| LateOn-Code-edge-pretrain | 17M | Multi vector | 57.50 | 10.81 | 73.78 | 62.07 | 51.92 | 76.65 | 63.22 | 88.03 | 71.31 | 33.16 | 30.53 | 74.63 | 53.83 |
| LateOn-Code-edge | 17M | Multi vector | 66.64 | 26.22 | 81.60 | 62.21 | 74.25 | 87.12 | 79.26 | 87.85 | 75.36 | 37.08 | 40.54 | 85.63 | 62.57 |
| Δ (fine-tune - pretrain) | | | +9.14 | +15.41 | +7.82 | +0.14 | +22.33 | +10.47 | +16.04 | -0.18 | +4.05 | +3.92 | +10.01 | +11.00 | +8.74 |
| Medium (100M–300M) | |||||||||||||||
| granite-embedding-english-r2 | 149M | Single vector | 57.22 | 13.96 | 64.65 | 59.35 | 52.54 | 77.18 | 47.67 | 80.79 | 77.07 | 35.03 | 37.01 | 91.80 | 49.55 |
| CodeRankEmbed | 137M | Single vector | 60.47 | 23.45 | 83.20 | 59.98 | 42.61 | 78.10 | 68.89 | 89.50 | 66.43 | 34.49 | 35.17 | 80.53 | 63.27 |
| GTE-ModernBERT | 149M | Single vector | 71.66 | 57.72 | 83.10 | 55.83 | 86.15 | 86.00 | 93.61 | 88.76 | 72.35 | 37.27 | 43.36 | 91.14 | 64.61 |
| embeddinggemma-300m | 300M | Single vector | 68.76 | 84.39 | 75.54 | 62.10 | 51.42 | 80.26 | 73.71 | 90.15 | 85.51 | 33.52 | 43.60 | 86.47 | 58.42 |
| LateOn-Code-pretrain | 149M | Multi vector | 63.77 | 23.09 | 80.27 | 68.74 | 50.21 | 82.66 | 71.47 | 91.05 | 82.20 | 34.46 | 34.15 | 85.61 | 61.34 |
| LateOn-Code | 149M | Multi vector | 74.12 | 54.76 | 86.57 | 64.99 | 82.22 | 90.40 | 89.32 | 90.40 | 87.44 | 41.00 | 45.23 | 93.43 | 63.67 |
| Δ (fine-tune - pretrain) | | | +10.35 | +31.67 | +6.30 | -3.75 | +32.01 | +7.74 | +17.85 | -0.65 | +5.24 | +6.54 | +11.08 | +7.82 | +2.33 |
| Large (≥500M) | |||||||||||||||
| C2LLM-0.5B | 500M | Single vector | 75.46 | 61.02 | 86.71 | 71.39 | 92.29 | 88.63 | 96.29 | 89.20 | 84.27 | 33.99 | 38.30 | 89.40 | 74.08 |
| Qwen3-Embedding-0.6B | 600M | Single vector | 75.42 | 75.34 | 84.69 | 64.42 | 90.82 | 86.39 | 91.72 | 91.01 | 86.05 | 31.36 | 36.48 | 89.99 | 76.74 |
Best result across all sizes is underlined. Best within each size category is bolded.
ColGrep
The LateOn-Code family models can easily be used within ColGrep, an easy-to-use search tool that brings their powerful search capabilities to coding agents. It has been designed to extend grep and get the best of both worlds, and it is very effective at improving answer quality while reducing answer time and token consumption. Given the performance of the very lightweight 17M model, it runs quickly on any computer.
Install ColGrep
# macOS / Linux
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/lightonai/next-plaid/releases/latest/download/colgrep-installer.sh | sh
# Windows (PowerShell)
powershell -c "irm https://github.com/lightonai/next-plaid/releases/latest/download/colgrep-installer.ps1 | iex"
Search
# Semantic search — find code by meaning
colgrep "function that retries HTTP requests"
# Regex search
colgrep -e "async fn\s+\w+"
# Hybrid — regex narrows candidates, semantics ranks them
colgrep -e "Result<" "error handling" --include="*.rs"
Install for Claude Code
colgrep --install-claude-code
Choose a Model
# Set the model
colgrep set-model lightonai/LateOn-Code # default: lightonai/LateOn-Code-edge
For more information about ColGrep, please refer to the official documentation.
PyLate
This is a PyLate model finetuned from lightonai/LateOn-Code-edge-pretrain on the apps, synthetictext2sql, cosqa, codefeedbackst, codefeedbackmt, stackoverflowqa, codetranscontest, codetransdl, CodeSearchNet_go, CodeSearchNet_java, CodeSearchNet_javascript, CodeSearchNet_php, CodeSearchNet_python, CodeSearchNet_ruby, CodeSearchNet_ccr_go, CodeSearchNet_ccr_java, CodeSearchNet_ccr_javascript, CodeSearchNet_ccr_php, CodeSearchNet_ccr_python and CodeSearchNet_ccr_ruby datasets. It maps sentences & paragraphs to sequences of 48-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
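For intuition, the MaxSim operator matches every query token embedding to its most similar document token embedding and sums those maxima; a minimal NumPy sketch (shapes are illustrative, this is not the PyLate implementation):
import numpy as np

def maxsim(query_tokens: np.ndarray, doc_tokens: np.ndarray) -> float:
    """Late-interaction MaxSim score.

    query_tokens: (num_query_tokens, 48) L2-normalized token embeddings
    doc_tokens:   (num_doc_tokens, 48) L2-normalized token embeddings
    """
    similarities = query_tokens @ doc_tokens.T    # token-to-token cosine similarities
    return float(similarities.max(axis=1).sum())  # best document token per query token, summed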
Model Details
Model Description
- Model Type: PyLate model
- Document Length: 2048 tokens
- Query Length: 256 tokens
- Output Dimensionality: 48 dimensions
- Similarity Function: MaxSim
- Training Datasets:
- apps
- synthetictext2sql
- cosqa
- codefeedbackst
- codefeedbackmt
- stackoverflowqa
- codetranscontest
- codetransdl
- CodeSearchNet_go
- CodeSearchNet_java
- CodeSearchNet_javascript
- CodeSearchNet_php
- CodeSearchNet_python
- CodeSearchNet_ruby
- CodeSearchNet_ccr_go
- CodeSearchNet_ccr_java
- CodeSearchNet_ccr_javascript
- CodeSearchNet_ccr_php
- CodeSearchNet_ccr_python
- CodeSearchNet_ccr_ruby
- Language: English, code
- License: Apache 2.0
Model Sources
- Documentation: PyLate Documentation
- Repository: PyLate on GitHub
- Hugging Face: PyLate models on Hugging Face
Full Model Architecture
ColBERT(
(0): Transformer({'max_seq_length': 2047, 'do_lower_case': True, 'architecture': 'ModernBertModel'})
(1): Dense({'in_features': 256, 'out_features': 512, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'use_residual': False})
(2): Dense({'in_features': 512, 'out_features': 48, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'use_residual': False})
)
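The two Dense layers project the transformer hidden states from 256 to 512 and then down to 48 dimensions per token. A quick sanity check of the per-token output shape (a usage sketch assuming the lightonai/LateOn-Code-edge checkpoint):
from pylate import models

model = models.ColBERT(model_name_or_path="lightonai/LateOn-Code-edge")
embeddings = model.encode(["def add(a, b): return a + b"], is_query=False)
print(embeddings[0].shape)  # (num_document_tokens, 48): one 48-dim vector per token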
Usage
First install the PyLate library:
pip install -U pylate
Retrieval
Use this model with PyLate to index and retrieve documents. The index uses FastPLAID for efficient similarity search.
Indexing documents
Load the ColBERT model and initialize the PLAID index, then encode and index your documents:
from pylate import indexes, models, retrieve
# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="lightonai/LateOn-Code-edge",
)
# Step 2: Initialize the PLAID index
index = indexes.PLAID(
index_folder="pylate-index",
index_name="index",
override=True, # This overwrites the existing index if any
)
# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]
documents_embeddings = model.encode(
documents,
batch_size=32,
is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries
show_progress_bar=True,
)
# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
documents_ids=documents_ids,
documents_embeddings=documents_embeddings,
)
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.PLAID(
index_folder="pylate-index",
index_name="index",
)
Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries, and then retrieve the top-k documents to get the ids and relevance scores of the top matches:
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
# Step 2: Encode the queries
queries_embeddings = model.encode(
["query for document 3", "query for document 1"],
batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
show_progress_bar=True,
)
# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
queries_embeddings=queries_embeddings,
k=10, # Retrieve the top 10 matches for each query
)
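The retriever returns one ranked list per query; here is a small sketch of reading the results (assuming the id/score dictionary format returned by recent PyLate versions):
queries = ["query for document 3", "query for document 1"]
for query, matches in zip(queries, scores):
    print(query)
    for match in matches:
        print(f"  document={match['id']}  score={match['score']:.2f}")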
Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass the queries and documents to rerank:
from pylate import rank, models
queries = [
"query A",
"query B",
]
documents = [
["document A", "document B"],
["document 1", "document C", "document B"],
]
documents_ids = [
[1, 2],
[1, 3, 2],
]
model = models.ColBERT(
    model_name_or_path="lightonai/LateOn-Code-edge",
)
queries_embeddings = model.encode(
queries,
is_query=True,
)
documents_embeddings = model.encode(
documents,
is_query=False,
)
reranked_documents = rank.rerank(
documents_ids=documents_ids,
queries_embeddings=queries_embeddings,
documents_embeddings=documents_embeddings,
)
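The output contains, for each query, its candidate documents reordered by MaxSim relevance; a small sketch of consuming it (assuming the id/score dictionary format returned by recent PyLate versions):
for query, ranking in zip(queries, reranked_documents):
    print(query)
    for entry in ranking:
        print(f"  document={entry['id']}  score={entry['score']:.2f}")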
Evaluation
Metrics
PyLate Information Retrieval
- Datasets: CodeSearchNetPython, CodeSearchNetJavascript, CodeSearchNetGo, CodeSearchNetRuby, CodeSearchNetJava, CodeSearchNetPhp
- Evaluated with pylate.evaluation.pylate_information_retrieval_evaluator.PyLateInformationRetrievalEvaluator
| Metric | CodeSearchNetPython | CodeSearchNetJavascript | CodeSearchNetGo | CodeSearchNetRuby | CodeSearchNetJava | CodeSearchNetPhp |
|---|---|---|---|---|---|---|
| MaxSim_accuracy@1 | 0.855 | 0.707 | 0.92 | 0.737 | 0.755 | 0.802 |
| MaxSim_accuracy@3 | 0.958 | 0.815 | 0.978 | 0.87 | 0.914 | 0.91 |
| MaxSim_accuracy@5 | 0.972 | 0.845 | 0.987 | 0.899 | 0.937 | 0.932 |
| MaxSim_accuracy@10 | 0.98 | 0.877 | 0.991 | 0.921 | 0.951 | 0.953 |
| MaxSim_precision@1 | 0.855 | 0.707 | 0.92 | 0.737 | 0.755 | 0.802 |
| MaxSim_precision@3 | 0.3193 | 0.2717 | 0.326 | 0.29 | 0.3047 | 0.3033 |
| MaxSim_precision@5 | 0.1944 | 0.169 | 0.1974 | 0.1798 | 0.1874 | 0.1864 |
| MaxSim_precision@10 | 0.098 | 0.0877 | 0.0991 | 0.0921 | 0.0951 | 0.0953 |
| MaxSim_recall@1 | 0.855 | 0.707 | 0.92 | 0.737 | 0.755 | 0.802 |
| MaxSim_recall@3 | 0.958 | 0.815 | 0.978 | 0.87 | 0.914 | 0.91 |
| MaxSim_recall@5 | 0.972 | 0.845 | 0.987 | 0.899 | 0.937 | 0.932 |
| MaxSim_recall@10 | 0.98 | 0.877 | 0.991 | 0.921 | 0.951 | 0.953 |
| MaxSim_ndcg@10 | 0.9244 | 0.7937 | 0.9607 | 0.8357 | 0.8655 | 0.8824 |
| MaxSim_mrr@10 | 0.9058 | 0.7668 | 0.9505 | 0.8076 | 0.8367 | 0.8592 |
| MaxSim_map@100 | 0.9064 | 0.7696 | 0.9508 | 0.8095 | 0.8379 | 0.86 |
Code Search Network
- Dataset: CodeSearchNet_mean
- Evaluated with pylate.evaluation.code_search_network_evaluator.CodeSearchNetworkEvaluator
| Metric | Value |
|---|---|
| MaxSim_accuracy@1 | 0.796 |
| MaxSim_accuracy@3 | 0.9075 |
| MaxSim_accuracy@5 | 0.9287 |
| MaxSim_accuracy@10 | 0.9455 |
| MaxSim_precision@1 | 0.796 |
| MaxSim_precision@3 | 0.3025 |
| MaxSim_precision@5 | 0.1857 |
| MaxSim_precision@10 | 0.0946 |
| MaxSim_recall@1 | 0.796 |
| MaxSim_recall@3 | 0.9075 |
| MaxSim_recall@5 | 0.9287 |
| MaxSim_recall@10 | 0.9455 |
| MaxSim_ndcg@10 | 0.8771 |
| MaxSim_mrr@10 | 0.8544 |
| MaxSim_map@100 | 0.8557 |
Training Details
Training Datasets
apps
- Dataset: apps at 68d15dc
- Size: 4,985 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
synthetictext2sql
- Dataset: synthetictext2sql at 68d15dc
- Size: 99,996 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
cosqa
- Dataset: cosqa at 68d15dc
- Size: 9,018 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
codefeedbackst
- Dataset: codefeedbackst at 68d15dc
- Size: 125,124 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
codefeedbackmt
- Dataset: codefeedbackmt at 68d15dc
- Size: 52,941 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
stackoverflowqa
- Dataset: stackoverflowqa at 68d15dc
- Size: 13,934 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
codetranscontest
- Dataset: codetranscontest at 68d15dc
- Size: 561 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
codetransdl
- Dataset: codetransdl at 68d15dc
- Size: 564 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_go
- Dataset: CodeSearchNet_go at 9f89bdc
- Size: 166,972 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_java
- Dataset: CodeSearchNet_java at 9f89bdc
- Size: 162,773 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_javascript
- Dataset: CodeSearchNet_javascript at 9f89bdc
- Size: 56,734 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_php
- Dataset: CodeSearchNet_php at 9f89bdc
- Size: 240,327 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_python
- Dataset: CodeSearchNet_python at 9f89bdc
- Size: 251,063 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_ruby
- Dataset: CodeSearchNet_ruby at 9f89bdc
- Size: 24,731 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_ccr_go
- Dataset: CodeSearchNet_ccr_go at 9f89bdc
- Size: 167,278 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_ccr_java
- Dataset: CodeSearchNet_ccr_java at 9f89bdc
- Size: 164,900 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_ccr_javascript
- Dataset: CodeSearchNet_ccr_javascript at 9f89bdc
- Size: 58,017 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_ccr_php
- Dataset: CodeSearchNet_ccr_php at 9f89bdc
- Size: 241,177 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_ccr_python
- Dataset: CodeSearchNet_ccr_python at 9f89bdc
- Size: 251,758 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
CodeSearchNet_ccr_ruby
- Dataset: CodeSearchNet_ccr_ruby at 9f89bdc
- Size: 24,918 training samples
- Columns: query, positive, negative_0 … negative_49 (each a dict)
- Loss: pylate.losses.contrastive.Contrastive
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 128
- per_device_eval_batch_size: 128
- learning_rate: 3e-05
- num_train_epochs: 1
- bf16: True
- dataloader_num_workers: 8
- accelerator_config: {'split_batches': True, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 128
- per_device_eval_batch_size: 128
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 3e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: True
- dataloader_num_workers: 8
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': True, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- project: huggingface
- trackio_space_id: trackio
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: no
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: True
- prompts: None
- batch_sampler: batch_sampler
- router_mapping: {}
- learning_rate_mapping: {}
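For reference, a minimal fine-tuning sketch with PyLate's Contrastive loss, mirroring the hyperparameters above (the dataset loading and output paths are illustrative assumptions, not the actual training script):
from datasets import load_dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from pylate import losses, models, utils

# Start from the pre-trained checkpoint and fine-tune with contrastive triplets
model = models.ColBERT(model_name_or_path="lightonai/LateOn-Code-edge-pretrain")
train_dataset = load_dataset("json", data_files="triplets.jsonl", split="train")  # query / positive / negatives

args = SentenceTransformerTrainingArguments(
    output_dir="lateon-code-edge-finetune",
    num_train_epochs=1,
    per_device_train_batch_size=128,
    learning_rate=3e-5,
    bf16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.Contrastive(model=model),
    data_collator=utils.ColBERTCollator(model.tokenize),
)
trainer.train()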
Training Logs
| Epoch | Step | Training Loss | CodeSearchNetPython_MaxSim_ndcg@10 | CodeSearchNetJavascript_MaxSim_ndcg@10 | CodeSearchNetGo_MaxSim_ndcg@10 | CodeSearchNetRuby_MaxSim_ndcg@10 | CodeSearchNetJava_MaxSim_ndcg@10 | CodeSearchNetPhp_MaxSim_ndcg@10 | CodeSearchNet_mean_MaxSim_ndcg@10 |
|---|---|---|---|---|---|---|---|---|---|
| 0.0000 | 1 | 6.4113 | - | - | - | - | - | - | - |
| 0.0391 | 1250 | 3.2574 | - | - | - | - | - | - | - |
| 0.0781 | 2500 | 19.7862 | 0.9377 | 0.7986 | 0.9622 | 0.8487 | 0.8837 | 0.8834 | 0.8857 |
| 0.1172 | 3750 | 4.6875 | - | - | - | - | - | - | - |
| 0.1562 | 5000 | 2.3691 | 0.9335 | 0.8001 | 0.9614 | 0.8435 | 0.8755 | 0.8818 | 0.8826 |
| 0.1953 | 6250 | 1.4007 | - | - | - | - | - | - | - |
| 0.2344 | 7500 | 2.5715 | 0.9311 | 0.7960 | 0.9611 | 0.8418 | 0.8730 | 0.8866 | 0.8816 |
| 0.2734 | 8750 | 1.5546 | - | - | - | - | - | - | - |
| 0.3125 | 10000 | 0.004 | 0.9332 | 0.7972 | 0.9620 | 0.8435 | 0.8730 | 0.8850 | 0.8823 |
| 0.3515 | 11250 | 2.2819 | - | - | - | - | - | - | - |
| 0.3906 | 12500 | 14.0214 | 0.9324 | 0.7986 | 0.9603 | 0.8409 | 0.8717 | 0.8855 | 0.8816 |
| 0.4297 | 13750 | 2.0774 | - | - | - | - | - | - | - |
| 0.4687 | 15000 | 1.7724 | 0.9272 | 0.7955 | 0.9592 | 0.8381 | 0.8733 | 0.8838 | 0.8795 |
| 0.5078 | 16250 | 3.8234 | - | - | - | - | - | - | - |
| 0.5468 | 17500 | 0.7029 | 0.9300 | 0.7959 | 0.9594 | 0.8371 | 0.8674 | 0.8832 | 0.8788 |
| 0.5859 | 18750 | 1.5763 | - | - | - | - | - | - | - |
| 0.6250 | 20000 | 2.3146 | 0.9294 | 0.7986 | 0.9589 | 0.8376 | 0.8704 | 0.8829 | 0.8796 |
| 0.6640 | 21250 | 13.784 | - | - | - | - | - | - | - |
| 0.7031 | 22500 | 1.4557 | 0.9252 | 0.7927 | 0.9617 | 0.8357 | 0.8661 | 0.8839 | 0.8775 |
| 0.7421 | 23750 | 4.973 | - | - | - | - | - | - | - |
| 0.7812 | 25000 | 2.206 | 0.9240 | 0.7939 | 0.9623 | 0.8354 | 0.8639 | 0.8857 | 0.8775 |
| 0.8203 | 26250 | 0.7343 | - | - | - | - | - | - | - |
| 0.8593 | 27500 | 0.727 | 0.9251 | 0.7926 | 0.9608 | 0.8362 | 0.8676 | 0.8829 | 0.8775 |
| 0.8984 | 28750 | 1.7905 | - | - | - | - | - | - | - |
| 0.9374 | 30000 | 0.7259 | 0.9244 | 0.7937 | 0.9607 | 0.8357 | 0.8655 | 0.8824 | 0.8771 |
Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.1.1
- PyLate: 1.3.4
- Transformers: 4.57.3
- PyTorch: 2.9.0+cu128
- Accelerate: 1.12.0
- Datasets: 4.4.2
- Tokenizers: 0.22.2
Citation
BibTeX
LateOn-Code
@misc{LateOn-Code,
title = {LateOn-Code: a Family of State-Of-The-Art Late Interaction Code Retrieval Models},
author = {Chaffin, Antoine},
url = {https://huggingface.co/collections/lightonai/lateon-code},
year = {2026}
}
ColGrep
@software{next-plaid,
title = {NextPlaid, ColGREP: Multi-vector search, from database to coding agents.},
url = {https://github.com/lightonai/next-plaid},
author = {Raphaël Sourty},
year = {2026},
}
CoRNStack
@inproceedings{DBLP:conf/iclr/SureshRXNMDJ25,
author = {Tarun Suresh and
Revanth Gangi Reddy and
Yifei Xu and
Zach Nussbaum and
Andriy Mulyar and
Brandon Duderstadt and
Heng Ji},
title = {CoRNStack: High-Quality Contrastive Data for Better Code Retrieval
and Reranking},
booktitle = {The Thirteenth International Conference on Learning Representations,
{ICLR} 2025, Singapore, April 24-28, 2025},
publisher = {OpenReview.net},
year = {2025},
url = {https://openreview.net/forum?id=iyJOUELYir},
timestamp = {Sun, 25 May 2025 21:25:19 +0200},
biburl = {https://dblp.org/rec/conf/iclr/SureshRXNMDJ25.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
CoIR
@inproceedings{li2025coir,
title = {Coir: A comprehensive benchmark for code information retrieval models},
author = {Li, Xiangyang and Dong, Kuicai and Lee, Yi Quan and Xia, Wei and Zhang, Hao and Dai, Xinyi and Wang, Yasheng and Tang, Ruiming},
booktitle = {Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages = {22074--22091},
year = {2025}
}
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
PyLate
@inproceedings{DBLP:conf/cikm/ChaffinS25,
author = {Antoine Chaffin and
               Rapha{\"{e}}l Sourty},
editor = {Meeyoung Cha and
Chanyoung Park and
Noseong Park and
Carl Yang and
Senjuti Basu Roy and
Jessie Li and
Jaap Kamps and
Kijung Shin and
Bryan Hooi and
Lifang He},
title = {PyLate: Flexible Training and Retrieval for Late Interaction Models},
booktitle = {Proceedings of the 34th {ACM} International Conference on Information
and Knowledge Management, {CIKM} 2025, Seoul, Republic of Korea, November
10-14, 2025},
pages = {6334--6339},
publisher = {{ACM}},
year = {2025},
url = {https://github.com/lightonai/pylate},
doi = {10.1145/3746252.3761608},
}