Tags: Sentence Similarity, sentence-transformers, Safetensors, Transformers, Russian, bert, feature-extraction, russian, pretraining, embeddings, tiny, mteb, Eval Results (legacy), text-embeddings-inference
Instructions for using sergeyzh/rubert-tiny-turbo locally with the sentence-transformers and Transformers libraries.
How to use sergeyzh/rubert-tiny-turbo with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sergeyzh/rubert-tiny-turbo")

sentences = [
    "Это счастливый человек",        # "This is a happy person"
    "Это счастливая собака",         # "This is a happy dog"
    "Это очень счастливый человек",  # "This is a very happy person"
    "Сегодня солнечный день",        # "It is a sunny day today"
]
embeddings = model.encode(sentences)

# Pairwise similarity matrix between all sentence embeddings.
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
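The pairwise matrix above extends naturally to retrieval: encode a query and a document set separately, then rank the documents by similarity score. A minimal sketch (the `query` and `docs` values are illustrative; `model.similarity` requires sentence-transformers >= 3.0):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sergeyzh/rubert-tiny-turbo")

query = "счастливый человек"      # "a happy person"
docs = [
    "Это счастливый человек",     # "This is a happy person"
    "Сегодня солнечный день",     # "It is a sunny day today"
]

q_emb = model.encode([query])
d_emb = model.encode(docs)

scores = model.similarity(q_emb, d_emb)[0]  # one score per document
best = scores.argmax().item()
print(docs[best], float(scores[best]))
```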
How to use sergeyzh/rubert-tiny-turbo with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sergeyzh/rubert-tiny-turbo")
model = AutoModel.from_pretrained("sergeyzh/rubert-tiny-turbo")
```
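Loading the model this way returns token-level hidden states, not sentence embeddings, so a pooling step is still needed. This card does not state which pooling the checkpoint was trained with, so the mean pooling below is an assumption; the repository's 1_Pooling/config.json is the authoritative source. A hedged sketch:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sergeyzh/rubert-tiny-turbo")
model = AutoModel.from_pretrained("sergeyzh/rubert-tiny-turbo")

sentences = ["Это счастливый человек", "Сегодня солнечный день"]

# Tokenize with padding/truncation; model_max_length is 2048 (see tokenizer config below).
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean pooling over non-padding tokens (assumed strategy, not confirmed by the card).
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)

# Normalize so dot products equal cosine similarities.
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)
```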
The model's tokenizer configuration (tokenizer_config.json):

```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": false,
  "mask_token": "[MASK]",
  "model_max_length": 2048,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
```
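As a quick sanity check of this configuration, the sketch below loads the tokenizer and confirms the advertised behavior: case-sensitive BERT tokenization, [CLS]/[SEP] wrapping, and truncation at the 2048-token model_max_length (exact token ids depend on the vocabulary file):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sergeyzh/rubert-tiny-turbo")

# Values from the config above.
print(tokenizer.model_max_length)  # 2048
print(tokenizer.cls_token, tokenizer.sep_token, tokenizer.pad_token)  # [CLS] [SEP] [PAD]

encoded = tokenizer("Это счастливый человек")  # "This is a happy person"
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"])[:3])
# Starts with '[CLS]' followed by subword tokens; the sequence ends with '[SEP]'.

# Inputs longer than 2048 tokens are cut off when truncation=True.
long_input = tokenizer("слово " * 5000, truncation=True)
print(len(long_input["input_ids"]))  # <= 2048
```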