How to use microsoft/deberta-v3-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="microsoft/deberta-v3-base")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/deberta-v3-base", dtype="auto")
```
DeBERTa max length
```python
inputs = tokenizer(
    text,
    add_special_tokens=True,
    max_length=1024,
    padding="max_length",
    truncation=True,
)
```
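For context on why no error appears at this step: the tokenizer pads and truncates purely according to the `max_length` argument and knows nothing about the model's position-embedding table, so any `max_length` is accepted here. A minimal, stdlib-only sketch of that behavior (toy token IDs and a hypothetical `pad_id`, not the real tokenizer internals):

```python
# Toy sketch of padding='max_length' + truncation=True semantics.
# The tokenizer enforces only max_length; the model's positional
# limit (e.g. 512) is not checked at tokenization time.

def pad_or_truncate(token_ids, max_length, pad_id=0):
    """Truncate to max_length, then right-pad with pad_id."""
    ids = token_ids[:max_length]               # truncation=True
    ids += [pad_id] * (max_length - len(ids))  # padding='max_length'
    return ids

short = pad_or_truncate([101, 7, 8, 102], max_length=6)
long = pad_or_truncate(list(range(2000)), max_length=1024)
print(short)      # [101, 7, 8, 102, 0, 0]
print(len(long))  # 1024 -- accepted even though it exceeds 512
```

Whether the model then handles sequences longer than its configured position limit is a separate question, decided at the model's forward pass, not here.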
If I set max_length=1024, the tokenizer accepts it and doesn't throw an error, even though the model's max position embeddings is 512. So can the model take input of any size? The model runs perfectly, though.
Hi,
Can we increase max_length to, say, 2048 if we increase the positional embeddings to 2048? Is that possible? Or is it possible to increase max_length by fine-tuning the model on a larger context size?
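One approach along these lines is to enlarge the position-embedding table and then fine-tune so the new rows learn useful values; a common initialization is to reuse the existing rows. A toy, framework-free sketch of the idea (in practice this would be done on the model's embedding tensors; the function name here is hypothetical):

```python
def extend_position_table(table, new_len):
    """Extend a position-embedding table (a list of row vectors)
    from len(table) rows to new_len rows by cycling the existing
    rows as an initialization; fine-tuning would then adapt them."""
    old_len = len(table)
    if new_len <= old_len:
        return table[:new_len]
    extra = [table[i % old_len][:] for i in range(old_len, new_len)]
    return table + extra

old = [[float(i)] * 4 for i in range(512)]  # 512 x 4 toy table
new = extend_position_table(old, 2048)
print(len(new))            # 2048
print(new[512] == old[0])  # True -- row 512 reuses row 0's values
```

This only gives the extended positions a starting point; without fine-tuning on longer sequences, the new positions carry no learned signal.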
I want to fine-tune the DeBERTa model for my specific use case, where the context length is around 1200 tokens. Has anyone faced issues fine-tuning when the token length is greater than the 512 limit?
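One common workaround for ~1200-token contexts (a general technique, not something confirmed in this thread) is to split each example into overlapping windows that fit within the 512-token limit and fine-tune on the windows. A stdlib-only sketch, with window and stride values chosen for illustration:

```python
def sliding_windows(token_ids, window=512, stride=384):
    """Split a long token sequence into overlapping chunks of at
    most `window` tokens, advancing the start by `stride` each step
    so consecutive chunks share window - stride tokens of context."""
    chunks = []
    start = 0
    while start < len(token_ids):
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            break  # last chunk already covers the end of the sequence
        start += stride
    return chunks

ids = list(range(1200))                  # a ~1200-token example
chunks = sliding_windows(ids)
print([len(c) for c in chunks])          # [512, 512, 432]
```

For classification-style fine-tuning, per-window predictions are then aggregated (e.g. averaged or max-pooled) into one label per original example; the aggregation choice is task-dependent.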