DeBERTa max length

#12
by dataminer1 - opened

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")  # example checkpoint
inputs = tokenizer(
    text,
    add_special_tokens=True,
    max_length=1024,
    padding='max_length',
    truncation=True,
)
If I set max_length=1024, the tokenizer accepts it and doesn't throw an error, even though the model's max position embeddings is 512. Does that mean the model can take input of any size? It runs perfectly, though.
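For what it's worth, one reason no error appears at this stage: the tokenizer applies max_length on its own and never consults the model's position limit. A minimal sketch of the padding/truncation semantics (pad_or_truncate is an illustrative stand-in, not the real tokenizer internals):

```python
def pad_or_truncate(ids, max_length, pad_id=0):
    # Mimics tokenizer(..., max_length=N, padding='max_length', truncation=True):
    # sequences longer than max_length are cut, shorter ones padded with pad_id.
    if len(ids) > max_length:
        return ids[:max_length]
    return ids + [pad_id] * (max_length - len(ids))

print(pad_or_truncate([1, 2, 3], 5))        # -> [1, 2, 3, 0, 0] (padded)
print(pad_or_truncate(list(range(10)), 5))  # -> [0, 1, 2, 3, 4] (truncated)
```

So max_length=1024 is perfectly legal from the tokenizer's point of view; whether the *model* can consume 1024 positions is a separate question decided by its position-embedding scheme.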

Hi,
can we increase max_length to, say, 2048 if we increase the positional embeddings to 2048? Is that possible? Or is it possible to increase max_length by fine-tuning the model on a larger context size?
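As I understand it, DeBERTa's disentangled attention is built on relative position embeddings, which may be why inputs longer than 512 run without an indexing error; with a purely absolute position table, any position past the table's last row would fail at the embedding lookup. A hypothetical pre-flight check (check_context_fits is an illustrative name, not a transformers API) makes the failure mode concrete:

```python
def check_context_fits(seq_len, max_position_embeddings=512):
    # Hypothetical guard for absolute position embeddings: token at
    # position seq_len - 1 needs a row in the embedding table, so any
    # seq_len above the table size would index out of range.
    if seq_len > max_position_embeddings:
        raise ValueError(
            f"sequence length {seq_len} exceeds the position table "
            f"({max_position_embeddings}); extend the table or truncate"
        )
    return True

print(check_context_fits(512))  # -> True: exactly fills the table
```

Under that reading, going to 2048 with a relative-position model is mostly a question of fine-tuning quality at the longer context, not of resizing an embedding table, but I'd verify against the specific checkpoint's config.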

I want to fine-tune the DeBERTa model for my specific use case, where the context length is around 1,200 tokens. Has anyone faced issues fine-tuning when the token length is greater than the 512 limit?
