How to use Intel/bert-large-uncased-cola-int8-inc with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Intel/bert-large-uncased-cola-int8-inc")
```
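The pipeline can then be called on raw text and returns a label and confidence score per input. A minimal sketch (the example sentence is arbitrary, and the label names depend on the model's configuration):

```python
# Score a sentence for linguistic acceptability (CoLA task)
result = pipe("The book was written by the author.")
print(result)  # e.g. [{'label': '...', 'score': 0.99}]
```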
```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Intel/bert-large-uncased-cola-int8-inc")
model = AutoModelForSequenceClassification.from_pretrained("Intel/bert-large-uncased-cola-int8-inc")
```

This is an INT8 PyTorch model quantized with huggingface/optimum-intel using Intel® Neural Compressor.
The original FP32 model is the fine-tuned yoshitomo-matsubara/bert-large-uncased-cola.
| | INT8 | FP32 |
|---|---|---|
| Accuracy (eval-f1) | 0.6336 | 0.6335 |
| Model size (MB) | 388 | 1340 |
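For reference, an INT8 checkpoint like this can be produced with optimum-intel's INCQuantizer on top of Intel® Neural Compressor. The snippet below is only a sketch, not the exact recipe used for this model: it assumes a recent optimum-intel / neural-compressor release and uses post-training dynamic quantization for brevity (static quantization with a calibration set is another option).

```python
from transformers import AutoModelForSequenceClassification
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

# Start from the original FP32 fine-tuned model
fp32_model = AutoModelForSequenceClassification.from_pretrained(
    "yoshitomo-matsubara/bert-large-uncased-cola"
)

# Post-training dynamic quantization (illustrative recipe only)
quantization_config = PostTrainingQuantConfig(approach="dynamic")
quantizer = INCQuantizer.from_pretrained(fp32_model)
quantizer.quantize(
    quantization_config=quantization_config,
    save_directory="bert-large-uncased-cola-int8",
)
```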
To load the quantized model with Optimum Intel:

```python
from optimum.intel import INCModelForSequenceClassification

model_id = "Intel/bert-large-uncased-cola-int8-inc"
int8_model = INCModelForSequenceClassification.from_pretrained(model_id)
```
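The resulting int8_model can be used for inference much like a standard transformers model. A minimal sketch (arbitrary example sentence; the exact output structure can vary with the optimum-intel version):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("The book was written by the author.", return_tensors="pt")
with torch.no_grad():
    outputs = int8_model(**inputs)
print(outputs)
```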