Tags: Text Classification · Transformers · PyTorch · Chinese · bert · Trained with AutoTrain · text-embeddings-inference
Instructions for using 0x-YuAN/CL_1 with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use 0x-YuAN/CL_1 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="0x-YuAN/CL_1")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("0x-YuAN/CL_1")
model = AutoModelForSequenceClassification.from_pretrained("0x-YuAN/CL_1")
```

- Notebooks
- Google Colab
- Kaggle
- Xet hash: 6b17af1eec7808c094f4c032568a999f5d514462562a99f151c25ba86dd1dddf
- Size of remote file: 409 MB
- SHA256: ec5eacd1bea197d9ac118cf0611fba246285ecb46c66b8cf391972d51eb3030e
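To confirm that a downloaded copy of the weights is intact, you can compare its SHA-256 digest against the checksum listed above. A minimal sketch using Python's standard `hashlib`; the local filename `model.safetensors` is a hypothetical example, not taken from this card:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in fixed-size reads so large files never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# SHA256 as listed in the file metadata above
EXPECTED = "ec5eacd1bea197d9ac118cf0611fba246285ecb46c66b8cf391972d51eb3030e"

# "model.safetensors" is a hypothetical local path to the downloaded file:
# assert sha256_of_file("model.safetensors") == EXPECTED
```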
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, accelerating uploads and downloads.
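The chunking idea can be illustrated with a toy content-defined chunker: cut the stream wherever a rolling hash of recent bytes matches a boundary pattern, then store each unique chunk once under its digest. This is only a sketch of the general technique; Xet's actual chunking algorithm, parameters, and storage format are not described on this card:

```python
import hashlib

def chunk_bytes(data: bytes, mask: int = 0x3FF,
                min_size: int = 256, max_size: int = 4096) -> list[bytes]:
    """Toy content-defined chunking: cut where a simple byte-fed hash
    hits a boundary pattern (hash & mask == 0), bounded by min/max sizes."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])  # trailing partial chunk
    return chunks

def dedup_store(chunks: list[bytes]) -> dict[str, bytes]:
    """Store each unique chunk once, keyed by its SHA-256 digest."""
    store: dict[str, bytes] = {}
    for c in chunks:
        store.setdefault(hashlib.sha256(c).hexdigest(), c)
    return store
```

Because boundaries depend on content rather than fixed offsets, inserting bytes near the start of a file only changes the chunks around the edit, so repeated uploads mostly reuse chunks already stored.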