Text Generation · Transformers · PyTorch · English · llama · upstage · llama-2 · instruct · instruction · text-generation-inference
Instructions to use upstage/SOLAR-0-70b-16bit with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use upstage/SOLAR-0-70b-16bit with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="upstage/SOLAR-0-70b-16bit")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-0-70b-16bit")
model = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-0-70b-16bit")
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use upstage/SOLAR-0-70b-16bit with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "upstage/SOLAR-0-70b-16bit"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "upstage/SOLAR-0-70b-16bit",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
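Since the vLLM server exposes an OpenAI-compatible API, the curl call above can also be issued from Python. A minimal sketch using the openai package (an assumption; any OpenAI-compatible client works, and the api_key value is a placeholder since the local server does not require one by default):

from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="upstage/SOLAR-0-70b-16bit",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)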
- SGLang
How to use upstage/SOLAR-0-70b-16bit with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "upstage/SOLAR-0-70b-16bit" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "upstage/SOLAR-0-70b-16bit",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "upstage/SOLAR-0-70b-16bit" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "upstage/SOLAR-0-70b-16bit",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
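The SGLang server speaks the same OpenAI-compatible completions API. A minimal Python sketch using the requests package, here also filling in the model's prompt template (described in the model card below); the port matches the server started above:

import requests

# Single-turn prompt in the model's "### User: / ### Assistant:" template
prompt = "### User:\nWhat is the capital of France?\n\n### Assistant:\n"

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "upstage/SOLAR-0-70b-16bit",
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])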
- Docker Model Runner
How to use upstage/SOLAR-0-70b-16bit with Docker Model Runner:
docker model run hf.co/upstage/SOLAR-0-70b-16bit
Updates
Solar, a new bot created by Upstage, is now available on Poe. As a top-ranked model on the Hugging Face Open LLM Leaderboard and a fine-tune of Llama 2, Solar is a great example of the progress enabled by open source. Try it now at https://poe.com/Solar-0-70b
SOLAR-0-70b-16bit model card
The model name has been changed from LLaMa-2-70b-instruct-v2 to SOLAR-0-70b-16bit
Model Details
- Developed by: Upstage
- Backbone Model: LLaMA-2
- Language(s): English
- Library: HuggingFace Transformers
- License: The fine-tuned checkpoints are licensed under the Creative Commons Non-Commercial license (CC BY-NC-4.0)
- Where to send comments: To provide feedback or comments on the model, open an issue in the model repository's Hugging Face community tab
- Contact: For questions and comments about the model, please email contact@upstage.ai
Dataset Details
Used Datasets
- Orca-style dataset
- Alpaca-style dataset
- No other datasets were used except for the datasets mentioned above
- No benchmark test sets or their training sets were used
Prompt Template
### System:
{System}
### User:
{User}
### Assistant:
{Assistant}
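A minimal helper that fills this template for a single turn (a sketch; the build_prompt function name and the exact blank-line spacing around the system block are assumptions based on the usage example below):

def build_prompt(system: str, user: str) -> str:
    # Fill the single-turn prompt template; the "### Assistant:" header
    # is left open so the model completes the assistant turn.
    return f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

prompt = build_prompt(
    "You are a helpful assistant.",
    "Thomas is healthy, but he has to go to the hospital. What could be the reasons?",
)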
Usage
- The following was tested on an A100 80GB GPU
- Our model can handle 10k+ input tokens, thanks to the rope_scaling option
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-0-70b-16bit")
model = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-0-70b-16bit",
    device_map="auto",
    torch_dtype=torch.float16,
    load_in_8bit=True,
    rope_scaling={"type": "dynamic", "factor": 2},  # allows handling of longer inputs
)

prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
inputs.pop("token_type_ids", None)  # drop token_type_ids if present; the model's forward() does not accept them
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=4096)  # generate() needs an integer cap; the model stops at EOS
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
Hardware and Software
- Hardware: We trained our model on 4 nodes of 8× A100 GPUs (32 A100s in total)
- Training Factors: We fine-tuned this model using a combination of the DeepSpeed library and the HuggingFace Trainer / HuggingFace Accelerate
Evaluation Results
Overview
- We conducted a performance evaluation following the tasks evaluated on the Open LLM Leaderboard.
- We evaluated our model on four benchmark datasets: ARC-Challenge, HellaSwag, MMLU, and TruthfulQA.
- We used the lm-evaluation-harness repository, specifically commit b281b0921b636bc36ad05c0b0b0763bd6dd43463.
- We used MT-Bench, a set of challenging multi-turn open-ended questions, to evaluate the models.
Main Results
| Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | MT_Bench |
|---|---|---|---|---|---|---|
| Llama-2-70b-instruct-v2 (Ours, Open LLM Leaderboard) | 73 | 71.1 | 87.9 | 70.6 | 62.2 | 7.44063 |
| Llama-2-70b-instruct (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | 69.8 | 61 | 7.24375 |
| llama-65b-instruct (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | |
| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | |
| llama-30b-instruct-2048 (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | |
| llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | |
| llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | |
Scripts for H4 Score Reproduction
- Prepare evaluation environments:
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to the repository directory
cd lm-evaluation-harness
# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
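With the environment prepared, each leaderboard task can be run through the harness's main.py. A sketch of one invocation, assuming the Open LLM Leaderboard's few-shot settings and the task/flag names used by the harness around that commit (later harness versions renamed several of these):

# install the harness and its dependencies
pip install -e .
# example: ARC-Challenge with the leaderboard's 25-shot setting
python main.py \
  --model hf-causal-experimental \
  --model_args pretrained=upstage/SOLAR-0-70b-16bit,use_accelerate=True \
  --tasks arc_challenge \
  --num_fewshot 25 \
  --batch_size 1 \
  --output_path results/arc_challenge.json

The other leaderboard tasks follow the same pattern: hellaswag (10-shot), hendrycksTest-* (5-shot), and truthfulqa_mc (0-shot).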
Contact Us
About Upstage
- Upstage is a company specializing in Large Language Models (LLMs) and AI. We will help you build private LLMs and related applications. If you have a dataset for building domain-specific LLMs or LLM applications, please contact us at contact@upstage.ai
- As of August 1st, our 70B model holds the top spot on the Open LLM Leaderboard, making it the current leading performer globally.