🧠 Phyen: Fine-Tuned Qwen Model for Physics and Engineering Reasoning
- Model ID: parani01/phyen
- Base Model: Qwen-7B
- Type: fully fine-tuned, merged checkpoint (not a LoRA adapter)
- Framework: PyTorch + Transformers
- Files: 4 × .safetensors shards + tokenizer
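Because the checkpoint ships as four sharded .safetensors files, it can help to mirror the repository locally before loading. A minimal sketch using `snapshot_download` from huggingface_hub (not part of the original card):

```python
from huggingface_hub import snapshot_download

# Download all checkpoint shards and tokenizer files into the local
# Hugging Face cache and return the resulting directory path.
local_dir = snapshot_download(repo_id="parani01/phyen")
print(local_dir)  # contains the 4 .safetensors shards + tokenizer files
```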
🧩 Model Summary
Phyen is a specialized variant of the Qwen model, fine-tuned for physics, engineering, and technical scientific reasoning.
It has been trained and merged to perform better on domain-specific text such as:
- Thermodynamics
- Fluid mechanics
- Structural analysis
- General physics conceptual reasoning
It retains general Qwen language ability but prioritizes scientific precision.
🚀 Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "parani01/phyen"

# Load the tokenizer and the merged model; device_map="auto" places the
# weights on available devices (requires the `accelerate` package).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the laws of thermodynamics in simple words."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
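If the bundled Qwen tokenizer ships a chat template (typical for Qwen instruct-style checkpoints, though this card does not say so explicitly), prompts can also be formatted as messages. A sketch under that assumption; the system prompt and question below are illustrative:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "parani01/phyen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumes the tokenizer defines a chat template; if it does not,
# apply_chat_template raises an error and the plain-prompt path above applies.
messages = [
    {"role": "system", "content": "You are a physics and engineering assistant."},
    {"role": "user", "content": "Derive the efficiency of an ideal Carnot engine."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the formatted prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```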
📌 Intended Use
Intended for:
- Physics and engineering question answering
- Scientific writing and conceptual reasoning
- Educational or research assistants
Not intended for:
- General conversation
- Legal or medical advice
- Sensitive or factual decision-making outside the training domain
⚙️ Technical Details
| Field | Description |
|---|---|
| Architecture | Qwen-style Transformer |
| Parameters | ~7 Billion |
| Precision | bfloat16 / float16 (auto-detect) |
| Framework | PyTorch + safetensors |
| Tokenizer | Qwen tokenizer |
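The precision row above can be made explicit at load time. A minimal sketch, assuming a transformers version where `torch_dtype="auto"` reads the dtype recorded in the checkpoint config:

```python
from transformers import AutoModelForCausalLM

# torch_dtype="auto" uses the dtype stored with the checkpoint
# (bfloat16/float16); pass torch.float16 explicitly on GPUs
# without bfloat16 support.
model = AutoModelForCausalLM.from_pretrained(
    "parani01/phyen",
    torch_dtype="auto",
    device_map="auto",  # requires the `accelerate` package
)
```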
⚠️ Limitations
- May hallucinate answers outside scientific domains
- Requires a GPU (≥16 GB VRAM recommended) for efficient inference; for smaller GPUs, see the quantization sketch after this list
- Does not guarantee factual correctness in all contexts
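For GPUs under the recommended 16 GB, 4-bit loading is one possible workaround. This is an assumption, not a published configuration: the bitsandbytes setup below is hypothetical, and answer quality on physics derivations should be re-validated after quantization.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hypothetical memory-saving setup: 4-bit NF4 quantization via bitsandbytes.
# Not part of the published checkpoint; expect some quality loss.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="bfloat16",
)
model = AutoModelForCausalLM.from_pretrained(
    "parani01/phyen",
    quantization_config=bnb_config,
    device_map="auto",
)
```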
🧩 Training Information
- Base Model: Qwen-7B (open-source base)
- Fine-tuned with: domain-specific corpus of physics and engineering text
- Merged into: full model weights (`merged_vlm_physics`)
🏷️ License
Add your license here (for example, Apache-2.0 or MIT).
Ensure you comply with the original Qwen base model's license.
👨‍💻 Author
Developed and fine-tuned by Parani Dharan
Published at: Hugging Face (parani01)
💬 Citation
If you use this model, please cite it as:
```bibtex
@misc{parani2025phyen,
  title     = {Phyen: Fine-tuned Qwen Model for Physics and Engineering Reasoning},
  author    = {Parani Dharan},
  year      = {2025},
  publisher = {Hugging Face}
}
```