runtime error
Exit code: 1. Reason:
Fetching 4 files: 100%|██████████| 4/4 [00:12<00:00, 3.22s/it]
Download complete: 100%|██████████| 18.8G/18.8G [00:12<00:00, 2.65GB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 33, in <module>
    model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 380, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4089, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/user/.cache/huggingface/modules/transformers_modules/THUDM/LongWriter_hyphen_glm4_hyphen_9b/2e7f69100db28b2874e80739aed789dae214d3c0/modeling_chatglm.py", line 912, in __init__
    self.max_sequence_length = config.max_length
  File "/usr/local/lib/python3.10/site-packages/transformers/configuration_utils.py", line 419, in __getattribute__
    return super().__getattribute__(key)
AttributeError: 'ChatGLMConfig' object has no attribute 'max_length'. Did you mean: 'seq_length'?
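The traceback shows that the model's bundled `modeling_chatglm.py` reads `config.max_length` in `__init__`, but the checkpoint's `ChatGLMConfig` only defines `seq_length`, so construction fails after the weights have already downloaded. A minimal sketch of one possible workaround, assuming `seq_length` is an acceptable stand-in for the missing `max_length` (the `ensure_max_length` helper is hypothetical, not part of transformers; pinning the transformers release the remote code was written against may also resolve it):

```python
# Hypothetical workaround sketch, untested against this exact checkpoint:
# load the config first, backfill the attribute the remote code expects,
# then pass the patched config to from_pretrained.
from types import SimpleNamespace


def ensure_max_length(config):
    """If the config lacks max_length, fall back to its seq_length value."""
    if getattr(config, "max_length", None) is None:
        config.max_length = config.seq_length
    return config


# With transformers it would be used roughly like this (MODEL as in the
# traceback's app.py):
#
#   from transformers import AutoConfig, AutoModelForCausalLM
#   config = AutoConfig.from_pretrained(MODEL, trust_remote_code=True)
#   ensure_max_length(config)
#   model = AutoModelForCausalLM.from_pretrained(
#       MODEL, config=config, torch_dtype=torch.bfloat16,
#       trust_remote_code=True, device_map="auto")

# Self-check with a stand-in config object instead of the real ChatGLMConfig:
cfg = SimpleNamespace(seq_length=8192)
ensure_max_length(cfg)
print(cfg.max_length)  # → 8192
```

`getattr(..., None)` covers both the attribute being absent and it being explicitly set to `None`, and the helper leaves any existing `max_length` untouched.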