How to fix GPU out-of-memory errors
Source: 11-10 Text Sentiment Classification - test script definition

weixin_慕勒4383646
2021-11-14
Teacher:
Hello! When testing the model I ran into the following error. After searching online, it turns out to be a GPU out-of-memory problem. How should I fix this? Thank you!
```
Traceback (most recent call last):
  File "D:/pythonProject/learnPytorch/mooc/chapter_11/test.py", line 31, in
    pred_soft_max = model_textcls.forward(data)
  File "D:\pythonProject\learnPytorch\mooc\chapter_11\model.py", line 23, in forward
    out, _ = self.ltsm(embed)
  File "D:\Envs\machine_learning\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Envs\machine_learning\lib\site-packages\torch\nn\modules\rnn.py", line 692, in forward
    self.dropout, self.training, self.bidirectional, self.batch_first)
RuntimeError: CUDA out of memory. Tried to allocate 1.77 GiB (GPU 0; 6.00 GiB total capacity; 2.41 GiB already allocated; 0 bytes free; 4.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Process finished with exit code 1
```
2 Answers
-
weixin_慕勒4383646
Original poster
2021-12-01
Solved with the teacher's guidance. Clearing the cache after every inference works every time. Thank you, teacher!
-
会写代码的好厨师
2021-12-01
What GPU memory configuration does your machine have? If the GPU memory is simply too small, there is nothing to be done except switch to CPU inference. If the GPU memory is reasonably large, you can clear the occupied memory after each inference with `torch.cuda.empty_cache()`.
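A minimal sketch of the two suggestions combined: fall back to CPU when no GPU is available, run inference under `torch.no_grad()` (which avoids storing activations for backprop, usually the largest memory saver), and call `torch.cuda.empty_cache()` after each step. The tiny LSTM and the tensor sizes here are placeholders, not the course's actual `model_textcls`.

```python
import torch
import torch.nn as nn

# Fall back to CPU automatically when no GPU is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for the course's text classifier (hypothetical sizes).
model = nn.LSTM(input_size=32, hidden_size=64, batch_first=True).to(device)
model.eval()

# Fake batch of embedded text: (batch, seq_len, embed_dim).
batch = torch.randn(4, 10, 32, device=device)

# no_grad() stops PyTorch from keeping activations for backprop,
# which is usually the biggest memory saving during inference.
with torch.no_grad():
    out, _ = model(batch)

# Release cached GPU blocks after each inference step (no-op on CPU).
if torch.cuda.is_available():
    torch.cuda.empty_cache()

print(out.shape)  # torch.Size([4, 10, 64])
```

If memory still overflows, reducing the test batch size is the other common lever, since the allocation scales with `batch * seq_len * hidden_size`.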