What should I do when the values stop changing after step 2500?

Source: 2-7 Neuron Implementation (binary-classification logistic regression)

心影交叠

2018-08-11

[Train] step:500,loss:0.35000,acc:0.65000

[Train] step:1000,loss:0.35000,acc:0.65000

[Train] step:1500,loss:0.29998,acc:0.70000

[Train] step:2000,loss:0.24998,acc:0.75000

[Train] step:2500,loss:0.10000,acc:0.90000

[Train] step:3000,loss:0.10000,acc:0.90000

[Train] step:3500,loss:0.10000,acc:0.90000

[Train] step:4000,loss:0.10000,acc:0.90000

[Train] step:4500,loss:0.10000,acc:0.90000

[Train] step:5000,loss:0.10000,acc:0.90000

[Test ]Step:5000,acc:0.55000

[Train] step:5500,loss:0.10000,acc:0.90000

……(every subsequent [Train] line is identical — loss:0.10000,acc:0.90000 — through step 100000)……

[Test ]Step:10000,acc:0.55000

[Test ]Step:15000,acc:0.50000

……(every later [Test ] line is identical — acc:0.50000 — at each 5000-step evaluation through step 100000)……

[Train] step:100000,loss:0.10000,acc:0.90000

[Test ]Step:100000,acc:0.50000


Process finished with exit code 0

After that point the values are basically always the same... Is there a problem with my code? It has happened several runs in a row.


2 Answers

正十七

2018-08-12

It looks like your model hasn't actually learned anything: test accuracy is 50%, yet your training accuracy reaches 90%, so the problem is likely overfitting. One possible cause is that the training and test data are processed differently — for example, one split is normalized and the other is not. Also, since the training accuracy stops improving, the gradients have probably become very small by that point. Many things can cause these two problems. I suggest downloading and running my course code, and then checking whether your data is correct.
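A minimal sketch of the normalization point above (hypothetical NumPy code with made-up data, not the course's actual preprocessing): the test split must be standardized with the mean and standard deviation computed on the *training* split, not with its own statistics, so both splits pass through the same transform.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical train/test feature matrices (100 and 20 samples, 3 features).
x_train = rng.normal(loc=5.0, scale=2.0, size=(100, 3))
x_test = rng.normal(loc=5.0, scale=2.0, size=(20, 3))

# Statistics come from the training split only.
mean = x_train.mean(axis=0)
std = x_train.std(axis=0)

x_train_norm = (x_train - mean) / std
# Reuse the SAME mean/std here -- not x_test.mean() / x_test.std().
x_test_norm = (x_test - mean) / std
```

If the test set were standardized with its own statistics (or not at all), the model would see inputs on a different scale at test time, which can produce exactly this train/test accuracy gap.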


Mr_小祥

2018-08-13

My guess is there's a typo in your code... Double-check it. A single-layer neuron doing binary classification is not very likely to overfit...
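For reference, a single-neuron binary classifier can be sanity-checked in a few lines of plain NumPy (a toy sketch with made-up linearly separable data — not the course code). On data like this, a correct implementation should steadily reach high training accuracy rather than freezing at a fixed loss early on:

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy separable data: label is 1 when the two features sum to a positive value.
x = rng.normal(size=(200, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    p = sigmoid(x @ w + b)            # predicted probability of class 1
    grad_w = x.T @ (p - y) / len(y)   # gradient of mean cross-entropy w.r.t. w
    grad_b = np.mean(p - y)           # gradient w.r.t. the bias
    w -= lr * grad_w
    b -= lr * grad_b

acc = float(np.mean((sigmoid(x @ w + b) > 0.5) == y))
```

If a run of your own code on such data also plateaus immediately, that points to a bug in the loss or the update step rather than to overfitting.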


深度学习之神经网络(CNN/RNN/GAN)算法原理+实战
