aihot  2017-12-10 16:25:59  Machine Learning

Some Concrete Examples

 

A classic example is using a neural network to perform logic operations. We can simulate the AND operation with a two-layer network. Here is the code:

    import numpy as np

    # AND operation: inputs as columns of X, labels as a row vector
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]).T
    y = np.array([[0], [0], [0], [1]]).T

    # 2 inputs, 4 hidden units, 1 output, learning rate 0.1
    net = Net(2, 4, 1, 0.1)
    net.train(X, y)
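
The Net class used above was assembled in the earlier posts of this series and is not repeated here. For readers jumping in at part 3, the following is a minimal sketch of what such a class might look like: the constructor arguments are read as (inputs, hidden units, outputs, learning rate), bias terms and the original logging format are glossed over, and the default random initialization is an assumption, so treat it as an illustration rather than the series' actual implementation.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class FC(object):
        """One fully connected sigmoid layer; w has shape (n_in, n_out)."""
        def __init__(self, n_in, n_out):
            self.w = np.random.randn(n_in, n_out) * 0.1  # small random default (assumed)

        def forward(self, x):
            return sigmoid(self.w.T.dot(x))

    class Net(object):
        """Two sigmoid layers trained by gradient descent on squared error."""
        def __init__(self, n_in, n_hidden, n_out, lr):
            self.fc1 = FC(n_in, n_hidden)
            self.fc2 = FC(n_hidden, n_out)
            self.lr = lr

        def train(self, X, t, iters=10000):
            for i in range(iters):
                a1 = self.fc1.forward(X)                 # (n_hidden, n_samples)
                y = self.fc2.forward(a1)                 # (n_out, n_samples)
                d2 = (y - t) * y * (1 - y)               # output-layer delta
                d1 = self.fc2.w.dot(d2) * a1 * (1 - a1)  # hidden-layer delta
                self.fc2.w -= self.lr * a1.dot(d2.T)
                self.fc1.w -= self.lr * X.dot(d1.T)
                if i % 1000 == 0:
                    loss = 0.5 * np.mean((y - t) ** 2)   # matches the loss scale in the logs
                    print("iter = %d, loss = %s" % (i, loss))

The sketch deliberately exposes fc1 and fc2 with a raw w attribute, since the initialization experiments below poke at the weights directly.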

Here is the output from the training run. The final result is quite good: after 10,000 iterations the model's predictions are very close to the expected labels. In fact, if we kept iterating, the accuracy would improve further and the loss would keep shrinking:

    iter = 0, loss =0.105256639066
    === Label vs Prediction ===
    t=[[0 0 0 1]]
    y=[[ 0.40930536  0.4617139   0.36923076  0.4299025 ]]
    iter = 1000, loss =0.0229368486589
    === Label vs Prediction ===
    t=[[0 0 0 1]]
    y=[[ 0.04445123  0.22684496  0.17747671  0.68605373]]
    iter = 2000, loss =0.00657594469044
    === Label vs Prediction ===
    t=[[0 0 0 1]]
    y=[[ 0.01057127  0.11332809  0.11016211  0.83411794]]
    iter = 3000, loss =0.00322081318498
    === Label vs Prediction ===
    t=[[0 0 0 1]]
    y=[[ 0.00517544  0.07831654  0.07871461  0.88419737]]
    iter = 4000, loss =0.00201059297485
    === Label vs Prediction ===
    t=[[0 0 0 1]]
    y=[[ 0.00336374  0.06171018  0.0624756   0.90855558]]
    iter = 5000, loss =0.00142205310651
    === Label vs Prediction ===
    t=[[0 0 0 1]]
    y=[[ 0.00249895  0.05189239  0.05257126  0.92309992]]
    iter = 6000, loss =0.00108341055769
    === Label vs Prediction ===
    t=[[0 0 0 1]]
    y=[[ 0.00200067  0.04532728  0.04585262  0.93287134]]
    iter = 7000, loss =0.000866734887908
    === Label vs Prediction ===
    t=[[0 0 0 1]]
    y=[[ 0.00167856  0.04058314  0.04096262  0.9399489 ]]
    iter = 8000, loss =0.000717647908313
    === Label vs Prediction ===
    t=[[0 0 0 1]]
    y=[[ 0.00145369  0.03696819  0.0372232   0.94534786]]
    iter = 9000, loss =0.000609513241467
    === Label vs Prediction ===
    t=[[0 0 0 1]]
    y=[[ 0.00128784  0.03410575  0.03425751  0.94962473]]
    === Final ===
    X=[[0 0 1 1]
     [0 1 0 1]]
    t=[[0 0 0 1]]
    y=[[ 0.00116042  0.03177232  0.03183889  0.95311123]]

Remember to Initialize

 

Initialization matters enormously for a neural network (the kind of thing worth saying three times, though I'll spare you). Here is an experiment: if we initialize all the parameters to zero, something terrible happens:

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]).T
    y = np.array([[0], [0], [0], [1]]).T

    net = Net(2, 4, 1, 0.1)
    net.fc1.w.fill(0)   # zero out both layers' weights before training
    net.fc2.w.fill(0)
    net.train(X, y)
    print("=== w1 ===")
    print(net.fc1.w)
    print("=== w2 ===")
    print(net.fc2.w)

Look directly at the result:

    === Final ===
    X=[[0 0 1 1]
     [0 1 0 1]]
    t=[[0 0 0 1]]
    y=[[ 3.22480024e-04 2.22335711e-02 2.22335711e-02 9.57711099e-01]]
    === w1 ===
    [[-2.49072772 -2.49072772 -2.49072772 -2.49072772]
     [-2.49072772 -2.49072772 -2.49072772 -2.49072772]]
    === w2 ===
    [[-3.373125]
     [-3.373125]
     [-3.373125]
     [-3.373125]]

The predictions themselves happen to come out acceptable (AND is linearly separable, so even a crippled network can fit it), but look at the weights: within each layer, every unit has learned exactly the same parameters. With all-zero initialization every hidden unit computes the same activation on every input, so backpropagation hands every hidden unit the same gradient, and the units remain clones of one another forever; the layer never becomes more expressive than a single unit.
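To see this without the Net class, here is a standalone sketch of the same experiment: a plain two-layer sigmoid network with squared-error loss, bias terms omitted for brevity (the real Net presumably carries biases, which is why the exact numbers in the log differ):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]).T  # (2, 4)
    t = np.array([[0], [0], [0], [1]]).T              # (1, 4)

    w1 = np.zeros((2, 4))  # all-zero initialization
    w2 = np.zeros((4, 1))

    for _ in range(10000):
        a1 = sigmoid(w1.T.dot(X))        # every hidden unit: the same output
        y = sigmoid(w2.T.dot(a1))
        d2 = (y - t) * y * (1 - y)       # output-layer delta
        d1 = w2.dot(d2) * a1 * (1 - a1)  # an identical row for every hidden unit
        w2 -= 0.1 * a1.dot(d2.T)
        w1 -= 0.1 * X.dot(d1.T)

    print(w1)  # all four columns are equal: the units never diverge
    print(w2)  # all four rows are equal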

 

But what if we set each layer's parameters to a different constant value? Try the code below; the note after it gives the answer.

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]).T
    y = np.array([[0], [0], [0], [1]]).T

    net = Net(2, 4, 1, 0.1)
    net.fc1.w.fill(1)   # layer 1: all ones
    net.fc2.w.fill(0)   # layer 2: all zeros
    net.train(X, y)
    print("=== w1 ===")
    print(net.fc1.w)
    print("=== w2 ===")
    print(net.fc2.w)
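
The same symmetry argument applies: the two layers now start out different, but within each layer every unit is still identical to its neighbors and receives identical gradients, so after training the columns of w1 remain equal. What actually breaks the symmetry is randomness, which is presumably what Net's default initialization already provides (the very first experiment trained fine without any manual fill). An explicit version might look like this, with the weight shapes read off the logs above:

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]).T
    y = np.array([[0], [0], [0], [1]]).T

    net = Net(2, 4, 1, 0.1)
    # overwrite with small random weights instead of constants
    net.fc1.w = np.random.randn(2, 4) * 0.1
    net.fc2.w = np.random.randn(4, 1) * 0.1
    net.train(X, y)
    print(net.fc1.w)  # the columns now differ: the hidden units can specialize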

 

