Activation Function

Posted on 2018-12-15 | Category: TensorFlow

Why do we need an activation function? A purely linear model, y = W*x + b, can only express linear relationships, so it cannot solve nonlinear problems. The activation function (AF) supplies the missing nonlinearity, turning y = Wx into y = AF(Wx). Common nonlinear activation functions include relu, sigmoid, and tanh. ReLU is defined as f(x) = x for x > 0 and f(x) = 0 for x <= 0. You can also define your own activation function, as long as it is differentiable. In convolutional neural networks, relu is usually recommended; in recurrent neural networks (Recurrent Neural Network), relu or tanh is recommended. The sigmoid function (also called the logistic function) is prone to vanishing gradients. The script below implements several common activations with NumPy and plots them; a few short sketches after the code block expand on these points.

```python
import numpy as np
import matplotlib.pyplot as plt


def sigmoid(x):
    # Logistic function: squashes input into (0, 1)
    y = 1.0 / (1.0 + np.exp(-x))
    return y


def elu(x, a):
    # Exponential Linear Unit: a * (exp(x) - 1) for x < 0, identity otherwise
    y = x.copy()
    for i in range(y.shape[0]):
        if y[i] < 0:
            y[i] = a * (np.exp(y[i]) - 1)
    return y


def lrelu(x, a):
    # Leaky ReLU: small slope a for x < 0 instead of zero
    y = x.copy()
    for i in range(y.shape[0]):
        if y[i] < 0:
            y[i] = a * y[i]
    return y


def relu(x):
    # ReLU: f(x) = x for x > 0, f(x) = 0 for x <= 0
    y = x.copy()
    y[y < 0] = 0
    return y


def softplus(x):
    # Smooth approximation of ReLU
    y = np.log(np.exp(x) + 1)
    return y


def softsign(x):
    # Saturates like tanh, but more slowly (polynomial instead of exponential)
    y = x / (np.abs(x) + 1)
    return y


def tanh(x):
    # Hyperbolic tangent: squashes input into (-1, 1)
    y = (1.0 - np.exp(-2 * x)) / (1.0 + np.exp(-2 * x))
    return y


# Evaluate every activation on the same input range
x = np.linspace(start=-10, stop=10, num=100)
y_sigmoid = sigmoid(x)
y_elu = elu(x, 0.25)
y_lrelu = lrelu(x, 0.25)
y_relu = relu(x)
y_softplus = softplus(x)
y_softsign = softsign(x)
y_tanh = tanh(x)

# Display the graphs in a 3x3 grid
plt.subplot(331)
plt.title('sigmoid')
plt.plot(x, y_sigmoid)
plt.grid(True)

plt.subplot(332)
plt.title('elu')
plt.plot(x, y_elu)
plt.grid(True)

plt.subplot(333)
plt.title('lrelu')
plt.plot(x, y_lrelu)
plt.grid(True)

plt.subplot(334)
plt.title('relu')
plt.plot(x, y_relu)
plt.grid(True)

plt.subplot(335)
plt.title('softplus')
plt.plot(x, y_softplus)
plt.grid(True)

plt.subplot(336)
plt.title('softsign')
plt.plot(x, y_softsign)
plt.grid(True)

plt.subplot(337)
plt.title('tanh')
plt.plot(x, y_tanh)
plt.grid(True)

plt.show()
```
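To make the "linear alone is not enough" point concrete, here is a small NumPy check (a sketch added here, not part of the original script): stacking two weight matrices without an activation collapses into one linear map, while putting relu between them breaks that collapse.

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed, values are only illustrative

W1 = rng.normal(size=(4, 3))     # first "layer"
W2 = rng.normal(size=(2, 4))     # second "layer"
x = rng.normal(size=(3,))        # one input vector

# Without an activation, two layers are just one bigger linear map:
# W2 @ (W1 @ x) == (W2 @ W1) @ x
linear_stack = W2 @ (W1 @ x)
single_layer = (W2 @ W1) @ x
print(np.allclose(linear_stack, single_layer))    # True: still linear

# Inserting a nonlinearity between the layers breaks the collapse
def relu(z):
    return np.maximum(z, 0.0)

nonlinear_stack = W2 @ relu(W1 @ x)
print(np.allclose(nonlinear_stack, single_layer))  # almost always False
```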
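The same y = AF(Wx + b) pattern is what you write inside a TensorFlow model. A minimal sketch, assuming TensorFlow 2.x; the shapes and variable names are illustrative only and not from the original post:

```python
import tensorflow as tf

x = tf.random.normal([8, 3])               # a batch of 8 samples with 3 features
W = tf.Variable(tf.random.normal([3, 4]))  # weights of one fully connected layer
b = tf.Variable(tf.zeros([4]))             # bias

z = tf.matmul(x, W) + b  # linear part: Wx + b
y = tf.nn.relu(z)        # activation: y = AF(Wx + b)
```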
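The vanishing-gradient remark about sigmoid is easy to verify: its derivative is sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)), which peaks at 0.25 and decays towards zero as |x| grows, so gradients shrink quickly once they pass through several saturated sigmoid units. A quick check (added here, not in the original script):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1.0 - s)

for x in (0.0, 2.0, 5.0, 10.0):
    print(f"x = {x:5.1f}  sigmoid'(x) = {sigmoid_grad(x):.6f}")
# x =   0.0  sigmoid'(x) = 0.250000   <- the largest the gradient can ever be
# x =   2.0  sigmoid'(x) = 0.104994
# x =   5.0  sigmoid'(x) = 0.006648
# x =  10.0  sigmoid'(x) = 0.000045
```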