Python Learning: Machine Learning
Linear Regression, the First Machine Learning Algorithm - Univariate Linear Regression
"""使用sklearn实现线性回归"""import numpy as npfrom sklearn.linear_model import LinearRegressionX = 2 * np.random.rand(100, 1)y = 4 + 3 * X + np.random.randn(100, 1)lin_reg = LinearRegression()#fit方法就是训练模型的方法lin_reg.fit(X, y)#intercept 是截距 coef是参数print(lin_reg.intercept_, lin_reg.coef_)#预测X_new = np.array([[0], [2]])print(lin_reg.predict(X_new))
#encoding=utf-8"""线性回归实现梯度下降的批处理(batch_gradient_descent )"""import numpy as npX = 2 * np.random.rand(100, 1)y = 4 + 3 * X + np.random.randn(100, 1)X_b = np.c_[np.ones((100, 1)), X]#print(X_b)learning_rate = 0.1#通常在做机器学习的时候,一般不会等到他收敛,因为太浪费时间,所以会设置一个收敛次数n_iterations = 1000m = 100#1.初始化theta, w0...wntheta = np.random.randn(2, 1)count = 0#4. 不会设置阈值,之间设置超参数,迭代次数,迭代次数到了,我们就认为收敛了for iteration in range(n_iterations): count += 1 #2. 接着求梯度gradient gradients = 1.0/m * X_b.T.dot(X_b.dot(theta)-y) #3. 应用公式调整theta值, theta_t + 1 = theta_t - grad * learning_rate theta = theta - learning_rate * gradientsprint(count)print(theta)
"""Vectorized dot product vs. an explicit Python for loop"""
import numpy as np
import time

a = np.random.rand(1000000)
b = np.random.rand(1000000)

# Time the vectorized dot product
tic = time.time()
c = np.dot(a, b)
toc = time.time()
print("Vectorized version:" + str(1000 * (toc - tic)) + 'ms')

# Time the equivalent explicit Python loop
c = 0
tic = time.time()
for i in range(1000000):
    c += a[i] * b[i]
toc = time.time()
print("for loop:" + str(1000 * (toc - tic)) + 'ms')
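A single time.time() measurement can fluctuate quite a bit from run to run. As a hedged alternative (not in the original post), the standard-library timeit module repeats the measurement and lets you take the best of several runs; the snippet below is a sketch that assumes the same array sizes as above.

import timeit
import numpy as np

a = np.random.rand(1000000)
b = np.random.rand(1000000)

# Best of 5 repeats of 10 calls each, reported per call, for the vectorized dot product
vec_ms = min(timeit.repeat(lambda: np.dot(a, b), repeat=5, number=10)) / 10 * 1000
# The Python-level loop is slow, so time a single call per repeat
loop_ms = min(timeit.repeat(lambda: sum(a[i] * b[i] for i in range(len(a))), repeat=3, number=1)) * 1000
print("Vectorized version: %.3f ms" % vec_ms)
print("for loop: %.3f ms" % loop_ms)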