Teacher Bobo, a quick question: what if we don't use the dot method?
Source: 5-4 Vectorization
爱西瓜同志
2019-03-29
Is it also possible to operate on the vectors directly with element-wise subtraction and multiplication, then apply sum to the numerator and the denominator separately, and divide the two results? If that works, would the efficiency be about the same as the dot method?
2 Answers
Sorry, I don't quite follow what you mean. Could you express it in code?
2019-03-29
爱西瓜同志
(Original Poster)
2019-03-29
Initially, I used a for loop to accumulate num and d:
import numpy as np

def fit(self, x_train, y_train):
    """Train the Simple Linear Regression model on the training data x_train, y_train."""
    assert x_train.ndim == 1, \
        "Simple Linear Regressor can only solve single feature training data."
    assert len(x_train) == len(y_train), \
        "the size of x_train must be equal to the size of y_train"

    x_mean = np.mean(x_train)
    y_mean = np.mean(y_train)

    num = 0.0  # numerator: sum of (x - x_mean) * (y - y_mean)
    d = 0.0    # denominator: sum of (x - x_mean) ** 2
    for x, y in zip(x_train, y_train):
        num += (x - x_mean) * (y - y_mean)
        d += (x - x_mean) ** 2

    self.a_ = num / d
    self.b_ = y_mean - self.a_ * x_mean
    return self
After vectorizing it:
def fit(self, x_train, y_train):
    """Train the Simple Linear Regression model on the training data x_train, y_train."""
    assert x_train.ndim == 1, \
        "Simple Linear Regressor can only solve single feature training data."
    assert len(x_train) == len(y_train), \
        "the size of x_train must be equal to the size of y_train"

    x_mean = np.mean(x_train)
    y_mean = np.mean(y_train)

    # Numerator and denominator as inner products of the centered vectors
    self.a_ = (x_train - x_mean).dot(y_train - y_mean) / \
              (x_train - x_mean).dot(x_train - x_mean)
    self.b_ = y_mean - self.a_ * x_mean
    return self
This version mainly relies on the dot method.
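For reference, a quick check on toy data (the numbers below are made up purely for illustration) confirms the loop version and the dot version compute the same slope:

import numpy as np

# Toy data, purely illustrative
x_train = np.array([1., 2., 3., 4., 5.])
y_train = np.array([1., 3., 2., 3., 5.])
x_mean = np.mean(x_train)
y_mean = np.mean(y_train)

# Loop version: accumulate numerator and denominator
num, d = 0.0, 0.0
for x, y in zip(x_train, y_train):
    num += (x - x_mean) * (y - y_mean)
    d += (x - x_mean) ** 2
a_loop = num / d

# dot version: inner products of the centered vectors
a_dot = (x_train - x_mean).dot(y_train - y_mean) / \
        (x_train - x_mean).dot(x_train - x_mean)

assert np.isclose(a_loop, a_dot)  # same result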
My question is:
Is it also possible to operate on the vectors directly with element-wise arithmetic, then apply sum to the numerator and the denominator separately, and divide the two sums? If so, shouldn't the efficiency be similar to the dot method?
np.sum((x_train - x_mean)*(y_train - y_mean)) / np.sum((x_train - x_mean) * (x_train - x_mean))
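If it helps, here is a rough timing sketch (the array size and seed are arbitrary, and results vary with the machine and the NumPy/BLAS build). Both forms are vectorized and give identical results; dot tends to be slightly faster because np.sum(u * v) first materializes the temporary array u * v, while dot accumulates the products in a single call:

import timeit
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed
n = 1_000_000                    # arbitrary size
x_train = rng.random(n)
y_train = 2.0 * x_train + rng.normal(scale=0.1, size=n)

u = x_train - np.mean(x_train)   # centered x
v = y_train - np.mean(y_train)   # centered y

a_dot = u.dot(v) / u.dot(u)
a_sum = np.sum(u * v) / np.sum(u * u)
assert np.isclose(a_dot, a_sum)  # identical slope

t_dot = timeit.timeit(lambda: u.dot(v) / u.dot(u), number=100)
t_sum = timeit.timeit(lambda: np.sum(u * v) / np.sum(u * u), number=100)
print(f"dot: {t_dot:.3f}s   sum: {t_sum:.3f}s")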