Implementing Linear Regression
xxvd
2018. 3. 19. 23:37
import tensorflow as tf

x_train = [1, 2, 3]
y_train = [1, 2, 3]

W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

hypothesis = x_train * W + b                                # define the hypothesis H(x) = Wx + b
cost = tf.reduce_mean(tf.square(hypothesis - y_train))      # mean squared error over the training data

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)   # gradient descent, learning rate 0.01
train = optimizer.minimize(cost)                            # call the optimizer's minimize() to minimize cost

sess = tf.Session()
sess.run(tf.global_variables_initializer())                 # initialize Variables such as W and b so they can be used

for step in range(2001):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(cost), sess.run(W), sess.run(b))
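The cost being minimized here is the mean squared error, cost(W, b) = (1/m) * sum_i (W*x_i + b - y_i)^2, and gradient descent repeatedly adjusts W and b to reduce it.

As an optional extension that is not part of the original listing, the same model can be written with tf.placeholder so the training data is fed in at run time and the trained model can then be evaluated on new inputs. This is a minimal sketch assuming the same TensorFlow 1.x API as the code above; the input values in feed_dict are illustrative only.

import tensorflow as tf

# Placeholders let us feed arbitrary data at session run time (TF 1.x style).
X = tf.placeholder(tf.float32, shape=[None])
Y = tf.placeholder(tf.float32, shape=[None])

W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

hypothesis = X * W + b
cost = tf.reduce_mean(tf.square(hypothesis - Y))
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for step in range(2001):
    # Feed the same toy data as above; any x/y arrays of equal length would work.
    sess.run(train, feed_dict={X: [1, 2, 3], Y: [1, 2, 3]})

# After training, query the model on inputs it has not seen (illustrative values).
print(sess.run(hypothesis, feed_dict={X: [5, 2.5]}))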