
02. Linear regression example with TensorFlow

by bsion 2018. 8. 16.
02. The Concept of Linear Regression

Source: Machine Learning for Everyone (모두를 위한 머신러닝, http://hunkim.github.io/ml/)


Theory





Example

Hypothesis and cost function


Through training, we need to find the W and b that minimize the cost function.
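Before building the TensorFlow graph, the cost can be sketched in plain NumPy (this helper and its name `cost` are not part of the original post, just an illustration of the mean-squared-error formula):

```python
import numpy as np

def cost(W, b, x, y):
    # mean squared error between the hypothesis W*x + b and the labels y
    hypothesis = W * np.asarray(x, dtype=float) + b
    return np.mean((hypothesis - np.asarray(y, dtype=float)) ** 2)

x_train = [1, 2, 3]
y_train = [1, 2, 3]

print(cost(1.0, 0.0, x_train, y_train))  # 0.0 — the optimum for this data is W=1, b=0
print(cost(0.0, 0.0, x_train, y_train))  # larger away from the optimum
```

Training is the search for the (W, b) pair that drives this value to its minimum.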


Build graph



In [1]:
import tensorflow as tf

x_train = [1, 2, 3]
y_train = [1, 2, 3]

# Variable: a trainable variable managed by TensorFlow
# its value changes during training
W = tf.Variable(tf.random_normal([1]), name='weight')   # random value of rank 1
b = tf.Variable(tf.random_normal([1]), name='bias')

hypothesis = x_train * W + b      # hypothesis node

# Expressed with TensorFlow's built-in math functions
# reduce_mean computes the mean
cost = tf.reduce_mean(tf.square(hypothesis - y_train))

# Minimize cost
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)

Run/update graph and get results

In [4]:
sess = tf.Session()

# When using tf.Variable, the variables must be initialized first
sess.run(tf.global_variables_initializer())

for step in range(2001):
    sess.run(train)
    # cost, W, and b change every time train is run
    if step % 100 == 0:     # observe the values every 100 steps
        print(step, sess.run(cost), sess.run(W), sess.run(b))
0 29.6167 [-1.07884669] [-1.01295555]
100 0.000603383 [ 1.02851284] [-0.06486142]
200 0.000372856 [ 1.02242684] [-0.05098141]
300 0.0002304 [ 1.01762938] [-0.04007579]
400 0.000142373 [ 1.01385832] [-0.0315032]
500 8.79791e-05 [ 1.01089394] [-0.02476436]
600 5.43651e-05 [ 1.00856352] [-0.019467]
700 3.35938e-05 [ 1.00673187] [-0.01530284]
800 2.07591e-05 [ 1.00529182] [-0.01202946]
900 1.28284e-05 [ 1.00415993] [-0.0094564]
1000 7.92717e-06 [ 1.00327015] [-0.00743362]
1100 4.89911e-06 [ 1.00257075] [-0.00584369]
1200 3.02774e-06 [ 1.00202096] [-0.00459392]
1300 1.87119e-06 [ 1.00158882] [-0.00361156]
1400 1.15649e-06 [ 1.00124919] [-0.00283934]
1500 7.14907e-07 [ 1.00098217] [-0.00223236]
1600 4.42105e-07 [ 1.00077212] [-0.00175535]
1700 2.73364e-07 [ 1.00060749] [-0.00138038]
1800 1.69108e-07 [ 1.00047755] [-0.00108575]
1900 1.04648e-07 [ 1.00037563] [-0.00085412]
2000 6.47946e-08 [ 1.000296] [-0.00067206]
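What `GradientDescentOptimizer.minimize(cost)` does each step can be sketched by hand in NumPy (a simplified sketch, not the original post's code — the analytic gradients below are just the derivatives of the mean squared error):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])

W, b = -1.0, -1.0          # arbitrary starting point, like the random init above
lr = 0.01                  # same learning rate as the TensorFlow code

for step in range(2001):
    err = W * x + b - y
    # analytic gradients of mean((W*x + b - y)^2) with respect to W and b
    dW = 2 * np.mean(err * x)
    db = 2 * np.mean(err)
    W -= lr * dW
    b -= lr * db

print(W, b)   # converges toward W ≈ 1, b ≈ 0, matching the session output above
```

Each `sess.run(train)` call above performs exactly this kind of update, with the gradients computed automatically from the graph.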

Using placeholders

In [3]:
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None])
Y = tf.placeholder(tf.float32, shape=[None])

W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

hypothesis = X * W + b

cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Minimize cost
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)

sess = tf.Session()
# When using tf.Variable, the variables must be initialized first
sess.run(tf.global_variables_initializer())

for step in range(2001):
    # capture the run results in Python variables
    cost_val, W_val, b_val, _ = sess.run([cost, W, b, train],
                                         feed_dict={X: [1, 2, 3, 4, 5], Y: [2.1, 3.1, 4.1, 5.1, 6.1]})
    if step % 200 == 0:
        print(step, cost_val, W_val, b_val)
0 23.122606 [-0.6476886] [1.8367788]
200 0.057673745 [0.8440853] [1.6629015]
400 0.014881599 [0.92080045] [1.3859355]
600 0.0038399145 [0.95976925] [1.2452459]
800 0.0009908127 [0.9795641] [1.17378]
1000 0.0002556529 [0.9896194] [1.1374773]
1200 6.596596e-05 [0.99472696] [1.119037]
1400 1.7020879e-05 [0.99732155] [1.1096699]
1600 4.3915397e-06 [0.99863946] [1.1049118]
1800 1.1334625e-06 [0.9993088] [1.1024952]
2000 2.9254358e-07 [0.9996488] [1.1012678]

Testing the hypothesis obtained from training

In [4]:
print(sess.run(hypothesis, feed_dict={X: [13]}))
print(sess.run(hypothesis, feed_dict={X: [2.5, 7.2]}))
[14.096714]
[3.6003885 8.298743 ]
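As a cross-check (not in the original post), a closed-form least-squares fit of the same placeholder data recovers the same line, since the data lies exactly on y = x + 1.1:

```python
import numpy as np

X = np.array([1, 2, 3, 4, 5], dtype=float)
Y = np.array([2.1, 3.1, 4.1, 5.1, 6.1])

# closed-form least-squares fit of a degree-1 polynomial y = W*x + b
W, b = np.polyfit(X, Y, 1)
print(W, b)        # ≈ 1.0, 1.1 — matching the values gradient descent learned above

print(W * 13 + b)  # ≈ 14.1, matching the session's prediction for X=[13]
```

Gradient descent approaches this exact solution as the number of steps grows, which is why the predictions above are so close to 14.1 and [3.6, 8.3].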

