Source: Machine Learning for Everyone (http://hunkim.github.io/ml/)
Data Manipulation Using NumPy
List Slicing
NumPy Indexing
In [18]:
import numpy as np

# Load the whole CSV file into a single float32 array
xy = np.loadtxt('static/data-01-test-score.csv', delimiter=',', dtype=np.float32)
x_data = xy[:, 0:-1]   # every row, all columns except the last
y_data = xy[:, [-1]]   # every row, last column only; the list index keeps it 2-D
print(x_data)
print(y_data)
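The slicing above is why y_data stays a column matrix: indexing with the list [-1] preserves the second dimension, which the shape=[None, 1] placeholder below expects. A minimal standalone sketch of plain list slicing versus NumPy's comma-separated 2-D indexing (the array values are made up for illustration, in the same 4-column shape as the CSV):

import numpy as np

nums = [0, 1, 2, 3, 4]
print(nums[0:-1])     # list slicing: [0, 1, 2, 3]

a = np.array([[73., 80., 75., 152.],
              [93., 88., 93., 185.]])   # illustrative rows, not the real file
print(a[:, 0:-1])     # all rows, columns 0..2 -> shape (2, 3)
print(a[:, -1])       # last column as a 1-D vector -> shape (2,)
print(a[:, [-1]])     # last column kept as a 2-D column -> shape (2, 1)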
In [2]:
import tensorflow as tf

# Queue up the list of input files (a single CSV file here)
filename_queue = tf.train.string_input_producer(['static/data-01-test-score.csv'],
                                                shuffle=False,
                                                name='filename_queue')

# Define a reader that returns one line of the file at a time
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)

# Decode each line as CSV; record_defaults fixes the type of the four columns to float
record_defaults = [[0.], [0.], [0.], [0.]]
xy = tf.decode_csv(value, record_defaults=record_defaults)

# Read the data in batches of 10
train_x_batch, train_y_batch = tf.train.batch([xy[0:-1], xy[-1:]], batch_size=10)

X = tf.placeholder(tf.float32, shape=[None, 3])
Y = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.random_normal([3, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

hypothesis = tf.matmul(X, W) + b
cost = tf.reduce_mean(tf.square(hypothesis - Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)

# Launch session
sess = tf.Session()
sess.run(tf.global_variables_initializer())

coord = tf.train.Coordinator()                                  # standard queue-runner boilerplate
threads = tf.train.start_queue_runners(sess=sess, coord=coord)  # standard queue-runner boilerplate

for step in range(2001):
    x_batch, y_batch = sess.run([train_x_batch, train_y_batch])
    cost_val, hy_val, _ = sess.run([cost, hypothesis, train],
                                   feed_dict={X: x_batch, Y: y_batch})
    if step % 500 == 0:
        print(step, "\nCost: ", cost_val, "\nPrediction:\n", hy_val)

coord.request_stop()   # standard queue-runner boilerplate
coord.join(threads)    # standard queue-runner boilerplate
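Once training finishes, the learned W and b can be used to score new inputs by feeding rows of three exam scores into hypothesis. A minimal sketch against the session above (the input values are made up for illustration):

# Hypothetical inputs: three exam scores for two students (made-up values)
print(sess.run(hypothesis, feed_dict={X: [[100., 70., 101.],
                                          [60., 70., 110.]]}))
sess.close()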