Day 16: Deep Neural Networks

보끔밥0302 2022. 3. 3. 06:58

Deep Neural Networks

Two Layers

The input layer is not counted as a layer.
Every hidden layer in a neural network has an activation function.
For classification, the output layer uses an activation function to turn its values into class probabilities.
Regression predicts an arbitrary number, so the output layer needs no activation function (see the sketch below).
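A minimal sketch of that last point (illustrative, not from the book): only the output layer changes with the task.

from tensorflow import keras

clf_output = keras.layers.Dense(10, activation='softmax')  # classification: 10 class probabilities
reg_output = keras.layers.Dense(1)                         # regression: linear output, any real number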
In [1]:
from tensorflow import keras

(train_input, train_target), (test_input, test_target) = keras.datasets.fashion_mnist.load_data()
In [2]:
from sklearn.model_selection import train_test_split

train_scaled = train_input / 255.0              # scale pixel values to the 0-1 range
train_scaled = train_scaled.reshape(-1, 28*28)  # flatten each 28x28 image into a 784-length vector

train_scaled, val_scaled, train_target, val_target = train_test_split(
    train_scaled, train_target, test_size=0.2, random_state=42)
In [3]:
dense1 = keras.layers.Dense(100, activation='sigmoid', input_shape=(784,))  # hidden layer
dense2 = keras.layers.Dense(10, activation='softmax')                       # output layer
 

Building a Deep Neural Network

In [4]:
model = keras.Sequential([dense1, dense2])
 
In [5]:
model.summary()
# 'None' in Output Shape is the batch dimension: fit() uses mini-batch
# gradient descent with a default batch_size of 32 (often raised to 64 or 128).
# 'Param #' is the number of weights and biases in each layer.
 
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 100)               78500     
                                                                 
 dense_1 (Dense)             (None, 10)                1010      
                                                                 
=================================================================
Total params: 79,510
Trainable params: 79,510
Non-trainable params: 0
_________________________________________________________________
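Both comments can be checked by hand. The Param # column follows from Dense layer arithmetic, and the None batch dimension is only fixed when fit() is called (a sketch; batch_size=64 is just one of the values mentioned above):

# Dense layer parameters = inputs * units + units (one bias per unit)
print(784 * 100 + 100)   # 78500 -- hidden layer
print(100 * 10 + 10)     # 1010  -- output layer; 79,510 total, as in the summary

# The batch dimension is set at training time, e.g.:
# model.fit(train_scaled, train_target, epochs=5, batch_size=64)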

 

 

Several Ways to Add Layers

In [6]:
# First method: pass a list of layers straight to Sequential
model = keras.Sequential([
    keras.layers.Dense(100, activation='sigmoid', input_shape=(784,), name='hidden'),
    keras.layers.Dense(10, activation='softmax', name='output')
], name='패션 MNIST 모델')
In [7]:
model.summary()
 
Model: "패션 MNIST 모델"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 hidden (Dense)              (None, 100)               78500     
                                                                 
 output (Dense)              (None, 10)                1010      
                                                                 
=================================================================
Total params: 79,510
Trainable params: 79,510
Non-trainable params: 0
_________________________________________________________________
In [8]:
# Second method: start with an empty Sequential model and add() layers one at a time
model = keras.Sequential()
model.add(keras.layers.Dense(100, activation='sigmoid', input_shape=(784,)))
model.add(keras.layers.Dense(10, activation='softmax'))
In [9]:
model.summary()
 
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_2 (Dense)             (None, 100)               78500     
                                                                 
 dense_3 (Dense)             (None, 10)                1010      
                                                                 
=================================================================
Total params: 79,510
Trainable params: 79,510
Non-trainable params: 0
_________________________________________________________________
 

Training the Model

In [10]:
model.compile(loss='sparse_categorical_crossentropy', metrics='accuracy')
# no optimizer is specified, so Keras uses its default ('rmsprop')

model.fit(train_scaled, train_target, epochs=5)
 
Epoch 1/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.5603 - accuracy: 0.8081
Epoch 2/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.4059 - accuracy: 0.8543
Epoch 3/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3721 - accuracy: 0.8657
Epoch 4/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3486 - accuracy: 0.8735
Epoch 5/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3328 - accuracy: 0.8798
Out[10]:
<keras.callbacks.History at 0x185e4028f40>
 

The ReLU Activation Function

ReLU addresses a weakness of the sigmoid: in its flat (saturated) regions the gradient is nearly zero, so the network struggles to respond to changes and learns slowly, especially in deep networks.
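Concretely, ReLU is just max(0, x): positive inputs pass through unchanged and negative ones become zero, so the gradient does not vanish for positive inputs. A quick check (a sketch, not from the post):

import numpy as np
from tensorflow import keras

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(keras.activations.relu(x).numpy())  # [0. 0. 0. 1. 3.]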
In [11]:
model = keras.Sequential()
model.add(keras.layers.Flatten(input_shape=(28, 28)))
# Flatten is a convenience layer in Keras: it unrolls each input into a
# 1D array and learns no weights of its own.
model.add(keras.layers.Dense(100, activation='relu'))
model.add(keras.layers.Dense(10, activation='softmax'))
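A quick sanity check (illustrative, not in the original post): Flatten performs the same (batch, 28, 28) -> (batch, 784) unrolling that reshape(-1, 28*28) did earlier, just inside the model.

import numpy as np
from tensorflow import keras

x = np.zeros((1, 28, 28))
print(keras.layers.Flatten()(x).shape)   # (1, 784)
print(x.reshape(-1, 28*28).shape)        # (1, 784) -- same result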
In [12]:
model.summary()
 
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 flatten (Flatten)           (None, 784)               0         
                                                                 
 dense_4 (Dense)             (None, 100)               78500     
                                                                 
 dense_5 (Dense)             (None, 10)                1010      
                                                                 
=================================================================
Total params: 79,510
Trainable params: 79,510
Non-trainable params: 0
_________________________________________________________________
In [13]:
(train_input, train_target), (test_input, test_target) = keras.datasets.fashion_mnist.load_data()

train_scaled = train_input / 255.0

train_scaled, val_scaled, train_target, val_target = train_test_split(
    train_scaled, train_target, test_size=0.2, random_state=42)
In [14]:
model.compile(loss='sparse_categorical_crossentropy', metrics='accuracy')

model.fit(train_scaled, train_target, epochs=5)
 
Epoch 1/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.5302 - accuracy: 0.8114
Epoch 2/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3947 - accuracy: 0.8589
Epoch 3/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3573 - accuracy: 0.8714
Epoch 4/5
1500/1500 [==============================] - 2s 2ms/step - loss: 0.3350 - accuracy: 0.8808
Epoch 5/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3175 - accuracy: 0.8868
Out[14]:
<keras.callbacks.History at 0x1858d771d30>
In [15]:
model.evaluate(val_scaled, val_target)
 
375/375 [==============================] - 0s 982us/step - loss: 0.3598 - accuracy: 0.8797
Out[15]:
[0.35983479022979736, 0.8796666860580444]
 

Optimizers

The optimizer determines how the weights are updated during training; Keras offers several variants of gradient descent.
In [16]:
# Method 1: pass the optimizer's name as a string
model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy', metrics='accuracy')
# 'sgd' is stochastic gradient descent
In [17]:
# Method 2: create an optimizer object and pass it in
sgd = keras.optimizers.SGD()
model.compile(optimizer=sgd, loss='sparse_categorical_crossentropy', metrics='accuracy')
In [18]:
# Use the object form when you want to change the optimizer's settings
sgd = keras.optimizers.SGD(learning_rate=0.1)
In [19]:
sgd = keras.optimizers.SGD(momentum=0.9, nesterov=True)
# Understanding these options takes some math; an intermediate text
# (e.g. Hands-On Machine Learning) covers the formulas.
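The formula behind those options fits in a few lines. A plain-NumPy sketch of a single update step (hypothetical helper, not Keras internals): the velocity accumulates a decaying sum of past gradients, and Nesterov momentum additionally evaluates the gradient at the lookahead point w + momentum * v.

import numpy as np

def momentum_step(w, grad, v, learning_rate=0.01, momentum=0.9):
    # v <- momentum * v - learning_rate * grad;  w <- w + v
    v = momentum * v - learning_rate * grad
    return w + v, v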
In [20]:
adagrad = keras.optimizers.Adagrad()
model.compile(optimizer=adagrad, loss='sparse_categorical_crossentropy', metrics='accuracy')
In [21]:
rmsprop = keras.optimizers.RMSprop()
model.compile(optimizer=rmsprop, loss='sparse_categorical_crossentropy', metrics='accuracy')
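Adagrad and RMSprop are adaptive optimizers: they scale each parameter's step size by a running measure of its past gradients. Their settings can be overridden through the object form just like SGD's (a sketch; 0.001 is the usual Keras default):

rmsprop = keras.optimizers.RMSprop(learning_rate=0.001)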
In [22]:
model = keras.Sequential()
model.add(keras.layers.Flatten(input_shape=(28, 28)))
model.add(keras.layers.Dense(100, activation='relu'))
model.add(keras.layers.Dense(10, activation='softmax'))
 
Training the model with the adam optimizer
In [23]:
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics='accuracy')
# The choice of optimizer and its learning_rate are hyperparameters.

model.fit(train_scaled, train_target, epochs=5)
 
Epoch 1/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.5285 - accuracy: 0.8152
Epoch 2/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3990 - accuracy: 0.8584
Epoch 3/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3565 - accuracy: 0.8701
Epoch 4/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3301 - accuracy: 0.8787
Epoch 5/5
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3088 - accuracy: 0.8873
Out[23]:
<keras.callbacks.History at 0x1858d850550>
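Since both are hyperparameters, they can be tuned through the object form, just as with SGD (a sketch; 0.001 is Adam's Keras default):

adam = keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=adam, loss='sparse_categorical_crossentropy', metrics='accuracy')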
In [24]:
model.evaluate(val_scaled, val_target)
 
375/375 [==============================] - 0s 998us/step - loss: 0.3368 - accuracy: 0.8779
Out[24]:
[0.3368491232395172, 0.877916693687439]
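The post stops at the validation set; as a final sketch (not in the original), the held-out test set can be evaluated the same way after the same scaling:

test_scaled = test_input / 255.0
model.evaluate(test_scaled, test_target)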

 

Source: 박해선, 『혼자 공부하는 머신러닝+딥러닝』, 한빛미디어 (2021), pp. 367-388
