Key features:
- Use the Keras API as a "portal" into the TensorFlow library to build the neural network
- Use the "Sequential" model - it lets you add "layers" one after another
- "Feed" the model the training data (creating the model takes a bit of study/effort)
- The model gives back a vector of "logits" or "log-odds" scores, one per class
- Run softmax to convert these scores to probabilities
- Compile the model with an optimizer and a loss function, and configure it to report 'accuracy'
- Call model.fit to train the model on the training data
- Call model.evaluate to see how the model performed (was it a good fit to the data?)
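The logits-to-probabilities step in the list above can be sketched in plain NumPy (a hand-rolled softmax for illustration; in a real Keras project you would use tf.nn.softmax or a Softmax layer, and the logit values below are made up):

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability; exp() of large
    # logits would otherwise overflow.
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

# A hypothetical logit vector with one score per digit class (0-9).
logits = np.array([1.2, -0.3, 0.5, 2.8, -1.1, 0.0, 0.7, 3.5, -0.2, 0.9])
probs = softmax(logits)

print(probs.sum())     # probabilities sum to 1
print(probs.argmax())  # index 7 has the largest logit, so the highest probability
```

The max-subtraction trick does not change the result (it cancels in the ratio) but keeps exp() in a safe numeric range.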
Doing this example immediately raises a billion questions! Answering these questions will help you in future machine learning projects with TensorFlow. So get your answers now!
Some numbers to remember in this "post game analysis" are 0 to 255 and 28x28.
All About the Data - the MNIST Dataset & (NumPy-friendly) Data Format
The MNIST dataset consists of 60,000 training images and 10,000 test images of handwritten digits (0-9), each 28x28 pixels. The images are grayscale, and the dataset is vectorized and stored in NumPy format. Each pixel is encoded as a value from 0 to 255 (typical for grayscale images), where the number represents brightness: 0 is black and 255 is white.
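To make the 0-to-255 and 28x28 numbers concrete, here is a sketch using a synthetic image (the shape and dtype match what the Keras MNIST loader returns per image, but the pixel contents below are made up so the example runs without TensorFlow installed):

```python
import numpy as np

# A stand-in for one MNIST image: 28x28 unsigned 8-bit grayscale pixels.
image = np.zeros((28, 28), dtype=np.uint8)
image[10:18, 12:16] = 255  # a crude white bar on a black background

# The real training set stacks 60,000 of these: shape (60000, 28, 28).
print(image.shape)               # (28, 28)
print(image.min(), image.max())  # 0 255 -- the grayscale encoding range
```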
The Data Set Loading Process (Involves Normalization)
So MNIST is one of the built-in datasets in Keras.
The first step is to normalize the data by dividing each pixel value (in both the training and test sets) by PIXEL_MAX=255, which maps each value into the range 0 to 1 (inclusive) and converts the integer values into decimal (floating-point) values.
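The normalization step amounts to a single division. Shown here on a synthetic uint8 array so it runs without the Keras loader; the real code divides x_train and x_test the same way:

```python
import numpy as np

PIXEL_MAX = 255.0

# Synthetic stand-in for the raw integer pixel data (100 fake images).
x_train = np.random.randint(0, 256, size=(100, 28, 28), dtype=np.uint8)

# Dividing promotes the integers to floats in the range [0.0, 1.0].
x_train = x_train / PIXEL_MAX

print(x_train.dtype)                              # float64
print(x_train.min() >= 0.0, x_train.max() <= 1.0) # True True
```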
Model.fit - In depth
How does this work from a function-calling perspective?
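A minimal sketch of the calling convention, using a toy model on random stand-in data just to show the arguments (the layer sizes, epoch count, and data here are illustrative, not the tutorial's actual values):

```python
import numpy as np
import tensorflow as tf

# Tiny random stand-in data: 64 "images" of 28x28, labels 0-9.
x = np.random.rand(64, 28, 28).astype("float32")
y = np.random.randint(0, 10, size=(64,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10),  # raw logits, one score per class
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# model.fit(x, y, epochs=N) iterates over the data N times, updating the
# weights, and returns a History object whose .history dict holds
# per-epoch loss and metric values.
history = model.fit(x, y, epochs=2, verbose=0)
print(sorted(history.history.keys()))  # ['accuracy', 'loss']
```

The from_logits=True flag tells the loss that the model outputs raw logits rather than softmax probabilities, which is why no Softmax layer appears at the end.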
How do I see how good this model is visually?
This requires some additional programming.
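One route is matplotlib (e.g. plt.imshow on misclassified test images next to their predicted labels); another, sketched here in plain NumPy, is a per-class accuracy breakdown. The label arrays below are hypothetical; in real code y_true comes from the test set and y_pred from np.argmax(model.predict(x_test), axis=1):

```python
import numpy as np

# Hypothetical true and predicted labels for ten test samples.
y_true = np.array([0, 1, 2, 2, 3, 3, 3, 7, 7, 9])
y_pred = np.array([0, 1, 2, 3, 3, 3, 5, 7, 1, 9])

# Overall accuracy: fraction of matching labels.
accuracy = np.mean(y_true == y_pred)
print(f"accuracy: {accuracy:.2f}")  # accuracy: 0.70

# Per-class accuracy shows *which* digits the model confuses,
# something a single overall number hides.
for digit in np.unique(y_true):
    mask = y_true == digit
    print(digit, np.mean(y_pred[mask] == digit))
```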