One question: I wrote a small DataReader class that reads a .csv file and has methods to randomize and retrieve the results in a variety of formats, one of which is the CNTK format. So in my case, I already have the data in memory and can easily access it in CNTK format, in pieces (e.g. training set and test set), and/or as 1D feature and label arrays.
In your EvaluateIrisModel() method, there is this line:
var testMinibatchSource = MinibatchSource.TextFormatMinibatchSource(
trainPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);
referring to an external file in CNTK format at "trainPath".
I looked at the overloads for the TextFormatMinibatchSource() method, but all of them seem to look for an external data file in CNTK format. My question is: right about this point in the code, how can I tell the minibatch source that the "source" should be a public static string[] array named "xyz" in memory?
Even better, is there a set of (particularly good) documentation for all of these CNTK C# methods that you would recommend?
Thanks again for a great article.
|
Hi asiwel,
this is a great question.
Actually, you are looking at my third blog post. The second blog post shows how to use in-memory data and pass it to the trainer.
So I would suggest reading my second article, Train Iris Data by Batch using CNTK and C#[^], to see how csv data can be loaded into CNTK by using the CreateBatch method and passing arrays of feature and label values for training.
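In outline, the idea looks like this. A minimal sketch, assuming the feature and label data are already flattened into float[] arrays in memory; the rdr accessor names and the featureVariable/labelVariable names are placeholders, not code from the articles:
// Sketch: train from in-memory arrays via Value.CreateBatch,
// with no CNTK-format file or MinibatchSource involved.
float[] features = rdr.GetTrainFeatures();   // hypothetical accessor
float[] labels = rdr.GetTrainLabels();       // hypothetical accessor
var xValues = Value.CreateBatch<float>(new NDShape(1, inputDim), features, device);
var yValues = Value.CreateBatch<float>(new NDShape(1, numOutputClasses), labels, device);
var arguments = new Dictionary<Variable, Value>()
{
    { featureVariable, xValues },
    { labelVariable, yValues }
};
trainer.TrainMinibatch(arguments, device);   // trains on the whole in-memory batch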
|
Hi, Bahrudin. Thanks for your quick reply. Actually, I did read all of your blogs and the CodeProject article series, but only finally picked the last one to post comments on. Of course I really do have lots of questions, which is why I have been looking for good documentation. The MS CNTK C# docs currently appear scattered around and mixed up with the Python docs (still sort of haphazard lists of stuff, it seems). Here are some of the links, which I'm sure you are familiar with, but which took me a while to find:
https://docs.microsoft.com/en-us/cognitive-toolkit/using-cntk-with-csharp
https://docs.microsoft.com/en-us/cognitive-toolkit/cntk-library-managed-api
https://docs.microsoft.com/en-us/cognitive-toolkit/CNTK-Eval-Examples#cntk-library-eval-cc-examples
https://cntk.ai/pythondocs/Manual_How_to_create_user_minibatch_sources.html
I followed your 2nd blog (training by batch) to get the trained model, but in your third blog about validation, the test set is retrieved using MinibatchSource.TextFormatMinibatchSource, which reads data in CNTK format from an external file. What I wanted to do (and have not yet figured out) is how to read that data from an internal array in memory and sample it in minibatches (or treat it as just a single batch).
Lots of other questions arise - things I know how to do using TensorFlow or other tools in other languages - but don't yet quite understand for CNTK. I need to study some of the examples in the documentation, I guess.
For example, I want to define models that have more than one hidden layer, each with a different hidden-layer dimension. Actually, the model I want to play with right now is (24:12:6:4). In your Iris example code, it is easy to set the number of hidden layers, but apparently they will all have the same dimension coming out of simpleLayer()?
Another question is how to control the learning rate: not just setting it, but making it adaptive so that it changes as the iterations get closer to the final solution.
Another is how to set up other types of models (other than just a simple feed-forward network).
Too many questions for a CodeProject "comment." But you have certainly triggered a lot of interest and provided excellent starting demonstrations and examples to get users going.
Thanks again and I will look forward to your future work soon to be published!
modified 17-Nov-17 10:15am.
|
Hi asiwel,
those questions bother me as well. That's why I am writing these articles.
Your first question was how to use a batch instead of a MinibatchSource for validation. Yes, I used the minibatch reader, but it can be translated into the same approach as in my second blog post. The next article will explain this.
The second question was how to create more hidden layers with different dimensions. You may reuse my code for creating a deep neural network,
createFFNN, and modify it like this:
private static Function createFFNN(Variable input, int hiddenLayerCount, int[] hiddenDims, int outputDim, Activation activation, string modelName, DeviceDescriptor device)
{
    var glorotInit = CNTKLib.GlorotUniformInitializer(
        CNTKLib.DefaultParamInitScale,
        CNTKLib.SentinelValueForInferParamInitRank,
        CNTKLib.SentinelValueForInferParamInitRank, 1);

    // The first hidden layer takes the input variable and the first dimension.
    Function h = simpleLayer(input, hiddenDims[0], device);
    h = applyActivationFunction(h, activation);

    // Each subsequent hidden layer gets its own dimension from the array.
    for (int i = 1; i < hiddenLayerCount; i++)
    {
        h = simpleLayer(h, hiddenDims[i], device);
        h = applyActivationFunction(h, activation);
    }

    // Output layer.
    var r = simpleLayer(h, outputDim, device);
    r.SetName(modelName);
    return r;
}
As you can see, the hiddenDims argument has been changed into an array of int containing the dimensions of the hidden layers, so for each hidden layer you have to supply a dimension (the number of neurons). A call for your (24:12:6:4) model might look like the sketch below.
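For example, for the (24:12:6:4) model (24 inputs, hidden layers of 12 and 6 neurons, 4 output classes), the call might look like this; the Activation value and variable names here are only placeholders:
// Hypothetical call: two hidden layers with different dimensions.
var model = createFFNN(input, 2, new int[] { 12, 6 }, 4, Activation.Tanh, "ffnn_24_12_6_4", device);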
modified 21-Nov-17 15:10pm.
|
Hi, Bahrudin. I've been busy fiddling with your code, my data, and what appears to be amazingly obtuse documentation for CNTK in C#. (For that matter, the Python documentation is not much better yet, IMO, but it does give clues here and there as to what all the fancy variables, formats, classes, etc., are supposed to mean and do.) Without that documentation, one can get the examples to work without understanding exactly what is happening; trying to modify or extend such examples then becomes very difficult, I think.
But I have been having (sort of) great success with your code and examples. I appreciate your tip about your createFFNN function. The funny thing is that I had already solved that problem myself ... and did it exactly the same way!
I am close, but have not yet figured out how best to use a test file held in memory (even one in CNTK format) instead of a disk file in the Evaluation method. There are many other questions too, e.g.:
1) The learning rate. TrainingParameterScheduleDouble() has several parameters, and the first is the learning rate. The next, I think, are for overriding minibatch and epoch defaults of some kind. Your (0.001125, 1) worked OK for Iris, but on other, larger data sets I get understandable results by omitting the second argument and playing around with various more reasonable values for the first. But I do not find any documentation suggesting what values might be reasonable starters for different models. For my problem, I am finding .02 or even .05 speeds things up considerably for the SGDLearner.
2) However, to use a MomentumSGDLearner, you need both a learning rate and a momentum rate (both of which are of the type returned by TrainingParameterScheduleDouble()). There is no documentation about what size that latter rate should be? (0.002) seems to work OK. (A rough sketch of the wiring appears after these points.)
3) What actually is trainer.PreviousMinibatchEvaluationAverage()? Right near the end of a training run (after many epochs), this appears to be the training misclassification rate, almost exactly. But when just starting and stopping after a few dozen or a few hundred epochs (while looking for bugs, etc.), that "average" is usually a little bit less than the Evaluation method's validation test results. Why?
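For point 2, my wiring looks roughly like this (a sketch only; trainingLoss and prediction stand for whatever loss and evaluation functions you created, and the rates are just the values noted above):
// Sketch: a MomentumSGDLearner takes two schedules, one for the
// learning rate and one for the momentum. Values are illustrative.
var learningRate = new TrainingParameterScheduleDouble(0.02);
var momentumRate = new TrainingParameterScheduleDouble(0.002);
var learner = Learner.MomentumSGDLearner(
    ffnn_model.Parameters(), learningRate, momentumRate, true);  // true = unit gain
var trainer = Trainer.CreateTrainer(
    ffnn_model, trainingLoss, prediction, new List<Learner>() { learner });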
BTW: In your Evaluation method, for validation and testing, the line that computes the "validation" or "test" accuracy of the model always shows up as = 1. A cast is needed to get the decimal places, like this:
float accuracy = (1.0F - (float)miscountTotal / totalCount);
Lots of other stuff to figure out! Great fun! Little cosmetic stuff like this (in your Training method):
Console.WriteLine($"The model trained on {yValues.Data.Shape.Dimensions[2]} cases " + $"to an accuracy of {acc}%");
Imagine having to reach way into the yValues object like that just to grab the number of cases/records/instances/samples/etc. that you might be using right then to train on! (I read the data into memory, randomize it, and split it for train and test early in my programs. I could easily do train, validate, test that way if I wanted to, once I figure out how to use those arrays in memory for validating models and testing results in the Evaluation method! Writing pieces to disk for later retrieval is a bummer when experimenting. A trivial alternative is sketched below.)
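A trivial alternative sketch, since the data is already in memory; the row-count property name here is made up:
// Sketch: keep the case count as a plain variable from the reader
// instead of digging into yValues.Data.Shape.
int trainingCases = rdr.TrainRowCount;   // hypothetical property
Console.WriteLine($"The model trained on {trainingCases} cases " + $"to an accuracy of {acc}%");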
modified 21-Nov-17 17:29pm.
|
Thanks for the comment and the code revision. Yes, the accuracy was incorrectly cast; it is already fixed in the full source code attached to the last blog post (I think).
I just published an article on how to change the learning rate with respect to the iteration number.
So you can find the answer here
|
There was no link associated with "here," but there was a link to "How to Setup Learning Rate .." on your blog, and it was very helpful! Thank you.
The confusion (without clues in regular documentation) results essentially from unfamiliarity with the concepts. "Sample" seems to mean a data record (in most places); "minibatch" means a whole dataset (in what we have been calling batch mode) or a piece/slice of that set (in minibatch mode, I think); and an "epoch" is an iteration. This is sort of like translating a foreign language!
More specifically, the confusion results from the fact that the CNTK.TrainingParameterScheduleDouble method has 6 overloads.
1) CNTK.TrainingParameterScheduleDouble(double value)
This just sets a fixed learning rate for the whole run and works for me, e.g.,
var learningRate = new TrainingParameterScheduleDouble(0.02);
2) CNTK.TrainingParameterScheduleDouble(double value, uint minibatchSize)
This works when using minibatches (does not work well in batch mode for me).
This is your var learningRate = new TrainingParameterScheduleDouble(0.2, 1);
The "1" is a minbatchsize (i.e., one whole minibatch or maybe for each individual minibatch?)
(Heck, it might be a range too?)
3) CNTK.TrainingParameterScheduleDouble(VectorPairSizeTDouble schedule, uint epochSize)
This one is interesting and it is the one you are using to control the Learning Rate in your blog.
I set my vector up like this:
PairSizeTDouble p1 = new PairSizeTDouble(100, 0.03);
PairSizeTDouble p2 = new PairSizeTDouble(200, 0.02);
PairSizeTDouble p3 = new PairSizeTDouble(1, 0.01);
var vp = new VectorPairSizeTDouble() { p1, p2, p3 };
var learningRatePerSample =
new CNTK.TrainingParameterScheduleDouble(vp, 1);
(... probably should have renamed that var learningRatePerEpoch)
which says: use a multiplier of 100 times an epochSize of 1, so that the first 100 epochs will use a learning rate of .03, the next 200 epochs will use .02, and so forth. Here, in this overload, the "1" is an epochSize (or epoch range?). Sure would be nice to read about this stuff first!
But that's learning and experimenting with new code. (CNTK is full of this sort of thing.)
Actually, there is a more interesting (to me) question of why it is designed this way. It seems to me that the "learning rate" should be dynamic; by that, I mean the learner ought to adjust it automatically. (Maybe that's what the MomentumSGD learner does.) How does the user know ahead of time which epochs to pick to change (let alone optimize) the rate? Strange?
Also, these algorithms don't seem to have a meaningful stopping criterion other than the number of epochs. How does a user know an appropriate number of epochs? Why not set a logical stopping point, like when the "evaluation criterion" stops changing in the 4th or 6th decimal place? I guess I could write my code to do that ... and let the computer just run overnight!
EDIT: I did this easily. Just change the printTrainingProgress() method to return a float and check that against its previous value; a sketch follows below. The CrossEntropy "trainLossValue" is a much better choice for a stopping criterion than the "evaluationValue," I think now.
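For anyone curious, the check is roughly this (a sketch; the loop structure, names, and tolerance are illustrative, not the blog's actual code):
// Sketch: stop training when the cross-entropy loss stops changing.
double previousLoss = double.MaxValue;
const double tolerance = 1e-6;
for (int epoch = 1; epoch <= maxEpochs; epoch++)
{
    trainer.TrainMinibatch(arguments, device);
    double loss = trainer.PreviousMinibatchLossAverage();
    if (Math.Abs(previousLoss - loss) < tolerance)
    {
        Console.WriteLine($"Converged at epoch {epoch}, loss = {loss:0.0000000}");
        break;
    }
    previousLoss = loss;
}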
Actually, IMHO, this code was probably written by people with far greater expertise (than I certainly have) and also far more familiarity with using very large data sets for training, validation, and testing. So one has to give the developers (almost) all benefit of the doubt.
modified 23-Nov-17 13:29pm.
|
I guess I got motivated, and I am happy to report a solution to loading batch test data from memory. Here's my version of your code now, and output from a very short training run with some of my data. (My DataReader method provides training and test data in 1D format to Value.CreateBatch.)
public static void EvaluateModel(Function ffnn_model, Trainer trainer, DeviceDescriptor device, DataReader rdr)
{
    var feature = ffnn_model.Arguments[0];
    var label = ffnn_model.Output;
    int inputDim = feature.Shape.TotalSize;
    int numOutputClasses = label.Shape.TotalSize;

    // Build the test batch directly from the in-memory 1D arrays.
    var xValues = Value.CreateBatch<float>(new NDShape(1, inputDim), Form1.rdr.GetTestFeatures(), device);
    var yValues = Value.CreateBatch<float>(new NDShape(1, numOutputClasses), Form1.rdr.GetTestLabels(), device);

    Console.WriteLine($"-----VALIDATION SUMMARY------");
    Form1.mainform.WritetoListBox(string.Format($"-----VALIDATION SUMMARY------"));

    int miscountTotal = 0;
    int totalCount = yValues.Data.Shape.Dimensions[2];

    var inputDataMap = new Dictionary<Variable, Value>() { { feature, xValues } };
    var expectedDataMap = new Dictionary<Variable, Value>() { { label, yValues } };
    var outputDataMap = new Dictionary<Variable, Value>() { { label, null } };

    // Expected labels come from the one-hot test labels.
    var expectedData = expectedDataMap[label].GetDenseData<float>(label);
    var expectedLabels = expectedData.Select(l => l.IndexOf(l.Max())).ToList();

    // Run the model and take the arg-max of each output vector.
    ffnn_model.Evaluate(inputDataMap, outputDataMap, device);
    var outputData = outputDataMap[label].GetDenseData<float>(label);
    var actualLabels = outputData.Select(l => l.IndexOf(l.Max())).ToList();

    int misMatches = actualLabels.Zip(expectedLabels, (a, b) => a.Equals(b) ? 0 : 1).Sum();
    miscountTotal += misMatches;

    Console.WriteLine($"Validating Model: Total Samples = {totalCount}, Mis-classify Count = {miscountTotal}");
    Form1.mainform.WritetoListBox(string.Format($"Validating Model: Total Samples = {totalCount}, Mis-classify Count = {miscountTotal}"));
    Console.WriteLine($"------TESTING SUMMARY--------");

    // Note the float cast, per the fix discussed above.
    float accuracy = (1.0F - (float)miscountTotal / totalCount);
    Console.WriteLine($"Model Accuracy = {accuracy,6:0.0000}");
    Form1.mainform.WritetoListBox(string.Format($"------TESTING SUMMARY--------"));
    Form1.mainform.WritetoListBox(string.Format($"Model Accuracy = {accuracy,6:0.0000}"));

    // Shift the 0-based class indexes to 1-based for the status labels below.
    for (int k = 0; k < actualLabels.Count; k++)
    {
        actualLabels[k]++;
        expectedLabels[k]++;
    }

    Dictionary<int, string> StatusLabels = new Dictionary<int, string>()
    {
        { 1, "Successful" }, { 2, "At Risk: Falling" }, { 3, "At Risk: Rising" }, { 4, "At Risk: Failing" }
    };

    Console.WriteLine("------CONVENTIONAL CONFUSION MATRIX RESULTS------");
    Crosstabs ct = new Crosstabs(actualLabels, expectedLabels, "DASHBOARD STUDY");
    ct.RowLabels = StatusLabels;
    ct.ColumnLabels = StatusLabels;
    ct.WritetoConsole();
    ct.View(Form1.mainform.dataGridView1);
    return;
}
This is what some quick sample output looks like:
Epoch: 0 CrossEntropyLoss = 1.6604410, EvalCriterion = .7505131
Epoch: 20 CrossEntropyLoss = 1.0943360, EvalCriterion = .3836944
Epoch: 40 CrossEntropyLoss = 1.0075010, EvalCriterion = .3870011
Epoch: 60 CrossEntropyLoss = .9473225, EvalCriterion = .3740023
Epoch: 80 CrossEntropyLoss = .9008594, EvalCriterion = .3588369
Epoch: 100 CrossEntropyLoss = .8622118, EvalCriterion = .3399088
Epoch: 120 CrossEntropyLoss = .8282195, EvalCriterion = .3250855
Epoch: 140 CrossEntropyLoss = .7975291, EvalCriterion = .3159635
Epoch: 160 CrossEntropyLoss = .7694442, EvalCriterion = .3049031
Epoch: 180 CrossEntropyLoss = .7435042, EvalCriterion = .2962372
Epoch: 200 CrossEntropyLoss = .7194668, EvalCriterion = .2879134
----------------
------TRAINING SUMMARY-------- Elapsed time: 00:01:14
The model trained on 8770 cases to an accuracy of 71.21%
-----VALIDATION SUMMARY------
Validating Model: Total Samples = 5847, Mis-classify Count = 1735
------TESTING SUMMARY--------
Model Accuracy = 0.7033
------CONVENTIONAL CONFUSION MATRIX RESULTS------
Crosstab matrix for DASHBOARD STUDY
CLASSIFIED LABELLED
Values 1 2 3 4 RowSum
1 3078 287 696 4 4065 precision: 0.7572
2 31 951 176 538 1696 precision: 0.5607
3 0 2 5 0 7 precision: 0.7143
4 0 0 1 78 79 precision: 0.9873
ColSum 3109 1240 878 620 5847
recall: 0.9900 0.7669 0.0057 0.1258
Accuracy: 0.7033
Hope this might be useful to you or other readers.
|
I must say that getting all the data pieces and putting all the blog code pieces together to get this project to run, and to (somewhat) understand all the parts, has been an amazing effort! I had to go back to your own source code, get it to run (fairly easily), and then debug at break points to look at various inputs and outputs to figure out the actual data formats being used by the code (so I could fit my own data to them). For example, the picture of how the CNTK format is supposed to look, shown I think in Blog 2 or 3, and how it really looks in the testIris_cntk.txt data file are rather different! And how all those one-hot vectors and features translate into 1D arrays was interesting too.
At any rate, it has sure been worth every bit of it. This is an excellent and very timely contribution. I (and surely lots of others) have really wanted to use the Microsoft Cognitive Toolkit for a long time, but it was not available for ready and direct integration in C#. And even now, with the nice new NuGet package, without your blog guides it would still have been a nightmare to learn how to apply it quickly!
My goal here was to learn ... but since I just published a little tiny piece of code to produce a confusion matrix (our articles were released on the same day!), I wanted to see if I could really learn to use CNTK in C# and get all the way to adding my little Crosstabs/confusion matrix routine to visualize the final output.
And happily, I finally did that. It was necessary to comment out some lines in the demo method's code referring to a Form and a DataGridView to make it run in a console app, but that was easy. Here's the result (using the vars expectedvalues and actualvalues from your EvaluateIrisModel() method applied to the original training data):
Thank you again for one of the very best CodeProjects yet!
..........
..........
Minibatch: 750 CrossEntropyLoss = 0.1424918, EvaluationCriterion = 0.05333333
Minibatch: 800 CrossEntropyLoss = 0.1335766, EvaluationCriterion = 0.05333333
----------------
------TRAINING SUMMARY--------
The model trained with the accuracy 94.67%
Validating Model: Total Samples = 150, Mis-classify Count = 6
---------------
------TESTING SUMMARY--------
Model Accuracy = 1
---------------
------CONVENTIONAL CONFUSION MATRIX RESULTS------
IRIS STUDY LABELLED
Values 0 1 2 ColSum
0 50 0 0 50 precision: 1.0000
1 0 49 5 54 precision: 0.9074
2 0 1 45 46 precision: 0.9783
RowSum 50 50 50 150
recall: 1.0000 0.9800 0.9000
Accuracy: 0.9600
Press any key
modified 16-Nov-17 2:16am.
|
Hi again, asiwel,
thanks for the comment. The confusion matrix and other model-performance topics are very interesting; they are part of my future work, which will be published soon.