How to create an active neural network after using the SPSS module for neural networks?
When you run the NN, on the export tab in the dialog box, specify a file
name (with extension xml) as you have done.
When you want to score a dataset, use Utilities > Scoring Wizard and select
the xml file you saved.
These actions have equivalent syntax as well as the dialog interfaces.
HTH,
Jon Peck

batch reading wav files in matlab to create a matrix for neural network training set 
This solution makes use of cell arrays, {...}, that can handle data of
different dimensions, sizes and even types. Here, Y will store the .wav
sampled data and FS the sampled rate of all the audio files in a directory.
% create some data (write waves)
load handel.mat;                  % predefined sound shipped with MATLAB as a .mat file
audiowrite('handel1.wav', y, Fs); % write the first wave file
audiowrite('handel2.wav', y, Fs); % write the second
clear y Fs                        % clear the data

% reading section
filedir = dir('*.wav');           % list the current folder content for .wav files
Y  = cell(1, length(filedir));    % preallocate Y in memory (edit from @Werner)
FS = Y;                           % preallocate FS in memory (edit from @Werner)
for ii = 1:length(filedir)        % loop through the files
    [Y{ii}, FS{ii}] = audioread(filedir(ii).name); % store samples and rate
end

How to apply a trained Matlab neural network from C++ without call to Matlab? 
Of course it is possible: neural networks are clear mathematical models.
All you need is a compatible representation, where you have stored:
network topology (number of neurons in particular layers)
network weights (between all neurons)
network activation functions (for each neuron)
And that's all. The exact solution depends on which Matlab library you are
using for neural networks. There is a "standard" for prediction models
called PMML, which can be loaded by, for example, the Weka libraries. Either
way, it is an easy operation, so you can also implement it by hand by simply
storing all the numbers in a text file and simulating the network in C++
(the "forward" phase of a neural network is just a few lines of code;
the training part is the long one).
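A minimal sketch of that hand-rolled approach, in Python rather than C++ for brevity (the layer sizes and weight values here are invented for illustration; in practice they would be the numbers you dumped from Matlab into the text file):

```python
import numpy as np

def forward(x, weights, biases, activation=np.tanh):
    """The 'forward' phase: a few lines once topology, weights and
    activation functions are known."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        a = activation(W @ a + b)
    return a

# hypothetical 2-3-1 network, as if parsed from a stored text file
weights = [np.array([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]),
           np.array([[1.0, -1.0, 0.5]])]
biases = [np.array([0.1, -0.1, 0.0]), np.array([0.2])]
y = forward([1.0, 2.0], weights, biases)
```

The same dozen lines translate directly to C++ as nested loops over layers and neurons.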

How to train neural network incrementally in Matlab? 
You can just manually divide the dataset into batches and train on them one
after another:
for bn = 1:num_batches
    inputs = <get batch bn inputs>;
    targets = <get batch bn targets>;
    net = train(net, inputs, targets);
end
The batch size should be greater than 1, but either way this should reduce
memory consumption during training.
For the trainlm training algorithm, the net.efficiency.memoryReduction
option could help.
Also, instead of the default trainlm algorithm you can try less
memory-consuming ones like trainrp.
For details on training algorithms check matlab documentation page.
I assumed above that you are using corresponding matlab toolbox for neural
networks.
Regarding training one sample at a time you could try googling for
stochastic gradient descent algorithm. But, it looks l
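The batching idea above can be sketched in Python with a stand-in update function (the names here are illustrative, not a toolbox API):

```python
import numpy as np

def train_in_batches(update_fn, inputs, targets, batch_size=32):
    """Feed the trainer one batch at a time, so the full dataset
    never has to be held by a single training call."""
    for start in range(0, len(inputs), batch_size):
        update_fn(inputs[start:start + batch_size],
                  targets[start:start + batch_size])

batch_sizes = []
train_in_batches(lambda x, t: batch_sizes.append(len(x)),
                 np.arange(100), np.arange(100))
# 100 samples are split into batches of 32, 32, 32, 4
```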

Concerns related to matlab neural network toolbox 
I am not a matlab user, but if you don't want to use the normalization and
it is forced on both input and output, then simply denormalize the output.
I assume that it is a simple linear normalization (squashing to the [-1,1]
interval), so if you want output in the [0,1] interval you can simply apply
f(x) = (x+1)/2, which linearly maps [-1,1] to [0,1]. Neural networks are
scale sensitive (as scale is strongly correlated with non-tunable parameters
like the activation functions' slopes), so the internal normalization has
its advantages. This should work if the normalization is applied after the
training.
If it only normalizes input then you should not be concerned; that won't
imply any problems with using any activation functions (in fact, as stated
before, it should actually help).
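As a sketch, the denormalization step (assuming the linear [-1,1] squashing described above):

```python
def denormalize(x):
    """Linearly map a network output from [-1, 1] back onto [0, 1]."""
    return (x + 1) / 2

print(denormalize(-1.0), denormalize(0.0), denormalize(1.0))  # 0.0 0.5 1.0
```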
UPDATE
As

Neural Network Issues 
Not that it matters to anyone else really, but I figured it out! It was in
ANN.adjustWeights(). During the weight adjustment phase, I was building an
array of new_weights and applying them only AFTER I computed all
new_weights, when I should have been applying the new weight to each
synapse immediately after its computation.

2D/3D shadows by a Neural Network 
Question: What about the performance of this approach versus a usual
raytracer for millions of vertices (a NN seems to be more embarrassingly
parallel than a raytracer)?
The problem you are trying to solve does not seem to be a problem for a
machine learning model. Such methods should be applied to problems of
complex, statistical data, for which finding good algorithmic solutions is
too hard for a human being. Easy problems like this one (in the sense that
you can find a highly efficient algorithm), which you can deeply analyze
(as it is just 2/3-dimensional data), should be approached using classical
methods, not neural networks (nor any other machine learning model).
Even if you tried to do this, your representation of the problem is
rather badly prepared; the network won't learn th

Issues with neural network 
I am unfamiliar with nntool but I would suspect that your problem is
related to the selection of your initial weights. Poor initial weight
selection can lead to very slow convergence or failure to converge at all.
For instance, notice that as the number of neurons in the hidden layer
increases, the number of inputs to each neuron in the visible layer also
increases (one for each hidden unit). Say you are using a logit in your
hidden layer (always positive) and pick your initial weights from the
random uniform distribution between a fixed interval. Then as the number
of hidden units increases, the inputs to each neuron in the visible layer
will also increase because there are more incoming connections. With a
very large number of hidden units, your initial solution may become very
larg
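One common remedy, sketched in Python, is to scale the initial weight range by the fan-in so the summed input to each visible-layer neuron stays moderate as the hidden layer grows (the 1/sqrt(fan_in) limit is one conventional rule of thumb, not the only choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(fan_in, fan_out):
    """Uniform initial weights in [-limit, limit], where the limit
    shrinks as the number of incoming connections grows."""
    limit = 1.0 / np.sqrt(fan_in)
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

W_few_hidden = init_weights(10, 1)     # 10 hidden units feeding one output
W_many_hidden = init_weights(1000, 1)  # 1000 hidden units: much smaller weights
```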

r language getting a not so good neural network 
Add the linout = TRUE argument to the nnet function:
nnet1 = nnet(trainingdata$Input, trainingdata$Output, size=10, decay=.2,
             MaxNWts=100, linout=TRUE)
That should solve your problem! By default, the fitted values are logistic
output units; see ?nnet.

Neural network inputs transformation 
It is often a good idea to transform an input vector to have a mean of zero
and a standard deviation of one. This works well for statistical
classifiers and for backpropagation neural networks. It helps to make the
clusters in feature space more evenly shaped (nearer to n-dimensional
spheres). Having said that, the learning process in a multilayer neural
network should adapt to whatever the input ranges happen to be, so it is
not a huge issue.
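A minimal sketch of that transformation:

```python
import numpy as np

def standardize(X):
    """Give each input feature zero mean and unit standard deviation."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])
Z = standardize(X)   # both columns now live on the same scale
```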
You have to use a nonlinear (usually sigmoid) activation function for a
multilayer network or it becomes equivalent to a single layer and will not
learn complex boundary shapes between classes.

m Input and n Output Neural Network 
Look here, it's very similar to your task.
Briefly: take a standard dataset and obtain training and test sets:
irisTrainData = sample(1:150, 100)
irisValData = setdiff(1:150, irisTrainData)
Then the neural network may be trained and used for prediction this way:
library(nnet)
ideal <- class.ind(irisdata$species)
irisANN = nnet(irisdata[irisTrainData, -5], ideal[irisTrainData, ], size=10,
               softmax=TRUE)
predict(irisANN, irisdata[irisValData, -5], type="class")

Q-learning in a neural network - Mountain Car 
Problem representation
Using neural networks to represent the value-action function is a good
idea. It has been shown that this works well for a number of applications.
However, a more natural representation for the Q-function would be a net
that receives the combined state-action vector as input and has a scalar
output. But as long as the number of actions is finite and small, it should
be possible to do it like you did. Just remember that, strictly speaking,
you are not learning Q(s,a) but multiple value functions V(s) (one for each
action) that share the same weights, except for the last layer.
Testing
This is a straightforward greedy exploitation of the Q function. Should be
correct.
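That greedy step is just an argmax over the per-action outputs; a sketch (the Q-values here are made up):

```python
import numpy as np

def greedy_action(q_values):
    """Exploit the learned Q-function: pick the action with the
    largest estimated value."""
    return int(np.argmax(q_values))

q = np.array([0.1, -0.4, 0.7])   # e.g. push-left, coast, push-right
action = greedy_action(q)        # picks index 2 (push-right)
```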
Learning
There are several pitfalls here, you will have to think about. The first
one is scalin

Neural Network Evaluation and Topology 
1) Yes, each input gets its own node, and that node is always the node for
that input type. The order doesn't matter; you just need to keep it
consistent. After all, an untrained neural net can learn to map any set of
linearly separable inputs to outputs, so there can't be a particular order
the nodes need to be in for it to work.
2 and 3) You need to collect all the values from a single layer before any
node in the next layer fires. This is important if you're using any
activation function other than a stepwise one, because the sum of the
inputs will affect the value that is propagated forward. Thus, you need to
know what that sum is before you propagate anything.
4) Which nodes to connect to which other nodes is up to you. Since your net
won't be excessively large and XOR is

threshold and bias in neural network 
Bias and threshold in an MLP are the same concept, simply two different
names for the same thing. The sign does not matter, as the bias can be both
positive and negative (but it is more common to use a + bias).
In the most simple terms: if there is no bias, then for an input of only
0's you get summing_function = 0, and as a result also output_value = 0 (as
most of the activation functions cross the origin). As a result, your
network cannot learn any other behavior for this type of signal, as the
only changing part of the whole model are the weights.
From a more mathematical perspective, the bias is responsible for shifting
the activation function, and it gives the neural network its universal
approximator capabilities.
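A tiny demonstration of the all-zero-input case (tanh stands in for any origin-crossing activation):

```python
import numpy as np

def neuron(x, w, b=0.0):
    return np.tanh(np.dot(w, x) + b)

x = np.zeros(3)                      # the all-zero input signal
w = np.array([0.5, -0.2, 0.9])       # the weights are irrelevant here
without_bias = neuron(x, w)          # stuck at 0, no matter what w is
with_bias = neuron(x, w, b=0.7)      # the bias shifts the output away from 0
```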

Neural Network Diverging instead of converging 
If the problem you are trying to solve is of classification type, try a
3-layer network (3 is enough according to Kolmogorov). Connections from
inputs A and B to hidden node C (C = A*wa + B*wb) represent a line in AB
space. That line divides the correct and incorrect half-spaces. The
connections from the hidden layer to the output put the hidden layer values
in correlation with each other, giving you the desired output.
Depending on your data, the error function may look like a hair comb, so
implementing momentum should help. Keeping the learning rate at 1 proved
optimum for me.
Your training sessions will get stuck in local minima every once in a
while, so network training will consist of a few subsequent sessions. If
session exceeds max iterations or amplitude is too high, or error is
obviously high  the sessio

extrapolation with recurrent neural network 
Neural networks are not extrapolation methods (no matter whether recurrent
or not); this is completely outside their capabilities. They are used to
fit a function on the provided data, and they are completely free in how
they build the model outside the subspace populated with training points.
So, in a not very strict sense, one should think of them as an
interpolation method.
To make things clear: a neural network should be capable of generalizing
the function inside the subspace spanned by the training samples, but not
outside of it.
A neural network is trained only in the sense of consistency with the
training samples, while extrapolation is something completely different. A
simple example from "H. Lohninger: Teach/Me Data Analysis, Springer-Verlag,
Berlin-New York-Tokyo, 1999. ISBN 3-540-14743-8" shows how NNs behave in thi

Activation function for neural network 
To exploit their full power, neural networks require continuous,
differentiable activation functions. Thresholding is not a good choice for
multilayer neural networks. Sigmoid is a quite generic function, which can
be applied in most cases. When you are doing a binary classification (0/1
values), the most common approach is to define one output neuron and simply
choose class 1 iff its output is bigger than a threshold (typically 0.5).
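The output rule, as a sketch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_class(z, threshold=0.5):
    """One sigmoid output neuron: class 1 iff its output exceeds the
    threshold, class 0 otherwise."""
    return 1 if sigmoid(z) > threshold else 0
```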
EDIT
As you are working with quite simple data (two input dimensions and two
output classes), it seems the best option to actually abandon neural
networks and start with data visualization. 2D data can simply be plotted
on the plane (with different colors for different classes). Once you do
this, you can investigate how hard it is to separate one class f

How do I know that my neural network is being trained correctly 
You should be adding debug/test mode messages to watch whether the weights
are saturating and converging. It is likely that good <
trainingData.size() is not happening.
Based on Double nodeValue = value.get(Constants.NODE_VALUE); I assume
NODE_VALUE is of type Double? If that's the case, then this line
nodeList.get(nodeList.size()-1).getValue(Constants.NODE_VALUE) !=
adalineNode.getValue(Constants.NODE_VALUE)
may never really converge exactly, as it is of type double with a lot of
other parameters involved in obtaining its value, and your convergence
relies on it. Typically, while training a neural network you stop when the
convergence is within an acceptable error limit (not a strict equality like
you are trying to check).
Hope this helps

How to get neural network parameter after training? 
You can store the network parameters in a cell array. Please find more
details in the following link:
http://www.mathworks.ch/ch/help/matlab/cellarrays.html

scaling inputs data to neural network 
Firstly, there are many types of ANNs; I will assume you are talking about
the simplest one: a multilayer perceptron with backpropagation.
Secondly, in your question you are mixing up data scaling (normalization)
and weight initialization.
You need to randomly initialize weights to avoid symmetry while learning
(if all weights are initially the same, their update will also be the
same). In general, concrete values don't matter, but too large values can
cause slower convergence.
You are not required to normalize your data, but normalization can make
learning process faster. See this question for more details.
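The symmetry problem is easy to see in a sketch: two hidden units initialised with identical weights compute identical activations, so they also receive identical updates and can never become different feature detectors.

```python
import numpy as np

W_same = np.full((2, 3), 0.5)        # two hidden units, identical rows
W_rand = np.random.default_rng(1).normal(0, 0.1, size=(2, 3))

x = np.array([0.2, -0.1, 0.4])
h_same = np.tanh(W_same @ x)         # both activations identical, forever
h_rand = np.tanh(W_rand @ x)         # random init breaks the tie
```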

php based Neural Network — how to implement backpropagation 
5.92878775009E-323 is such a ridiculously small number that it will
compute as zero. The best way to handle this is to use standard exponent
mathematics.
I would handle this by multiplying the digits and then summing the
exponents in a custom function. That having been said, if you need to
manage these numbers, you are probably already losing a lot of information
in the rounding to 12 significant figures.
If you need to do this without loss of data, the only way I can think of
doing it is to use the computer to do the multiplication much in the same
way it is handled on paper, one digit at a time.
Why not write a function like this?
Alternatively, if you are not feeling suicidal, try bcmul() which is a PHP
function that does exactly what you need.

vehicle type identification with neural network 
You didn't say whether you can use an existing framework or need to
implement the solution from scratch, but either way Python is an excellent
language for coding neural networks.
If you can use a framework, check out Theano, which is written in Python
and is the most complete neural network framework available in any
language:
http://www.deeplearning.net/software/theano/
If you need to write your implementation from scratch, look at the book
'Machine Learning, An Algorithmic Perspective' by Stephen Marsland. It
contains example Python code for implementing a basic multilayered neural
network.
As for how to proceed, you'll want to convert your images into 1D input
vectors. Don't worry about losing the 2D information; the network will
learn 'receptive fields' on its own that extract 2D
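The flattening step is a one-liner with numpy (the image size here is hypothetical):

```python
import numpy as np

image = np.zeros((32, 32))          # a hypothetical 32x32 grayscale image
input_vector = image.reshape(-1)    # 1D vector of 1024 values for the net
```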

MLP Neural Network: calculating the gradient (matrices) 
With Python and numpy that is easy.
You have two options:
1. you can compute everything in parallel for num_instances instances, or
2. you can compute the gradient for one instance (which is actually a
special case of 1.).
I will now give some hints on how to implement option 1. I would suggest
that you create a new class called Layer. It should have two functions:
forward:
    inputs:
        X: shape = [num_instances, num_inputs] (inputs)
        W: shape = [num_outputs, num_inputs] (weights)
        b: shape = [num_outputs] (biases)
        g: function (activation function)
    outputs:
        Y: shape = [num_instances, num_outputs] (outputs)
backprop:
    inputs:
        dE/dY: shape = [num_instances, num_outputs] (backpropagated gradient)
        W: shape = [num
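A minimal numpy sketch of the forward function with exactly those shapes (backprop omitted, since its description above is cut off):

```python
import numpy as np

class Layer:
    def __init__(self, W, b, g):
        self.W = W   # [num_outputs, num_inputs]
        self.b = b   # [num_outputs]
        self.g = g   # activation function

    def forward(self, X):
        """X: [num_instances, num_inputs] -> Y: [num_instances, num_outputs].
        All instances are processed in parallel (option 1)."""
        return self.g(X @ self.W.T + self.b)

rng = np.random.default_rng(0)
layer = Layer(rng.standard_normal((4, 3)), np.zeros(4), np.tanh)
Y = layer.forward(rng.standard_normal((5, 3)))   # 5 instances, 4 outputs
```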

OpenCV Neural Network train one iteration at a time 
According to
http://opencv.willowgarage.com/documentation/cpp/ml_neural_networks.html#cvannmlptrain
the params parameter is of type CvANN_MLP_TrainParams. This class contains
a property TermCriteria which controls when the training function
terminates. This termination criteria class
http://opencv.willowgarage.com/documentation/cpp/basic_structures.html
can be set to terminate after a given number of iterations, or when a given
epsilon condition is fulfilled, or some combination of both. I have not
used the training function myself so I can't know the exact code that you'd
use to make this work, but something like this should limit the number of
training cycles:
CvANN_MLP_TrainParams params = CvANN_MLP_TrainParams();
params.term_crit.type = 1; // This should tell the train function you want t

Excel VBA Artificial Neural Network training code 
If your problem is to import the data, I presume the issue is inside the
first loop:
X1(J, I) = Sheets("Sheet1").Cells(I, J).String
If so, you could just try to use a method that reads your data given a
Range. You declare an array to hold the input layer:
Public inputv() As Double   ' the input layer data
and then populate it given the range of the data, let's say "C19:C27", with
this method:
Private Sub loadInput(ByRef r As Range)
    Dim s As Range
    Set s = r.Resize(1, 1)
    ' this way you can even evaluate the number of training examples, m
    m = r.Rows.Count
    ReDim inputv(0 To m - 1, 0 To n - 1)
    Dim i As Integer, j As Integer
    For i = 0 To m - 1
        For j = 0 To n - 1
            inputv(i, j) = s(i + 1, j + 1).Value
        Next j
    Next i
End Sub

pybrain image input to dataset for Neural Network 
QUICK ANSWER
No, you don't need a target for every single pixel; you treat the pixels
from a single image as your input data, and you add a target to that data.
LONG ANSWER
What you are trying to do is solve a classification problem. You have an
image represented by an array of numbers, and you need to classify it as
some class from a limited set of classes.
So let's say that you have 2 classes: prohibition signs (I'm not a native
speaker, I don't know what you call signs that forbid something), and
information signs. Let's say that prohibition signs are our class 1 and
information signs are class 2.
Your data set should look like this:
([representation of sign in numbers], class) - single sample
After that, since it's a classification problem, I recommend using the
_convertToOneOfMany() method of the DataSet class, to conver
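The sample layout above, sketched with plain Python/numpy (the pixel values are invented; one-hot targets of this shape are what _convertToOneOfMany() produces for you in pybrain):

```python
import numpy as np

# two hypothetical 4-pixel "images": class 0 = prohibition, class 1 = information
samples = [(np.array([0.9, 0.1, 0.8, 0.2]), 0),
           (np.array([0.2, 0.7, 0.1, 0.9]), 1)]

def one_hot(label, n_classes=2):
    """One output unit per class, 1.0 for the true class."""
    t = np.zeros(n_classes)
    t[label] = 1.0
    return t

targets = [one_hot(c) for _, c in samples]   # [1,0] and [0,1]
```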

how to use multidimensional feature vector in neural network of opencv 
I think you should look at the Bag of Words approach. Here you can get some
code:
http://www.morethantechnical.com/2011/08/25/a-simple-object-classifier-with-bag-of-words-using-opencv-2-3-w-code/
And you can use a NN instead of the SVM.

Pybrain Neural Network failing to train correctly 
I had a very similar problem, and I found SoftmaxLayer to be the cause. Try
to replace it with something else, for example SigmoidLayer. If that is the
problem in your case as well, there is a good chance that this class is
buggy.

Annealing on a multilayered neural network: XOR experiments 
I highly doubt that any strict rules exist for your problem. First of all,
the limits/bounds of the weights are strictly dependent on your input data
representation, activation functions, number of neurons and output
function. What you can rely on here are rules of thumb, in the best
possible scenario.
First, let's consider the initial weight values in classical algorithms. A
basic idea for the weight scale is to use the range [-1,1] for small
layers, and for large ones to divide it by the square root of the number of
units in the large layer. More sophisticated methods are described by
Bishop (1995). With such a rule of thumb we could deduce that a reasonable
range (which is simply an order of magnitude bigger than the initial guess)
would be something of the form [-10,10]/sqrt(neur

complex valued neural network (CVNN) error divergence 
If you are doing gradient descent, a very common debugging technique is to
check whether the gradient you calculated actually matches the numerical
gradient of your loss function.
That is, check
(f(x+dx) - f(x))/dx ≈ f'(x)
for a variety of small dx. Usually try along each dimension, as well as in
a variety of random directions. You also want to do this check for a
variety of values of x.
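A sketch of that check on a toy loss (sum of squares, whose analytic gradient is known):

```python
import numpy as np

def gradient_matches(f, grad_f, x, dx=1e-6, tol=1e-4):
    """Compare the analytic gradient with a finite difference
    along each dimension of x."""
    g = grad_f(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = dx
        numeric = (f(x + step) - f(x)) / dx
        if abs(numeric - g[i]) > tol:
            return False
    return True

f = lambda x: float(np.sum(x ** 2))                          # toy loss
good = gradient_matches(f, lambda x: 2 * x, np.array([1.0, -2.0, 0.5]))
bad = gradient_matches(f, lambda x: 3 * x, np.array([1.0, -2.0, 0.5]))
```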

How to return neural network weights (parameters) values of nnetar function? 
That is explained in ?nnetar: in the output, the model field contains the
list of neural networks fitted to your data (there are several of them).
library(forecast)
fit <- nnetar(lynx)
str(fit)
str(fit$model[[1]])
summary(fit$model[[1]])
# a 8-4-1 network with 41 weights
# options were - linear output units
# b->h1 i1->h1 i2->h1 i3->h1 i4->h1 i5->h1 i6->h1 i7->h1 i8->h1
#  2.99  7.31  3.90  2.63  1.48  4.30  2.57  2.77  9.40
# b->h2 i1->h2 i2->h2 i3->h2 i4->h2 i5->h2 i6->h2 i7->h2 i8->h2
#  0.23  1.42  1.27  0.75  2.48  1.12  0.01  2.79  2.35
# b->h3 i1->h3 i2->h3 i3->h3 i4->h3 i5->h3 i6->h3 i7->h3 i8->h3
#  3.30  1.43  0.79  7.44  0.42  1.12  5.36 15.61  5.17
# b->o h1->o h2->o h3->o h4->o

How to use WEKA Machine Learning for a Bayes Neural Network and J48 Decision Tree 
Here is one way to do it with the command line. This information is found
in Chapter 1 ("A command-line primer") of the Weka manual that comes with
the software.
java weka.classifiers.trees.J48 -t training_data.arff -T test_data.arff -p 1-N
where:
-t <training_data.arff> specifies the training data in ARFF format
-T <test_data.arff> specifies the test data in ARFF format
-p 1-N specifies that you want to output the feature vector and the
prediction, where N is the number of features in your feature vector.
For example, here I am using soybean.arff for both training and testing.
There are 35 features in the feature vector:
java weka.classifiers.trees.J48 -t soybean.arff -T soybean.arff -p 1-35
The first few lines of the output look like:
=== Predictions on test dat

Very large data sets to train a neural network using simulated annealing 
Some specific additional information about what you are working on and/or
code samples would be helpful.
However, here is what I suggest:
It sounds like you have a data set with 100k lines in it.
Some of those lines of data are probably duplicates.
Typically with an artificial neural network the program increases the
strength of a connection between two nodes of the network when they are
activated.
Rather than training your artificial neural network with one line of input
at a time, perhaps a faster strategy would be:
Identify the unique lines in the input and count how often they occur.
When you train the artificial neural network, use the count of that input
as a factor for how much you increase the strength of the connection for
the nodes.
Instead of going through 100k iterations of t
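The dedup-and-count step can be sketched with a Counter:

```python
from collections import Counter

# stand-in for the 100k input lines; real data would come from a file
lines = ["0,1,1", "1,0,1", "0,1,1", "0,1,1", "1,0,1"]
counts = Counter(lines)   # {'0,1,1': 3, '1,0,1': 2}

# one pass over the unique lines instead of every raw row;
# weight each connection-strength update by `count`
for line, count in counts.most_common():
    features = [int(v) for v in line.split(",")]
```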

Matlab How to use a custom transfer function in neural net training 
Not sure if things work the same way in Matlab 2008, but in newer versions
you can try setting your transfer function for a layer directly in neural
network object properties:
net = <network creation code>;
net.layers{1}.transferFcn = 'fungsiku';
This should set the fungsiku transfer function for the first layer.

layer dependence about network 
Generally you are right. Unfortunately, the borders between layers in
networks are kinda blurry, not just because we have a standard which is not
used (OSI) and a de facto standard which does not enforce the idea you
mentioned, but also because protocols are often not strictly bound to one
layer and can do stuff on more than one of them. A good number of protocols
were developed before the OSI model and before they were standardized, and
then it was already too late to make radical changes. So there are
protocols that are considered to be between two layers (or on both layers),
like MPLS, ARP etc., and protocols that are based on another protocol on
the same layer, like OSPF, which runs on top of IP even though both are
considered to be on L3.
What you mentioned is another example. The reas

Does each Windows Store application have its own network level/layer that can be accessed? 
WebView in 8.1 has a mechanism to intercept resource requests and replace
them with your own. See http://channel9.msdn.com/Events/Build/2013/3-179
around the 42-minute mark. Basically you will have to create your own
implementation of IUriToStreamResolver.UriToStreamAsync that uses
HttpClient to get all of your data through a proxy.

Create service response in web method, service layer or DAO layer? 
I would say in your web method. The web service should be the interface for
calling the service layer. It should transform the incoming request into
something the service layer understands and should transform the result
into something the web service can send.
The service is, in general, a reusable part of your application and could
be reused between your web service and web application (with controllers)
or maybe by doing batch inserts. Basically, everything calling the service
(which contains the business logic) is an interfacing layer to your service.
Controllers provide access to the application behavior that you typically
define through a service interface. Controllers interpret user input and
transform it into a model that is represented to the user by the view.
Spring implemen

Matlab help: how to skip or delay events 
This really is a complicated problem to troubleshoot without hands-on
access, and it is less of a programming question and more of an
experimental design question.
You may want to ensure that no photobeams are interrupted before starting a
new tone, perhaps by substituting
if t_elapsed > eventLog.trial_times(trial_count)
with
if t_elapsed > eventLog.trial_times(trial_count) & ~n_broken
You could alternatively add a while loop at the end of the photobeam
obstruction check statement that makes sure no new event occurs for 1 sec
before exiting the if statement and proceeding. But then you also have to
check the sound and possibly terminate it before exiting that loop.
As a final alternative, running a variety of controls (which I'm sure you
are running), for instance an experiment without rewards, should

How to skip builtin functions in MATLAB debugging? 
The answer as to why commands like abs and sum are automatically skipped is
because they are compiled, proprietary MATLAB functions that don't actually
have any readable MATLAB code with them. If you do edit('angle.m') (maybe
without the m, I forget) you will see the code (as expected). Now do the
same for sum, and you will notice there is no MATLAB code there, just
comments. The core MATLAB functions, like sum, but also like clc and close
are all core embedded functions so we can't see the code.
As was mentioned earlier in the comments, the debugger has tools that allow
you to just step instead of stepping in, and if you are stepped in one
part, you can always step out to the function calling the one you are
currently looking at. Also, to skip a couple of lines of code at a time,
the "run to cursor" option is useful.

matlab: changing the layer of patches and lines 
Graphics objects are stacked based on their order in get(gca,'children')
(first element = top, last element = bottom), so rearranging that array
allows you to change the layer of lines, patches, etc.
Example:
patch([0.25 0.25 0.75 0.75],[0.25 0.75 0.75 0.25],'y')
hold on;
plot([-1 1],[-1 1],'b',[-1 1],[1 -1],'r','linewidth',10)
Currently from bottom to top: patch, blue line, red line
g=get(gca,'Children')
g=g([3 1 2])
set(gca,'children',g)
Now bottom to top: blue line, red line, patch
g=get(gca,'Children')
g=g([1 3 2])
set(gca,'children',g)
Now bottom to top: red line, blue line, patch
