
Fast download: about 30 seconds on a modem (2 MB)

Requires Windows 98/ME/XP/2000/Vista/7 

  GoldenGem® Neural network calculator ~ Stock market data source

Beginner's Intro




  General Information about Neural Networks
Background Information (from Wikipedia):
Since the early 1990s, when the first practically usable types emerged,
artificial neural networks (ANNs) have rapidly grown in popularity.
They are artificial intelligence adaptive software systems that have
been inspired by how biological neural networks work. Their use comes
in because they can learn to detect complex patterns in data. In
mathematical terms, they are universal nonlinear function
approximators, meaning that, given the right data and configured
correctly, they can capture and model any input-output relationship.
This not only removes the need for human interpretation of charts or
the series of rules for generating entry/exit signals but also provides
a bridge to fundamental analysis as that type of data can be used as
input.
In addition, as ANNs are essentially nonlinear statistical models,
their accuracy and prediction capabilities can be both mathematically
and empirically tested. In various studies neural networks used for
generating trading signals have significantly outperformed buy-and-hold
strategies as well as traditional linear technical analysis methods.
While the advanced mathematical nature of such adaptive systems has
kept neural networks for financial analysis mostly within academic
research circles, in recent years more user-friendly neural network
software has made the technology more accessible to traders
(read full article). [also
follow the same link to the more important section on fundamental analysis] Wikipedia, September 2006
 
Theory of Neural Networks (J. Moody)
 Typical published example (from the literature): 
 
Summary of operation:
 The trader, wishing to quantify the relationship among a group of stock or share prices, and/or indices,
enters the tickers in capital letters, separated by commas.
 The needed historical and real time share price quotes and volumes are looked up and compared automatically.
 The neural network searches for a nonlinear mathematical
relationship (pattern) relating the prices and volumes to the ticker of
interest, while the user participates by controlling a sensitivity
(also called 'momentum') adjustment,
through which the user may visually assess whether the relationship is valid throughout historical time.
 When sensitivity is then set to zero, graphs show two years of correct and rigorous backtesting.
 The relationship is extended into the future to make a forecast,
by the number of days the user has set on the slider during training.
 There is no buy/sell indicator: the reliability of the forecast depends on the user's visual verification
of the match between the two graphs
obtained during backtesting, and his estimation of the likelihood that the mathematical relationship
which has been found will continue to hold in the future.
User instructions
1. Think of a list of tickers that are likely to be mathematically
related, over time. This is the hardest step: mathematically related does not mean that they behave similarly.
2. Locate the label which says 'Related Group of Tickers.' Beneath
this, delete the sample list of tickers which may be there, and enter
the names of the tickers you are interested in analyzing, in capital
letters, and separated by commas.
3. Choose a data source from the File: button at the top of the program. The simplest
possibility is 'Load from Internet.'
The program knows how to search the standard list of
public domain websites including Google, Yahoo and MSN and place a plain text
file called Internet.txt in the GoldenGem folder, which will have
columns for Date, Ticker, Open, High, Low, Close, and Volume, with
entries separated by commas. The default settings in StockDownloader
are set to load the same list of stocks
which you entered in GoldenGem, up to the most recent close price.
If you are happy with this selection, press the 'Load' button on
StockDownloader. (The StockDownloader is just a beginners' tool; it is expected that you will later graduate from technical analysis to
fundamental analysis, finding your own data from other websites;
click here to find out how to do this.)
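The Internet.txt conventions above (comma-separated columns for Date, Ticker, Open, High, Low, Close, and Volume) can be read back by any small script. A minimal sketch in Python, assuming exactly that column order; the function name and the choice of keeping only close and volume are illustrative, not part of GoldenGem itself:

```python
import csv
from collections import defaultdict

def load_quotes(path):
    """Read a GoldenGem-style quote file (comma-separated columns:
    Date, Ticker, Open, High, Low, Close, Volume) into a dict
    {ticker: [(date, close, volume), ...]}."""
    series = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 7:
                continue  # skip blank or malformed lines
            date, ticker = row[0], row[1]
            series[ticker].append((date, float(row[5]), float(row[6])))
    return dict(series)
```

Any file following the same seven-column convention, whatever its source, can be handled the same way.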
4. Use the up and down arrows on your keyboard to choose which of the various graphs you wish to see (or you can use the
dropdown menu button, near the letters TK to the right of the screen).
5. You will see three coloured traces on each graph. The red
trace is the graph of the selected share price throughout two years of
historical time. The green graph shows the
neural net's use of artificial intelligence to predict the red graph,
throughout historical time, simultaneously combining all loaded volume
and price information of all the shares. There is also
a blue graph, which shows where the green graph would go if it were
predicting perfectly.
6. Increase the sensitivity and
wait until the green graph matches the blue graph, then decrease the
sensitivity to zero (the bottom of the slider). This is best done in
stages, over many iterations, as you are removing the training input.
If the green and blue graphs match well when sensitivity is zero, this
means the green curve has learned how to predict the advance copy of
the red curve throughout historical time, by looking only at effects as
many days in the past as you have selected on the Days slider. The
green curve does not therefore need to finish at the 'Loaded Until'
date, but can continue the calculation for an equal number of days yet
to come.
You see this in the first screenshot as the part of the green curve
that extends three weeks past the vertical blue line.
7. The numerical prediction for the future increase or
decrease will also be shown at the upper right of the display. A text
display will say, for instance: Predicted change over the next 15
business days: +2.35.  
For more detailed instructions go to the FAQ page (click here)
For technical specifications click here
  Other data sources: Besides using StockDownloader, there
are other options for obtaining data. For three beginning examples, you can
  *choose 'Load from File,'
to load data
which you may have downloaded previously, such as the Standard & Poor's
500 file sp500hst.txt
which uses the same standard file conventions as GoldenGem, and can be
obtained as a file called full_set.zip from
biz.swcp.com/stocks/   or
  *download a collection of .csv files (some of stock market data and some of Forex data)
from for example
www.forexrate.co.uk/forexhistoricaldata.php.
The csv data export on that site works by scrolling with the up and down arrow keys. Select 500 data points, 1 day,
and split by comma. Change the vol number in GoldenGem to column 4, enter the ticker names you wish into GoldenGem, and choose Import from a folder. Then browse to whatever folder contains the .csv files.   or
  *paste or type in a text file of
your own using Wordpad, as explained in the link below   or
  *download a file of 2 years or so of daily data from the Bank of England Statistical database http://www.bankofengland.co.uk/statistics/index.htm Choose columnar (with titles if you wish). This
creates a file called results.csv. Set the file position numbers tick, date, close to 2, 1, 3 and leave the vol window blank. Type in the subset of ticker names you want to start with. Choose 'Browse for a new file' from GoldenGem's
File button. Choose 'comma separated (csv) files' at the bottom
of the file selecting box and select 'results.csv,' the file you've just downloaded from Bank of England.

(Click here for even further methods of importing and loading data from other applications)
Availability:
Try it online!
If you have a Windows XP operating system, with Internet Explorer or Mozilla you may find it is not necessary to install the program at all; you're invited to
click here to try it online. For Internet Explorer choose 'run' instead of 'save'. For Mozilla choose 'save' then 'open.' The link above is a small 504 K file called GoldenGem Viewer.
Choose to 'load tickers from the internet' from the File: menu, then use the menu to the right of the screen to change between graphs.
If it is the first time you've trained a neural network, think of it as a sort of video game, where the goal is to make
both indicator lights remain green.
Full installation: 1. Local download link (same as on the home page)
2. The Download.com site, which delivers
the same file, verifies that software which they offer is free of adware, spyware,
or viruses, and that it correctly installs and uninstalls. It does not matter whether you download
from the local link or from Download.com. 

Safety information:
1. McAfee SiteAdvisor check of goldengem.co.uk and setupv.exe.
(note that McAfee SiteAdvisor seems to be offline, though)
2. Digital signature. The 'properties' menu for digitally signed files always includes an extra Digital Signatures tab to ensure that a file has never been altered. It allows you to verify the following information:
File Name: setupv.exe  File Size: 2.05 MB  Signed by: Dr. J Moody 
Date Signed: 31 October, 2011
 Countersignature: Comodo  Certificate: UTN
 Time signed: 22:24:42 
Operating systems supported by version 2.4
Windows 98, 2000, NT, XP, Vista and Windows 7.
Advantages:
 Benchmark predictions of abstract mathematical functions, such as shown on this site, have been extensively verified.
 The technical specifications are those agreed to be most effective in stock market prediction.
 The algorithm has been widely used for many years in finance,
trading, investment, and portfolio management. It is established as a
reliable and valuable calculation.
 The algorithm is the only reliable way in which it is possible to
simultaneously consider the combined effects of a number of prices and
volumes.
Test your copy of GoldenGem
We've made a file of tickers named x, y, z, w
containing 10 sin(i/10), 10 cos(i/10),
10 cos^{2}(i/10) sin(i/10) + 10 cos(i/10), and
10 sin^{2}(i/10) - 10 cos(i/10) - 10, for i = 1 to 400. Right click proof2.txt and
choose 'save target.' Set the
ticker names x,y,z,w and file position numbers 1 (blank) 2 (blank) as in the screenshot below. Press the File: button,
select 'Browse for new file' and locate the file proof2.txt which you have just downloaded.
Then have fun! The fact we've used repeating functions
is just so the viewer can recognize whether the predictions are correct. The program does not use the
fact that they are repeating functions in any way. Note that since the DAYS slider is set to 21, the predicted change is given 15 data points
ahead: the DAYS slider is in terms of actual calendar days including weekends, so the program will state that the prediction
is for 15 business days in the future.
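A file of the same shape can be generated directly. This Python sketch writes a proof2.txt-style file in the two-column layout described above (ticker in column 1, value in column 2); the minus signs in the formula for w are an assumption, since some glyphs appear to have been dropped from the rendered page, and the file name is illustrative:

```python
import math

def write_test_file(path, n=400):
    """Write a proof2.txt-style file: one 'ticker,value' line per
    data point, for i = 1 to n."""
    funcs = {
        "x": lambda i: 10 * math.sin(i / 10),
        "y": lambda i: 10 * math.cos(i / 10),
        "z": lambda i: 10 * math.cos(i / 10) ** 2 * math.sin(i / 10) + 10 * math.cos(i / 10),
        # the signs below are a reconstruction, not certain
        "w": lambda i: 10 * math.sin(i / 10) ** 2 - 10 * math.cos(i / 10) - 10,
    }
    with open(path, "w") as f:
        for i in range(1, n + 1):
            for tick, fn in funcs.items():
                f.write(f"{tick},{fn(i):.4f}\n")
```

Since z and w are exact functions of x and y, a net that has found the relationship should predict them well during backtesting.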
Registration is free of charge.
The program is freeware and a registration key can be obtained here.

The theory of neural networks
Mathematically, the theory of neural networks is fairly
trivial. That said, it is also true that the development of
the theory would have proceeded more quickly if basic mathematical understanding had been applied at the beginning.
We assume one knows that a linear map
R^{n} → R^{m}
is given by a matrix with n columns and m rows, whose
entries are real numbers. A composite of such linear maps
is again a linear map, and the matrix representing the
composite is the product of the matrices representing the
separate factors. Therefore, the set of functions which can
be represented as a composite of linear maps is no larger
than the set of functions which can be represented by a single
matrix.
Just to make things concrete, if we attempt to find
a function which determines the price of a share of IBM,
in terms of the prices and volumes of five stocks, using
a linear function, we are choosing one function out of
a ten dimensional space of functions; or, we are choosing
the ten entries of a matrix with a single row and ten columns.
One notion of what those entries should be is the one which
gives the least error, in the sense of least squares. Provided
the share prices are normalized to have mean zero and standard
deviation one, these ten numbers are what are called the linear
regression coefficients.
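The ten regression coefficients just described can be computed directly by least squares. A sketch with NumPy on made-up data; the input matrix, noise level, and random seed are all illustrative, not anything taken from GoldenGem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data set: 500 days of 10 inputs (say, 5 prices and
# 5 volumes) and a target that is a noisy linear mix of them.
X = rng.standard_normal((500, 10))
true_coef = rng.standard_normal(10)
y = X @ true_coef + 0.1 * rng.standard_normal(500)

# Normalize every series to mean zero and standard deviation one.
Xn = (X - X.mean(axis=0)) / X.std(axis=0)
yn = (y - y.mean()) / y.std()

# The ten least-squares coefficients are the linear regression
# coefficients referred to in the text.
coef, *_ = np.linalg.lstsq(Xn, yn, rcond=None)
```

With normalized data the least-squares solution and the regression coefficients coincide, which is the point the paragraph makes.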
The new ingredient in neural networks is that after applying
a matrix to a vector, we then apply a transition function to
each entry of the new vector. If we repeat what we have just done,
the result will not be much different. Some of the differences
are that it is no longer guaranteed that there is a unique sequence
of ten coefficients that gives the best fit: two sequences of
ten might give an equally good fit. Also, it will be impossible
to fit values which exceed the range of our transition function.
These seem like disadvantages.
 
But there are tremendous compensating advantages. Namely,
a composite of functions of this type (applying a matrix,
then applying a transition function to each entry of the answer)
is not the same as a single function of this type. For example,
the composite of linear maps goes (if five shares are loaded and there are ten neurons in the middle layer)
R^{11} → R^{10} → R^{10} → R^{5}.
To make a fair comparison regarding the calculation of one
output, we are only looking at one of the factors in R^{5}.
So the number of matrix entries used for the same calculation is
11 × 10 + 10 × 10 + 10 × 1 = 220.
So instead of a ten dimensional space of functions,
we are looking at 220 dimensional space of functions.
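As a concrete check of that count, here is the 11 → 10 → 10 → 1 configuration written out in NumPy, with arctan (the transition function discussed later in this essay) applied after each matrix; the small random initial weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# 11 inputs (ten data values plus a constant bias input of one),
# ten neurons in the middle layer, one output.
M1 = rng.standard_normal((10, 11)) * 0.1
M2 = rng.standard_normal((10, 10)) * 0.1
M3 = rng.standard_normal((1, 10)) * 0.1

def f(x):
    """Composite of (matrix, then arctan on each entry) at every stage."""
    return np.arctan(M3 @ np.arctan(M2 @ np.arctan(M1 @ x)))

# The dimension count from the text: 11*10 + 10*10 + 10*1 = 220.
n_params = M1.size + M2.size + M3.size
```

The output is always strictly between -π/2 and π/2, which is the range restriction mentioned above.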
Next, in order to simplify things, let us write
all the various matrix entries as a sequence of variables
y_{1}, y_{2}, ..., y_{m}
where m is a number which may range up to, as we have seen,
220. And, let us write the input variables as x_1,...,x_n
where in our example of five shares, n is just ten. In fact,
n is eleven because we use a bias variable which is the constant
input one.
Now, our neural network, or, the part of it which calculates
a single output variable, is just a function
f(x_{1},...,x_{n}; y_{1},...,y_{m}).




We would like this function to match the actual share price
of the share we are analyzing at a fixed number of days into the
future, and we shall call the correct answer, when it is known,
g(x_1,...,x_n). So our error, of course, is the difference
e = g(x_{1},...,x_{n}) - f(x_{1},...,x_{n}; y_{1},...,y_{m}).
The second term, if we were to write it down, would look
complicated: it would involve composites of the transition
function and matrix multiplications, and, if we include
the normalization of variables, it would look even more
complicated. We shall now imagine a particular point in the training
of the neural network. So x_{1},...,x_{n} are fixed; they are
particular numbers. This means the first term g(x_{1},...,x_{n}) is
a number, and the second term is a rather complicated-looking
function involving all the thousands of variables y_{1},...,y_{m}.
But we may still ask, what is the best direction to change
the y_{i} to reduce the error the fastest? This is merely
the value of e times the gradient of f viewed as a function
of the y_{i}.
The entries of the gradient of f are just the partial derivatives
of f with respect to the y_{i}. Because f is a composite
of separate functions R^{i} to R^{j} for various values of i and j,
the chain rule can be used, which says that the gradient of f is
the product of the Jacobian matrices of these separate functions.
Each function is in fact a matrix composed with a function that acts
separately on each coordinate by the transition function. The
Jacobian matrix of a matrix is the matrix itself, and the Jacobian
matrix of the transition function acting on each coordinate separately
is a diagonal matrix, with the derivative of the transition
function in each diagonal entry. Some care must be taken to
evaluate the derivative of the transition function at the
appropriate value, but this is standard in multivariable calculus.
 
Let us now make this explicit. We have a composite
R^{11} → R^{10} → R^{10} → R
(the three maps being g_{1}, g_{2}, g_{3}) and
f=g_{3} o g_{2} o g_{1}
a composite of three functions. For i=1,2,3 the function g_{i}
is a composite h_{i} o M_{i} where M_{i} is a linear map,
given by a matrix, and
h_{1}: R^{10} → R^{10}
h_2: R^{10} → R^{10}
h_3: R→ R
Note that h_{i} acts on the target of g_{i}.
The functions h merely apply the transition function
to each variable.
Now, the derivatives of the h_{i} just apply the derivative
of the transition function to each variable. Let us call these h_{i}'.
We have
f=h_{3} o M_{3} o h_{2} o M_{2} o h_{1} o M_{1}
so the chain rule says that the derivative of f evaluated at
our fixed vector (x_{1},...,x_{n}) is
h_{3}'(M_{3} o h_{2} o M_{2} o h_{1} o M_{1}(x_{1},...,x_{n}))
o M_{3}
o h_{2}'(M_{2} o h_{1} o M_{1}(x_{1},...,x_{n}))
o M_{2}
o h_{1}'(M_{1}(x_{1},...,x_{n}))
o M_{1}




Here we can view the h_{i}' once they are evaluated at the
long expressions in parentheses, as square matrices with only
the diagonal entries nonzero.
Now, I actually want to differentiate f(x_{1},...,x_{n}) with respect
to the variables y_{1},...,y_{m} which are the entries of the matrices
M_{1},M_{2}, M_{3} keeping the x_{i} as fixed numbers.
The calculation is less familiar but actually easier.
To differentiate with respect to the (i,j) entry of M_{1} replace
the last term in the expression above with a column vector with x_{i}
in position j. If I want to differentiate with respect to the (i,j)
entry of M_{2} then the last two terms will be removed,
and the third from last term will be replaced by the
i'th entry of h_{1}(M_{1}(x_{1},...,x_{n})) in position j, and so on.
This is just an application of the chain rule
viewing the matrix entries as variables. The same calculation is also
the same as the 'backpropagation' calculation
for artificial neural networks. To finish we must apply some multiple
of the error; GoldenGem also uses a sensitivity
multiplier based on a logarithmic scale here. The particular
transition function which we use is the inverse tangent function
arctan. This is chosen because the other possible choice of a bipolar
transition function is hyperbolic tangent. The hyperbolic tangent
takes values between -1 and 1, and is almost always nearly equal to one
of those values. It is often chosen if a neural network is meant to model
digital data. Arctan varies more evenly, and is the better choice
of bipolar transition function for analogue data. Why the neural network configuration (this particular space
of functions) is chosen is easy to see. If the matrix entries
in early stages are chosen small, it matches a linear function well,
so it is in this sense at least as good as linear regression.
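The backpropagation calculation described above can be written out in a few lines and checked numerically against a finite difference of the loss. This is an illustrative implementation, not GoldenGem's actual code; the sizes, weights, and target value are made up, and the diagonal Jacobian of arctan is 1/(1+u²):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes: 11 inputs (with bias), 10 hidden neurons, 1 output.
M1 = rng.standard_normal((10, 11)) * 0.1
M2 = rng.standard_normal((10, 10)) * 0.1
M3 = rng.standard_normal((1, 10)) * 0.1
x = rng.standard_normal(11)

def forward(M1, M2, M3, x):
    """f = h3 o M3 o h2 o M2 o h1 o M1 with h = arctan; returns the
    intermediate activations too, since backprop reuses them."""
    a1 = np.arctan(M1 @ x)
    a2 = np.arctan(M2 @ a1)
    out = np.arctan(M3 @ a2)
    return a1, a2, out

def gradients(M1, M2, M3, x, target):
    """Gradient of the squared error 0.5*(target - f)^2 with respect to
    every matrix entry, by the chain rule: each arctan layer contributes
    a diagonal Jacobian with entries 1/(1 + u^2)."""
    a1, a2, out = forward(M1, M2, M3, x)
    e = target - out                         # e = g(x) - f(x; y)
    d3 = -e / (1 + (M3 @ a2) ** 2)           # dLoss/d(M3 @ a2)
    d2 = (M3.T @ d3) / (1 + (M2 @ a1) ** 2)  # dLoss/d(M2 @ a1)
    d1 = (M2.T @ d2) / (1 + (M1 @ x) ** 2)   # dLoss/d(M1 @ x)
    # dLoss/dM_i is the outer product of the layer delta with its input.
    return np.outer(d1, x), np.outer(d2, a1), np.outer(d3, a2)
```

Training is then repeated small steps of each matrix against its gradient, scaled by the sensitivity.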
 
On the
other hand, it is possible to model a binary function very well, even
with three levels. To see this, think of the output which ranges
from -π/2 to π/2 as a digital output, thinking that an answer
near π/2 means 'yes' and one near the negative of that means 'no.'
Now we can very easily choose matrix entries to arrange any truth
table we wish. If I want an answer of 'yes' just if the first seven
inputs are 'yes,' I can weight the matrix entries directly looking at
the first seven inputs so that the sum of the contributions from these
entries (plus a constant from a bias neuron) is positive if and only if all seven inputs say 'yes,' and
in any case a very large positive or negative number. And so that the
contributions from other inputs do not matter, I set other coefficients
to zero. After applying the transition function to the sum, we see that
one neuron in the middle level has an answer of nearly π/2 if all seven are
'yes' and nearly -π/2 otherwise. In this way, one sees, in the first stage
I can model an 'and' of any subset of entries, or their negatives.
Now, it is an easy fact of logic that any truth table can
be expressed in two stages of negation and AND. The more familiar
fact is that any truth table can be expressed as an OR of
a set of ANDs of the input variables and their negations. But then
one can use the tautology that
OR = NOT AND NOT.
This shows that a three level neural network can model
both linear and logical phenomena. A more precise fact is any
continuous function whose domain is a
bounded subset of R^{n} can be approximated uniformly
by a sequence of functions f_{1},f_{2},... where f_{i}
is realized by a three level perceptron with i neurons in the hidden (middle)
layer. This was first discovered by Funahashi (in the journal Neural Networks, vol. 2)
and independently by Hornik et al. in the same journal. Hornik et al. applied the
phrase `universal nonlinear function approximator' to this three level
configuration, and this is the default configuration
in GoldenGem.
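The truth-table construction above is easy to carry out. Here is a single arctan neuron computing an AND of bipolar (+1/-1) inputs, with the weighting scheme the text describes; the particular weight value is an illustrative choice:

```python
import math

def and_neuron(inputs, weight=10.0):
    """One arctan neuron computing AND of bipolar (+1/-1) inputs:
    weight the chosen inputs equally, and choose a bias so the sum is
    a large positive number only when every input is +1."""
    n = len(inputs)
    # With k inputs equal to +1, the sum below is weight*(2k - 2n + 1):
    # positive only when k = n, and always at least |weight| in size.
    bias = -weight * (n - 1)
    s = weight * sum(inputs) + bias
    return math.atan(s)  # near +pi/2 iff every input is 'yes'
```

An OR can then be built in a second stage via OR = NOT AND NOT, negation being a sign flip of the weights.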




In conclusion, although the chain rule is an element of undergraduate
mathematics, it was not applied in neural network theory
for many years. When it was applied, it was called 'backpropagation,'
because the chain rule takes account of all the values
and derivatives of the separate functions we have composed to
create f.
Another concluding remark: in data mining, some people run
several neural networks in parallel. This is very much the same
as running a single neural net with a very large middle layer, but it
may be better, because there is a concern about overtraining
and poor generalization if a single three layer perceptron is used
with a large middle layer. For that reason the number of neurons
in the middle layer of GoldenGem by default is no more than the
square root of the product of the numbers of neurons in levels 1 and 3, and
this means it may be useful to run many copies of the program
simultaneously, or run the program many times. With this in view
we have added a 'speed' button which allows rapid training,
and quick indicator lights which can be used to assess statistical
significance of the training set, and an adjustment in the stock
download window which allows one to choose a validation data set.
It is my view
that the challenge now is not better software, but better
use of the existing algorithms. I hope I have successfully
brought the GoldenGem
console into the 21st century, and it will remain a reliable
slide rule of neural calculation in the field.
 

Contact Us
Postal Address: GoldenGem Neural Networks, 12 Armorial Road, Coventry CV3 6GJ, England
Telephone (if dialled from USA): 011442476523490
Software author (if dialled from USA): 011442476417946
Support: GoldenGemNetwork@gmail.com
Beginner's Introduction

Currently, the site is visited by programmers, serious investors, students (mostly Masters and
PhD), some emeritus professors, and hobbyists, most of whom are
already involved in neural networks.
We would like to change that, and we are working on making the site and the program more accessible.
Questions and Answers
What are the main types of financial analysis?
There are two main types. Technical Analysis attempts to extrapolate a price using only
earlier values of that same price. Only a few papers show a statistically significant advantage over random trading.
The second type is Fundamental Analysis, where
data from financial statements, interest rates, volumes, competitors' prices, prices of raw materials
or other variables known to affect the target prices are used.
As explained by Wikipedia (see the General Information link on the left), neural networks
act as a 'bridge' between Technical Analysis and the more highly regarded Fundamental Analysis.
Future values of prices
are approximated, not by nonlinear extrapolation of earlier values, but by hypothesizing
and testing actual causal connections.
It is possible to load a set of prices and volumes of the most well-known shares, bonds, and indices by pressing one button.
But unless one has the skill to find a meaningful relationship among
these variables, StockDownloader is really only for beginners. A competent user will graduate to using other types of data, available on the internet and also accessible by GoldenGem with a bit
more work; the instructions link on the left leads to a series of links explaining further types
of data that can be imported, and we're willing to modify the program to make it compatible with
new formats as they come up.
Developing this implementation over time we have needed to
confront subtle questions before the algorithm began to work well, as it now does.
The Verification link on the left shows GoldenGem predicting
abstract mathematical functions. The program does not use the fact that the functions are repeating;
only today's values of the input variables are used in making the last
day of prediction. In cases when earlier values than the last known value may have an effect you should also include `stochastics' as
input variables. The program does not take into account 'oscillations' such as described in the
Elliott wave 'theory,' a theory which we do not subscribe to; however, note that the program is able to predict functions that oscillate.
Finally, the program is able to predict any one of the functions knowing the others, but there is no preferred process of extrapolation
that would work for a single function treated by itself.
How reliable is a prediction that this program gives me?
We have added a pair of indicator lights to help answer this in each case.
The first indicator light refers to a number r during backtesting, which is defined to be the
minimum of the correlation coefficient of predicted versus actual daily changes, on the
one hand,
and the correlation coefficient times the ratio (actual variance)/(predicted variance),
on the other hand. To give some idea of the meaning of this: if a person were
to buy or short, every day during backtesting, in proportion to its
predicted percentage change, assuming things are normally distributed, the
percentage profit over n days of trading during that time in the past
would have been
(percentage gain over n days of backtesting) = r ( … )^{1/2} ( … )^{1/2} V
where V is the maximum of actual and predicted annual volatility expressed in percentage points.
This taking of V to be the maximum of actual and predicted volatility seems contrived, but it is exactly what one wants.
If r and V were defined using only the actual volatility, then a strategy of relying upon a posteriori information would exist to attain deceptively good returns during backtesting which do not really result from any prediction: a sluggish response, in which the green curve stays near the 2 year average, would correspond to a strategy of always predicting a sudden return to the 2 year average, which during backtesting includes knowledge of future days and would unfairly reward the correlation coefficient alone.
Whereas if r and V were defined using only the predicted volatility, there would be no intrinsic relation between r and the actual percentage gain: a large r value could arise from a prediction with very low variance.
The value of r as we have defined it rules out both these problems, and appears to correspond to intuitively good backtesting.
The first light is yellow
when the r value is larger than 0.39 and green when it is larger than 0.6. The second light goes from
red to yellow to green as the training input is removed. You will need to try different combinations of
input variables before you will be able to make both lights remain green at the same time. If the lights cannot be made to remain green,
the answer to your question is, the prediction is meaningless. If the lights do remain green, then
that means a relationship has been found which has been able to make successful
predictions during the backtesting interval.
When both lights have remained green, does this imply the prediction can be trusted? Not yet. Even taking into account the variance ratio as we have, the formulation in terms of profit shows that this number could
be high enough to set the green light, just because some of the hypothetical trades were extremely
profitable, others not at all. You also need to actually look at the behaviour of the prediction line,
the part of the green line extending into the future, past the red line, throughout backtesting, and see qualitatively how consistently
it is correct. When sensitivity is set to zero there is no training input, and the green graph is calculated only using data values of all variables
from the time of the earlier red graph, and any
prediction you see therefore shows a real mathematical relationship during backtesting.
Finally, you still are not quite done. Even when you have assessed both statistically and visually
that the predictions throughout backtesting are good, to be really sure the variables you are looking
at are related, you should set 'today's date' in StockDownloader to a time in the past, or otherwise
load data only up to various times in the past, and train the net to predict a range of values which you actually
already know. This is a 'validation data set,' and the next version of GoldenGem will make this last
stage of validation easier.
There do exist relationships between variables which are known to affect prices, but the ones which are well-known
can't be exploited unless you have knowledge of the input variables in advance of the trading public. It is not true
that all existing relationships are well-known. Insider trading is perfectly legal if you exploit public domain
information through your own intelligence.
What will happen the first time I try it?
A good training strategy is to start with a high sensitivity, and to bring
it down in stages.
Assuming the variables actually were expected to be related, and your training
strategy was correct,
you are likely to end up with the first light turning red,
signifying inadequate correlation, by the time
the second light turns green. This is usually for one of three reasons:
1. If you see vertical green spikes,
and a message 'Press the Reset button,' then you have traumatized the net. Like a human or animal, it will
take a very long time to recover. Just as a good night's sleep does for an animal,
the Reset button gives a fresh start, and all is forgiven, but it will need to be trained again from the beginning.
2. If the green line is flat, this is because it was trained inadequately. Raise the sensitivity
slider again and wait a while before bringing it down (ideally in stages).
3. If
the green line looks just like the red line, but shifted to the right by the amount on the Days slider, you
are seeing a situation where the expected future value is always nothing but the last known value. If all graphs
are like that, congratulations, you have found a 'Markovian' set of shares: interesting but with no
opportunity for arbitrage, assuming the neural network has found the best possible solution.
Conclusion
Financial analysis is something you do, not something you buy.
A neural network requires involvement by the user. You have to choose what data you think is
relevant, you have to learn how to train the net, and it is up to you to evaluate the backtesting.
The most important thing to remember is that although our display shows
only two graphs at a time (actual and predicted), the predicted graph is generated by taking account of the mathematical relations among the prices and volumes of all loaded variables simultaneously and so the choice of ungraphed tickers affects the quality of the match between the two graphs you are observing.
It is your responsibility to decide whether you are discovering and exploiting a valid and rational mathematical relationship which others haven't yet thought of. Think of the famous story of the investor who profited decades ago, during the beginning of the urban legend of a mouse in a KFC meal. He counted the change in the number of people attending his local KFC each day, and decided there had been no decrease. There is a valid, simple, and meaningful relation between the number of people he observed, the current share price and the future share price, which he used intuitively. If he had wanted to be more precise he could have used a neural network. It is not guesswork, it is not a fishing or `data mining' expedition. You have to already know what you are doing and why.
 
Right Click, Select all, Copy, and Paste the text below into your HTML code if you would like to embed the free Reuters newsTicker on your site.
(Never made a website before? Don't worry, just paste the code into Notepad, add whatever text you like, and
save as a file called favourite.html on your desktop. You have made your first website. You can email it to colleagues
or upload it to a server.)
