Humans have long tried to predict the value of a physical quantity, and time series forecasting has so many applications that I won’t list them here. With reference to the stock market, for example, we can model its state at time t from the Open, High, Low, Close and Volume values of the last minute candle closed at time t, x(t)=(O(t),H(t),L(t),C(t),V(t)). Even though this state varies with time, representing it in the frequency domain allows us to work in a way that is independent of time.

By chance, I bumped into the interesting paper “Sequences with Minimal Time-Frequency Uncertainty” (https://www.sciencedirect.com/science/article/pii/S1063520314000906). It refers to Heisenberg’s uncertainty principle, which states that a given function cannot be arbitrarily compact in both time and frequency: taking the variance as a measure of localization in time and frequency defines an “uncertainty” lower bound, which Gaussian functions reach for continuous-time signals, but which discrete-time sequences cannot. In addition, most signals observed in nature are not Gaussian-distributed, which makes the challenge even harder. The Hamiltonian is an operator that represents the total energy of a quantum system, including both kinetic and potential energy, and it represents, in my opinion, a fundamental exogenous variable to take into consideration when building a model: even a strategy based on statistical indicators may be severely impacted by the “energy” of the market.
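As a minimal sketch of this frequency-domain representation, here is how a (fully synthetic, hypothetical) minute-candle series could be moved out of the time domain with NumPy; the candle values and sampling interval are assumptions for illustration only:

```python
import numpy as np

# Hypothetical minute-candle series: each row is x(t) = (O, H, L, C, V).
rng = np.random.default_rng(0)
n = 512
close = 100 + np.cumsum(rng.normal(0, 0.1, n))   # synthetic close prices
candles = np.column_stack([
    close + rng.normal(0, 0.05, n),  # open (synthetic)
    close + 0.1,                     # high
    close - 0.1,                     # low
    close,                           # close
    rng.integers(100, 1000, n),      # volume
])

# Transform the close series into the frequency domain: the spectrum
# is indexed by frequency, no longer by time.
spectrum = np.fft.rfft(candles[:, 3])
freqs = np.fft.rfftfreq(n, d=60.0)  # 1-minute (60 s) sampling period
print(freqs.shape, spectrum.shape)
```

Any column of x(t) (or a combination of them) can be transformed the same way; the close series is used here only as the simplest representative.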
Furthermore, the reading was interesting to me because of something I had noticed over time: the better the prediction I managed to obtain, the more the residual (prediction error) waveform resembled that of a noise signal. This suggested there is a limit to the precision of the prediction versus the true values, so I wanted to take a closer look at the shape and behaviour of this “noise”.

Interestingly, in his book “The (Mis)Behaviour of Markets: A Fractal View of Risk, Ruin and Reward”, Mandelbrot makes the point that the traditional normal distribution does not match “real world” distributions, which are power-law curves with “fat tails”; for example, the Cauchy distribution better fits daily and weekly stock price movements. So it looks like both stock prices and the prediction residuals behave in a similar way.
Since a reduction in error naturally implies an improvement in forecasting, it would be interesting to understand whether the two are correlated and to what extent: if an inverse noise signal were applied to the predicted values, it might improve the prediction or, at least, reduce the spikes in the residuals time series. Two approaches have been followed:
- Identify the distribution that best matches the residuals time series
- Filter the residuals time series and subtract only the most “disturbing” frequencies, in this case the high-frequency variations
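The first approach can be sketched as follows, using scipy to fit candidate distributions to the residuals and rank them by the Kolmogorov-Smirnov statistic. The residuals here are synthetic (drawn from a heavy-tailed distribution, mimicking what is described above); with real data they would come from the forecast model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical residuals: heavy-tailed, as observed in practice.
residuals = stats.cauchy.rvs(loc=0, scale=0.5, size=2000, random_state=rng)

# Fit candidate distributions and compare goodness of fit with the
# Kolmogorov-Smirnov statistic (lower means a better fit).
ks = {}
for dist in (stats.norm, stats.cauchy):
    params = dist.fit(residuals)
    ks[dist.name] = stats.kstest(residuals, dist.name, args=params).statistic
    print(f"{dist.name:>6}: KS = {ks[dist.name]:.4f}")
```

With fat-tailed residuals, the Cauchy fit scores a markedly lower KS statistic than the normal fit, consistent with Mandelbrot’s observation.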
The baseline regression optimized with AutoML scores a MAPE of 1.25%; approach 1 leads to a MAPE of 1.79%, while approach 2 (removing the high-power frequencies) improves it to 1.16%.
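The second approach can be sketched with a simple FFT filter: isolate the high-frequency part of the residuals and subtract it, so that only the slower, more predictable component of the error remains. The residual series and the cutoff frequency below are hypothetical, chosen only to illustrate the mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
t = np.arange(n)
# Hypothetical residual series: a slow drift plus high-frequency noise.
residuals = 0.5 * np.sin(2 * np.pi * t / 128) + rng.normal(0, 0.3, n)

# Isolate the "disturbing" high-frequency components in the spectrum.
spectrum = np.fft.rfft(residuals)
freqs = np.fft.rfftfreq(n)        # in cycles per sample
cutoff = 0.1                      # hypothetical cutoff frequency
hi = spectrum.copy()
hi[freqs < cutoff] = 0            # keep only the high-frequency part
noise_estimate = np.fft.irfft(hi, n)

# Subtracting the estimated noise reduces the residual spikes.
corrected_residuals = residuals - noise_estimate
print(residuals.std(), corrected_residuals.std())
```

In a live setting the filter cannot be applied this naively, since the FFT of the full series is non-causal; a rolling window or a causal low-pass filter would be needed. The sketch only shows why removing the high-power frequencies shrinks the residual spikes.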

In some cases the final goal is quite simple: predict whether the market will go up or down. The issue is that the percentage change is usually lower than the prediction error percentage (e.g. RMSE or MAPE), so machine learning alone is usually not sufficient to produce a good prediction, at least not good enough to drive investment decisions with a high success rate.
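This point can be made concrete with a quick back-of-the-envelope comparison on a synthetic price series (the volatility here is an assumption; only the MAPE figure comes from the results above):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic price path with a typical bar-to-bar move well under 1%.
close = 100 + np.cumsum(rng.normal(0, 0.2, 500))

# Typical magnitude of the bar-to-bar percentage change.
pct_change = np.abs(np.diff(close) / close[:-1]) * 100
mape = 1.25  # model error reported above, in percent

print(f"mean |change| = {pct_change.mean():.2f}%  vs  MAPE = {mape:.2f}%")
# When the typical move is smaller than the model error, the predicted
# direction is essentially indistinguishable from noise.
```

Whenever the mean absolute move sits below the MAPE, as it does here, an up/down call based on the raw forecast carries little information.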
Ultimately, a good trading system, in my opinion, is composed of a good forecast model and a trading model that is as simple as possible. In particular, here follows an out-of-sample test started at the end of August and lasting about nine months, based on a long-short strategy.

The reference markets are the futures on the Nasdaq, S&P 500 and Russell 2000, and the strategy outperformed the markets by 34%.

