In this article I wanted to concentrate on some basic time series analysis, and on whether there is any simple way we can improve our prediction abilities in order to produce more accurate results. When considering most financial asset price time series, you would be forgiven for concluding that, at various time frames (some longer, some shorter), many of the data sets we try to analyse appear completely random, or at least random enough that any hope of easily forecasting future values and paths is going to be a tough ask, to say the very least!

Author: s666

I thought today I would whip up a quick post about Jupyter Notebooks and how to download, install and use various “add-ons” that I like and find more than just a little bit useful. Among other things, I’ll show how to use the “jupyter-themes” module to change and manipulate the basic theme and styling of the overall notebook, how to download and install the Jupyter Notebook extensions module, giving access to a whole range of useful goodies you can try out, and even how to use Jupyter widgets and embed URLs, PDFs and YouTube videos directly into a notebook itself.

In this post I am going to be looking at portfolio optimisation methods, touching on both the use of Monte Carlo, “brute force” style optimisation and then the use of Scipy’s “optimize” function for “minimizing (or maximizing) objective functions, possibly subject to constraints”, as it states in the official docs (https://docs.scipy.org/doc/scipy/reference/optimize.html).

I have to apologise at this point for my jumping back and forth between the UK English spelling of the word “optimise” and the US English spelling (optimize)… my fingers just won’t allow me to type it with a “z” unless I absolutely have to, for some reason! When quoting the official docs or referring to the actual function itself, I shall use a “z” to fall in line.

To set up the first part of the problem at hand – say we are building, or already have, a portfolio of stocks, and we wish to balance/rebalance our holdings in such a way that they match the “optimal” weights, where “optimal” means the portfolio with the highest Sharpe ratio, also known as the “mean-variance optimal” portfolio.
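As a quick sketch of what the Scipy side of this looks like, the snippet below finds the maximum-Sharpe weights for a small hypothetical three-asset example by minimising the negative Sharpe ratio with `scipy.optimize.minimize`. The mean returns and covariance matrix here are made-up illustrative inputs, not real data, and the exact setup in the post may differ.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical annualised mean returns and covariance matrix for 3 assets
mean_returns = np.array([0.10, 0.12, 0.08])
cov_matrix = np.array([[0.06, 0.01, 0.00],
                       [0.01, 0.09, 0.02],
                       [0.00, 0.02, 0.04]])

def neg_sharpe(weights, mean_returns, cov_matrix, rf=0.0):
    # minimise the NEGATIVE Sharpe ratio, i.e. maximise the Sharpe ratio
    port_return = np.dot(weights, mean_returns)
    port_vol = np.sqrt(weights @ cov_matrix @ weights)
    return -(port_return - rf) / port_vol

n = len(mean_returns)
constraints = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1},)  # fully invested
bounds = tuple((0, 1) for _ in range(n))  # long-only

result = minimize(neg_sharpe, n * [1.0 / n], args=(mean_returns, cov_matrix),
                  method='SLSQP', bounds=bounds, constraints=constraints)
optimal_weights = result.x
```

The equality constraint forces the weights to sum to one, and the bounds keep the portfolio long-only; both could be relaxed if shorting or leverage were allowed.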

This is part 2 of the Ichimoku Strategy creation and backtest – with part 1 having dealt with the calculation and creation of the individual Ichimoku elements (which can be found here), we now move onto creating the actual trading strategy logic and subsequent backtest.

The Ichimoku approach concerns itself with two major elements – firstly the signals and insights produced by the “cloud” structure, which is in turn created by the interplay between the Senkou Span A and the Senkou Span B (and sometimes their relation to the price), and secondly the interplay between the price, the Tenkan-sen and the Kijun-sen.

I thought it was about time for another blog post, and this time I have decided to take a look at the “Ichimoku Kinko Hyo” trading strategy, or just “Ichimoku” strategy for short. The Ichimoku system is a Japanese charting and technical analysis method and was published in 1969 by a reporter in Japan.

I thought I would spend this post on the creation of the indicator elements themselves, along with a couple of plotting examples using both Matplotlib and then Plotly.

So the Ichimoku “set up” is a technical indicator used to gauge momentum, along with future areas of support and resistance, and it consists of five main components.
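Using the standard period lengths (9, 26 and 52), the five components can be sketched in pandas as below. The price data here is a randomly generated stand-in purely so the snippet runs on its own; the post itself builds these columns from real OHLC data, and its exact implementation may differ in detail.

```python
import numpy as np
import pandas as pd

# Stand-in OHLC data so the snippet is self-contained; in practice "df"
# would hold real high/low/close prices
rng = pd.date_range('2020-01-01', periods=120, freq='D')
rs = np.random.RandomState(42)
close = pd.Series(100 + rs.randn(120).cumsum(), index=rng)
df = pd.DataFrame({'high': close + 1, 'low': close - 1, 'close': close})

# Tenkan-sen (conversion line): midpoint of the 9-period high/low range
df['tenkan_sen'] = (df['high'].rolling(9).max() + df['low'].rolling(9).min()) / 2
# Kijun-sen (base line): midpoint of the 26-period high/low range
df['kijun_sen'] = (df['high'].rolling(26).max() + df['low'].rolling(26).min()) / 2
# Senkou Span A: average of Tenkan and Kijun, projected 26 periods ahead
df['senkou_span_a'] = ((df['tenkan_sen'] + df['kijun_sen']) / 2).shift(26)
# Senkou Span B: midpoint of the 52-period range, projected 26 periods ahead
df['senkou_span_b'] = ((df['high'].rolling(52).max() +
                        df['low'].rolling(52).min()) / 2).shift(26)
# Chikou Span (lagging line): the close shifted 26 periods back
df['chikou_span'] = df['close'].shift(-26)
```

The area between Senkou Span A and Senkou Span B forms the “cloud” referred to above.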

In this article I thought I would take a look at and compare the concepts of “Monte Carlo analysis” and “Bootstrapping” in relation to simulating returns series and generating corresponding confidence intervals as to a portfolio’s potential risks and rewards.

Both methods are used to generate simulated price paths for a given asset, or portfolio of assets, but they use slightly differing approaches, and the difference can appear reasonably subtle to those who haven’t come across them before. Technically, Bootstrapping is a special case of Monte Carlo simulation, hence why it may seem a little confusing at first glance.
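The bootstrapping flavour can be sketched very briefly: rather than assuming any distribution, we resample the historical returns themselves, with replacement, to build each simulated path. The historical return series below is randomly generated purely as a stand-in so the snippet runs on its own.

```python
import numpy as np

# Stand-in for a realised daily return series (in practice, use real data)
rs = np.random.RandomState(0)
hist_returns = rs.normal(0.0005, 0.01, 1000)

n_sims, horizon = 500, 252
# Bootstrap: each simulated path is a resample (with replacement) of the
# historical returns - no distributional assumption is imposed
sims = rs.choice(hist_returns, size=(n_sims, horizon), replace=True)
end_values = 100 * (1 + sims).prod(axis=1)  # terminal value of 100 invested

# empirical 5%/95% confidence band on the terminal portfolio value
lower, upper = np.percentile(end_values, [5, 95])
```

Because the draws come straight from the realised data, fat tails and skew in the history carry over into the simulations, which is the main practical appeal of the bootstrap.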

With Monte Carlo analysis (and here we are talking specifically about the “parametric” Monte Carlo approach) the idea is to generate data based upon some underlying model characteristics. So, for example, we generate data based upon a Normal distribution, specifying our desired inputs to the model, in this case the mean and the standard deviation. Where do we get these input figures from, I hear you ask… well, more often than not, people tend to use values based on the historic, realised values of the assets in question.
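A minimal parametric Monte Carlo sketch of the above, assuming Normally distributed daily returns with an illustrative mean and standard deviation (in practice these would be estimated from the asset’s history):

```python
import numpy as np

# Assumed daily mean and volatility, e.g. estimated from historical returns
mu, sigma = 0.0005, 0.01
rs = np.random.RandomState(1)

n_sims, horizon = 500, 252
# Parametric Monte Carlo: draw returns from the assumed Normal distribution
simulated_returns = rs.normal(mu, sigma, size=(n_sims, horizon))
# compound each simulated return path into a price path starting at 100
price_paths = 100 * (1 + simulated_returns).cumprod(axis=1)

# empirical 5%/95% confidence band on the final simulated price
lower, upper = np.percentile(price_paths[:, -1], [5, 95])
```

The resulting band gives a model-based view of the portfolio’s potential risks and rewards, with the obvious caveat that real returns are rarely exactly Normal.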

This blog post is the result of a request I received on the website Facebook group page from a follower who asked me to analyse/play around with a csv data file he had provided. The request was to use Pandas to wrangle the data and perform some filtering and aggregation, with a view to plotting the resulting figures using Matplotlib. Now, Matplotlib was explicitly asked for, rather than Seaborn or any other higher-level plotting library (even if they are built on the Matplotlib API), so I shall endeavour to use base Matplotlib where possible, rather than rely on any of the aforementioned (more user-friendly) modules.

In this post I will be looking at a few things all combined into one script – you’ll see what I mean in a moment…

Being a blog about Python for finance, and having an admitted leaning towards scripting, backtesting and optimising systematic strategies I thought I would look at all three at the same time…along with the concept of “multithreading” to help speed things up.

So the script we are going to create (2 scripts in fact – one operating in a multi-threaded capacity and the other single threaded) will carry out the following steps:

1. Write the code to carry out the simulated backtest of a simple moving average strategy.

2. Run brute-force optimisation on the strategy inputs (i.e. the two moving average window periods). The Sharpe ratio will be recorded for each run, and the data relating to the maximum achieved Sharpe will then be extracted and analysed.

3. For each optimisation run, the return and volatility parameters of that particular backtest will then be passed to a function that runs Monte Carlo analysis and produces a distribution of possible outcomes for that particular set of inputs. (I realise it’s a little bit of overkill to run Monte Carlo analysis on the results of each and every optimisation run; however, my main goal here is to demonstrate how to multi-thread a process and the benefits that can be had in terms of code run time, rather than to actually analyse all the output data.)
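The multi-threaded shape of the above can be sketched with `concurrent.futures.ThreadPoolExecutor`: each (short, long) window pair is handed to a worker thread. The `run_backtest` function below is a hypothetical placeholder standing in for the backtest/Monte Carlo function built later, purely so the snippet runs on its own.

```python
import itertools
from concurrent.futures import ThreadPoolExecutor

def run_backtest(params):
    # placeholder for the real backtest: would run the MA strategy and
    # Monte Carlo analysis, returning the parameters and their Sharpe ratio
    short_ma, long_ma = params
    return params, float(long_ma - short_ma)  # dummy "Sharpe" for illustration

# every valid combination of short/long moving average windows
combos = [(s, l) for s, l in itertools.product(range(10, 30, 10),
                                               range(50, 100, 25)) if s < l]

# map the parameter combinations across a pool of worker threads
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(run_backtest, combos))

best_params, best_sharpe = max(results, key=lambda r: r[1])
```

One design note: because the real work here is NumPy/Pandas-heavy (which releases the GIL in places) and partly I/O-bound, threads can help; for purely Python-level CPU work a `ProcessPoolExecutor` would usually be the better choice.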

If you want to follow along with the post, the stock price data that I am using can be downloaded by clicking on the below:

It is daily price data for Ford (F.N) from the middle of 1972 onward. Once read into a Pandas DataFrame and displayed, it should look like this:
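For anyone following along, reading the file in is a one-liner with `pd.read_csv`. The snippet below uses a tiny in-memory sample (with made-up prices) purely so it is self-contained; in practice you would point `read_csv` at the downloaded file path instead, and the column names would be whatever the file actually contains.

```python
import io
import pandas as pd

# Hypothetical two-row sample standing in for the downloaded Ford CSV;
# replace the StringIO object with the path to the real file on disk
sample = io.StringIO("Date,Close\n1972-06-01,60.50\n1972-06-02,61.00\n")
data = pd.read_csv(sample, index_col='Date', parse_dates=True)
```

Setting `index_col='Date'` and `parse_dates=True` gives a DatetimeIndex, which makes the later resampling and plotting steps straightforward.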

Let’s deal first with the code to run the steps in a single threaded manner. First we import the necessary modules:

```python
import numpy as np
import pandas as pd
import itertools
import time
```

Next we quickly define a helper function to calculate annualised Sharpe Ratio for a backtest returns output:

```python
# function to calculate Sharpe Ratio - risk-free rate element excluded for simplicity
def annualised_sharpe(returns, N=252):
    return np.sqrt(N) * (returns.mean() / returns.std())
```

We then define our moving average strategy function as shown below. It takes three arguments, “data”, “short_ma” and “long_ma”, which should be pretty self-explanatory: “data” is the pricing data the strategy will be tested over, and the other two are the two moving average window period lengths.
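As a rough sketch of what such a function looks like (the post’s own implementation may differ in detail, and the “Close” column name is assumed from the Ford price file):

```python
import numpy as np
import pandas as pd

def ma_strategy(data, short_ma, long_ma):
    # a minimal moving average crossover sketch: long when the short MA is
    # above the long MA, flat otherwise
    df = data.copy()
    df['short'] = df['Close'].rolling(short_ma).mean()
    df['long'] = df['Close'].rolling(long_ma).mean()
    df['position'] = np.where(df['short'] > df['long'], 1, 0)
    # shift the position by one bar so we trade on the NEXT bar's return
    df['returns'] = df['Close'].pct_change() * df['position'].shift(1)
    return df['returns'].dropna()
```

The one-bar shift on the position column is the important detail: without it the backtest would act on a crossover using the same bar’s return, introducing look-ahead bias.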

Well, it’s time for part 4 of our mini-series outlining how to create a program to generate performance reports in nice, fancy-looking HTML format that we can render in our browser and interact with (to a certain extent). The previous post can be found here. If you copy and paste the last iteration of the code for “main.py” and “template.html” from the last post into your own local files and recreate the folder and file structure outlined in part 1 (which can be found here), then you should be pretty much ready to follow on from here.

So I promised at the end of the last post that I would stop adding random charts and tables with additional KPIs and equity curves and what not, and try to add a bit of functionality that one may actually find useful even if it weren’t part of this whole specific performance report creation tutorial. I know many people are interested in the concept of Monte Carlo analysis and the insights it can offer above and beyond those statistics and visuals created from the actual return series of the investment/trading strategy under inspection.

This is the third part of the current “mini-series” providing a walk-through of how to create a “Report Generation” tool to allow the creation and display of a performance report for our (backtest) strategy equity series/returns.

To recap, the way we left the code and report output at the end of the last blog post is shown below. The “main.py” file looked like this: