IBKR Quant Blog




Quant

Storage Wars: Security Concerns Generate Interest In AI On-Premise Storage Solutions Like PureStorage (PSTG)



Note: The content of this post references an opinion and / or is presented for product demonstration purposes. It is provided for information purposes only. It does not constitute, nor is it intended to be investment advice. Seek a duly licensed professional for investment advice.

AI (artificial intelligence) was certainly the buzzword of this past year, influencing the conversations of most tech companies and taking up increasing mindshare among Fortune 500 leaders across all industries.

In fact, we recently used Sentieo to take a look at mentions of AI in earnings call transcripts, and the number of mentions is growing exponentially. Here’s a snapshot from our recent Sentiment Analysis Quarterly Report:

[Figure: AI mentions in earnings call transcripts, from Sentieo's Sentiment Analysis Quarterly Report]

(For a full analysis of AI and other top keywords, download the full report here).

Companies looking to incorporate AI and machine learning into all aspects of their businesses also need to incorporate AI into their data storage systems. Currently, the top leaders in AI cloud storage services are Amazon Web Services (AWS), Microsoft's Azure, Google Cloud Platform (GCP), and IBM's IBM Cloud and Watson. However, as data security and compliance become increasingly pressing concerns (especially in data-security-sensitive businesses like Financial Services and Healthcare), many companies are turning away from the cloud and looking toward on-premise data storage solutions to increase their privacy and control.

Jumping off from its recent partnership with Nvidia, PureStorage has created one of the first on-premise, AI-enabled solutions to hit the marketplace. For companies that don't want to host data in the cloud (i.e. that need to stay on-premise), there are currently no options outside of this new PSTG and NVDA offering. PureStorage may also be able to capitalize on "sole source" contracts with government institutions (circumventing the competitive bid process); these are $5-15M storage contracts with the DoD, NASA, etc.

We took a look at PureStorage (PSTG) through the lens of Sentieo’s Mosaic tool, which plots alternative data that includes Google Trends, Alexa Website Data, and Twitter mentions. Alternative datasets like these can provide an edge in analyzing consumer-facing businesses, as they often have a high correlation with revenue growth and are available ahead of traditional financial metrics for the period. As consumer behavior shifts more and more towards digital, indicators like these have become more predictive of tech and consumer company results.

 

[Figure: Sentieo Mosaic for PSTG - Google Trends, Twitter mentions, and Alexa website visits]

 

What we see above is that Google Trends (green line), Twitter mentions (blue line), and Alexa website visits (red line) are all trending up, very likely due to the announcement of this highly AI-optimized solution born of PureStorage’s partnership with Nvidia. While indicators for PureStorage are ticking up, we don’t necessarily expect this to impact this quarter’s earnings. However, we do expect higher guidance for the next few quarters as PSTG rides the AI wave until other on-premise solutions catch up.

We’ll be keeping our eye on PSTG until its earnings call in late May, but based on the alternative data we’ve seen, we like its prospects for growth.

-------------------------

About Sentieo
Made up of former hedge fund analysts, Sentieo is familiar with the challenge of gathering information to find the key data point that has the potential to make or break an investment thesis. With new datasets appearing daily, the job of an investor continues to grow more challenging and complex. This is the inspiration behind Sentieo.

Sentieo is a financial data platform underpinned by search technology. Sentieo overlays search, collaboration, and automation on key aspects of an analyst's workflow so that investors can spend less time searching and more time analyzing. https://www.sentieo.com/

 

This article is from Sentieo and is being posted with Sentieo's permission. The views expressed in this article are solely those of the author and/or Sentieo and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Quant

SAS, R, or Python


2017 SAS, R, or Python Flash Survey Results

By Burtch Works

 

As many of you probably know, over the past few years we've been gauging statistical tool preferences among data scientists and predictive analytics professionals by sending out a flash survey to our network: the first two years weighing SAS vs. R, and then adding Python to the mix last year as its libraries expanded.

Now, as one might imagine, the discussion is always rather spirited in nature – we’ve had over 1,000 responses every year – and reading the comments has become one of our favorite parts of doing the survey, so feel free to chime in!

To keep the comparison simple, we only asked one question: Which do you prefer to use – SAS, R, or Python?

[Figure: 2017 flash survey results - SAS, R, or Python preference]

 

Over the past four years we’ve seen preference for open source tools steadily climbing, with 66% of respondents choosing R or Python this year. Python climbed from 20% in 2016 to 26% this year.

 

Each year we also match responses to demographic information, to show how these preferences break down by factors like region, industry, education, years of experience, and more.

 

Similar to last year, the largest proportion of Python supporters are on the West Coast and in the Northeast; however, the Mountain region is close behind the Northeast, and all regions saw at least some increase in Python preference. R preference is highest in the Midwest, and SAS preference is highest in the Southeast.

 

Open source preference remains high in Tech/Telecom and preference for SAS continues to be higher in more regulated industries like Financial Services and Pharmaceuticals.

 

Professionals with a Ph.D. are the most likely to prefer open source tools, likely due to the prevalence of R and Python usage in research and academic programs, and the foundation of experience it establishes as they move into business.

 

 

 

Preference for open source tools is by far the highest among professionals with five or fewer years' experience. Even as the specific proportions have changed, this trend has remained fairly constant over the years that we've done the survey.

 

Visit Burtch Works website to read the full article: https://www.burtchworks.com/2017/06/19/2017-sas-r-python-flash-survey-results/

 

 

About Burtch Works

Burtch Works (https://www.burtchworks.com/) is a quantitative and marketing research recruiting firm. Follow them on social media for #datascience, #analytics, and #marketingresearch career news!

 

This article is from Burtch Works and is being posted with Burtch Works’s permission. The views expressed in this article are solely those of the author and/or Burtch Works and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Quant

R Tip of the Month: mclapply



 

By Majeed Simaan, PhD

 

One of the keynote lectures from last week's R in Finance conference focused on parallel computing. It was an excellent lecture delivered by Professor Norman S. Matloff from UC Davis. The lecture focused on challenges faced in parallel computing when dealing with time series analysis, which is recursive in nature. Nonetheless, it also stressed the power of R and the advancement of the current libraries to perform parallel computing. The lecture slides should be uploaded to the online program. In this vignette, I will illustrate the usage of the mclapply function from the parallel package, which I find super friendly to deploy.

To get started, I will take a look at the SPY ETF along with AAPL:

library(quantmod)
# Column 6 of the quantmod output is the adjusted close
P1 <- get(getSymbols("SPY", from = "1990-01-01"))[, 6]
P2 <- get(getSymbols("AAPL", from = "1990-01-01"))[, 6]
P <- merge(P1, P2)
# Simple returns: P_t / P_{t-1} - 1
R <- na.omit(P / lag(P)) - 1
names(R) <- c("SPY", "AAPL")

In particular, I will test the computation time needed to estimate AAPL's beta with the SPY ETF. To do so, I create a function named beta.f that takes i as its main argument. The function randomly samples 50% of the data using a fixed seed i and computes the market beta for AAPL.

beta.f <- function(i) {
  # Randomly sample 50% of the rows, using i as the seed
  set.seed(i)
  R.i <- R[sample(1:nrow(R), floor(0.5 * nrow(R))), ]
  # Regress AAPL on SPY and return the market beta
  lm.i <- lm(AAPL ~ SPY, data = R.i)
  beta.i <- summary(lm.i)$coefficients["SPY", 1]
  return(beta.i)
}

I run the computation twice over a sequence of i integers - once using lapply and once using mclapply. The latter runs in the same fashion as the former, making it extremely easy to implement:

library(parallel)
N <- 10^2
f1 <- function() mean(unlist(lapply(1:N, beta.f)))
f2 <- function() mean(unlist(mclapply(1:N, beta.f)) )
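
By default, mclapply spreads the work across getOption("mc.cores", 2L) worker processes. As an optional variant (illustrative only - the timings reported below were produced with the defaults above), the core count can be made explicit:

# Illustrative variant: use every core reported by detectCores().
# Not part of the original benchmark below.
n.cores <- detectCores()
f2.all.cores <- function() mean(unlist(mclapply(1:N, beta.f, mc.cores = n.cores)))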

To compare the computation time that f1 and f2 take to run, I refer to the microbenchmark library for a robust perspective. The main function from the library is microbenchmark, whose main arguments are the functions we would like to evaluate - in our case, f1 and f2. Additionally, we can add an input that determines how many times we would like to run these functions, which provides multiple perspectives on the computational time needed to run each one.

library(microbenchmark)
ds.time <- microbenchmark(Regular = f1(),Parallel = f2(),times = 100)
ds.time
## Unit: milliseconds
##      expr      min       lq     mean   median        uq      max neval cld
##   Regular 785.0485 891.9385 985.5360 955.2429 1028.3437 1537.644   100   b
##  Parallel 445.8762 524.5227 625.4168 579.8332  712.6159 1000.358   100   a

We observe that, on average, mclapply runs significantly faster than the base lapply function. Additionally, one can refer to the autoplot function from ggplot2 to visualize the distribution of run times for each function, by simply running the following command:

library(ggplot2)
autoplot(ds.time)

[Figure: autoplot of the microbenchmark timing distributions]

Summary

Overall, this vignette demonstrates the computation-time gains from parallel computing for a specific task. Note that the illustration exhibited here was conducted on a Linux OS; mclapply relies on forking, which is not available on Windows, so it will not deliver the same speedup there. Users are advised to continue their studies on the topic in order to understand whether (and under what conditions) parallel computing improves performance. Check the following notes by Josh Errickson for further reading on the topic.
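
For Windows users, a cluster-based approach offers a similar speedup. The sketch below (assuming the R returns object and beta.f function defined earlier in this post) uses parLapply, which relies on socket workers rather than forking:

# A rough Windows-friendly alternative using a socket cluster (a sketch, not part of the benchmark above).
cl <- makeCluster(detectCores())
clusterEvalQ(cl, library(quantmod))           # load xts/zoo methods on each worker
clusterExport(cl, varlist = c("R", "beta.f")) # socket workers need objects exported explicitly
betas <- parLapply(cl, 1:N, beta.f)
stopCluster(cl)
mean(unlist(betas))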

 

Visit Majeed's GitHub – IBKR-R corner for all of his R tips, and to learn more about his expertise in R: https://github.com/simaan84

Majeed Simaan, PhD in Finance, is well versed in research areas related to banking, asset pricing, and financial modeling. His research interests revolve around banking and risk management, with emphasis on asset allocation and pricing. He has been involved in a number of projects that apply state-of-the-art empirical research tools in the areas of financial networks (interconnectedness), machine learning, and textual analysis. His research has been published in the International Review of Economics and Finance and the Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence. Majeed also pursued graduate training in the area of Mathematical Finance at the London School of Economics (LSE). He has a strong quantitative background in both computing and statistical learning. He holds both a BA and an MA in Statistics from the University of Haifa with a specialization in actuarial science.

This article is from Majeed Simaan and is being posted with Majeed Simaan's permission. The views expressed in this article are solely those of the author and/or Majeed Simaan and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Quant

Fama French Write up - by Jonathan Regenstein


In this post (based on a talk I gave at R in Finance 2018; the videos haven't been posted yet, but I'll add a link when they have been), we lay the groundwork for a Shiny dashboard that allows a user to construct a portfolio, choose a set of Fama French factors, regress portfolio returns on those factors, and visualize the results.

The final app is viewable here.

Before we get to Shiny, let's walk through how to do this in a Notebook to test our functions, code and results before wrapping it into an interactive dashboard. Next time, we will port to Shiny.

We will work with the following assets and weights to construct a portfolio:

+ SPY (S&P500 fund) weighted 25%
+ EFA (a non-US equities fund) weighted 25%
+ IJS (a small-cap value fund) weighted 20%
+ EEM (an emerging-mkts fund) weighted 20%
+ AGG (a bond fund) weighted 10%

Get Daily Returns and Build Portfolio

First, we import daily prices from a relatively new data source, tiingo, making use of the riingo package (a good vignette for which is here). I just started using tiingo and have really liked its performance thus far. Plus, it seems that fundamental data may be in the offing.
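
The code in this post also leans on several other packages; a minimal setup sketch (package names inferred from the functions used below, so adjust as needed) looks like this:

# Packages assumed by the code that follows; the original post may load them elsewhere.
library(tidyverse)    # dplyr, readr, purrr, tidyr, ggplot2
library(lubridate)    # ymd(), parse_date_time()
library(riingo)       # tiingo prices
library(tidyquant)    # tq_portfolio()
library(broom)        # tidy(), glance()
library(tibbletime)   # rollify()
library(timetk)       # tk_xts()
library(highcharter)  # hc_* charting functions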

The first step is to create an API key and riingo makes that quite convenient:

riingo_browse_signup()
# This requires that you are signed in on the site once you sign up
riingo_browse_token()

Then we set our key for use this session with:

# Need an API key for tiingo
riingo_set_token("your api key here")

Now we can start importing prices. Let's first choose our 5 tickers and store them in a vector called symbols.

# The symbols vector holds our tickers.
symbols <- c("SPY","EFA", "IJS", "EEM","AGG")

And now use the riingo_prices() function to get our daily prices. Note that we can specify a start_date and could have chosen an end_date as well.

I want to keep only the tickers, dates and adjusted closing prices and will use dplyr's select() function to keep those columns. This is possible because riingo_prices() returned a tidy tibble.

# Get daily prices.
prices_riingo <-
  riingo_prices(symbols,
                start_date = "2013-01-01") %>%
  select(ticker, date, adjClose)

We convert to log returns using mutate(returns = (log(adjClose) - log(lag(adjClose)))).

# Convert to returns
returns_riingo <-
  prices_riingo %>%
  group_by(ticker) %>%
  mutate(returns = (log(adjClose) - log(lag(adjClose))))

Next we want to convert those individual returns to portfolio returns, and that means choosing portfolio weights.

# Create a vector of weights
w <- c(0.25, 0.25, 0.20, 0.20, 0.10)

And now we pass the individual returns and weights to dplyr.

# Create a portfolio and calculate returns
portfolio_riingo <-
   returns_riingo %>%
  mutate(weights = case_when(
      ticker == symbols[1] ~ w[1],
      ticker == symbols[2] ~ w[2],
      ticker == symbols[3] ~ w[3],
      ticker == symbols[4] ~ w[4],
      ticker == symbols[5] ~ w[5]),
  weighted_returns = returns * weights) %>%
  group_by(date) %>%
  summarise(returns = sum(weighted_returns))

head(portfolio_riingo)

We could have converted to portfolio returns using tq_portfolio() from the tidyquant package. That probably would be better if we had 100 assets and 100 weights and did not want to grind through those case_when() lines above. Still, I like to run the portfolio once via the more verbose dplyr method because it forces me to walk through the assets and their weights.

Here is the more succinct tq_portfolio() version.

portfolio_returns_tq <-
  returns_riingo %>%
  tq_portfolio(assets_col = ticker,
               returns_col = returns,
               weights = w,
               col_rename = "returns")

head(portfolio_returns_tq)

Importing daily prices and converting to portfolio returns is not a complex job, but it's still good practice to detail the steps for when our future self or a colleague wishes to revisit this work in 6 months. We will also see how this code flow gets ported almost directly over to our Shiny application.

Importing and Wrangling the Fama French Factors

Next we need to import Fama French factor data. Luckily, FF make their factor data available on their website. We are going to document each step for importing and cleaning this data, to an extent that might be overkill. It can be a grind to document these steps now, but it is a time saver later when we need to update our Shiny app or Notebook. If someone else needs to update our work in the future, detailed data import steps are crucial.

Have a look at the website where factor data is available.

http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html

The data are packaged as zip files so we'll need to do a bit more than call read_csv().
 

We will use the tempfile() function from base R to create a variable called temp, and will store the zipped file there.

Now we invoke download.file() and pass it the URL address of the zip, which for the daily Global 5 Factors is

http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/Global_5_Factors_Daily_CSV.zip

However, I choose not to pass that URL in directly; instead, I paste it together in pieces with

factors_input <- "Global_5_Factors_Daily"

factors_address <- paste("http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/", factors_input, "_CSV.zip", sep="" )

 

The reason for that is eventually we want to give the user the ability to choose different factors in the Shiny app, meaning the user is choosing a different URL end point depending on which zip is chosen.

We will enable that by having the user choose a different factors_input variable, which then gets pasted into the URL for download. We can toggle over to the Shiny app and see how this looks as a user input.
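
For a rough idea of how that input might look in Shiny (the input id and list of choices here are illustrative, not taken from the actual app):

# Hypothetical Shiny input that supplies factors_input; the choices are example
# file stems from the Fama French site and may differ from the real app.
selectInput("factors_input",
            label = "Fama French factor set",
            choices = c("Global_5_Factors_Daily",
                        "Global_3_Factors_Daily",
                        "North_America_5_Factors_Daily"),
            selected = "Global_5_Factors_Daily")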

Next, we unzip that data with the unz() function and read the CSV file using read_csv().

factors_input <- "Global_5_Factors_Daily"

factors_address <-
  paste("http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/",
      factors_input,
      "_CSV.zip",
      sep="" )

factors_csv_name <- paste(factors_input, ".csv", sep="")

temp <- tempfile()

download.file(
  # location of the file to be downloaded
  factors_address,
  # where we want R to store that file
  temp)

Global_5_Factors <-
  read_csv(unz(temp, factors_csv_name))

head(Global_5_Factors)

Have a quick look and notice that the object is not at all what we were expecting.

We need to clean up the metadata by skipping a few rows with skip = 6. Each time we access data from a new source there can be all sorts of maintenance to be performed. And we need to document it!

Global_5_Factors <-
  read_csv(unz(temp, factors_csv_name),
  skip = 6 )
head(Global_5_Factors)

Notice the format of the X1 column, which is the date. That doesn't look like it will play nicely with our date format for portfolio returns.

We can change the name of the column with rename(date = X1) and coerce to a nicer format with ymd(parse_date_time(date, "%Y%m%d")) from the lubridate package.

Global_5_Factors <-
  read_csv(unz(temp, factors_csv_name), skip = 6 ) %>%
  rename(date = X1, MKT = `Mkt-RF`) %>%
  mutate(date = ymd(parse_date_time(date, "%Y%m%d")))
head(Global_5_Factors)

It looks good, but there is one problem. Fama French quote their factors in percent, which is a different scale from our daily decimal returns - their daily risk-free rate shows as .03, for example. We need to divide the FF factors by 100.

Let's do that with mutate_if(is.numeric, funs(. / 100)).

Global_5_Factors <-
  read_csv(unz(temp, factors_csv_name), skip = 6 ) %>%
  rename(date = X1, MKT = `Mkt-RF`) %>%
  mutate(date = ymd(parse_date_time(date, "%Y%m%d")))%>%
  mutate_if(is.numeric, funs(. / 100))

tail(Global_5_Factors)

Here we display the end of the Fama French observations and can see that they are not updated daily. The data run through the end of April 2018. We will need to deal with that when running our analysis.

In general, our Fama French data object looks good and we were perhaps a bit too painstaking about the path from a zipped CSV to a readable data frame object.

This particular path can be partially reused for other zipped files, but the more important idea is to document the data provenance that sits behind a Shiny application or any model that might be headed to production. It is a grind in the beginning but a time saver in the future. We can toggle over to the Shiny application and see how this is generalized to whatever Fama French series is chosen by the user.

To the Analysis

We now have two objects, portfolio_riingo and Global_5_Factors, and we want to regress a dependent variable from the former on several independent variables from the latter.

To do that, we can combine them into one object and use mutate() to run the model. It's a two-step process to combine them. Let's use left_join() to combine them based on the column they have in common, date.

Not only will this create a new object for us, it acts as a check that the dates line up exactly because wherever they do not, we will see an NA.

portfolio_riingo_joined <-
  portfolio_riingo %>%
  mutate(date = ymd(date)) %>%
  left_join(Global_5_Factors) %>%
  mutate(Returns_excess = returns - RF) %>%
  na.omit()

head(portfolio_riingo_joined)
tail(portfolio_riingo_joined)

Notice that the Fama French factors are not current up to today. For any portfolio prices that were after April 30, 2018, the left_join() put an NA. We cleaned those up with na.omit().

We are finally ready for our substance: testing the Fama French factors. Nothing fancy here; we call do(model = lm(Returns_excess ~ MKT + SMB + HML + RMW + CMA, data = .)) and clean up the results with tidy(model).


ff_dplyr_byhand <-
  portfolio_riingo_joined %>%
  do(model = lm(Returns_excess ~ MKT + SMB + HML + RMW + CMA, data = .)) %>%
  tidy(model)

ff_dplyr_byhand

We will display this table in the Shiny app and could probably stop here, but let's also add some rolling visualizations of model results. To do that, we first need to fit our model on a rolling basis and can use the rollify() function from tibbletime to create a rolling model.

First, we choose a rolling window of 100 and then define our rolling function.

window <- 100

rolling_lm <- rollify(.f = function(Returns_excess, MKT, SMB, HML, RMW, CMA) {
                        lm(Returns_excess ~ MKT + SMB + HML + RMW + CMA)
                      },
                      window = window,
                      unlist = FALSE)

Next we apply that function, which we called rolling_lm(), to our data frame using mutate().

rolling_ff <-
  portfolio_riingo_joined %>%
  mutate(rolling_lm = rolling_lm(Returns_excess, MKT, SMB, HML, RMW, CMA)) %>%
  slice(-1:-window)

tail(rolling_ff %>% select(date, rolling_lm))

Notice our object has the model results nested in the rolling_lm column. That is substantively fine, but not ideal for creating visualizations on the fly in Shiny.

Let's extract the r.squared from each fitted model with map(rolling_lm, glance).

rolling_ff_glance <-
  rolling_ff %>%
  mutate(glanced = map(rolling_lm, glance)) %>%
  unnest(glanced) %>%
  select(date, r.squared)

head(rolling_ff_glance)

Next we visualize with highcharter via the hc_add_series() function. I prefer to pass an xts object to highcharter so first we will coerce to xts.

rolling_r_squared_xts <-
  rolling_ff_glance %>%
  tk_xts(date_var = date)

highchart(type = "stock") %>%
  hc_title(text = "Rolling R Squared") %>%
  hc_add_series(rolling_r_squared_xts, color = "cornflowerblue") %>%
  hc_add_theme(hc_theme_flat()) %>%
  hc_navigator(enabled = FALSE) %>%
  hc_scrollbar(enabled = FALSE)

Now we can port that rolling visualization over to the Shiny app.
It might also be nice to chart the rolling beta for each factor. We already have those stored in our rolling model; we just need to extract them.

Let's invoke tidy() from the broom package, and then unnest().

rolling_ff_tidy <-
  rolling_ff %>%
  mutate(tidied = map(rolling_lm, tidy)) %>%
  unnest(tidied) %>%
  select(date, term, estimate, std.error, statistic, p.value)

head(rolling_ff_tidy)

We now have the rolling beta estimates for each factor in the estimate column. Let's chart with ggplot().

We want each term to get its own color so we group_by(term), then call ggplot() and geom_line().

rolling_ff_tidy %>%
  group_by(term) %>%
  filter(term != "(Intercept)") %>%
  ggplot(aes(x = date, y = estimate, color = term)) +
  geom_line()

We have ground through a lot in this Notebook - data import, wrangling, tidying, regression, rolling, and visualization - and in so doing have constructed the pieces that will eventually make up our Shiny dashboard.

Next time, we will walk through exactly how to wrap this work into an interactive app. See you then!

---------------------------------------

Written by Jonathan Regenstein, Director of Financial Services, RStudio. RStudio offers Open source and enterprise-ready professional software for R. Learn more here: https://www.rstudio.com/

This article is written by Jonathan Regenstein, Director of Financial Services, RStudio, and is being posted with his permission. The views expressed in this article are solely those of the author and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Quant

Trade Ideas - It's a HOLLY & Human Market: Market Impact of Securing Alpha with Artificial Intelligence


Join us for a free webinar on Thursday, June 21, 2018 at 12:00 PM


Register


Trade Ideas - It's a HOLLY & Human Market: Market Impact of Securing Alpha with Artificial Intelligence

Speaker:    David M. Aferiat, Co-Founder, Managing Partner

Sponsored by:     Trade Ideas


Information posted on IBKR Quant that is provided by third-parties and not by Interactive Brokers does NOT constitute a recommendation by Interactive Brokers that you should contract for the services of that third party. Third-party participants who contribute to IBKR Quant are independent of Interactive Brokers and Interactive Brokers does not make any representations or warranties concerning the services offered, their past or future performance, or the accuracy of the information provided by the third party. Past performance is no guarantee of future results.







Disclosures

We appreciate your feedback. If you have any questions or comments about IBKR Quant Blog please contact ibkrquant@ibkr.com.

The material (including articles and commentary) provided on IBKR Quant Blog is offered for informational purposes only. The posted material is NOT a recommendation by Interactive Brokers (IB) that you or your clients should contract for the services of or invest with any of the independent advisors or hedge funds or others who may post on IBKR Quant Blog or invest with any advisors or hedge funds. The advisors, hedge funds and other analysts who may post on IBKR Quant Blog are independent of IB and IB does not make any representations or warranties concerning the past or future performance of these advisors, hedge funds and others or the accuracy of the information they provide. Interactive Brokers does not conduct a "suitability review" to make sure the trading of any advisor or hedge fund or other party is suitable for you.

Securities or other financial instruments mentioned in the material posted are not suitable for all investors. The material posted does not take into account your particular investment objectives, financial situations or needs and is not intended as a recommendation to you of any particular securities, financial instruments or strategies. Before making any investment or trade, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice. Past performance is no guarantee of future results.

Any information provided by third parties has been obtained from sources believed to be reliable and accurate; however, IB does not warrant its accuracy and assumes no responsibility for any errors or omissions.

Any information posted by employees of IB or an affiliated company is based upon information that is believed to be reliable. However, neither IB nor its affiliates warrant its completeness, accuracy or adequacy. IB does not make any representations or warranties concerning the past or future performance of any financial instrument. By posting material on IB Quant Blog, IB is not representing that any particular financial instrument or trading strategy is appropriate for you.