Predicting RevPAF to find out where the next hot real estate market will be.
Market selection is a fundamental aspect of real estate investing — choosing the right market helps focus a real estate investor’s time and money towards the subset of transactions that may be more likely to outperform over time.
Traditionally, real estate investors select markets based on market fundamentals and use heuristics or occasionally simple statistical techniques to build selection metrics. These approaches often present a myopic view of historical and forward-looking market trends. At Cadre, we believe that by leveraging alternative datasets alongside traditional data and using cutting-edge machine learning techniques, we can gain a better understanding of markets and in turn make more informed investment decisions for our clients.
Identifying and ranking promising markets is no easy task. Ground truth data is sparse and rarely updated, the life of an asset is long, and the cost of a bad prediction is high.
In the past, our models would assess market potential by forecasting future rent prices across markets. More recently, we have shifted our models to focus on RevPAF, or revenue per available foot. Defined as the product of occupancy rate and effective rent, RevPAF is a measure tracked by real estate professionals to index markets based on earnings from all assets. Instead of relying on rent alone, we can use RevPAF as an all-encompassing score of a market's earning potential because it captures top-line revenue, one of the most influential drivers of NOI (Net Operating Income) growth and, in turn, of an asset's value.
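As a toy illustration of the metric (with invented numbers and illustrative column names), here is how RevPAF could be computed from occupancy and effective rent:

```python
import pandas as pd

# Hypothetical market-level inputs; column names and values are illustrative.
market = pd.DataFrame({
    "msa": ["MSA_A", "MSA_B"],
    "effective_rent_psf": [2.10, 1.65],  # monthly effective rent per square foot ($)
    "occupancy_rate": [0.94, 0.90],      # fraction of leasable space occupied
})

# RevPAF = occupancy rate x effective rent (revenue per available foot)
market["revpaf"] = market["occupancy_rate"] * market["effective_rent_psf"]

# Year-over-year RevPAF growth would then be computed per MSA from a time series,
# e.g. panel.groupby("msa")["revpaf"].pct_change(periods=1)
print(market)
```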
Modeling RevPAF one, three, and five years ahead allows us to gain a better understanding of market growth and subsequently reduces our search space for deals.
In this post, we will walk through how we use machine learning techniques to model future RevPAF growth. While our internal models compute RevPAF at a submarket and ZIP-code level, this post will describe models that forecast RevPAF growth at the MSA (Metropolitan Statistical Area) level.
Based on the industry knowledge of our seasoned in-house investing team, our working hypothesis is the following:
Generally, markets with strong demographics (jobs, median income, population, educational attainment, etc.) lead to strong market fundamentals (rents, occupancy, inventory, etc.) that in turn drive RevPAF growth. In other words, if we were to take a current snapshot of our dataset and rank every market by its demographic attributes, that ranking should align with the outcomes of our RevPAF growth models.
Here’s how we figured out whether our hunch was right.
In order to utilize RevPAF to identify promising markets, we want to be able to do three things:
The following diagram describes the steps we take in order to achieve these goals. We’ll describe each of these steps in detail:
The first step in our analysis is to consolidate a time series dataset describing market behavior. Our multi-dimensional datasets include, but are not limited to:
Our focus as a business is investing in CRE (commercial real estate). Unfortunately, CRE data — whether transaction data or historical market trends data — tends to be sparse and often lacks an extensive history. With around seven million total multi-family and office buildings in the US, annual transaction volumes are low (in the thousands), and the MSA aggregate data isn’t large enough for models to capture patterns without overfitting. To solve for this, we have spent a lot of time assessing and acquiring alternative datasets to use alongside traditional CRE datasets as proxies to evaluate markets.
Each dataset that we ingest is sliced by geography (latitude/longitude, market, submarket, MSA) and a temporal component (monthly, quarterly, annually). We form a feature set by aggregating all lat/long, ZIP Code, and submarket data up to an MSA level. This initial dataset allows us to have a clear view on all markets and their attributes per year.
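A minimal sketch of what this roll-up might look like in pandas, assuming hypothetical column names and with the caveat that in practice intensive metrics would be inventory-weighted rather than simply averaged:

```python
import pandas as pd

# Hypothetical raw records keyed by ZIP code and year; column names are illustrative.
raw = pd.DataFrame({
    "zip_code": ["60601", "60602", "77002", "77003"],
    "msa":      ["Chicago", "Chicago", "Houston", "Houston"],
    "year":     [2015, 2015, 2015, 2015],
    "effective_rent_psf": [2.30, 2.10, 1.70, 1.60],
    "occupancy_rate":     [0.95, 0.93, 0.91, 0.89],
    "inventory_sf":       [1.2e6, 0.8e6, 2.0e6, 1.5e6],
})

# Roll sub-MSA geographies up to one row per (MSA, year). Extensive quantities
# (inventory) are summed; intensive ones (rent, occupancy) are averaged here.
msa_year = (
    raw.groupby(["msa", "year"])
       .agg(effective_rent_psf=("effective_rent_psf", "mean"),
            occupancy_rate=("occupancy_rate", "mean"),
            inventory_sf=("inventory_sf", "sum"))
       .reset_index()
)
print(msa_year)
```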
However, since many of our data sources span different years and are typically quite sparse, our combined dataset ends up being relatively large and very sparse, with a fill rate as low as 20% for some features.
As a result, we conduct an exploratory analysis on our feature set to understand the missing data.
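A small sketch of the kind of missingness check we mean, assuming the feature matrix is a pandas DataFrame (the helper name and threshold below are illustrative):

```python
import pandas as pd

def missingness_report(features: pd.DataFrame) -> pd.DataFrame:
    """Summarize the fill rate of each feature column (hypothetical helper)."""
    return pd.DataFrame({
        "fill_rate": features.notna().mean(),  # share of non-missing values per column
        "n_missing": features.isna().sum(),
    }).sort_values("fill_rate")

# Example usage: flag features below a 20% fill rate for closer inspection.
# report = missingness_report(msa_year_features)
# sparse_features = report[report["fill_rate"] < 0.20].index.tolist()
```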
To test our hypothesis that demographics drive market fundamentals, which in turn drive RevPAF growth, we need to ensure our feature set includes as much history of demographics and market fundamentals as possible.
Let’s examine the depth of market fundamentals and demographics data we store:
This diagram shows that the overlap of all datasets is 2009–2015. Seven years might be sufficient for training, but with 200+ features and 388 MSAs it is certainly not enough data to back-test our models.
We have a few choices to expand our dataset:
Given what we know about the real estate market, removing population wouldn't seem to make sense: real estate is driven by supply/demand dynamics, and population is a clear indicator of demand. But we need to evaluate that assumption.
We can test how important population and other variables are to RevPAF by trying the following approaches:
To our surprise, demographic features such as affordability and median income outweigh population in terms of having predictive power for RevPAF growth.
The results are not hard to fathom: population is highly correlated with both demand-side and supply-side market trends, so most of the information it carries is already captured by other features, and eliminating it from our feature set has a negligible influence on our model.
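As one illustration of such a check (not our exact procedure), permutation importance from a tree ensemble gives a quick read on how much each feature, population included, contributes to predicting forward RevPAF growth; the names below are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.inspection import permutation_importance

def rank_feature_importance(X: pd.DataFrame, y: pd.Series) -> pd.Series:
    """Fit a tree ensemble and rank features by permutation importance.

    X holds demographic / fundamentals features per (MSA, year); y is the
    observed forward RevPAF growth. Column names are illustrative.
    """
    model = ExtraTreesRegressor(n_estimators=300, random_state=0)
    model.fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    return pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False)

# ranked = rank_feature_importance(X_train, y_train_1yr)
# ranked[["population", "affordability", "median_income"]]  # compare directly
```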
As a byproduct of this exercise, we are also able to resolve one of the challenges of using multiple data sources that may have different values for the same attribute (e.g. two sources might report two different population estimates for a given city in the same year). This process allows us to decide which duplicated demographic source to use. For instance, Employment (3rd party) and Median Income (ACS) showed little significance in our models, so we discarded them from our feature set. Though the discarded duplicates might still carry some predictive power, we are only concerned with how effective these variables are at predicting RevPAF growth (our response variable).
The result: We should use Employment (BLS), Median Income (3rd party), and remove Population from our feature set.
Since this dataset is relatively large and consists of many correlated variables, we try running Principal Component Analysis (PCA) to reduce the dimensionality of our dataset and avoid falling victim to the Curse of Dimensionality. The idea is to combine features by finding linear combinations of the variables that capture most of the variance in the dataset.
Unfortunately, in our case, 120 components (more than 50% of our original dimensionality) are needed to capture only ~60% of the variance, so we decide to proceed without PCA.
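A sketch of the kind of check that leads to this conclusion, assuming a complete-case feature matrix and illustrative names:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def components_needed(X: np.ndarray, target_variance: float = 0.60) -> int:
    """How many principal components are needed to explain `target_variance`?

    X is the (rows = MSA-years, columns = features) matrix with no missing values.
    """
    X_scaled = StandardScaler().fit_transform(X)
    pca = PCA().fit(X_scaled)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumulative, target_variance) + 1)

# n = components_needed(X_complete)
# If n is a large share of the original feature count (as in our case),
# PCA offers little compression and we keep the raw features.
```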
Although we arrive at a feature set capturing as much relevant demographic and market fundamentals history as possible, the rest of our dataset still suffers from sparsity. We choose to impute these values in order to have a feature set that spans 2000–2017.
Some of the models we wish to run require a full matrix, so we first eliminate all variables with more than 40% missing values, since imputing them would likely create noise. For the remaining features, we try traditional imputation methods such as kNN, mean, and median imputation of a feature within a given year.
We also get creative with non-traditional imputation approaches by realizing that markets in the same time frame with similar population can behave comparably. For example, if the occupancy rate for Chicago 2009 is missing, we can impute it by taking an average of Houston, Philadelphia, and Los Angeles’ 2009 occupancy.
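A minimal sketch of this peer-market idea, assuming a panel with one row per (MSA, year) and illustrative column names; our production logic differs in the details:

```python
import pandas as pd

def impute_from_peer_markets(df: pd.DataFrame, feature: str, n_peers: int = 3) -> pd.Series:
    """Fill missing values of `feature` with the same-year average of the
    markets whose population is closest (illustrative sketch).

    `df` has one row per (msa, year) with 'population' and `feature` columns.
    """
    filled = df[feature].copy()
    for idx, row in df[df[feature].isna()].iterrows():
        same_year = df[(df["year"] == row["year"]) & df[feature].notna()]
        if same_year.empty:
            continue  # nothing to borrow from in this year
        peers = (same_year
                 .assign(pop_gap=(same_year["population"] - row["population"]).abs())
                 .nsmallest(n_peers, "pop_gap"))
        filled.loc[idx] = peers[feature].mean()
    return filled

# Example: msa_year["occupancy_rate"] = impute_from_peer_markets(msa_year, "occupancy_rate")
```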
We understand that our imputation approaches are inherently approximate, so we test which method to use for each feature by running our model exhaustively and picking the method that yields the lowest back-test error.
Though we have created our feature set, we can't yet apply tree-based models in a meaningful way because there is serial correlation in our data. Models like Random Forests build trees by randomly sampling subsets of the training data, which doesn't account for the importance of time in our dataset. In our case, future observations are clearly affected by past values.
To solve for this, we create a new feature set that includes information from the past in each row, or data point, in an attempt to capture temporal trends. Specifically, for each feature, we calculate one, three, and five year growth and its momentum (change in growth). We also add observed RevPAF growth one year ahead, three years ahead, and five years ahead as our response variables to train and test against.
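A sketch of this feature engineering step, assuming an annual panel indexed by (MSA, year) with illustrative column names:

```python
import pandas as pd

def add_growth_features(panel: pd.DataFrame, feature_cols: list) -> pd.DataFrame:
    """For each MSA, add trailing 1/3/5-year growth and momentum for every feature,
    plus forward 1/3/5-year RevPAF growth as response variables (illustrative)."""
    panel = panel.sort_values(["msa", "year"]).copy()
    grouped = panel.groupby("msa")
    for col in feature_cols:
        for horizon in (1, 3, 5):
            growth = grouped[col].pct_change(periods=horizon)
            panel[f"{col}_growth_{horizon}y"] = growth
            # momentum: change in the growth rate versus the prior year
            panel[f"{col}_momentum_{horizon}y"] = growth - growth.groupby(panel["msa"]).shift(1)
    for horizon in (1, 3, 5):
        # forward-looking targets: observed RevPAF growth `horizon` years ahead
        panel[f"revpaf_growth_fwd_{horizon}y"] = (
            grouped["revpaf"].shift(-horizon) / panel["revpaf"] - 1
        )
    return panel
```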
The new feature set looks something like:
We apply various time series and tree-based models on our feature set, measuring overall accuracy by computing a Spearman correlation between our predictions and observed results at each step in our cross validation and taking an average.
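A small sketch of that scoring metric, using SciPy's Spearman rank correlation (the helper name is ours):

```python
import numpy as np
from scipy.stats import spearmanr

def avg_spearman(fold_predictions, fold_observations) -> float:
    """Average Spearman rank correlation across cross-validation folds.

    Each element pairs the predicted and observed RevPAF growth for the
    markets in one held-out year (names are illustrative)."""
    scores = [spearmanr(pred, obs).correlation
              for pred, obs in zip(fold_predictions, fold_observations)]
    return float(np.mean(scores))
```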
Conventional cross validation techniques of randomly sampling the dataset into training, validation, and test sets do not work with time series data. This is again because random sampling does not take into account the temporal structure of the dataset. As a workaround, we make use of training windows to cross validate.
Specifically, we test two techniques for time series cross validation. The following examples show our methodology for a one year ahead prediction for 2017 and are inspired by the works of Rob Hyndman. These methods can be extended to predict three and five years ahead:
Forecasting with rolling window: For every year before the prediction year (2017), train and test all possible 10-year windows, i.e. train 2000–2009 and predict 2010, train 2001–2010 and predict 2011, … , train 2006–2015 and predict 2016. Measure accuracy by taking the average Spearman correlation over all predictions.
Forecasting with rolling prediction origin: Train and test on every year to date, starting with a 10-year window and rolling the prediction origin forward, i.e. train 2000–2009 and predict 2010, train 2000–2010 and predict 2011, … , train 2000–2015 and predict 2016. Measure accuracy by taking the average Spearman correlation over all predictions.
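A minimal sketch of the two splitting schemes, matching the window definitions above (the helper names are ours, not a library API):

```python
def rolling_window_splits(years, window: int = 10):
    """Fixed-length training windows: (2000-2009 -> 2010), (2001-2010 -> 2011), ..."""
    for i in range(len(years) - window):
        yield years[i:i + window], years[i + window]

def rolling_origin_splits(years, min_window: int = 10):
    """Expanding training windows: (2000-2009 -> 2010), (2000-2010 -> 2011), ..."""
    for i in range(min_window, len(years)):
        yield years[:i], years[i]

# Example for a one-year-ahead back-test through 2016:
# years = list(range(2000, 2017))
# for train_years, test_year in rolling_window_splits(years):
#     fit on rows with year in train_years, predict the test_year cross-section,
#     then average the per-year Spearman correlations (see avg_spearman above).
```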
Having a consistent methodology to train and test, we now run our models.
We first run a simple Linear Regression model to benchmark our predictions. Over 2000–2017, we achieve a weak average Spearman correlation of 0.33 over all markets. However, we know this is just a starting place.
We now apply various industry-proven machine learning regression models, including Random Forests, Extremely Randomized Trees, and XGBoost. We pick the best model and associated hyperparameters by taking the following steps when training:
The model and hyperparameters for our final model, the one used to predict unobserved (future) RevPAF growth, are simply the ones that yield the highest average Spearman correlation over our training set.
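A simplified sketch of that selection loop, with a deliberately small hyperparameter grid and illustrative names; an XGBoost regressor would plug into the same loop through its scikit-learn wrapper:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import ParameterGrid

# Candidate models with intentionally small grids (illustrative only).
CANDIDATES = {
    "random_forest": (RandomForestRegressor, {"n_estimators": [200, 500], "max_depth": [None, 10]}),
    "extra_trees":   (ExtraTreesRegressor,   {"n_estimators": [200, 500], "max_depth": [None, 10]}),
}

def select_model(panel, feature_cols, target_col, splits):
    """Return the (model name, params, score) with the highest average Spearman
    correlation over the back-test splits. Column names are illustrative."""
    splits = list(splits)  # splits may be a generator; reuse it for every grid point
    best = (None, None, -np.inf)
    for name, (cls, grid) in CANDIDATES.items():
        for params in ParameterGrid(grid):
            scores = []
            for train_years, test_year in splits:
                train = panel[panel["year"].isin(train_years)].dropna(subset=[target_col])
                test = panel[panel["year"] == test_year].dropna(subset=[target_col])
                model = cls(random_state=0, **params)
                model.fit(train[feature_cols], train[target_col])
                preds = model.predict(test[feature_cols])
                scores.append(spearmanr(preds, test[target_col]).correlation)
            score = float(np.mean(scores))
            if score > best[2]:
                best = (name, params, score)
    return best
```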
So what did our approach yield?
Extremely Randomized Trees (ER Trees) performed consistently better than other models for all years and the optimal hyperparameters did not vary much, which implied our approach was stable.
ER Trees is an extension of the Random Forest model where the “extra randomness” is induced by randomly choosing the threshold for each split. This results in an algorithm that generally has less variance, but at the cost of a higher bias. As a result, the algorithm is less prone to fitting to the noise in the training data and may have a better expected test (prediction) error.
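In scikit-learn terms (an illustration, not our production configuration), the two models share an API and differ mainly in how splits are chosen:

```python
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor

# Same interface, different split rule: Random Forests search for the best
# threshold among candidate features, while Extra-Trees draw the threshold at
# random, trading a little bias for lower variance.
rf = RandomForestRegressor(n_estimators=500, random_state=0)
et = ExtraTreesRegressor(n_estimators=500, random_state=0)
# They also differ in sampling: Extra-Trees fits each tree on the full training
# set by default (bootstrap=False), whereas Random Forests bootstrap-sample rows.
```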
We managed to forecast RevPAF a year ahead with an average Spearman correlation of 0.74 over our back-test, which suggests a very strong correlation between our predictions and observed RevPAF growth. This gives us confidence in using our model to estimate future RevPAF growth.
We also found that forecasting with a rolling window yielded better model predictions, since the CRE market is cyclical in nature with ~8–17 year cycles. Forecasting with a rolling prediction origin generally underpredicted RevPAF growth because every training window included the 2008 financial crisis.
The most prominent features for predicting RevPAF growth are affordability (rents/median income), employment, permits, stock, and median home prices.
We are also able to perform sanity checks on our projections by visualizing a current market snapshot where MSAs are ordered by decreasing predicted RevPAF growth and features are sorted by decreasing importance. This gives us a clear view of the current market status of the MSAs we see the most potential in, and of how their corresponding features rank against all other MSAs today.
For example (with dummy data):
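A minimal sketch, with invented numbers, of how such a snapshot table might be assembled before plotting:

```python
import numpy as np
import pandas as pd

# Dummy snapshot: MSAs ordered by predicted RevPAF growth (rows) against today's
# per-feature ranks, with features ordered by importance (columns). All values invented.
rng = np.random.default_rng(0)
msas = ["MSA_A", "MSA_B", "MSA_C", "MSA_D", "MSA_E"]
features = ["affordability", "employment", "permits", "stock", "median_home_price"]

snapshot = pd.DataFrame(rng.random((len(msas), len(features))), index=msas, columns=features)
feature_ranks = snapshot.rank(ascending=False).astype(int)  # 1 = best-ranked MSA today

predicted_growth = pd.Series(rng.random(len(msas)), index=msas, name="pred_revpaf_growth_1y")
ordered = feature_ranks.loc[predicted_growth.sort_values(ascending=False).index]
print(ordered)  # a heatmap of a table like this is what we inspect internally
```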
Consistent with our intuition, higher ranked markets generally, but not always, had a higher average rank across all features. This implies that our model results are somewhat in agreement with our hunch: MSAs with strong demographics today do see higher RevPAF growth predictions, but only when demand outpaces supply. After all, a strong and growing market this year is the best predictor of a strong and growing market next year. While market strength may be the best indicator of future performance, as investors we know that prior performance does not guarantee future success.
As a data team, we use the insights gleaned from our research to understand what drives the heuristics that investors often employ. It may be impossible to time the next cycle, but we can certainly use data to help our investors pinpoint market dynamics associated with growth.
Cadre is building the world’s first digital stock market for alternative assets. Interested in becoming an investor with us? Check out our site.
Educational Communication
The views expressed above are presented only for educational and informational purposes and are subject to change in the future. No specific securities or services are being promoted or offered herein.
Not Advice
This communication is not to be construed as investment, tax, or legal advice in relation to the relevant subject matter; investors must seek their own legal or other professional advice.
Performance Not Guaranteed
Past performance is no guarantee of future results. Any historical returns, expected returns, or probability projections are not guaranteed and may not reflect actual future performance.
Risk of Loss
All securities involve a high degree of risk and may result in partial or total loss of your investment.
Liquidity Not Guaranteed
Investments offered by Cadre are illiquid and there is never any guarantee that you will be able to exit your investments on the Secondary Market or at what price an exit (if any) will be achieved.
Not a Public Exchange
The Cadre Secondary Market is NOT a stock exchange or public securities exchange, there is no guarantee of liquidity and no guarantee that the Cadre Secondary Market will continue to operate or remain available to investors.
Opportunity Zones Disclosure
Any discussion regarding “Opportunity Zones” — including the viability of recycling proceeds from a sale or buyout — is based on advice received regarding the interpretation of provisions of the Tax Cut and Jobs Act of 2017 (the “Jobs Act”) and relevant guidances, including, among other things, two sets of proposed regulations and the final regulations issued by the IRS and Treasury Department in December of 2019. A number of unanswered questions still exist and various uncertainties remain as to the interpretation of the Jobs Act and the rules related to Opportunity Zones investments. We cannot predict what impact, if any, additional guidance, including future legislation, administrative rulings, or court decisions will have and there is risk that any investment marketed as an Opportunity Zone investment will not qualify for, and investors will not realize the benefits they expect from, an Opportunity Zone investment. We also cannot guarantee any specific benefit or outcome of any investment made in reliance upon the above.
Cadre makes no representations, express or implied, regarding the accuracy or completeness of this information, and the reader accepts all risks in relying on the above information for any purpose whatsoever. Any actual transactions described herein are for illustrative purposes only and, unless otherwise stated in the presentation, are presented as of underwriting and may not be indicative of actual performance. Transactions presented may have been selected based on a number of factors such as asset type, geography, or transaction date, among others. Certain information presented or relied upon in this presentation may have been obtained from third-party sources believed to be reliable, however, we do not guarantee the accuracy, completeness or fairness of the information presented.
No U.S. or foreign securities commission or regulatory authority has recommended or approved any investment or the accuracy or completeness of any of the information or materials provided by or through us.