Marc Gerstein
June 03, 2015
Portfolio Strategist specializing in quantitative fundamental equity modeling

Strategy Lab: Reviewing and Redefining a Value Protocol

In my last discussion of the Cherry-picking the Blue Chips model (http://hvst.co/1ePObZ3), initially introduced to Harvest Exchange on 3/12/15 (http://hvst.co/1cxNl1h) and now being implemented here via the portfolio portion of the platform, I discussed the need for a strategy review. Last time, I concluded that I want to keep the combined screening-and-ranking protocol I’d been using. Now, I want to drill down into the various aspects of the strategy, with today’s focus being Value.

Value influences this strategy in two respects. First, a stock cannot be eligible for inclusion in the portfolio unless it ranks 80 or better (out of 100) in a Value ranking system I created for Portfolio123 and which is being reassessed today. Second, once the stock passes that filter, as well as two others (membership in the S&P 500 and a rank of at least 80 under a Sentiment-based ranking system), it is sorted along with others that pass the screen under a multi-factor QVGM (Quality-Value-Growth-Momentum) ranking system; the Value ranking model comprises 25% of the QVGM system. So Value is an important stylistic consideration here.
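As a rough sketch of that two-stage protocol, here is some Python. The record layout and field names are hypothetical illustrations, not actual Portfolio123 syntax; and since only Value’s 25% share of QVGM is stated here, equal weights for the other three components are an assumption made purely for illustration.

```python
# Sketch of the screen-then-rank protocol described above. Field names
# are hypothetical; only Value's 25% share of QVGM is stated in the text,
# so equal weights for the other components are assumed for illustration.

stocks = [  # toy data, not real rankings
    {"ticker": "AAA", "in_sp500": True, "value": 85, "sentiment": 90,
     "quality": 70, "growth": 60, "momentum": 80},
    {"ticker": "BBB", "in_sp500": True, "value": 60, "sentiment": 95,
     "quality": 90, "growth": 85, "momentum": 75},
]

def passes_screen(s):
    """Eligibility: S&P 500 member, Value rank >= 80, Sentiment rank >= 80."""
    return s["in_sp500"] and s["value"] >= 80 and s["sentiment"] >= 80

def qvgm(s):
    """QVGM composite; Value carries a 25% weight per the text."""
    return 0.25 * (s["quality"] + s["value"] + s["growth"] + s["momentum"])

survivors = sorted(filter(passes_screen, stocks), key=qvgm, reverse=True)
print([s["ticker"] for s in survivors])  # ['AAA'] -- BBB fails the Value filter
```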

The Incumbent Value Model

Here are the factors presently used in the Portfolio123 “Basic: Value” ranking system:

·     PE with E based on reported trailing-12-month EPS

·     Forward PE with E based on the consensus analyst estimate of EPS for the current fiscal year

·     PEG, the PE to Growth ratio with PE calculated with reference to the current-year EPS estimate and with G being the consensus analyst estimate of long-term (i.e. 3- to 5-year) EPS growth

·     Price to Sales over the trailing 12 months

·     Price to trailing 12-month free cash flow

·     Price to Book Value

The weightings are as follows:

·     Value Based on Income Stream (65% of total)

o   Earnings (50% of category)

§ PE based on TTM EPS (33.33% of sub-category)

§ Forward PE (33.33% of sub-category)

§ PEG (33.33% of sub-category)

o   Other Earnings (50% of category)

§ Price/Sales (50% of sub-category)

§ Price/Free Cash Flow (50% of sub-category)

·     Value Based on Assets (35% of total)

o   Price/Book (100% of category)

Re-assessing PE

PE is huge. We all know that. So there’s no way I’m going to exclude this item. But I do see one aspect that I believe needs to change. The basic PE, the one most of us see all the time and often wind up using out of inertia, is a mess.

This is probably where you expect me to talk about creepy management teams and manipulation of earnings. I’m not doing that here. I’ll discuss earnings quality when we review the Quality portion of the model. My beef now is with FASB (the Financial Accounting Standards Board). Between what they require, what they permit, and what they ban, the EPS item nowadays is horribly polluted by non-recurring matters, the sort of things Graham and Dodd tell us we have to ignore.

We can adjust EPS on our own and re-compute PEs. But it takes a bit of elbow grease. Companies disclose non-recurring items on a pre-tax basis, but when it comes to reporting the tax consequences and the impact on EPS, we’re at the mercy of their good natures (ouch). So I’m going to use a bit of spit and chewing gum to come up with an alternative that probably won’t match actual reality, but which I think can serve us better than the conventional historic PE. I’m going to recompute E as (Operating Profit after Depreciation minus net interest expense) multiplied by 0.65, to give effect to a presumed 35% normal tax rate. I’m not going to bother dividing by the number of shares; I’ll simply compare my new E figure to market cap rather than price.

·     Therefore, I’m going to replace PE (trailing 12 months) with Market Cap-to-Adjusted E.
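A minimal sketch of that substitution, with made-up dollar figures (only the 0.65 multiplier, i.e. the presumed 35% normal tax rate, comes from the recipe above):

```python
# Market Cap to Adjusted Earnings, per the recipe above.
# All dollar figures are hypothetical, in $ millions.

operating_profit_after_dep = 120.0
net_interest_expense = 15.0
market_cap = 1_400.0

# Adjusted E: (operating profit after depreciation - net interest) x 0.65,
# reflecting the presumed 35% normal tax rate.
adjusted_e = (operating_profit_after_dep - net_interest_expense) * 0.65

mc_to_adj_e = market_cap / adjusted_e  # stands in for a trailing PE
print(round(mc_to_adj_e, 1))  # 20.5
```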

I’ll keep the Forward PE based on the current-year estimate, and PEG. Note, too, the PEG calculation as defined above. I think use of the forward-looking PE and growth figures gives me much more reliable measures than the historic figures used in many pre-packaged PEGs you are likely to encounter. And some of those pre-packaged PEGs can look pretty wacky if the growth rates are influenced by odd non-recurring events that find their way into EPS, something that can happen often if the growth rate they use is just from the most recent year.
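For concreteness, here is the PEG as defined above, computed with made-up inputs:

```python
# PEG as defined above: forward PE (price over the consensus current-year
# EPS estimate) divided by the consensus long-term (3- to 5-year) EPS
# growth estimate. All inputs are hypothetical.

price = 50.0
eps_est_current_year = 3.20
lt_growth_est_pct = 12.0  # expressed in percentage points

forward_pe = price / eps_est_current_year      # 15.6
peg = forward_pe / lt_growth_est_pct           # 1.30
print(round(forward_pe, 1), round(peg, 2))
```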

Re-assessing Price/Sales

I love this ratio. I understand it can be and has been abused and misused during bubbles as hucksters try to push shares of companies that don’t have earnings and won’t likely ever have earnings. But we haven’t abandoned use of fire or automobiles because of the misfeasance of some, nor should we abandon sales-based valuation for such a reason. Sales tends to be more stable than earnings; it’s capital-structure neutral (sales is sales regardless of how much or how little debt a company has); it can be analyzed in terms of growth and in conjunction with margin to present an economically legitimate picture of a company; and it can be used even when companies lose money or report earnings that are distorted by unusuals, as most do from time to time. The latter can be particularly valuable for valuation of emerging businesses that aren’t profitable today but are credibly assumed to become so in the future.

But I think we can do better by substituting Enterprise Value-to-Sales (EVS) for Price-to-Sales. The benefit lies in the way it allows us to effectively compare companies with very different capital structures. Because we know a debt-heavy company will find its net income cut by interest expense (leading to a lower net margin), we would have to assume that, all else being equal, leverage should justify a lower PS. Use of EVS spares us the need to make that mental adjustment.

·     Therefore, I’m going to replace PS with EVS.
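A quick sketch of the swap, using the common textbook definition of EV (the article doesn’t spell one out) and made-up inputs:

```python
# EV-to-Sales versus Price-to-Sales. EV here follows the common textbook
# definition (market cap + debt + preferred - cash); all inputs are
# hypothetical, in $ millions.

market_cap = 1_400.0
total_debt = 600.0
preferred_equity = 0.0
cash_and_equivalents = 200.0
ttm_sales = 2_500.0

ps = market_cap / ttm_sales                                    # 0.56
ev = market_cap + total_debt + preferred_equity - cash_and_equivalents
evs = ev / ttm_sales                                           # 0.72
print(round(ps, 2), round(evs, 2))  # the debt load shows up only in EVS
```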

Re-assessing Price-to-Free Cash Flow

This is a bit of a challenge. Everything we see today in stock-valuation rhetoric spotlights the importance of cash flow, though differences of opinion exist as to how, exactly, it should be defined. For the record, the pre-set Portfolio123 Free Cash Flow metric is Cash from Operations (the subtotal in the Operations portion of the Cash Flow statement, which amounts to net income adjusted for accruals) minus capital spending minus depreciation. (Users can substitute their own approaches.)
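As a sketch, here is that pre-set definition applied to made-up inputs:

```python
# Portfolio123's pre-set Free Cash Flow, per the definition just given:
# cash from operations minus capital spending minus depreciation.
# All inputs are hypothetical, in $ millions.

cash_from_operations = 180.0
capital_spending = 70.0
depreciation = 40.0
market_cap = 1_400.0

free_cash_flow = cash_from_operations - capital_spending - depreciation
price_to_fcf = market_cap / free_cash_flow
print(round(price_to_fcf, 1))  # 20.0
```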

Besides making for great sound bites (“Cash is king!”), metrics like this are theoretically sound in that they relate logically to the pot of gold at the theoretical end of every investment-analysis rainbow: the present value of future expected dividends.

But there’s a problem: I’ve tested valuations based on free cash flow (defined many different ways) and have consistently found their efficacy to be mediocre at best. The problem, I think, is that not all accruals are bad. Consider, for example, depreciation versus capital spending. The latter is real; the former is an artificial accounting estimate. But capital spending can be very volatile from one period to the next. Capital projects, by their very nature, do not involve smooth outlays; these are occasions when companies must spend up. Depreciation, on the other hand, is smoothed.

Reality is not necessarily better. Suppose a company consistently logs earnings before interest and depreciation of, say, $10-$15 million. Then, in 2012, it spends $80 million to build a new factory. If it were to report the bottom line as a loss of $65 million ($15 million in EBITD minus $80 million), would we be getting a true economic picture of the company? Now, after every bear market, there are always a bunch of harda** talking heads who say “Hell yes.” But suppose, in 2013, the company posts EBITD of $17 million. Is that really sound? What about expenses associated with the new factory that produced the products sold that year? Shouldn’t they be counted? If we charged the whole $80 million off in 2012, we can’t do it again in 2013, or 2014, or in any subsequent year (and there may be many) that the factory is used to generate revenue. So what starts out as a genuinely conservative accounting choice (cash in minus cash out; real things only, no fictions) in year one turns out to upwardly distort earnings for many subsequent years. Talking heads may not get this. But my testing of cash-flow-based valuation metrics suggests those who actually invest (or at least the overwhelming majority of them) do get it and want presentations that match revenues and expenses, even if that means accountants have to sweat a bit over the treatment of single-period expenses matched against multi-period revenue streams. Yes, this sort of thing can be and sometimes is abused. But we can address that in the Quality category.
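To put numbers on the factory example, here is a toy series comparing cash-basis income (the full $80 million charged in 2012) against accrual-basis income, assuming, purely for illustration, straight-line depreciation over a 10-year life (the example above doesn’t specify one):

```python
# Cash-basis vs. accrual-basis income for the factory example above.
# EBITD figures follow the example; the 10-year straight-line life is an
# illustrative assumption. Figures in $ millions.

ebitd = {2012: 15.0, 2013: 17.0, 2014: 16.0}
capex = {2012: 80.0, 2013: 0.0, 2014: 0.0}
annual_depreciation = 80.0 / 10  # assumed 10-year straight-line life

for year in sorted(ebitd):
    cash_basis = ebitd[year] - capex[year]            # "real things only"
    accrual_basis = ebitd[year] - annual_depreciation # expense spread out
    print(year, f"cash={cash_basis:+.0f}", f"accrual={accrual_basis:+.0f}")

# 2012 cash=-65 accrual=+7   <- the year-one hit...
# 2013 cash=+17 accrual=+9   <- ...then cash-basis income looks inflated
# 2014 cash=+16 accrual=+8
```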

·     Therefore, I’m going to eliminate Price-to-Free Cash Flow.

Re-assessing Price-to-Book Value

Speaking of accounting estimates, we’re now jumping out of the frying pan and into the fire. How the heck do you value a company’s assets on the balance sheet? I mean, how do you make them real in relation to the potential price that might be received in an arm’s-length sale? I could fill a textbook on that topic, but I need not try because many others have already done so. And they still won’t necessarily agree when it comes to calculating the value of a specific asset for a specific company at a specific time.

That said, the topic is too darn important to simply ignore. :-(

For ages, the Street has pretty much capitulated to use of good old plain-vanilla textbook “book value”: the amount of capital the company raised when it got started (in whichever year, decade or century that occurred), plus the cumulative amount of all the net income it has recorded since then, plus the amount of new equity it raised in the secondary market, minus the cumulative total of all dividends it paid, minus the amount of equity it retired. FASB is debating other ideas, but until they finish throwing spitballs at one another (I understand these are heated debates), book value is pretty much what we have to work with, unless a particular company comes “in play,” at which time analysts and creditors will come up with something else specific to that case. Hence price-to-book value (PB) is one of the mainstream metrics used in academic approaches to, and research on, valuation.

And for a long time, it worked. Lately, though, based on my own testing, it seems to be losing some steam. Although we’re still waiting for the accountants to come up with something better, it may be that, in this era of widely accessible automated platforms, investors are trying to jump the gun and come up with their own alternatives. I decided to do likewise.

One thing I decided to do is adapt PB to an enterprise-level approach. I think that makes sense since, if asset value becomes hot (i.e., if a company goes into play), we’ll be thinking in terms of the enterprise rather than just the equity portion. So I’ll substitute EV for P. And in lieu of book value (i.e., the book value of equity), I’ll use the book value of the enterprise, or as I call it, Enterprise Assets, which I define as the Book Value of all Equity (common plus preferred) plus Total Non-Current Liabilities minus Cash and Equivalents.

Separately, and in homage to the academic stature of PB, I’m not inclined to discard this metric lightly. But I will restrict its use. In theoretical terms, PB relates to return on equity (ROE); high ROEs warrant higher PBs. So I’ll use PB only for companies whose five-year average ROE exceeds the five-year average ROE for their industry. (Both asset-side metrics are sketched in code after the list below.)

·     Therefore, I’m going to eliminate the former PB metric, and

·     I’m going to introduce EV2EA (Enterprise Value to Enterprise Assets), and

·     I’ll introduce a conditional PB that will be applied only if the 5-year average ROE is above the industry average.
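Here is a minimal sketch of both asset-side metrics, per the definitions above, with made-up inputs (EV again follows the common textbook definition, which the article doesn’t spell out):

```python
# EV2EA and Conditional PB, per the definitions above. All inputs are
# hypothetical, in $ millions.

market_cap = 1_400.0
total_debt = 600.0
preferred_equity = 0.0
cash_and_equivalents = 200.0
common_equity = 900.0
noncurrent_liabilities = 700.0

# Enterprise Assets: book equity (common + preferred) plus total
# non-current liabilities minus cash and equivalents.
enterprise_assets = (common_equity + preferred_equity
                     + noncurrent_liabilities - cash_and_equivalents)
ev = market_cap + total_debt + preferred_equity - cash_and_equivalents
ev2ea = ev / enterprise_assets
print(round(ev2ea, 2))  # 1.29

# Conditional PB: scored only if the 5-year average ROE beats the
# industry's; otherwise it comes up NA (and draws a low factor score).
roe_5yr_avg, industry_roe_5yr_avg = 0.14, 0.11
pb = market_cap / common_equity
conditional_pb = pb if roe_5yr_avg > industry_roe_5yr_avg else None
print(round(conditional_pb, 2) if conditional_pb else "NA")  # 1.56
```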

The Weights

This is a very challenging topic because it pulls us right into the epicenter of the controversy over quant versus non-quant, machine learning versus not, etc. The question: Should I use optimization to determine weights?

I have experimented with this sort of thing in the past. If I want to maximize simulated/backtested performance, it’s clear that optimization is the way to go. But if I’m investing with an eye toward the future, the answer is less clear. To make optimization work, I would need to run it against the proper sample. That means, first, I’d have to identify a period or series of periods that reasonably reflects the conditions I expect to encounter over the forward-looking investment horizon (a difficult challenge under the best of conditions, rendered much more so given the likelihood that a generation-long decline in interest rates will give way to stagnation or, more likely, eventual increases). Second, I’d have to do an entirely new optimization for every portfolio, based on the sub-universe defined by my screening rules. An optimization done, for example, on the universe of S&P 500 constituents, even if the future resembles the past, may still turn out to be useless if I try to apply my ranks to a subset of the S&P 500 consisting of stocks that also have favorable Value and Sentiment characteristics (which is the subset used in the Cherry-picking strategy).

So far in my use of the QVGM model and its sub-components, I’ve experienced satisfying live performance using weightings that are heuristically determined (i.e., I start with the notion that perfect confidence equates to a 100% weighting, adjust downward based on my actual judgment as to my confidence in the factor, and then refine so that weightings within groups or sub-groups add to 100%).
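A sketch of that heuristic, with confidence scores I’ve made up for illustration: haircut each factor from 100 by judgment, then normalize within the group so the weights sum to 100%.

```python
# Heuristic weighting as described above: start each factor at 100
# (perfect confidence), haircut by judgment, then normalize within the
# group. The confidence scores below are made up.

confidence = {"factor_a": 100, "factor_b": 80, "factor_c": 60}

total = sum(confidence.values())  # 240
weights = {name: round(100 * score / total, 1)
           for name, score in confidence.items()}
print(weights)  # {'factor_a': 41.7, 'factor_b': 33.3, 'factor_c': 25.0}
```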

The New “Basic: Value 2015” Ranking System

Here are the factors presently used in the Portfolio123 “Basic: Value 2015” ranking system:

·     Market Cap to trailing-12-month Adjusted Earnings (as defined above)

·     Forward PE with E based on the consensus analyst estimate of EPS for the current fiscal year

·     PEG, the PE to Growth ratio with PE calculated with reference to the current-year EPS estimate and with G being the consensus analyst estimate of long-term (i.e. 3- to 5-year) EPS growth

·     Enterprise Value to Sales over the trailing 12 months

·     Enterprise Value to Enterprise Assets

·     Conditional Price to Book Value (only if 5-year average ROE is above the industry average)

The weightings are as follows (the sketch after the list computes each factor’s effective share of the total):

·     Value Based on Income Stream (65% of total)

o   Earnings (50% of category)

§ Market Cap to Adjusted Earnings (33.33% of sub-category)

§ Forward PE (33.33% of sub-category)

§ PEG (33.33% of sub-category)

o   Other Earnings (50% of category)

§ Enterprise Value/Sales (100% of sub-category)

·     Value Based on Assets (35% of total)

o   Enterprise Value to Enterprise Assets (50% of category)

o   Conditional Price/Book (50% of category)
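Multiplying down the tree gives each factor’s effective share of the overall Value score, along with a quick sanity check that it all sums to 100%:

```python
# Effective per-factor weights in "Basic: Value 2015", from the tree above
# (category share x sub-category share x factor share).

INCOME, ASSETS = 0.65, 0.35
effective = {
    "MktCap/AdjE":    INCOME * 0.50 / 3,  # ~10.83%
    "Forward PE":     INCOME * 0.50 / 3,  # ~10.83%
    "PEG":            INCOME * 0.50 / 3,  # ~10.83%
    "EV/Sales":       INCOME * 0.50,      # 32.5%
    "EV2EA":          ASSETS * 0.50,      # 17.5%
    "Conditional PB": ASSETS * 0.50,      # 17.5%
}
assert abs(sum(effective.values()) - 1.0) < 1e-9
for name, weight in effective.items():
    print(f"{name:15s} {weight:.2%}")
```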

I originally opted on the Portfolio123 platform for the default protocol in which NA (not available) items result in a low factor score. I’m sticking with that, even for Conditional PB, which is apt to come up NA more often than the old PB metric did. This is a value model. As such, I’m perfectly content to hold it against companies that, for one reason or another, don’t have enough data points to allow me to analyze the stock as fully as I wish. (When used in conjunction with QVGM, such companies can, and do, still get into a portfolio if their scores in the other categories are sufficiently high to offset a penalized Value-component ranking.)

When I tested my new Value rankings, as well as the new “QVGM 2015” model that uses the new Value component, I experienced modest improvements in performance under most test scenarios. They weren’t spectacular; some were barely noticeable. But since we’re investing for the future, as opposed to data-mining or curve-fitting, even flat or possibly mildly lower performance would, in my view, justify switching to an approach that comports better with common sense.

When I tested based on models I use live, as opposed to broad universes, the new approach performed slightly better; the degree of improvement, although still not earth-shattering, was better than what I saw in the rankings-only tests conducted against big non-investable universes.

Next, I’ll review the Growth component.