Saturday, September 29, 2012

Goldman: QE3 may reach $2 trillion

Goldman Estimate: QE3 could be $1.2 to $2.0 Trillion

by Bill McBride on 9/22/2012 10:56:00 PM
A few excerpts from a research note by Goldman Sachs chief economist Jan Hatzius:

• ... We now view the Fed as following a looser version of the “threshold rule” championed by Chicago Fed President Charles Evans.

• What are the thresholds? We read the committee as signaling that the federal funds rate will not rise until the unemployment rate has fallen to the 6½%-7% range. The corresponding threshold for the end of QE3 may be in the 7%-7½% range.

• These implicit commitments are undoubtedly subject to an inflation ceiling ... may be a year-on-year core PCE reading of 2½%-2¾%.

• All this is subject to change ... The flexibility to respond to such changes is a key advantage of keeping the thresholds implicit rather than explicit.

• ... Under the committee’s economic forecasts, we estimate that the funds rate would stay near zero until mid-2015, while QE3 would run through mid-2014 and total $1.2trn.

• Under our own economic forecasts, we estimate that the funds rate would stay near zero until mid-2016, while QE3 would run through mid-2015 and total just under $2trn.

• If the recovery continues to disappoint, additional steps are possible.

The keys will be to watch the unemployment rate and several core measures of inflation. As of August, the unemployment rate was at 8.1% - and mostly moving sideways - and core PCE for July was up 1.6% year-over-year (plenty of room to the 2½%-2¾% range).
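
As a rough illustration of how such a threshold rule might be expressed, the sketch below simply encodes the midpoints of the ranges Goldman describes; the cutoffs and function names are mine, not the Fed's actual reaction function.

```python
# Hypothetical sketch of an Evans-style "threshold rule" as described above.
# The cutoffs are the midpoints of Goldman's estimated ranges and are
# illustrative only.

UNEMPLOYMENT_QE3_END = 7.25    # QE3 purchases end near 7%-7.5% unemployment
UNEMPLOYMENT_RATE_HIKE = 6.75  # funds rate lifts off near 6.5%-7% unemployment
CORE_PCE_CEILING = 2.625       # implicit ceiling of 2.5%-2.75% core PCE (y/y)

def policy_stance(unemployment_rate, core_pce_yoy):
    """Return a rough description of the implied policy stance."""
    if core_pce_yoy >= CORE_PCE_CEILING:
        return "inflation ceiling breached: easing commitments no longer bind"
    if unemployment_rate > UNEMPLOYMENT_QE3_END:
        return "QE3 purchases continue; funds rate near zero"
    if unemployment_rate > UNEMPLOYMENT_RATE_HIKE:
        return "QE3 winding down; funds rate still near zero"
    return "thresholds met: funds rate may begin to rise"

# August 2012 readings cited above: 8.1% unemployment, 1.6% core PCE
print(policy_stance(8.1, 1.6))  # QE3 purchases continue; funds rate near zero
```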

Shadow housing inventory declining


Soberlook.com posted an interesting piece two weekends ago concerning the improving health of the U.S. housing market, which can be found here. Here is an excerpt.

After a period of false starts and unproductive legislative delays, Americans have become fairly efficient at clearing out delinquent home inventory. This is what makes the US so different from Japan for example. At this pace the so-called shadow inventory drag on the housing market will diminish rapidly in the next couple of years.

What stood out to me was the following chart from J.P. Morgan showing the estimated number of homes in the shadow inventory.

This chart made me instantly think of the following chart from CSFB, which has made the rounds for years now, showing the monthly mortgage resets from 2007 through 2016.

[IMFresets.jpg]

These charts say to me that the supply of distressed homes has declined significantly and will likely continue to decline, albeit at a reduced rate, through 2015. I think this points to rising housing prices as the supply of distressed homes shrinks. That said, I would still expect price appreciation to be anchored by the supply impact of potential sellers - homeowners current on their mortgages who are waiting for a price rebound before selling.

Dodd-Frank and the shrinking CDS market

Via soberlook.com. My first thought after reading this article is that if risk cannot be mitigated and hedged among market participants, then risk premiums will begin to work their way back into the markets, and market prices will react in kind.

The shrinking corporate CDS market

The Dodd–Frank financial reform is killing the single name corporate CDS market. Liquidity in this market is drying up quickly. This is due mostly to dealers' inability to take positions when they make markets (Volcker Rule) and a cumbersome clearing process that will impose higher margin on corporate CDS for end-users (in some cases higher than the equivalent positions in corporate bonds via repo). In fact the business of basis trades - bonds vs. CDS - is no longer viable in many cases because of the margin requirements on both sides and no ability to offset.

The fact that dealers who clear CDS are not expecting this business to be profitable (see discussion) is not helping either. And the fact that single name CDS are regulated by the SEC while indices such as CDX are regulated by the CFTC adds to the uncertainty. At the same time, margin and clearing rules differ materially among the clearinghouses (ICE, CME) and trades are not fungible between them (a trade cleared on the CME cannot be offset with the opposite trade cleared via ICE). This uncertainty is adding to the decline in liquidity. The situation is so bad that there are no longer enough liquid single names to form an index of 100 CDS.
FT: - Indices that track the price of credit default swaps (CDS), contracts which act as insurance against a default on corporate bond payments, have become a popular way for banks and hedge funds to speculate on the creditworthiness of American companies and for bond fund managers to hedge risks in their portfolio.

But underlying CDS trading has shrivelled to such an extent that there are not enough actively traded names to make up a 100-company index.
This takes us back to the question of making the financial markets "safer". Not a single institution has ever failed due to a problem with corporate single name CDS. But banks and corporations do use this product to hedge all sorts of things - including receivables, counterparty exposure, reducing loan exposure to a single company, etc. It's not at all clear therefore how nearly eliminating this market through blunt regulation will be helpful for the financial system or the economy as a whole.
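
To make the basis-trade point above concrete, here is a simplified sketch of the arithmetic; the position size, spreads, haircuts and funding cost are hypothetical, and the flat-margin treatment is a rough assumption rather than any clearinghouse's actual methodology.

```python
# Simplified negative-basis trade arithmetic (illustrative numbers only).
# Basis = CDS spread - bond credit spread.  A negative basis means the bond
# yields more than the cost of CDS protection, so buying the bond and buying
# protection locks in (roughly) the absolute basis as annual carry.

notional    = 10_000_000  # $10mm position
bond_spread = 0.0250      # bond trades 250 bp over the risk-free curve
cds_spread  = 0.0210      # 5-year CDS protection costs 210 bp running
basis       = cds_spread - bond_spread       # -40 bp (negative basis)
gross_carry = -basis * notional              # roughly $40,000 per year

# Hypothetical margin/haircut assumptions on BOTH legs with no offset,
# funded at an assumed 4% annual cost of capital.
repo_haircut = 0.05   # 5% haircut on the bond repo leg
cds_margin   = 0.03   # 3% initial margin on the cleared CDS leg
funding_cost = 0.04
margin_drag  = (repo_haircut + cds_margin) * notional * funding_cost

net_carry = gross_carry - margin_drag
print(f"gross carry ${gross_carry:,.0f}, margin drag ${margin_drag:,.0f}, "
      f"net ${net_carry:,.0f}")
# With no cross-margining, the drag consumes most of the 40 bp basis.
```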

The half-life of facts

I was reading the article below and wondering what the half-life of market-related facts might be. A rough sketch of the decay arithmetic follows the excerpt.

Truth decay: The half-life of facts

    Much of what we believe to be factual has an expiration date, but the good news is that we can see it coming
    IN DENTAL school, my grandfather was taught the number of chromosomes in a human cell. But there was a problem.
    Biologists had visualised the nuclei of human cells in 1912 and counted 48 chromosomes, and it was duly entered into the textbooks studied by my grandfather. In 1953, the prominent cell biologist Leo Sachs even said that "the diploid chromosome number of 48 in man can now be considered as an established fact".
    Then in 1956, Joe Hin Tjio and Albert Levan tried a new technique for looking at cells. They counted over and over until they were certain they could not be wrong. When they announced their result, other researchers remarked that they had counted the same, but figured they must have made a mistake. Tjio and Levan had counted only 46 chromosomes, and they were right.
    Science has always been about getting closer to the truth, and anybody who understands it knows that a continual transformation of accepted knowledge along the way is how it works. However, sometimes it can feel random and unsettling. Smoking has gone from doctor-recommended to deadly. Eating meat used to be good for you, then bad, then good again; now it's a matter of opinion. The age at which women are told to get mammograms has risen. We used to think that Earth was the centre of the universe, and our planet has since been demoted. I have no idea any longer whether or not red wine is good for me.
    It turns out that there is order within the shifting noise. The good news emerging from my field of scientometrics - the quantitative study of science - is that in the aggregate there are regularities to the changes, and we can even identify how fast facts will decay over time. This means we don't have to be all at sea in a world of changing knowledge. Portions of what we assume to be true will eventually be overturned, but realising that patterns exist could help to identify the apparent truths that are poised to expire.
    It is obvious that scientific knowledge is continually updated through new discoveries and the replication of studies, but until recent years little attention had been paid to how fast this change occurs. In particular, few had attempted to quantify how long it would take what we know at any given moment to become untrue, or replaced with a closer approximation of the truth.
    Among the first groups to measure this churning of knowledge was a team of researchers at Pitié-Salpêtrière hospital in Paris, France. To get a handle on it, Thierry Poynard and his colleagues chose to focus on medical fields in which they specialised: cirrhosis and hepatitis, two areas related to liver diseases. They took nearly 500 articles in these fields from over 50 years and gave them to a panel of experts to examine. Each expert was charged with saying whether the paper was factual, out-of-date or disproved (Annals of Internal Medicine, vol 136, p 888).
    Through doing this, Poynard and his colleagues were able to create a simple chart that showed the amount of factual content that had persisted over the previous decades (see diagram). They found something striking: a clear decay in the number of papers that were still valid. Furthermore, it was possible to get a clear measurement for the "half-life" of facts in these fields by looking at where the curve crosses 50 per cent on this chart: 45 years.
    Essentially, information can be likened to a radioactive material. Medical knowledge about cirrhosis or hepatitis takes about 45 years for half of it to become out of date or disproved.
    The half-life metaphor does not match up exactly to its radioactive namesake. For one thing, the time frames being examined make it difficult to see if the decay is truly exponential. Similarly, depending on the maturity of the field, the half-life need not be constant. Indeed, it has surely changed as medicine transitioned from art to science. Nonetheless, half-lives can be a useful way of thinking about the decay of knowledge.
    We can't predict which individual papers will be overturned, of course, just like we can't tell when individual radioactive atoms will decay, but we can observe the aggregate and see that there are rules for how a field changes over time. The cirrhosis and hepatitis results were nearly identical to an earlier study that examined the overturning of information in surgery. Two Australian surgeons found that half of the facts in that field also become false every 45 years (The Lancet, vol 350, p 1752).
    Unfortunately, convening a panel of experts to comb through all of science's past conclusions isn't feasible. So we have to sacrifice precision for our ability to look at lots of science relatively quickly. One simpler way to do this is by looking at the lifetime of citations, the coin of the scientific realm and the metric by which the impact of a research paper is measured.
    To understand the decay in the truth of a paper, we can measure how long it takes for people to stop citing the average paper in a field. Whether it is no longer interesting, no longer relevant or has been contradicted by new research, this paper is no longer a part of the living scientific literature. The amount of time it takes for others to stop citing half of the literature in a field is also a half-life of sorts.
    Through this we can begin to get rough estimates of the half-lives of many fields. For example, a study of all the papers in the Physical Review journals, a cluster of periodicals of great importance to physicists, found that the half-life in physics is about 10 years (arxiv.org/abs/physics/0407137).
    Different publication formats can also have varied half-lives. In 2008, Rong Tang of Simmons College in Boston looked at scholarly books in different fields and found that physics has a longer half-life (13.7 years) than economics (9.4), which in turn outstays mathematics, psychology and history (College & Research Libraries, vol 69, p 356).
    This is the opposite of what is found in journal articles, where knowledge at the frontiers of the hard sciences is overturned more rapidly than in the social sciences. This may be because immediate repeated experimentation can be more straightforward in the physical sciences than in the social sciences, where the data can be noisier.
    So drawing strong conclusions about the differences between the calculated half-lives of disparate fields should be approached with caution, because of the various influencing factors and different methods employed to gauge their longevity. Still, the point remains that specific bodies of knowledge can be viewed as having different expiry dates. This leads one to wonder whether reframing knowledge in this way should change how we interpret the facts that each of us use to navigate the world. After all, identifying that certain bodies of facts will endure longer than others might influence how we act on information - and identify what facts we should be most wary of.

    The spectrum of truth

    Let's imagine that we were to line up facts on a spectrum according to how quickly they change. On the far left, we have the groups of fast-changing facts, which are constantly in flux. These would be our knowledge about what level the stock markets closed at yesterday, or predictions for weather around the globe. You might say that these areas have tiny half-lives: the length of time they stay true is very brief.
    On the far right, we have the very slow-changing knowledge, featuring the facts that for all practical purposes are constant. Individually, this might include what we've learned about the number of continents on the planet, and on a group level, you might place pretty much everything the ancient Greeks wrote about geometry.
    In between, we would have the groups of facts that change, but not too quickly. This knowledge might change over the course of years or decades or a single lifetime. These facts in the middle are what I refer to as "mesofacts" - and their relatively slow rate of change may mean that many people fail to acknowledge their ultimate transience and looming expiry.
    Knowledge about nutrition might be considered to be one of these mesofactual areas. For example, in the US, we no longer use the four basic food groups - meats, dairy, cereals/breads, fruits/vegetables. There are now five food groups placed on a plate. Between these points there was a food pyramid that underwent multiple overhauls. We have also experienced long-term changes with regard to whether we should eat fatty foods, carbohydrates and many other things.
    Another mesofactual realm might be how we are supposed to take care of babies. Each generation has a set of facts that changes, from whether babies should sleep on their backs or stomachs to whether pregnant women can safely smoke or drink alcohol.
    Mesofacts are all around us, and just acknowledging their existence can be useful. After all, as studies of our perception of slow shifts in the natural world have revealed, we are often blind to gradual change (see "Too slow to see").
    Of course, this doesn't mean that everything is going to be overturned. If a government-funded study promotes a certain health practice, for example, we should not immediately dismiss it as being based on one of many evanescent facts. That would be foolishly throwing the informational baby out with the bath water. But we should not be shocked if that advice is later contravened.
    Fortunately, we are getting better at internalising this particular truth: many medical schools inform their students that within five years, half of what they have been taught will be wrong - and the teachers don't know which half it will be.
    Thankfully, facts don't expire arbitrarily. Even though knowledge changes, the astounding thing is that it changes in a regular manner. If we recognise this, we'll be better prepared to live in the rapidly changing world around us.
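
The 45-year figure above comes from reading off where the survival curve for published findings crosses 50 per cent. Treating that decay as roughly exponential (an approximation the article itself hedges), a minimal sketch of the implied arithmetic looks like this; the 45-year half-life is the cirrhosis/hepatitis estimate, and applying it elsewhere is purely illustrative.

```python
import math

# Exponential-decay reading of the "half-life of facts": if half of a field's
# findings are out of date or disproved after T_half years, the surviving
# fraction after t years is 0.5 ** (t / T_half).

def surviving_fraction(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

def half_life_from_decay_rate(decay_rate_per_year):
    """Convert a continuous decay rate lambda into a half-life, T = ln(2)/lambda."""
    return math.log(2) / decay_rate_per_year

for t in (10, 22.5, 45, 90):
    frac = surviving_fraction(t, 45)
    print(f"after {t:5.1f} years, about {frac:.0%} of the findings still stand")
```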

    All that glitters - getting constructive but not there yet

    Gold and precious metal shares appear locked in a holding pattern following the breakout that resulted from the announcement of QE3. As for the timing models, they appear to be getting more constructive after topping out at 2.3 on the 6-month model and 0.66 on the 1-year model on September 17. That said, the timing models remain in a range that suggests the precious metal shares will underperform the averages. With the money supply at around $10.2 trillion, the 6-month timing model stands at 1.9.


    As for the 1-year model, it currently stands at 0.51.


    Just as a reminder, positive results on the timing model suggest that the shares of precious metal stocks will underperform the average results over various time frames. The more positive the model reading, the worse the expected results. The opposite is true when the model is in a negative range.

    Starting with this post, I will also be looking at the 3-month model calculation. Much like the 6-month and 1-year models, the 3-month model shows similar discrimination and positive performance results. The performance results for the 3-month model calculation are shown below.

    3-month model performance figures


    1w 3M 6M 1y 2y
    Avg 1.60% 5.88% 13.60% 27.10% 48.82%
    Median 2.06% 6.25% 11.74% 23.25% 43.85%
    Max 25.37% 89.59% 88.09% 151.78% 203.74%
    Min -30.86% -45.27% -62.18% -28.87% -27.10%
    St Dev 6.9% 18.7% 26.2% 32.1% 38.4%
    Sharpe 0.23 0.31 0.52 0.84 1.27
    #>0 71 73 75 80 87
    %>0 61.7% 64.6% 70.8% 79.2% 91.6%
    Count 115 113 106 101 95

    Buy-and-hold performance


    1w 3M 6M 1y 2y
    Avg 0.28% 3.07% 6.41% 13.98% 30.62%
    Median 0.27% 2.14% 4.74% 13.33% 29.82%
    Max 25.37% 89.59% 88.09% 151.78% 203.74%
    Min -30.86% -62.22% -62.18% -64.56% -50.89%
    St Dev 5.4% 15.1% 21.6% 27.5% 32.5%
    Sharpe 0.05 0.20 0.30 0.51 0.94
    #>0 351 366 389 421 447
    %>0 53.3% 56.4% 61.2% 69.0% 80.1%
    Count 661 649 636 610 558
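
    For reference, the statistics in these tables can be reproduced from a series of holding-period returns along the lines sketched below. This is an illustration only: the example returns are hypothetical, and the Sharpe figure is simply mean over standard deviation of the period returns with no annualization, which may not be the exact convention used above.

```python
import statistics

def summarize_returns(returns, risk_free=0.0):
    """Summary statistics of the kind shown in the tables above.

    `returns` is a list of holding-period returns (e.g. 0.0588 for +5.88%).
    The Sharpe figure here is (mean - risk_free) / standard deviation of the
    holding-period returns, with no annualization -- an assumption, since the
    exact convention behind the tables isn't stated.
    """
    mean = statistics.mean(returns)
    stdev = statistics.stdev(returns)
    positives = sum(1 for r in returns if r > 0)
    return {
        "Avg": mean,
        "Median": statistics.median(returns),
        "Max": max(returns),
        "Min": min(returns),
        "St Dev": stdev,
        "Sharpe": (mean - risk_free) / stdev,
        "#>0": positives,
        "%>0": positives / len(returns),
        "Count": len(returns),
    }

# Hypothetical example: a handful of 3-month holding-period returns.
sample = [0.0625, -0.0210, 0.1120, 0.0340, -0.0450, 0.0890]
for name, value in summarize_returns(sample).items():
    print(f"{name:>7}: {value:.4f}" if isinstance(value, float) else f"{name:>7}: {value}")
```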


    The 3-month model is also positive, standing at 1.4 through the end of September.


    This model also indicates underperformance of precious metal shares in future periods. I will remain on the sidelines and await a better entry point.