by Ross Pettit - 17 September 2008
One of the most important indicators of risk in debt markets is the grade (or rating) given to a debt instrument. The rating agencies have been widely criticised for the grades they gave to mortgage-backed securities and collateralized debt obligations. Despite the controversy, the rating agencies remain the authority on assessing credit quality. Their impact on AIG's efforts to raise capital this week indicates how much market influence the rating agencies have.

There are several independent companies that assess the credit quality of bonds. The bond rating gives an indication of the probability of default. Although the bond is what is rated, the rating is really a forecast of the ability of the entity behind the bond – e.g., a corporation or sovereign nation – to meet its debt service obligation.
Each rating firm uses a different and proprietary approach to assess credit quality, involving both quantitative and qualitative factors. For example, bond ratings by Moody’s Investors Service reflect long-term risk considerations, predictability of cash flow, multiple negative scenarios, and interpretation of local accounting practices. In practical terms, this means that things such as macro- and micro-economic factors, competitive environment, management team, and financial statements are all factors in determining the creditworthiness of a firm.
Rating agencies are subsequently able to characterise the risk of debt investments. An investment-grade bond will have lower yield but offer higher safety (that is, lower probability of default). A junk bond will have higher yield but lower safety. Between these extremes are intermediate levels of quality: a bond that is rated AA will have very high credit quality, but lower safety than a AAA bond, while a bond rated at A or BBB, while still investment grade, indicates lower credit quality than a AA bond.
This concept is portable to IT. Just as the entity behind a bond is rated, the team behind an IT asset under development can be rated for its “delivery worthiness.” The difference is that we look to the rating not as an indicator of the risk premium we demand, but as a threat (and therefore a discount) to the yield we should expect from the investment.
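To make the discount concrete, here is a minimal sketch of the idea, assuming a hypothetical mapping from delivery rating to the probability that forecast value is actually realised. The figures and the mapping are illustrative assumptions, not any agency's methodology:

```python
# Illustrative only: map a delivery rating to the probability that the
# forecast value is realised, then discount the expected yield by it.
# The ratings, probabilities and figures are hypothetical assumptions.
DELIVERY_CONFIDENCE = {
    "AAA": 0.95,   # consistently delivers complete, tested functionality
    "AA": 0.85,
    "A": 0.75,
    "BBB": 0.60,   # lowest investment grade
    "junk": 0.35,  # speculative: expect material shortfall or delay
}

def risk_adjusted_yield(forecast_yield: float, rating: str) -> float:
    """Discount a project's forecast yield by its delivery rating."""
    return forecast_yield * DELIVERY_CONFIDENCE[rating]

# A project forecast to return 25% looks very different behind an
# investment-grade team versus a junk-rated one.
print(risk_adjusted_yield(0.25, "AAA"))   # 0.2375
print(risk_adjusted_yield(0.25, "junk"))  # 0.0875
```

The point is not the specific numbers, but that the same forecast yield is worth far less behind a junk-rated team.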
To rate an IT team, we can look at quantitative factors, such as the raw capacity of hours to complete an estimated workload, variance in the work estimates, and so forth. But we also need to look at qualitative factors. Consider the following (a rough scoring sketch follows this list):
- Are we working on clear, actionable statements of business need? Are requirements independent statements of business functionality that can be acted upon, or are they descriptions of system behaviour laden with dependencies and hand-offs?
- Are we creating technical debt? Is code quality good, characterised by a high degree of technical hygiene (is the code written so that it can be tested?) and an absence of code toxicity (e.g., code duplication and high cyclomatic complexity)?
- Are we working transparently? Just as local accounting practices may need to be interpreted when rating debt, we must truly understand how project status is reported. Are we managing and measuring delivery of complete business functionality (marking projects to market), or are we measuring and reporting the completion of technical tasks (marking to model), with activities that complete business functionality – such as integration and functional testing – deferred until late in the development cycle?
- Are we delivering frequently, and consistently translating speculative value into real asset value? In the context of rating an IT team, delivered code can be thought of as synonymous with cash flow. The more consistent the cash flow, the more likely a firm will be able to service its debt.
- Are we resilient to staff turnover? Is there a high degree of turnover in the team? Is this a “destination project” for IT staff? Is there a significant amount of situational complexity that makes the project team vulnerable to staff changes?
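Pulling the quantitative and qualitative factors together, here is the rough scoring sketch promised above. The factor names, weights, and grade thresholds are assumptions for illustration only; as discussed below, the weights vary from team to team, and the assessment is ultimately a matter of judgment, not formula:

```python
# A minimal, hypothetical roll-up of delivery factors into a rating.
# Factor names, weights and thresholds are illustrative assumptions,
# not a prescribed methodology.
FACTORS = {
    # factor: weight (to be tuned to each team's circumstances)
    "estimate_variance": 0.20,     # low variance in work estimates
    "requirements_quality": 0.20,  # clear, actionable, independent
    "technical_hygiene": 0.20,     # testable code, low toxicity
    "transparency": 0.15,          # marking to market, not to model
    "delivery_frequency": 0.15,    # frequent, complete functionality
    "turnover_resilience": 0.10,   # not vulnerable to staff changes
}

def delivery_rating(scores: dict[str, float]) -> str:
    """Weighted roll-up of factor scores (each 0.0-1.0) into a grade."""
    total = sum(FACTORS[name] * scores[name] for name in FACTORS)
    if total >= 0.90:
        return "AAA"
    if total >= 0.80:
        return "AA"
    if total >= 0.70:
        return "A"
    if total >= 0.60:
        return "BBB"  # lowest investment grade
    return "junk"

team = {
    "estimate_variance": 0.8,
    "requirements_quality": 0.7,
    "technical_hygiene": 0.9,
    "transparency": 0.6,
    "delivery_frequency": 0.7,
    "turnover_resilience": 0.5,
}
print(delivery_rating(team))  # "A" (weighted score 0.725)
```

In this hypothetical, a team scoring well on hygiene and estimates but poorly on transparency and turnover resilience lands at a single A: still investment grade, but priced differently than a AAA team.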
At first glance, this may simply look like a risk inventory, but it’s more than that. It’s an assessment of the effectiveness of decisions made to match a team with a set of circumstances to produce an asset.
There are few, if any, absolute rules to achieving a high delivery rating. For example, assigning the top talent to the most important initiative may appear to be an obvious insurance policy for guaranteeing results. But what happens if that top talent is bored to tears because the project isn't a challenge? Such a project – no matter how much assurance is given to each person that they are performing a critical job – may very well increase flight risk. If that risk materialises, the expectation for returns of that project will crater instantly. If it hasn't been anticipated, a team can appear to change from investment grade to junk very quickly.
While the rules aren’t absolute, the principles are. An IT team developing an asset expected to yield alpha returns will generally be characterised as a destination opportunity offering competitive compensation, operating transparently with actionable requirements, maintaining capability “liquidity” and a healthy “lifestyle,” and delivering functionally complete assets frequently to reduce operational exposure. All of these are characteristics that separate a team that is investment grade from one that is junk.
While these factors are portable across projects, they may not be identically weighted for every team. This doesn’t undermine the value of the rating as much as it means we need to be acutely aware of the circumstances that any team faces. This also means that assessing the delivery worthiness of a team is born of experience, and not a formulaic or deterministic exercise. While the polar opposites of investment-grade and junk may be clear, it takes a deft hand to recognise the subtle differences between a team that is worthy of a triple-A rating and one that is worthy of a single A, and even why that distinction matters. It also requires a high degree of situational awareness – employment market dynamics, direct inspection of artifacts (review of requirements, code, software), and certification of intermediate deliverables – so that the rating factors are less conjecture and more fact. Finally, it is an exercise to be repeated constantly, as the “market factors” in which a team operates – people, requirements, technology, suppliers and so forth – change constantly. This is consistent with how the rating agencies bring benefit to the market: they are not formulaic, they spend significant effort to interpret data, and they are updated with changing market conditions.
FDIC Chairman Sheila Bair commented recently that we have to look at the people behind the mortgages to really understand the risk of mortgage-backed securities. With IT projects, we have to look at the people and the situations behind the staffing spreadsheets and project plans. IT is a people business. We can measure effectiveness based on asset yield, but we are only going to be as effective as the capability we bring to bear on the unique situation – technological, geographical, economic, and even socio-political – that we face. Rating is one means by which we can do that.
Investors in financial instruments have a consistent means by which to assess the degree of risk among different credit instruments. IT has no such mechanism to offer. Just as debt investors want to know the creditworthiness of a firm, so should IT investors know the delivery worthiness of their project teams.
Especially when alpha returns are on the line.
About Ross Pettit: Ross has over 15 years' experience as a developer, project manager, and program manager working on enterprise applications. A former COO, Managing Director, and CTO, he also brings extensive experience managing distributed development operations and global consulting companies. His industry background includes investment and retail banking, insurance, manufacturing, distribution, media, utilities, market research and government. He has most recently consulted to global financial services and media companies on transformation programs, with an emphasis on metrics and measurement. Ross is a frequent speaker and active blogger on topics of IT management, governance and innovation. He is also the editor of alphaITjournal.com.