Manufacturing and Managing Customer-Driven Derivatives. Qu Dong


overnight daily risk report and analysis tools. The impact of the new model on the live portfolio in terms of P&L and risk sensitivities should be assessed and fully understood.

      System testing is vital for new model development. It is also essential for model change control. Live models may be changed and updated, and they should be subject to a release change-control procedure, including thorough system regression tests. All live models should be version-controlled, which can usually be done easily via the source code repository. Accompanying each model release, there should be a release note explaining any model changes, together with the version number, date, etc.

      Independent Model Validation

      Independent Model Validation (IMV) is based on the four-eyes principle to verify the model theory and test the model implementation. In practice, IMV develops its own equivalent models independently to conduct model comparison and testing. The actual model testing tends to be a large part of the work, as many implementation details need to be verified.

      Once the models have been tested and approved by IMV, they can be released into production for pricing and hedging. It is best practice that IT carries out the model release into trading and risk production systems, independently of quants and trading. IT should manage and maintain production systems following a standard but independent procedure.

Quants should communicate effectively with the IMV team to facilitate its model validation and, more importantly, its model testing tasks. Some of the key information is listed in Table 2.1.

Table 2.1 Key model information

      IMV is a very important development and control function, and its focus should be on mathematical verification and actual model testing. It should avoid spending time going through front-office quants' source code, for several reasons:

      • The amount of fine detail in the source code is overwhelming, and going through it does not help with independent mathematical or numerical verification.

      • Nor does it help with the most important part of IMV: the actual thorough model testing.

      • It can potentially compromise IMV's “independent” validation.

      • It substantially increases the bank's security risk of model source code leaking out.

      • Overall, it consumes valuable resources and prolongs the validation process with little real benefit to control or the business.

      Object-Oriented Quant Library

      A quant library consisting of implemented models is the engine in the modern derivatives business. It should be scalable, simple and transparent, allowing generic, efficient and user-friendly modular interfaces to the pricing tools, trading and risk systems. The quant library requires a well-designed architecture at the outset as well as ongoing enhancement to survive and succeed.

      The quant library should be written in an object-oriented framework. Object-oriented programming and design have many advantages. At the programming level, the (C++ or C#) programs are well structured and modular. At the practical level, it permits orthogonal combinations of objects. For example, by keeping the instrument/product objects distinctly separate from the valuation/model objects, the orthogonal combination allows one to price a particular instrument/product with any suitable model using any suitable numerical approach. This can be done at trade as well as portfolio level, reusing the same objects without code repetition.
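
      To make the orthogonal combination concrete, the following is a minimal C++ sketch. The class names (Instrument, PricingModel, portfolioValue, etc.) are hypothetical illustrations, not taken from any actual library; the sketch only shows how keeping product objects separate from model objects lets any suitable model price any suitable instrument, at trade or portfolio level.

#include <memory>
#include <vector>

// Product side: describes what is traded, with no pricing logic.
class Instrument {
public:
    virtual ~Instrument() = default;
};

class BarrierOption : public Instrument { /* payoff description only */ };
class VanillaSwap   : public Instrument { /* cash-flow description only */ };

// Model side: describes how to value, with no product-specific wiring.
class PricingModel {
public:
    virtual ~PricingModel() = default;
    virtual double price(const Instrument& instrument) const = 0;
};

class AnalyticModel : public PricingModel {
public:
    double price(const Instrument&) const override { return 0.0; /* closed form */ }
};

class MonteCarloModel : public PricingModel {
public:
    double price(const Instrument&) const override { return 0.0; /* simulation */ }
};

// Orthogonal combination: the same objects are reused at trade and portfolio level.
double portfolioValue(const std::vector<std::unique_ptr<Instrument>>& trades,
                      const PricingModel& model) {
    double total = 0.0;
    for (const auto& trade : trades)
        total += model.price(*trade);
    return total;
}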

      Key Objects in a Quant Library

Table 2.2 lists some examples of the key objects or components in a quant library.

Table 2.2 Key objects in a quant library

      When all the required objects are coded up properly in the quant library, it will allow efficient and flexible interactions among the objects in the process of developing new models and products. A generic description of a product can be constructed naturally by connecting together the relevant objects. For example: a swap consists of legs, a leg consists of cash flows, and a cash flow consists of various attributes including currency, notional, pay/receive and auxiliary information. All the required details are wrapped up in an organized way that permits easier understanding and repeated usage without code repetition.
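
      As a simple illustration of this composition, the C++ sketch below uses hypothetical types (CashFlow, Leg, Swap), not the actual library's classes, to show how each layer merely aggregates the layer beneath it.

#include <string>
#include <vector>

// Direction of a cash flow from the bank's perspective.
enum class PayReceive { Pay, Receive };

// A cash flow carries its own attributes and auxiliary information.
struct CashFlow {
    std::string currency;
    double      notional;
    PayReceive  payReceive;
    double      paymentTime;  // e.g. year fraction to the payment date
};

// A leg is an ordered collection of cash flows.
struct Leg {
    std::vector<CashFlow> cashFlows;
};

// A swap consists of legs, reusing the same cash-flow object without repetition.
struct Swap {
    std::vector<Leg> legs;
};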

      Objects Interconnection and Architecture

Figure 2.3 illustrates how the objects are interconnected from the architecture perspective.

Figure 2.3 Object Interconnection and Architecture

      The generic interface should be very thin; its sole task is to transfer and map data, reformatting it as necessary. Interfacing is extremely important, as the quant library must be integrated into trading and risk systems to be of value to the business. A badly designed interface will significantly increase the time and cost of developing new products. In the following, the “attributes table” approach is explained as an example of a generic interface for systems.

In trading and risk systems, common attributes such as spot, notional, currency, yield curve, etc. are readily available, and a quant developer can simply pull them out and group them into objects that are fed into the pricing models and/or risk engines. For exotic (or even common) attributes, an attributes table can be created inside the trading system. An example can be seen in Table 2.3.

Table 2.3 Example attributes table

      Once the attributes table is set up inside the trading system, the quant developer can simply loop through the table and pass all the attributes into the model interface. The model interface should be designed so that it can recognize the attributes and map them into the relevant objects. The beauty of this approach is that the looping code is simple and does not change, no matter what the attributes are. This makes the quant developer's job much easier and more standardized. For some risk engines or back-office systems that sit outside the trading system, a risk developer can use similar looping code to read the attributes and call the same model interface. The attributes table approach makes it possible for the same looping code to be used in the trading and downstream systems, for many different products. It is therefore feasible that once quants have developed and added a new product into the trading system, all downstream systems will automatically work.
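
      The C++ sketch below illustrates the idea with hypothetical names (AttributeTable, ModelInterface, priceFromAttributeTable) assumed for illustration only, not the actual interface. The system-side loop simply passes every attribute to the model interface, which recognizes the names and maps them onto the relevant objects; the loop itself is identical for the trading system and every downstream system.

#include <map>
#include <string>
#include <variant>

// A generic attributes table: attribute name mapped to a typed value.
using AttributeValue = std::variant<double, std::string>;
using AttributeTable = std::map<std::string, AttributeValue>;

// The model interface recognises attribute names and maps them onto
// the relevant product/model objects internally.
class ModelInterface {
public:
    void setAttribute(const std::string& name, const AttributeValue& value) {
        attributes_[name] = value;  // mapping into objects happens behind this call
    }
    double price() const { return 0.0; /* delegate to the pricing model */ }
private:
    AttributeTable attributes_;
};

// System-side looping code: it never changes, whatever the product's attributes are.
double priceFromAttributeTable(const AttributeTable& table) {
    ModelInterface model;
    for (const auto& [name, value] : table)
        model.setAttribute(name, value);
    return model.price();
}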

      Object-oriented quant library architecture is fundamental in meeting the challenges of the modern derivatives business. Many banks had to rewrite their quant libraries every few years, wasting a huge amount of time and resources, because their existing libraries were not properly designed and constructed or simply became too complex to handle.

      A quant library should be a child born from the marriage of brilliant mathematical modelling and skilful IT programming. A well-designed and constructed, object-oriented quant library can offer:

      • Integrated business efficiency and much-enhanced productivity, including streamlined interfacing to systems and infrastructures.

      • Standardization of model development and testing process, and minimization of model implementation risks.

      • Application of higher-quality operational control procedures, allowing four eyes to watch a centralized piece of code.

      Finally, an object-oriented quant library should be kept simple. Overly complicated object structures are tempting, but they may in fact defeat the purpose of having an efficient quant library. So keep it simple and object-oriented (KISOO).

      Quantitative Documentation

Derivative models developed by quants must be documented comprehensively. The key