Abstract: Using the most comprehensive source of commercially available data on the US National Market System, we analyze all quotes and trades associated with Dow 30 stocks in 2016 from the vantage point of a single and fixed frame of reference. Contrary to prevailing academic and popular opinion, we find that inefficiencies created in part by the fragmentation of the equity marketplace are widespread and potentially generate substantial profit for agents with superior market access. Information feeds reported different prices for the same equity (violating the commonly supposed economic behavior of a unified price for an indistinguishable product) more than 120 million times, with "actionable" latency arbitrage opportunities totaling almost 64 million. During this period, roughly 22% of all trades occurred while the Securities Information Processor (SIP) and aggregated direct feeds were dislocated. The current market configuration resulted in a realized opportunity cost totaling over $160 million when compared with a single-feed, single-exchange alternative, a conservative estimate that does not take into account intra-day offsetting events.
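For intuition, the following is a minimal sketch (not the paper's actual pipeline) of how dislocations between two feeds might be counted, assuming each feed has already been reduced to a time-sorted stream of best bid/offer quotes observed from one fixed vantage point. The `Quote` type, `find_dislocations` function, and one-cent `min_diff` threshold are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    timestamp_ns: int  # observation time in a single, fixed frame of reference
    bid: float
    ask: float

def find_dislocations(sip_quotes, direct_quotes, min_diff=0.01):
    """Walk two time-sorted quote streams in lockstep and record the start
    of each interval during which the SIP best bid/offer differs from the
    direct-feed best bid/offer by at least min_diff."""
    dislocations = []
    in_dislocation = False
    i = j = 0
    while i < len(sip_quotes) and j < len(direct_quotes):
        s, d = sip_quotes[i], direct_quotes[j]
        differs = abs(s.bid - d.bid) >= min_diff or abs(s.ask - d.ask) >= min_diff
        if differs and not in_dislocation:
            dislocations.append((max(s.timestamp_ns, d.timestamp_ns), s, d))
        in_dislocation = differs
        # Advance whichever feed has the earlier current quote.
        if s.timestamp_ns <= d.timestamp_ns:
            i += 1
        else:
            j += 1
    return dislocations

# Toy usage: the direct feed sees a price change before the SIP does.
sip = [Quote(0, 99.99, 100.01), Quote(5, 100.00, 100.02)]
direct = [Quote(0, 99.99, 100.01), Quote(3, 100.01, 100.03)]
print(find_dislocations(sip, direct))  # one dislocation, starting at t=5
```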
Abstract: Both the scientific community and the popular press have paid much attention to the speed of the Securities Information Processor, the data feed consolidating all trades and quotes across the US stock market. Rather than the speed of the Securities Information Processor, or SIP, we focus here on its importance to efficient price discovery. Via extensions to a previous market model, we experiment with four different coupling mechanisms that operate across the US stock market. Of the four, we find that the SIP contributes most to efficient price discovery.
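The abstract does not spell out the four coupling mechanisms, but the SIP's core coupling role is consolidation: reducing per-exchange top-of-book quotes to a single national best bid and offer (NBBO). A toy version of that reduction, with a hypothetical `consolidate_nbbo` function and made-up exchange names:

```python
def consolidate_nbbo(books):
    """Toy SIP: given per-exchange top-of-book quotes, return the national
    best bid (highest bid) and offer (lowest ask).
    `books` maps exchange name -> (best_bid, best_ask)."""
    best_bid = max(bid for bid, _ in books.values())
    best_ask = min(ask for _, ask in books.values())
    return best_bid, best_ask

# Example: three exchanges quoting the same stock.
books = {"A": (99.98, 100.02), "B": (99.99, 100.03), "C": (99.97, 100.01)}
print(consolidate_nbbo(books))  # (99.99, 100.01)
```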
Abstract: As demonstrated during the recent financial crisis, regulators require additional analytical tools to assess systemic risk in the financial sector. This paper describes one such tool, a novel market modeling and analysis capability. Our model builds upon two leading market models: one which emphasizes market microstructure and another which emphasizes an ecology of trading strategies. We address a limitation of market modeling, namely the consideration of only one dominant trading strategy (i.e., long positions). Our model aligns closely with several widely held stylized facts of financial markets. A final contribution of this work stems from our empirical analysis of the fractal nature of both empirical markets and our market model.
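The abstract does not specify which fractal measure is analyzed; one common choice is the Hurst exponent, sketched below under that assumption. The `hurst_exponent` function and its lag-scaling estimator are illustrative, not the paper's method.

```python
import numpy as np

def hurst_exponent(series, max_lag=100):
    """Estimate the Hurst exponent H from the scaling of lagged differences:
    std(x[t+lag] - x[t]) ~ lag**H.
    H near 0.5 indicates a random walk; H away from 0.5 suggests fractal
    (long-memory) structure of the kind stylized facts attribute to markets."""
    lags = np.arange(2, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    # The slope of the log-log relationship is the Hurst estimate.
    H, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return H

# Sanity check: a simulated random walk should give H near 0.5.
prices = np.cumsum(np.random.standard_normal(10_000))
print(round(hurst_exponent(prices), 2))
```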
Abstract: As the fielding of enterprise systems of systems becomes common, it becomes increasingly important to understand the interactions between the constituent systems as well as the role that human behavior plays. This paper suggests that Agent-Directed Simulation is a valuable and crucial analysis tool for the Systems Engineer. The paper examines the concept of Agent-Directed Simulation for Systems Engineering and then introduces the notion of Human Complex Systems. An analysis infrastructure is described, and a case study is provided to illustrate the concepts.
Abstract: This paper briefly introduces the inherent challenges of systems-of-systems engineering. The paper then describes a solution created by the authors, the Infrastructure for Complex-systems Engineering (ICE). The paper concludes with two case studies making use of various aspects of the ICE: the first is an application to large-venue protection; the second is an application to the modeling of financial markets.
Abstract: In this paper, we describe research applying agent-based modeling to the creation of synthetic financial network data. Creating a dataset of this type presented some unique challenges. First, the dataset we are trying to emulate is large and sparsely connected (20 million nodes, 20 million edges, in 500 GB). Second, it includes multiple types of entities and relationships; a system made up of multiple types of entities with various relationships is tailor-made for agent-based modeling. Third, this dataset is being created as part of a larger project that is building graph analysis tools for massive, dynamic datasets. It is therefore important that we be able to control what the generated dataset contains, so we can test the various parts of our graph analysis system. An initial agent-based model has been created using NetLogo. This prototype is being developed iteratively as we continue to investigate the patterns and other features within the actual dataset. The domain in which the graph analysis tools are to be used is, understandably, of a sensitive nature. We wish to keep the datasets we produce unclassified so they can be released to the academic and analytic communities to aid in collaboration. This presents its own challenge: we need to produce a dataset that is a reasonable facsimile of the actual data for meaningful collaboration, yet not so similar as to represent any unreasonable disclosure of information.
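The paper's prototype is built in NetLogo; the following is a minimal Python sketch of the same idea: generating a sparse graph (roughly one edge per node, mirroring the 20-million-node / 20-million-edge target) with multiple, type-constrained entity and relationship kinds. The entity types, relationship schema, and `generate_graph` function are invented for illustration; the real schema is sensitive and not disclosed.

```python
import random

# Hypothetical entity and relationship types (not the actual schema).
ENTITY_TYPES = ["person", "account", "business"]
EDGE_TYPES = {
    ("person", "account"): "owns",
    ("account", "account"): "transfers_to",
    ("person", "business"): "employed_by",
}

def generate_graph(n_nodes=1000, n_edges=1000, seed=42):
    """Generate a sparse, typed graph as (nodes, edges) lists.
    Edges are only created between type-compatible endpoints, so the
    generator's output can be controlled for testing analysis tools."""
    rng = random.Random(seed)  # seeded for reproducible test datasets
    nodes = [(i, rng.choice(ENTITY_TYPES)) for i in range(n_nodes)]
    edges = []
    while len(edges) < n_edges:
        (u, u_type), (v, v_type) = rng.sample(nodes, 2)
        relation = EDGE_TYPES.get((u_type, v_type))
        if relation is not None:  # reject type-incompatible pairs
            edges.append((u, v, relation))
    return nodes, edges

nodes, edges = generate_graph()
print(len(nodes), len(edges), edges[:3])
```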
Abstract: All models are abstractions of the real world. Determining the appropriate level of abstraction is a balancing of the complexity of the system being modeled, the data resolution available from data sources and subject matter experts, the needs of decision makers, and the limitations of computational and developmental resources. Results from algorithmically linear, physical, closed-system simulations can often be improved by using higher-resolution inputs and by modeling lower-order phenomena. It is not as obvious, however, that ever-increasing resolution will necessarily improve the results of modeling complex systems. Two military course-of-action (COA) development case studies are examined to determine what level of model resolution is sufficient to provide significant insight into COA development. We examine the appropriate level of fidelity for modeling force structures and behaviors, as well as the appropriate level of detail for modeling the terrain and physical environment. Methods for evaluating and comparing the results of varying model resolutions are presented.