
Wednesday, January 8, 2014

Virtualizing Computational Workloads

Commentary:  This is a general discussion into which I have wrapped a unique use of virtualization. In the short term, companies can benefit from offloading heightened computational demands: they may prefer to purchase computational power for a limited time rather than make the capital expenditure of purchasing and expanding their own systems. A virtualized environment can also solve issues relating to geographically dispersed personnel. Overall, we are a long way from meaningfully and effectively using the excess computational power residing on the web or across an organization, but this discussion hopefully gives some insight into how to use that excess power.


Virtualizing Computational Workloads
by
JT Bogden, PMP  

Virtualized computing can occur over any internetwork, including the World Wide Web. The concept centers on distributing the use of excess system resources such as computational power, memory, and storage space in a service-oriented or utilitarian architecture; in simple terms, internet-based resource provisioning. Multiple virtualized environments can exist on the same network, although a given piece of physical hardware can be assigned to and managed by only a single virtualization engine. Each virtualized environment, a cloud, encapsulates a unique group of participating hardware resources that is managed through virtualization; Figure 1. Demands for services are sent out into the cloud to be processed, and the results are returned to the virtual machine.

Figure 1:  The Virtualized Concept


The virtual machine can be as simple as a browser, or it can be a complete set of applications, including the operating system, running on a terminal through a thin client such as Citrix. The cloud service can be as simple as a search service such as Google and/or database storage of information. Simple cloud examples include SkyDrive, MobileMe, and now iCloud. iCloud offers backup, storage, and platform synchronization services to its users over the World Wide Web.

Virtualization

The virtualization concept is one in which operating systems, servers, applications, management, networks, hardware, storage, and other services are emulated in software, yet to the end user the result is completely independent of the hardware or the unique technological nuances of system configurations. Examples of virtualization include software such as VMware Fusion, in which Microsoft's operating system and software run on an Apple MacBook. Another example is the honeypot used in computer network defense: software running on a desktop computer gives a hacker attempting to penetrate the system the appearance of a real network inside the DMZ. The idea is to decoy the hacker away from the real systems using a fake one emulated in software. An example of hardware virtualization is the soft modem; PC manufacturers found that it is cheaper to emulate some peripheral hardware in software, though the trade-off is diminished system performance because the processor is loaded with the emulation. The Java virtual machine is another example of virtualization: a platform-independent engine that lets Java developers write the same code for every supported platform and run it as mobile code without accounting for each platform individually.

Provisioning In Virtualization

Provisioning in a virtualized environment occurs in several ways once hardware resources are inventoried and made available for loading. First, physical resources are provisioned by management rules in the virtualization software, usually at the load management tier, Figure 1. Second, users of a virtual machine can schedule a number of processors, the amount of RAM required, the amount of disk space, and even the degree of precision required for their computational needs. This occurs in the administration of the virtualized environment tier, Figure 1. Thus, idle or excess resources can, in effect, be economically rationed by an end user who is willing to pay for the level of service desired. In this way the end user enters into an operating lease for the computational resources for a period of time rather than making a capital purchase of them.
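
To make the scheduling idea concrete, here is a minimal sketch in Python of how an end user's request for processors, RAM, disk, precision, and lease duration might be expressed and priced. The field names and rates are hypothetical, not a real provider's API.

from dataclasses import dataclass

@dataclass
class ProvisionRequest:
    """Resources an end user schedules for a block of time (hypothetical fields)."""
    processors: int       # number of virtual CPUs requested
    ram_gb: int           # memory in gigabytes
    disk_gb: int          # storage in gigabytes
    hours: int            # duration of the operating lease
    precision: str        # e.g. "single" or "double" floating-point precision

# Hypothetical rates; a real provider would publish its own price schedule.
RATES = {"cpu_hour": 0.05, "ram_gb_hour": 0.01, "disk_gb_hour": 0.002}

def lease_cost(req: ProvisionRequest) -> float:
    """Estimate the cost of leasing the requested resources rather than buying hardware."""
    hourly = (req.processors * RATES["cpu_hour"]
              + req.ram_gb * RATES["ram_gb_hour"]
              + req.disk_gb * RATES["disk_gb_hour"])
    return hourly * req.hours

request = ProvisionRequest(processors=16, ram_gb=64, disk_gb=500, hours=72, precision="double")
print(f"Estimated lease cost: ${lease_cost(request):.2f}")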

Computational Power Challenges

I have built machines with multiple processors and arrayed full machines to handle complex computing requirements. Multi-processor machines were used to solve processor-intensive problem sets such as Computer-Aided Design (CAD) demands or high-transaction SQL servers. Not only were multiple processors necessary, but so were multiple buses and drive stacks in order to marginalize contention issues. The operating system typically ran on one bus while the application ran over several other buses accessing independent drive stacks. Vendor solutions have since progressed with newer approaches to storage systems and servers in order to better support high availability and demand. In another application, arrayed machines were used to handle intensive animated graphics compilations involving solid modeling, ray tracing, and shadowing on animations running at 32 frames per second. This meant that a 3-minute animation had 5,760 frames that needed to be crunched 3 different times. In solving this problem, the load was broken into sets. Parallel machines crunched through the solid model sets, handing off to ray-tracing machines and then to shadowing machines. In the end, the parallel tracks converged on a single machine where the sets were reassembled into the finished product. System failures limited work stoppages to a small group of frames that could be 're-crunched' and then injected back into the production flow.
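
As an illustration of that staged workflow, the following Python sketch splits the 5,760 frames into sets and runs each set through placeholder solid-modeling, ray-tracing, and shadowing stages, re-crunching only a failed set. It is a toy model of the flow described above, not the actual production tooling.

# Frames are split into sets, each set passes through three stages in order,
# and a failed set can be redone without restarting the whole job.
FRAMES = list(range(5760))          # 3 minutes at 32 frames per second
SET_SIZE = 120

def solid_model(frame): return ("solid", frame)       # placeholder stage
def ray_trace(result):  return ("traced", result)     # placeholder stage
def shadow(result):     return ("shadowed", result)   # placeholder stage

def crunch_set(frame_set):
    """Run one set of frames through all three stages."""
    return [shadow(ray_trace(solid_model(f))) for f in frame_set]

frame_sets = [FRAMES[i:i + SET_SIZE] for i in range(0, len(FRAMES), SET_SIZE)]
finished = []
for frame_set in frame_sets:
    try:
        finished.extend(crunch_set(frame_set))
    except Exception:
        finished.extend(crunch_set(frame_set))   # re-crunch only the failed set

print(f"{len(finished)} frames reassembled into the finished product")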

These kinds of problem sets are becoming more common as computational demands on computers become more pervasive in society. Unfortunately, software and hardware configurations remain somewhat unchanged and in many cases are unable to handle the stresses of complex or high-demand computations. Many software packages cannot recognize more than one processor, or if they do handle multiple processors, the loading is batched and prioritized using a convention like first-in, first-out (FIFO) or stacked job processing. This is fine for production uses of computational power like the examples given earlier. But what if the computational demand is not production oriented and instead involves sentient processing or manufactures knowledge? I would like to explore an interesting concept in which computational power in the cloud is arrayed in a virtualized neural net.

Arraying for Computational Power in New Ways

Figure 2: Computational Node


One solution is to leverage arcane architectures in a new way. I begin by creating a virtual computational node in software, Figure 2, to handle an assigned information process, then organize hundreds or even tens of thousands of computational nodes on a virtualized backplane, Figure 3. The nodes communicate on the virtual backplane, listening for information being passed, processing it, and publishing the new information back to the backplane. A virtualized general manager administers the backplane and can array the nodes dynamically in series or parallel to solve computational tasks. The node arrays should be designed using object-oriented concepts: each node encapsulates memory, processor power, and its own virtual operating system and applications; the nodes are arrayed polymorphically, and each node inherits public information. In this way, software developers can design workflow management methods, like manufacturing flow, that array nodes and use queues to reduce crunch time, avoid bottlenecks, and distribute the workload. Mind you, this is not physical but virtual. The work packages are handed off to the load manager, which tasks the physical hardware in the cloud, Figure 3.
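
A minimal Python sketch of the node-and-backplane idea follows, assuming a simple publish/subscribe backplane in a single process with two nodes arrayed in series. All class and topic names are illustrative; a real implementation would distribute the nodes across physical hardware through the load manager.

class Backplane:
    """Virtual backplane: nodes subscribe to topics, hear messages, and publish results."""
    def __init__(self):
        self.subscribers = {}          # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers.get(topic, []):
            callback(payload)

class Node:
    """A virtual computational node: listens on one topic, publishes to another."""
    def __init__(self, backplane, listen_topic, publish_topic, work):
        self.backplane = backplane
        self.publish_topic = publish_topic
        self.work = work
        backplane.subscribe(listen_topic, self.handle)

    def handle(self, payload):
        self.backplane.publish(self.publish_topic, self.work(payload))

bp = Backplane()
Node(bp, "raw", "cleaned", lambda x: x.strip().lower())      # first node in the series
Node(bp, "cleaned", "result", lambda x: f"processed:{x}")    # second node in the series
bp.subscribe("result", print)
bp.publish("raw", "  Sensor Reading 42  ")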

Figure 3:  Complex Computational Architecture


This concept is not new. The telecommunications industry uses a variation of it for specialized switching applications rather than general-use computing. There are also array processors used for parallel processing. Even the fictional story Digital Fortress by Dan Brown centered on a million-processor system. Unfortunately, none of these concepts were designed for general-use computing. If arrayed computational architectures were designed to solve complex and difficult information sets, the possibilities would be enormous; for example, arraying nodes to monitor for complex conditions, make decisions on courses of action, and enact the solution.

The challenges of symbolic logic processing can be overcome by using arrayed processing to virtualize neural nets. A combination of sensory arrays for inputs, node-to-node (neural-to-neural) processing, and valid pathways or lines of logic would provide the means to complete complex processing and output results that are otherwise difficult to achieve. If enough physical hardware participates on the World Wide Web, the web could become an enormous neural processor solving some of the most incredibly complex computational problem sets.
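
To show what node-to-node processing along weighted pathways could look like, here is a tiny feed-forward sketch in Python: a sensory array feeds a hidden layer of nodes, which in turn feeds an output node. The weights and inputs are made up purely for illustration; a virtualized neural net would distribute these nodes across the cloud.

import math

def activate(total):
    return 1.0 / (1.0 + math.exp(-total))     # sigmoid activation

def layer(inputs, weight_rows):
    """One layer of node-to-node processing: weighted sums passed through an activation."""
    return [activate(sum(w * x for w, x in zip(weights, inputs)))
            for weights in weight_rows]

sensory_inputs = [0.8, 0.2, 0.5]                          # sensory array
hidden = layer(sensory_inputs, [[0.4, -0.6, 0.9],
                                [0.7, 0.1, -0.3]])        # two hidden nodes
output = layer(hidden, [[1.2, -0.8]])                     # one output node
print(f"Line-of-logic output: {output[0]:.3f}")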

The World Wide Web and Computational Limitations

This architecture within a cloud is limited to developing knowledge, or lines of logic. Gaps or breaks in a line of logic may be inferred based on history, which is also known as a quantum leap in knowledge, or wisdom. Wisdom systems are different from knowledge systems. Knowledge is highly structured, and its formation can be automated more easily, whereas wisdom is less structured, having gaps in knowledge and information. Wisdom relies on inference and intuition in order to select valid information in its absence or out of innuendo, ambiguity, or otherwise noise. Wisdom is more of an art, whereas knowledge is more of a science.

Nonetheless, all the participating computers on the World Wide Web could enable a giant simulated brain. Of course, movies such as The Lawnmower Man, Demon Seed, Colossus: The Forbin Project, and WarGames go the extra mile, making the leap to self-aware machines that conquer the world. For now, though, let's just use them to solve work-related problems.

References:

Brown, D. (2000). Digital Fortress. St. Martin's Press. ISBN 9780312263126.

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach (3rd ed.). New York: John Wiley & Sons.

Knowledge Management Brief

Comment: This brief was originally written in April 2009 and has been updated for reposting on December 21, 2013. Knowledge Management (KM) is a broad field that includes not only the storage and recovery of information but also the sourcing, mining, and use of information across an operation in order to achieve a durable competitive advantage.

Knowledge Management Brief
by 
JT Bogden, PMP

Humans have passed through many ages, from the Stone Age to the Information Age, and with each age came new ways of living. The Information Age has brought dispersed populations together through internetworking technologies to be productive in new ways. Likewise, business organizations are living, breathing, and conscious entities that require proper care and feeding in order to mature into productive knowledge workhorses. Knowledge management is an instrument in this endeavor. The goal of any organization is to skillfully employ the resources available in order to achieve durable competitive advantage.

Knowledge Management (KM) is many things to many people. Nonetheless, KM is a concept in which an enterprise consciously and comprehensively brings to bear its resources to gather, organize, analyze, refine, and disseminate information in order to develop meaningful knowledge.  Most enterprises have some kind of knowledge management framework in place. Advances in technology and understanding often create opportunities to improve KM in practice. However, these technologies and resources are not always applied in the most effective manner.

A conscious organization has the cognizant attribute of reason, or the capability to form conclusions, inferences, and judgments in a reasonable amount of time, that is, with low organizational latency. Organizational latency is the period of time between the discovery of a need and its fulfillment. This requires foundational capabilities to recall experiences and information, form complex information relationships, and then act on newly formed knowledge. Operations such as data mining, referential data storage, and organizational computation capabilities are prerequisites to knowledge management.

Data mining is often used in business intelligence, which is the ability to source and then organize information to identify patterns and establish complex relationships to exploit for profit. The assumption is that the information is already collected and just needs to be found and then related; this is not always true, and data miners may have to explore outside the organizational information stores. Data mining patterns generally fall into five categories (a brief sketch after the list illustrates the first two):
  1. Associative: Looking for patterns where one event is connected to another event.
  2. Sequence or path analysis: Looking for patterns where one event leads to another later event.
  3. Classification: Looking for new patterns.
  4. Clustering: Finding and visually documenting groups of facts previously unknown.
  5. Forecasting: Discovering patterns in data that can lead to reasonable predictions about the future, often viewed as predictive analytics, which generally involves both present and future models.
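
Here is a brief Python sketch of the first two categories, associative and sequence/path analysis, run over a toy transaction log and clickstream. The data and examples are invented for illustration only.

from collections import Counter
from itertools import combinations

transactions = [
    {"milk", "bread", "eggs"},
    {"milk", "bread"},
    {"bread", "eggs"},
    {"milk", "bread", "butter"},
]

# Associative: which pairs of events occur together most often?
pair_counts = Counter()
for basket in transactions:
    pair_counts.update(combinations(sorted(basket), 2))
print("Most common association:", pair_counts.most_common(1))

# Sequence/path analysis: which event most often follows another over time?
clickstream = ["home", "search", "product", "cart", "search", "product", "cart", "checkout"]
follow_counts = Counter(zip(clickstream, clickstream[1:]))
print("Most common path step:", follow_counts.most_common(1))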
Referential data storage is an organization’s memory. Critical to effective recall is how the organization structures data storage. Innumerable approaches have been developed. Files have been stored in shared directories, databases of various forms have emerged, automated document storage and retrieval systems maintain document control, and artificial intelligence systems build experience. Highly advanced storage systems use spherical reference methods as opposed to rectangular retrieval methods as well as holographic storage technologies.

Organizational computational power is the cumulative processing capability of an organization, including both humans and machines. Elemental computational power can be parallel, arrayed, and/or distributed, and the elements can be thought of as neural agents, either human or machine. The combined sum of this power relates to an organization's horsepower to develop knowledge. There is some debate over how to determine and measure organizational computational power. Machine computational power is often expressed in terms of instructions processed per unit time. Human computational power is more difficult to quantify, and historical attempts have centered on intelligence and emotional quotients (IQ and EQ) as well as the processing capability of biological neural networks. Metrics for knowledge performance in the civilian world are Key Performance Indicators, which relate to objectives and effects in effects-based outcome methodologies. Nonetheless, computational power is a horsepower indicator for solving complex problem sets.
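
Purely as an illustration of the aggregation idea, and acknowledging the measurement debate noted above, a simple Python sketch might sum machine throughput with an assumed per-person equivalent for human neural agents. Every figure here is an assumption, not a validated metric.

# Illustrative aggregation of organizational computational power.
machine_agents = {"render_farm": 4.0e13, "sql_cluster": 1.5e13}   # instructions/sec (assumed)
human_agents = {"analysts": 25, "engineers": 40}                   # headcount (assumed)

HUMAN_EQUIVALENT = 1.0e12   # assumed per-person contribution, purely illustrative

total = sum(machine_agents.values()) + sum(human_agents.values()) * HUMAN_EQUIVALENT
print(f"Organizational computational power (illustrative): {total:.2e} instructions/sec")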

By properly applying methods and processes, an organization can develop knowledge quickly and succinctly for use in its daily operations, increasing effectiveness and creating competitive differentials. In doing so, the key measure of effectiveness is decreasing organizational latency. The major focus areas for developing meaningful knowledge management systems are how organizational memory, information resourcing, and computational architectures are formed. The goal of any organization is to skillfully employ the resources available in order to achieve durable competitive advantage.

Tuesday, January 7, 2014

Healthcare Information Virtual Environment System (HIVES)

Comment:  This post was an outgrowth of a project we worked on in my master's program. The problem set was complex and required the project team to scope, identify, and then manage the risks, objectives, and projects necessary to implement the overarching effort.

Healthcare Information Virtual Environment System (HIVES)
by
JT Bogden, PMP

Healthcare information systems involve a vast array of equipment, clinics, labs, governmental agencies, manufacturers, doctors' offices, and innumerable other organizations providing, collecting, and processing information. Classic issues of stovepiping, or 'silos,' have emerged, causing inefficiencies in the industry such as multiple lab tests and/or diagnostics being prescribed. The advent of a nationalized health records system increases the complexity of these networks as well. In order to gain management and control over these information systems, the American National Standards Institute (ANSI) hosts the Healthcare Information Technology Standards Panel (HITSP), one of several cooperative efforts between industry and government to create standards. However, all too often the standards result in a highly complex architecture and system design, to the detriment of efficiency. This is because early standards and architectures often focus on resolving major issues with little forethought about the broader architecture; many argue that little information is known or that the project is far too complex. Years later, this results in an effort to simplify and streamline the system again.

Allowing a Frankenstein architecture to emerge would be a travesty when the initial objectives are to streamline healthcare processes, removing redundancies and latencies in the current system. Planners should design the system for streamlined performance early. Large-scale projects like these are not new, and history tells us many good things. Complex systems such as the computer, the car, and the internet have emerged out of a democratization of design: literally tens of thousands of people have contributed to these systems, and those models are one approach to resolving the large-scale, complex information systems involved in healthcare. What has emerged out of the democratization of design is a standardization of interfaces in a virtualized environment. For example, headlamps are nearly identical for every car, with standard connectors and mounts, even though the headlight assemblies are artfully different on each car. The computer has standard hardware and software interfaces even though the cards and software perform different functions. The virtual computer is independent of vendor product specifications; instead, the vendor performs to a virtual computer standard in order for its products and services to function properly.

Let us take a moment to explain that virtualization is the creation of a concept, thing, or object as an intangible structure for the purpose of study, order, and/or management. The practice is used across a breadth of disciplines, including particle physics and information science. Within the information realm, there are several different virtualization domains, including software, hardware, training, and management virtualization. My interest is not in the use of any specific virtualized technology but in exploring healthcare virtualization management as a practice.

I propose a Healthcare Information Virtual Environment System (HIVES), Figure 1, which is essential to reducing complexity and establishing a standard for all those participating in the healthcare industry. The virtual environment is not a technological system; it is a management system, or space, in which medical information is exchanged by participating objects within the virtual environment. Real things like clinics, offices, data centers, and equipment sit on the virtualized backplane or space. HIVES would have a set of standards for participating equipment, clinics, hospitals, insurance agencies, data centers, and so on connecting to the environment in order to exchange information. Many may remark that such standards exist. I am able to locate dozens of vendor products and services supporting hardware, software, and even service virtualization, but none of them constitutes a standard for virtualized management of the overarching healthcare environment, which is what the nationalized healthcare system is attempting to manage. I have reviewed HITSP and noted there is no clear delineation of a virtualized managed environment.

Figure 1: HIVES


In such an environment, I envision that data placed into the environment would have addressing and security headers attached. In this way, data is limited to those who are listening and who have authorization to gather, store, and review specific information. For example, a doctor prescribes a diagnostic test. An announcement of the doctor's request is made in the environment, addressed to testing centers. Scheduling software at a participating testing facility picks up the request and schedules the appointment, then announces the appointment in the virtualized environment, where the doctor's office software is listening to receive the appointment data. Once the patient arrives, the machines perform the diagnostics and place the patient's data back in the environment. An analyst picks up the record, reviews it, and posts the assessment in the environment. In the meantime, a participating data center that holds the patient's record is listening, collects all new information posted in the environment regarding the patient, and then serves those records to authenticated requests. The patient returns to the doctor's office, which requests the patient's record from the data center through the environment.
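
A minimal Python sketch of that exchange follows: a message wrapped with hypothetical addressing and security headers is placed into the environment, and a listening participant checks whether it is addressed to its role and whether its clearance permits processing. The field names and clearance levels are assumptions, not an existing HIVES or HITSP specification.

import json
import uuid
from datetime import datetime, timezone

def hives_message(sender, audience, min_clearance, payload):
    """Wrap clinical data with hypothetical addressing and security headers."""
    return {
        "header": {
            "message_id": str(uuid.uuid4()),
            "sender": sender,
            "audience": audience,              # addressing: which roles should listen
            "min_clearance": min_clearance,    # security: who may read
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "payload": payload,
    }

def should_process(message, participant_role, clearance):
    """A listening participant acts only if addressed and authorized."""
    header = message["header"]
    return participant_role in header["audience"] and clearance >= header["min_clearance"]

order = hives_message(
    sender="dr_office_17",
    audience=["testing_center"],
    min_clearance=2,
    payload={"patient_id": "P-00231", "order": "lipid panel"},
)
print(json.dumps(order, indent=2))
print("Testing center may act:", should_process(order, "testing_center", clearance=3))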

The advantages of having such an environment, whether called HIVES or something else, are enormous: the patient's records are available to all participants in the environment; security levels and access can be administered efficiently to ensure HIPAA and other security compliance standards; bio-surveillance data is more readily available, with higher accuracy, in the data centers; the environment can be an industry-driven standard managed through a consortium; and the government could be an equal participant in the environment.

Moreover, to be a participant, a manufacturer, clinic, lab, hospital, doctor's office, data center, or any other organization would have to meet the clearly defined standards and become a consortium participant at some level. Thus, the complexity of the architecture and systems interfacing can be tremendously reduced, achieving the stated objectives of healthcare reform and streamlining.