Wednesday, January 8, 2014

Personal Data Storage

This post is a departure from the beaten track as we will discuss Redundant Arrays of Independent Disks, RAIDs. Having reliable data storage is essential to a personal data storage plan. Many folks may chuckle at the thought of a personal data storage plan, but there are good reasons for having one, especially if you have lost data in the past. If you store large volumes of digital information such as tax records, medical records, vehicle documents, digital music, movies, photo collections, graphics, and libraries of source code or articles, then a data storage plan is essential to reduce the risk of loss. Loss can occur due to disk drive failures, accidental deletions, or hardware controller failures that wipe drives. Online services offer affordable subscription plans to back up your PC or store data in a cloud, but you risk the loss of privacy when using these services. Having local, portable, and reliable data storage is the best approach, and a personal data management plan is the centerpiece of the effort.

Personal Data Storage 
by
JT Bogden, PMP

Such a plan should be designed around two points. First, there should be a portable, detachable, and reliable independent disk drive system. Second, there should be a backup system. We will focus on the first point in this post.

Figure 1: Completed RAID
After a lot of research, I settled on a barebones SANS Disk Raid TR4UT+(B) model, Figure 1. The device has a maximum capacity of 16 TB, supports up to USB 3.0, and has an option to operate from a controller card, improving data transfer rates beyond USB 3.0. Fault tolerance methods of cloning, numerous RAID levels, and JBOD are supported as well. Thus, the unit is well positioned for long-term, durable use.

Since the device is barebones, I had to find drives that are compatible. Fortunately, the device was compatible with 11 different drives ranging from 500 GB up to 4 TB across three vendors. I had to figure out which characteristics mattered and determine which of the drives were optimal for my needs. The approach I used was a spreadsheet matrix, Figure 2. The illustrated matrix is a shortened form and did not consider the 4 TB drive, which was cost prohibitive from the start, as were several of the other drives. The 3 TB drive was used as a baseline to create a spread among the other options. I computed the coefficient of performance, CP, for each characteristic, then averaged them for the overall performance. In the end, I selected the Hitachi UltraStar 1TB in this example and purchased 4 of them. They are high-end server drives that are quiet and can sustain high data transfer rates for long periods of time.

Figure 2: Decision Matrix for Drive Selection and Purchase
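As an aside, the overall-performance calculation is easy to reproduce in a few lines of Python. This is a minimal sketch, assuming hypothetical attributes, values, and a simple average of per-attribute CP ratios against the 3 TB baseline; the actual spreadsheet used more criteria and real pricing.

# Sketch of the decision matrix: hypothetical drive attributes and values.
drives = {
    "Hitachi UltraStar 1TB": {"cost_per_tb": 110, "sustained_mbps": 150, "warranty_yrs": 5},
    "Vendor B 2TB":          {"cost_per_tb": 95,  "sustained_mbps": 140, "warranty_yrs": 3},
    "Vendor C 3TB":          {"cost_per_tb": 120, "sustained_mbps": 160, "warranty_yrs": 3},
}

def coefficient_of_performance(specs, baseline):
    # Cost is inverted so that a cheaper drive scores above 1.0 against the baseline.
    cp_cost = baseline["cost_per_tb"] / specs["cost_per_tb"]
    cp_speed = specs["sustained_mbps"] / baseline["sustained_mbps"]
    cp_warranty = specs["warranty_yrs"] / baseline["warranty_yrs"]
    return (cp_cost + cp_speed + cp_warranty) / 3   # overall performance = average CP

baseline = drives["Vendor C 3TB"]   # the 3 TB drive creates the spread
for name, specs in drives.items():
    print(name, round(coefficient_of_performance(specs, baseline), 2))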

Figure 3: Installed Drives 
After selecting, purchasing, and installing the drives, Figure 3, RAID 5 was chosen for the drive configuration. RAID 5 keeps the array running if a single drive fails, allowing the failed drive to be hot swapped, and it provides more usable disk space than the mirrored modes. RAID 5 is a cost-effective mode providing good performance and redundancy, although writes are a little slower because of the parity calculations.
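The capacity trade-off is straightforward arithmetic. Below is a minimal sketch, assuming the four 1 TB drives described above; actual usable space will be slightly less after formatting overhead.

def raid5_usable_tb(drive_count, drive_size_tb):
    # RAID 5 spreads one drive's worth of parity across the array, so usable
    # space is (N - 1) * size and the array tolerates a single drive failure.
    if drive_count < 3:
        raise ValueError("RAID 5 requires at least three drives")
    return (drive_count - 1) * drive_size_tb

print(raid5_usable_tb(4, 1.0))   # 3.0 TB usable from four 1 TB drives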

The final part of the process was to initialize and format the drives. The File Allocation Table formats, FAT and FAT32, are not viable options as they provide little recovery support. New Technology File System, NTFS, improves reliability and security among other features. At the partitioning level there is the GUID Partition Table, GPT, which improves upon the older Master Boot Record scheme and breaks through its size and partition limitations. Current versions of Mac OS and MS Windows can read and write GPT-partitioned disks, given a file system both support. Therefore, in a forward-looking expectation of continued movement toward this layout, the RAID was initialized with GPT and then formatted. The formatting process took a long time.
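For reference, the initialization step can be scripted. Here is a minimal, hypothetical sketch driving Windows' diskpart from Python; the disk number and drive letter are placeholders, NTFS is shown only as an example file system, the clean command is destructive, and the script must run from an elevated prompt, so it defaults to a dry run that only prints the commands.

import subprocess
import tempfile

DISK_NUMBER = 1   # placeholder -- confirm with diskpart's "list disk" first
DISKPART_SCRIPT = f"""
select disk {DISK_NUMBER}
clean
convert gpt
create partition primary
format fs=ntfs quick label=RAID
assign letter=R
"""

def initialize_raid(dry_run=True):
    if dry_run:
        print(DISKPART_SCRIPT)   # review before committing; "clean" wipes the disk
        return
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(DISKPART_SCRIPT)
        script_path = f.name
    subprocess.run(["diskpart", "/s", script_path], check=True)

initialize_raid()   # dry run by default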

In the end, the RAID unit was accessible by both Windows and the MacBook Pro. All the data and personal information scattered across disparate USB drives, memory sticks, and the local machines were consolidated onto the RAID device. For the first time, all my music, movies, professional files, and personal data were in one place with the strongest protection. The final cost was less than $650, and the cost can be kept down by shopping around for the components (Amazon, in my case). It took about 8 hours of direct effort, although the formatting and file transfers ran while I did other things.

While I will still use my memory sticks and a 1 TB portable USB drive with my notebooks, the RAID is the primary storage device. It can be moved relatively easily if I change locations and/or swap it between computers if necessary. The device can also be installed as a serverless network drive and hung off of a wireless router. I prefer not to use it in that manner as the risk of exposure or loss of privacy slightly increases.

Overall, the system is quiet, has a low power drain while in operation, and provides heightened data protection. I encourage others to rethink how they are storing their data and invest in a solid, reliable solution. As solid state drives come into increasing use, traditional spinning-platter drives will drop in price dramatically. This will enable more folks to build drive arrays like mine at lower cost, then convert them later to solid state systems as those prices drop.

Virtualizing Computational Workloads

Commentary: This is a general discussion into which I have wrapped a unique use of virtualization. In the short term, companies can benefit from offloading heightened computational demands. They may prefer to purchase computational power for a limited time rather than make the capital expenditure of purchasing and expanding their own systems. The virtualized environment can also solve issues relating to geographically dispersed personnel. Overall, we are a long way from meaningfully and effectively using the excess computational power residing on the web or across an organization. This discussion, though, hopefully gives some insight into how to use that excess power.


Virtualizing Computational Workloads
by
JT Bogden, PMP  

Virtualized computing can occur over any internetwork system, including the World Wide Web. The concept centers on distributing the use of excess system resources such as computational power, memory, and storage space in a service-oriented or utilitarian architecture; in simple terms, Internet-based resource provisioning. Multiple virtualized environments can exist on the same network, although a given piece of physical hardware can only be assigned to and managed by a single virtualization engine. Each virtualized environment, a cloud, encapsulates a unique group of participating hardware resources which is managed through virtualization, Figure 1. Demands for services are then sent out into the cloud to be processed, and the results are returned to the virtual machine.

Figure 1:  The Virtualized Concept


The virtual machine can be as simple as a browser or can be the complete set of applications, including the operating system, running on a terminal through thin clients such as Citrix. The cloud service can be as simple as a search service such as Google and/or database storage of information. Simple cloud examples are SkyDrive, MobileMe™, and now iCloud™. iCloud™ offers backup services, storage services, and platform synchronization services to its users over the World Wide Web.

Virtualization

The virtualization concept is one in which operating systems, servers, applications, management, networks, hardware, storage, and other services are emulated in software, so that to the end user the service is completely independent of the hardware or the unique technological nuances of system configurations. Examples of virtualization include software such as VMware Fusion, in which Microsoft's operating system and software run on an Apple MacBook. Another example of virtualization is the honeypot used in computer network defense. Software runs on a desktop computer that gives a hacker attempting to penetrate the system the appearance of a real network from inside the DMZ. The idea is to decoy the hacker away from the real systems using a fake one emulated in software. An example of hardware virtualization is the soft modem. PC manufacturers found that it is cheaper to emulate some peripheral hardware in software. The problem with this is diminished system performance due to the processor being loaded with the emulation. The Java virtual machine is another example of virtualization. This platform-independent engine permits Java coders to write identical code for all supported platforms and lets that code function as mobile code without accounting for each platform.

Provisioning In Virtualization

Once hardware resources are inventoried and made available for loading, provisioning in a virtualized environment occurs in several ways. First, physical resources are provisioned by management rules in the virtualization software, usually at the load management tier, Figure 1. Second, users of a virtual machine can schedule a number of processors, the amount of RAM required, the amount of disk space, and even the degree of precision required for their computational needs. This occurs in the administration of the virtualized environment tier, Figure 1. Thus, idle or excess resources can, in effect, be economically rationed by an end user who is willing to pay for the level of service desired. In this way the end user enters into an operating lease for the computational resources for a period of time. No longer will the end user need to make a capital purchase of his computational resources.
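To make the two provisioning paths concrete, here is a minimal sketch in Python; the host pool, request fields, and lease rate are illustrative assumptions, not any particular vendor's API.

from dataclasses import dataclass

@dataclass
class Request:
    cpus: int
    ram_gb: int
    disk_gb: int
    hours: int
    rate_per_cpu_hour: float = 0.10   # illustrative lease rate

@dataclass
class Host:
    name: str
    free_cpus: int
    free_ram_gb: int
    free_disk_gb: int

def provision(request, hosts):
    # Load-management tier: place the request on the first host that fits,
    # and quote the operating-lease cost for the scheduled time period.
    for host in hosts:
        if (host.free_cpus >= request.cpus and
                host.free_ram_gb >= request.ram_gb and
                host.free_disk_gb >= request.disk_gb):
            host.free_cpus -= request.cpus
            host.free_ram_gb -= request.ram_gb
            host.free_disk_gb -= request.disk_gb
            cost = request.cpus * request.hours * request.rate_per_cpu_hour
            return host.name, cost
    return None, 0.0

hosts = [Host("idle-desktop-01", 4, 8, 500), Host("lab-server-02", 16, 64, 2000)]
print(provision(Request(cpus=8, ram_gb=32, disk_gb=100, hours=24), hosts))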

Computational Power Challenges

I have built machines with multiple processors and arrayed full machines to handle complex computing requirements. Multi-processor machines were used to solve processor-intensive problem sets such as Computer Aided Design, CAD, demands or high-transaction SQL servers. Not only were multiple processors necessary but so were multiple buses and drive stacks in order to marginalize contention issues. The operating system typically ran on one bus while the applications ran over several other buses accessing independent drive stacks. Vendor solutions have progressed with newer approaches to storage systems and servers in order to better support high availability and demand. In another application, arrayed machines were used to handle intensive animated graphics compilations that involve solid modeling, ray tracing, and shadowing on animations running at 32 frames per second. This meant that a 3 minute animation had 5760 frames that needed to be crunched 3 different times. In solving this problem, the load was broken into sets. Parallel machines crunched through the solid model sets, handing off to ray tracing machines and then to shadowing machines. In the end the parallel tracks converged into a single machine where the sets were re-assembled into the finished product. System failures limited work stoppages to a small group of frames that could be 're-crunched' then injected into the production flow.
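The bookkeeping behind that workflow is simple to sketch. The frame counts below follow the example (3 minutes at 32 frames per second), while the batch size and stage names are assumptions made for illustration.

FRAMES = 3 * 60 * 32          # 5760 frames, crunched once per stage
STAGES = ["solid_model", "ray_trace", "shadow"]
BATCH = 240                   # frames handed to each parallel machine

def make_batches(total, size):
    return [range(start, min(start + size, total)) for start in range(0, total, size)]

batches = make_batches(FRAMES, BATCH)
print(len(batches), "batches per stage,", len(batches) * len(STAGES), "work packages total")

# A machine failure only forces its own batch to be re-crunched and re-injected.
def recrunch(batch, stage):
    return "re-running " + stage + " for frames %d-%d" % (batch.start, batch.stop - 1)

print(recrunch(batches[3], "ray_trace"))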

These kinds of problem sets are becoming more common today as computational demands on computers become more pervasive in society. Unfortunately, software and hardware configurations remain somewhat unchanged and in many cases unable to handle the stresses of complex or high-demand computations. Many software packages cannot recognize more than one processor, or if they do handle multiple processors, the loading is batched and prioritized using a convention like first in, first out (FIFO) or stacked job processing. This is fine for a production use of the computational power as given in the examples earlier. However, what if the computational demand is not production oriented but instead is sentient processing or manufactures knowledge? I would like to explore an interesting concept in which computational power in the cloud is arrayed in a virtualized neural net.

Arraying for Computational Power in New Ways

Figure 2: Computational Node


One solution is to leverage arcane architectures in a new way. I begin with the creation of a virtual computational node in software, Figure 2, to handle an assigned information process. Then I organize hundreds or even tens of thousands of computational nodes on a virtualized backplane, Figure 3. The nodes communicate over the virtual backplane, listening for information being passed, processing it, and publishing the new information back to the backplane. A virtualized general manager provides administration of the backplane and is capable of arraying the nodes dynamically in series or parallel to solve computational tasks. The node arrays should be designed using object-oriented concepts. Encapsulated in each node is memory, processor power, and its own virtual operating system and applications. The nodes are arrayed polymorphically and each node inherits public information. In this way, software developers can design workflow management methods, like manufacturing flow, that array nodes and use queues to reduce crunch time, avoid bottlenecks, and distribute the workload. Mind you that this is not physical but virtual. The work packages are handed off to the load manager, which tasks the physical hardware in the cloud, Figure 3.

Figure 3:  Complex Computational Architecture
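A toy sketch of the node-and-backplane idea follows. It assumes a simple in-process publish/listen model with a queue standing in for the backplane; a real implementation would hand the work packages to a load manager that tasks physical hardware in the cloud.

from queue import Queue

class Backplane:
    def __init__(self):
        self.queue = Queue()
    def publish(self, topic, payload):
        self.queue.put((topic, payload))
    def listen(self):
        return self.queue.get()

class Node:
    # Encapsulates its own "processor power" as a processing function.
    def __init__(self, name, consumes, produces, func):
        self.name, self.consumes, self.produces, self.func = name, consumes, produces, func
    def step(self, backplane, topic, payload):
        if topic == self.consumes:
            backplane.publish(self.produces, self.func(payload))

# The general manager arrays two nodes in series: double, then add ten.
backplane = Backplane()
nodes = [Node("n1", "input", "doubled", lambda x: x * 2),
         Node("n2", "doubled", "result", lambda x: x + 10)]

backplane.publish("input", 21)
while True:
    topic, payload = backplane.listen()
    if topic == "result":
        print("result:", payload)   # 52
        break
    for node in nodes:
        node.step(backplane, topic, payload)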


This concept is not new. The telecommunications industry uses a variation of this concept for specialized switching applications rather than general-use computing. There are also array processors used for parallel processing. Even the fictional story Digital Fortress by Dan Brown centered on a million-processor system. Unfortunately, none of these concepts were designed for general-use computing. If arrayed computational architectures were designed to solve complex and difficult information sets, this would open up enormous possibilities, for example, arraying nodes to monitor for complex conditions, then make decisions on courses of action and enact the solution.

The challenges of symbolic logic processing can be overcome using arrayed processing to virtualize neural nets. A combination of sensory arrays for inputs, (node) neural-to-neural (node) processing, and valid pathways or lines of logic would provide the means to complete complex processing and output results that are otherwise difficult to achieve. If enough physical hardware participates in the World Wide Web, then the web could become an enormous neural processor solving some of the most incredibly complex computational problem sets.

The World Wide Web and Computational Limitations

This architecture within a cloud is limited to developing knowledge or lines of logic. Gaps or breaks in a line of logic may be bridged by inference based on history, also known as quantum leaps in knowledge, or wisdom. Wisdom systems are different than knowledge systems. Knowledge is highly structured and its formation can be automated more easily, whereas wisdom is less structured, having gaps in knowledge and information. Wisdom relies on inference and intuition in order to select valid information from its absence, or out of innuendo, ambiguity, or other noise. Wisdom is more of an art whereas knowledge is more of a science.

Nonetheless, all the participating computers on the World Wide Web could enable a giant simulated brain. Of course, movies such as The Lawnmower Man, Demon Seed, Colossus: The Forbin Project, and WarGames go the extra mile, making the leap to self-aware machines that conquer the world. For now though, let's just use them to solve work-related problems.

References:

Brown, Dan, May 2000. Digital Fortress, St Martin’s Press, ISBN: 9780312263126

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An information Technology Approach. (3rd ed.). New York: John Wiley & Sons Inc.

Impacts of Complexity on Project Success

Commentary: These are the relevant portions of an extensive paper written during my Master of Information Technology coursework. The paper highlights a common concern among many project managers: the lack of quality information early in a project, especially in complex projects. The overall paper proposed research into project complexity and early planning efforts.

Impacts of Complexity on Project Success
by
JT Bogden, PMP

Introduction

Project management practice and principles have been maturing and continue to mature. The general paradigm to plan well applies to early project planning and has a significant influence on the success or failure of a project. This research is in support of identifying the key relationships between influential factors affecting scope and risk in complex projects during early project planning. Attention to complexity is important since information technology, IT, projects are complex by nature. Complexity tends to increase risk. "Project abandonment will continue to occur -- the risks of technology implementation and the imperfect nature of our IT development practices make it inevitable" (Iacovou and Dexter, 2005, 84). Therefore, this study is focused on early information technology project planning practices, when the project is vague and the outcomes are unknown and unforeseen. The purpose is to better manage scope gap early.

Problem Statement. Poor scope formulation and risk identification in complex projects during early planning have led to lower project performance and weakened viability. Therefore, project managers are challenged to manage these issues early in order to increase the project's viability and success.

Argument. Project complexity influences performance, just as taking shortcuts in a rush for results produces an outcome with complexity-like characteristics. Lower performance outcomes may result from project-essential factors relating to scope and risk objectives that are overlooked or not properly managed, resulting in increased cost, delays, and/or quality issues that jeopardize the project's viability and success.

Body of Works Review

This effort intends to explore the significant body of works that has emerged to date. Literature research was conducted across a diversity of project types in support of the research problem statement that poor scope formulation and risk identification in a complex project during early planning affect project performance and project viability in relationship to the complexity of the project. This is by no means the first time research of this nature has been explored in these three areas: scope definition, risk identification, and project complexity.

The common threads in the body of works that has emerged span decades and include project management as a whole, risk and scope factors that affect project success, information and communications challenges, and complexity impacts on scope and risk. The works researched in other disciplines provide many transferable lessons learned. For example, construction and engineering projects share with information technology projects complexity issues as well as information reporting and sharing concerns. Other works from supporting disciplines contribute factors on education, intellect, and learning in support of competency influences on risk. A 2001 trade publication article indicated that causes for failed projects tend to be universal. The article's author, John Murray, concludes that information technology projects fail for a small set of problems rather than exotic causes (Murray, 2001, p 26-29).

In a 2008 construction project study, the researchers discussed the construction industry's front-end planning, which is explained as the same as the project charter process. The work details a study of fourteen companies and their project planning processes, then presents a model process. The study results are summarized into critical criteria of success. In conclusion, fifty percent of the projects did not have the required information for front-end planning activities. Problem areas were identified in a follow-on study to include weak scope and risk identification as well as other basic issues (Bell and Back, 2008).

The problems of scope definition researched in the body of works indicate that cooperative planning and information sharing have been key factors in developing scope. A 2007 study on concurrent design addressed the complexities and risk of concurrent design projects. The researchers posed a model of interdependent project variables. The linkages illustrate the direction of the communications or information sharing between the variables. In their analysis, the researchers conclude that cooperative planning in the form of coupling and cross-functional involvement significantly reduces rework risk. Early uncertainty resolution depends on cross-functional participation (Mitchell and Nault, 2007).

The Technology Analysis and Strategic Management journal published an article in 2003 discussing outsourcing as a means of risk mitigation. The outcome of the case under review was project failure due to a lack of clear requirements and poor project management. This was attributed to conflict and a loss of mutual trust between the outsourced vendor and the information technology client. The result was one vendor cutting losses due to weak commitment when compared to in-house project support. The researcher suggested that shared risk may be effective in a partnership such as outsourcing but requires strong communication and some level of ownership (Natovich, 2003, p 416). This article's case study illustrates that cooperation is critical in information technology projects. A 1997 study discussed mobilizing the partnering process in engineering and construction projects during complex multinational projects. The researchers argued that developing project charters fostered stronger partnerships and reduced risk. In general, the article promotes a shared purpose supported by a method based on vision, key thrusts, actions, and communication. The work offers management practices and predictors for conflict resolution and successful projects. Among the best predictors of success in high-performance project managers are the ability to reconcile views rather than differentiate, to influence through knowledge, and to consider views over logic or preferences (Brooke and Litwin, 1997).

The literature has also indicated that competencies of project members and conflict resolution have been key factors of interest. Northeastern University explored strengthening information technology project competencies, conducting a survey of 190 employers and finding that employers considered hands-on experience, communications ability, and behavioral mannerisms of the student, among other attributes. The researcher makes a call for a mixture of improvements to the student curriculum that involves project management skills, both visionary and hands-on, as well as group interaction (Kesner, 2008). Efforts to strengthen competencies have occurred not only in traditional educational institutions but also in professional organizations such as the American Institute of Certified Public Accountants (AICPA). A 2008 article discussed the accounting industry's approach to identifying and correctly placing information technology staff based on assessed competency levels. The AICPA is using a competency set that is found cross-industry and across levels of skill ("IT Competency", 2008). Some dated literature also indicates that solving vague problem sets within complex projects centers on a willingness and ability to engage vague circumstances, to think abstractly. A 1999 psychology publication discussed typical intellectual engagement as involving a desire to engage and understand the world; an interest in a wide variety of things; a preference for complete understandings of a complex problem; and a general need to know. The study associated intellect with a tendency to engage the environment in an intellectual manner, to problem solve, and to believe in possessing a greater locus of control over the events in one's life (Ferguson, 1999, p 557-558). Additional research is necessary in this area given how dated this work is.

In a 2006 article, researchers sought to understand methods of reporting to senior managers regarding software development projects. The work discussed reporting and governance in an organization, then broke these into four functional areas and further refined the best practices into a common view. The researchers noted that little attention has been given to how senior managers and the board can be informed about project progress and offered several methods of informing them. The researchers reported that senior managers need information grouped into three classes: decision support, project management, and benefits realization assessments. The researchers then discuss a variety of reports and their attributes. They concluded that senior managers and board members need effective reporting if they are to offer oversight to the software development project (Oliver and Walker, 2006). Another 2006 finding indicated that continuous reporting, information sharing, builds the case for compelling board member involvement based on four factors: cost overrun history, material expenditures, [software] complexity, and any adverse effects on the company (Oliver and Walker, 2006, p 58).

The management of project complexity has utilized information technology governance as a key factor in project success. Information technology governance has been sought as a framework to align organizational goals with project goals. In a 2009 qualitative study, researchers treated information technology governance, change management, and project management as closely related, then stated a premise that information technology governance must itself be governed to ensure that problems due to weak governance are corrected. They pose the question of how much information technology governance is required. They then organize information technology governance into three broad groups: corporate governance, scope economies, and absorptive capacity, and explore these groupings. The researchers finally relate information technology governance to the enterprise at all levels, discussing the results of a survey given to numerous actors in the organization's CRM [Customer Relationship Management] projects. They also found that most companies surveyed had risk and problem management programs that were mature rather than given lip service. The problem areas that stood out were communicating with senior management as well as with consultants and vendors. In conclusion, the researchers remark that information technology governance depends on senior management involvement and sound project management ability (Sharma, Stone, and Ekinci, 2009).

Given scope, risk, and project complexity, information technology governance offers a framework for unifying organizational objectives. Research completed in 2009 showed that information technology governance covers all the assets that may be involved in information technology, whether human, financial, physical, data, or intellectual property (Sharma, Stone, Ekinci, 2009, p 30). The same research has also shown that information technology governance requires top-down involvement, stating that successful implementation of information technology governance depends on senior management involvement, constancy, and positive project management abilities (Sharma, Stone, and Ekinci, 2009, p 43). Senior management requires information to be shared, and a 2006 project journal publication supports this, remarking that continuous reporting builds the case for compelling board member involvement based on four factors: cost overrun history, material expenditures, [software] complexity, and any adverse effects on the company (Oliver and Walker, 2006, pp 50-58).

The body of works, while much broader than sampled here, demonstrates support and strength in a number of areas of the problem statement. The literature selected ranges in date from 1997 to 2010, with the greater portion of the works being more recent, 2007 or thereafter. Some of the areas of work are dated or sparse. This indicates a need for additional research, such as in the area of problem-solving abilities in vague or unclear circumstances. While much of the research spans several industries, principally from industry and trade journals in information technology, general construction, or engineering, the project management principles and findings are transferable between project types. The works also included several academic studies and only two open-source articles. Most of the works were authoritative and peer reviewed. The dated works were cited more frequently than the more current works, as is to be expected.

The compelling thread in the body of works is that scope and risk concerns are influenced by project complexity, with cooperation, information sharing, conflict resolution, and competencies as significant factors in project success.

Discussion

Technology projects are challenged with a variety of factors that contribute to the performance of the project. The body of works indicates that risk and scope, complicated by project complexity, directly influence project success from the outset. Thus, early project planning is crucial to success. The body of works relating to the elemental aspects of competencies, information, cooperation, and conflict management offers historical support to risk and scope formulation. The one point that seemed to stand out is information sharing and flow at all levels. Additional research is necessary into the body of knowledge behind successful project managers and their ability to reason through complex and obscure project problem sets as related to project competencies. Dated literature indicates a relationship between a positive locus of control and a willingness to engage abstract problems.

Commentary: I suggest that compartmentalizing a complex project into smaller projects should strengthen the locus of control and improve problem-solving. In short, a smaller problem set is more easily grasped than an overwhelmingly large set of problems, thus reducing risk and strengthening scope definition. By breaking a complex project into smaller achievable projects, the organization will gain greater control over the entire process and gain incremental successes toward the ultimate goal. Continuous improvement would characterize such an evolution. The master project manager must assess the order in which the smaller projects are completed. Some may be completed simultaneously while others may be completed sequentially.

A risk of scope creep may be introduced as an outcome of mitigating scope gap. To remain focused, all the projects must align with the organization's strategic objectives as they take strategy to task. New ideas need to be vetted in meaningful ways for the organization and aligned with the overall objectives in a comprehensive change management plan.


Communication is also essential in managing complex projects. The use of a wiki as a repository for foundational policies and information is often a best practice.

Large-scale, sudden disruptions of an organization are required under certain circumstances. In most circumstances, however, complex projects need to be properly broken into smaller manageable efforts that become part of a continuous improvement effort within the organization.

References

(2004). Skills shortage behind project failures. Manager: British Journal of Administrative Management, (39), 7. Retrieved from Business Source Complete database.

(2008). AICPA's IT competency tool takes you down the path to success!. CPA Technology Advisor, 18(6), 60. Retrieved from Business Source Complete database.

Brooke, K., & Litwin, G. (1997). Mobilizing the partnering process. Journal of Management in Engineering, 13(4), 42. Retrieved from Business Source Complete database.

Chua, A. (2009). Exhuming it projects from their graves: an analysis of eight failure cases and their risk factors. Journal of Computer Information Systems, 49(3), 31-39. Retrieved from Business Source Complete database.

Ferguson, E. (1999). A facet and factor analysis of typical intellectual engagement (tie): associations with locus of control and the five factor model of personality. Social Behavior & Personality: An International Journal, 27(6), 545. Retrieved from SocINDEX with Full Text database.

Bell, G.R. & Back, E.W. (2008). Critical Activities in the Front-End Planning Process. Journal of Management in Engineering, 24(2), 66-74. doi:10.1061/(ASCE)0742-597X(2008)24:2(66).

Iacovou, C., & Dexter, A. (2005). Surviving it project cancellations. Communications of the ACM, 48(4), 83-86. Retrieved from Business Source Complete database.

Kesner, R. (2008). Business school undergraduate information management competencies: a study of employer expectations and associated curricular recommendations. Communications of AIS, 2008(23), 633-654. Retrieved from Business Source Complete database.

Kutsch, E., & Hall, M. (2009). The rational choice of not applying project risk management in information technology projects. Project Management Journal, 40(3), 72-81. doi:10.1002/pmj.20112.

Mitchell, V., & Nault, B. (2007). Cooperative planning, uncertainty, and managerial control in concurrent design. Management Science, 53(3), 375-389. Retrieved from Business Source Complete database.

Murray, J. (2001). Recognizing the responsibility of a failed information technology project as a shared failure. Information Systems Management, 18(2), 25. Retrieved from Business Source Complete database.

Natovich, J. (2003). Vendor related risks in it development: a chronology of an outsourced project failure. Technology Analysis & Strategic Management, 15(4), 409-419. Retrieved from Business Source Complete database.

Oliver, G., & Walker, R. (2006). Reporting on software development projects to senior managers and the board. Abacus, 42(1), 43-65. doi:10.1111/j.1467-6281.2006.00188.x.

Seyedhoseini, S., Noori, S., & Hatefi, M. (2009). An integrated methodology for assessment and selection of the project risk response actions. Risk Analysis: An International Journal, 29(5), 752-763.
doi:10.1111/j.1539-6924.2008.01187.x.

Sharma, D., Stone, M., & Ekinci, Y. (2009). IT governance and project management: A qualitative study. Journal of Database Marketing and Customer Strategy Management, 16(1), 29-50. doi:10.1057/dbm.2009.6.

Skilton, P., & Dooley, K. (2010). The effects of repeat collaboration on creative abrasion. Academy of Management Review, 35(1), 118-134. Retrieved from Business Source Complete database.

Sutcliffe, N., Chan, S., & Nakayama, M. (2005). A competency based MSIS curriculum. Journal of Information Systems Education, 16(3), 301-310. Retrieved from Business Source Complete database.

Vermeulen, F., & Barkema, H. (2002). Pace, rhythm, and scope: process dependence in building a profitable multinational corporation. Strategic Management Journal, 23(7), 637. doi:10.1002/smj.243.

Caterpillar Leverages Information Technologies for Sustainable Growth

Comment: This was a paper I wrote in 2008 on Caterpillar's use of technology. I thought it highlighted many interesting points. 

Caterpillar Leverages Information Technologies for Sustainable Growth
by
JT Bogden, PMP

Business is warfare based principally on the sage utilization of information, a key factor determining success in business. Caterpillar has long recognized that access to accurate information in order to build actionable knowledge is critical to business success. Caterpillar is a complex global enterprise based out of Peoria, Illinois, that, through well-tuned information management, is achieving incredible success. Sales revenues during 2007 exceeded forty-four billion dollars (Caterpillar, 2007, Annual Rpt p 33). Enterprise growth goals by 2010 are projected to exceed fifty billion dollars (Caterpillar, 2007, Annual Rpt p 27). This expansion of revenues is coming with solid vision and sage business design. Caterpillar’s vision centers on sustainable development utilizing a strategy of innovation and technologies in support of the company’s objectives (Caterpillar, 2007, Shape Rpt p 36). This means information and the requisite systems are central to analysis, rapidity of decision making, and identification of actionable business opportunities.

Intellectual Capital Drives Innovation

Many professionals in business incorrectly believe intellectual capital, IC, is simply good ideas that become proprietary because of the station at which the idea was imagined. As an outcome, these professionals believe a company has a legal claim to a good idea. The reality is that good ideas are abundant as nearly everyone has a good idea but most lack the means to put the good idea into effect.

Intellectual capital is better thought of as knowledge that can be converted into commercial value and a competitive advantage, resulting in intellectual assets of the company. The conversion of knowledge into commercial value requires a means to codify the knowledge into an intellectual asset. In order to achieve this, companies provide structural capital in support of the human capital to gain access to intellectual assets. Thus, IC results from human and structural capital operating in a unique relationship, forming intellectual assets. Companies distinguish their operations from the competition by combining knowledge and infrastructure in differing ways. The process of converting knowledge into intellectual assets results in the innovation that companies seek to commercialize (Sullivan, 1998, p23).

According to the book The Innovator’s Solution by Clayton Christensen, innovation in business means growth resulting from the introduction of something new that can be exploited for commercial value. Christensen further explains that sustaining growth focuses on delivering new and improved benefits to high-end customers. He then comments that companies are more interested in disruptive growth, which results in reduced cost, simplicity, and new directions. Introducing something new is often thought of as unpredictable, which is not desirable to most companies. Christensen believes the key to innovative success is not predicting human conduct, as rarely does innovation come from a single human fully developed. Instead, he comments that companies must understand the forces that act on humans. What this means is that when innovation is managed through design, there is predictability, and companies are more readily apt to embrace the change.

In the classic understanding of design, there are three characteristic aspects: the visceral, or how the design looks; the behavioral, relating to the design’s functionality; and the reflective qualities that provoke thought. In classic design, beauty is also found. Good designs demonstrate beauty through harmony and fluid execution. As companies increase in size and complexity, the problem of accessing knowledge becomes exponentially difficult. Communicating messages between top-level intent and bottom-level action can become confused and misdirected if not properly managed. Thus, a reliance on finely tuned information technologies becomes an imperative.

Caterpillar has exercised deliberate efforts to employ information technologies that demonstrate good design. For example, a visual imaging company, Real D-3D, posted on its company website an article regarding Caterpillar’s need to speed engineering projects to market by employing visualization technology in a project called “CrystalEyes”. According to this article, a key feature of the CrystalEyes project was to make the information tool simple to use for engineers and clients alike and to eliminate prototyping iterations; the tool also had to be cost effective, cross platform, and easily integrated with existing systems. These requirements demonstrated the behavioral qualities of a good design. Real D-3D described “CrystalEyes” as a stereographic imaging tool, an improvement beyond ghostly holographic effects, that met all the design criteria. They were describing, for example, designs that can simulate in 3-D the full effect of parallax and other phenomena related to stereoscopic imaging. Thus, “CrystalEyes” illustrated the visceral elements of a good design. The benefit CrystalEyes delivered was a high-performance design visualization tool that eliminated physical builds until the very end (Copy Editors, Real D-3D). Using the CrystalEyes tool afforded clients and engineers alike the ability to fully understand a design in work, provoking thought, the reflective quality of good system design, throughout the engineering iterations.

Management Information Systems Build Decision Support Subsystems

Management information systems, MIS, are complex. These systems come in a variety of technologies and capabilities. One size does not fit all operations. In general, MIS involves at least three elements: a network or hardware laydown, supported management concepts, and integrated decision analysis and reporting. Through combinations of these elements, companies are able to leverage themselves in competitive ways and provide the infrastructure for innovation.

Caterpillar leads the industry with decision support subsystems. Data collected from significant customer segments and Caterpillar’s geographically dispersed operations is infused into the creation of products and services in support of growth. The systems span over two hundred independent dealers globally and their proprietary networks. Caterpillar’s efforts include numerous projects and software tools that fuse these systems together, including but not limited to:
  • VIMS: Vital Information Management System is a vehicle-borne alert system that assesses the equipment’s safe and optimal operating condition. When a problem begins to emerge or is discovered, the system alerts the operators and owners, then provides safe shutdown procedures if necessary. This enhances the service life of the equipment and is a decision support subsystem.
  • Product Link: A wireless system that simplifies the work of tracking the fleet, providing asset management information. Product Link couples with VIMS.
  • Paperless Reporting: A wireless project that integrates Dealer Business Systems and Service Technician’s Workbench with field service technicians, reducing errors and streamlining data entry requirements.
  • EquipmentManager: Software designed to report fleet performance and manage assets. This application is the owner’s frontend that presents the VIMS and Product Link performance information on demand in meaningful ways.
  • VIMS Supervisor: Vital Information Management System Supervisor software provides custom fleet production and maintenance reports by extracting data from a VIMS database.
  • Caterpillar’s authoring system: A system that is both an information consumer and producer, organized to streamline global technical publication operations.
The VIMS, Product Link, Paperless Reporting, and authoring projects are of particular interest as they are subsystems that feed a sequence of other systems, ultimately rolling up to top-level decision support systems.

Product Link Pools Global Equipment Performance Information

Caterpillar introduced a subsystem called “Product Link” that leverages equipment performance information collected by VIMS toward decision support. “Product Link” is a management tool that tracks and gathers information about Caterpillar’s earthmoving equipment. An online HUB Magazine article written by Caterpillar’s Information Centre described the subsystem as composed of two antennas, a data module, and interconnecting wiring. They explain that one antenna collects GPS data while the other antenna provides bidirectional communication with the network operations center. The data module referees the collection of performance and GPS data as well as instructions from the network operations center. Information collected is transmitted to a Caterpillar network operations center wirelessly through low Earth orbit, LEO, satellites. At the network operations center the information is further evaluated, then reports are prepared and sent to the equipment owner. Equipment owners are able to access the information over the Internet using the “EquipmentManager” software.

The benefit to both parties is essential asset management, with improved service life of the equipment, reduced downtime, and a strengthened return on investment, according to Caterpillar. These have been principal reasons customers purchase Caterpillar equipment. Therefore, understanding equipment utilization, location, and performance data helps Caterpillar design hardier equipment that meets equipment owner expectations.

This subsystem operates seamlessly, with the equipment reporting to the network operations centre where the data is collated and eventually rolled up into top-level decision support systems, demonstrating beauty in the design’s fluidity. The information provided to the owner through “EquipmentManager” answers concerns about utilization, security, and uptime, according to Caterpillar, further illustrating the functionality and reflective utilization of the design.

Paperless Reporting Links Field Service Technicians Into Global Systems

A case study was researched and published in Directions Magazine by Mike DeMuro, Product Support Manager for Michigan Caterpillar, regarding Michigan Caterpillar’s paperless project initiative. According to the article, Michigan Caterpillar field service technicians were experiencing a time-consuming and error-prone process in their dispatch system reporting. Technicians were using an antiquated process of paper forms that were transcribed onto the system in the classic data entry manner. In some cases, information was passed verbally and transcribed days later. Often the information was incomplete or erroneous. Caterpillar sought to streamline the process. A statewide centralized dispatch system was needed to form a mobile office, assesses DeMuro.

DeMuro explains that the design of the system utilized an enterprise data integration service that offered both cellular and satellite coverage. Caterpillar’s Dealer Business System and Service Technician’s Workbench were integrated into the enterprise data integration service and Microsoft Outlook. After data was entered once into the system, technicians could drag and drop data into Outlook templates and distribute the data without error-prone re-typing. The emails were received by servers, and scripts parsed the data into the other systems, further reducing errors and increasing productivity. This created a paperless culture of online forms that transmitted data wirelessly between service vehicles equipped with the system and staff functions. DeMuro further claims the benefit of this innovative approach radically improved billing cycles, accuracy, and timeliness of data reporting. Other first-order benefits included reduced overhead for data re-entry, increased productivity and revenue-generating hours, timely parts delivery, and seamless integration of systems. This resulted in secondary effects of improving cash flows and accounting for receivables, explains DeMuro.

Again Caterpillar was able to achieve beauty in its seamless design for field service technician reporting. Error rates were limited to the initial data entry, since additional entry was eliminated, leading to a very productive functionality of design. The data gathered is cascaded through to higher-level systems for further evaluation.

Technical Authoring System Forms Intellectual Assets

Caterpillar was experiencing problems with technical publication accuracy, timeliness, and availability. There were over 300 products, some having lifecycles as lengthy as 50 years. Compounding this immense data requirement were operations in 35 languages. Therefore, in the late 1990s Caterpillar envisioned a need for a better method of managing the labor-intensive effort of technical documentation. They pursued innovation by taking advantage of emerging Standard Generalized Markup Language, SGML, standards that overcame the limitations of the existing methods. The introduction of the new approach delivered levels of efficiency based on reuse and automation that had never been observed.

Caterpillar began by creating a Technical Information Division, TID, that had the global responsibility of producing the documentation necessary to support operations. They expanded the technical documentation staffing by 200% then organized the automated publishing system, the structural capital, which enabled the staff’s effectiveness to deliver the technical documentation or intellectual assets. These assets included maintenance manuals, operations and troubleshooting guides, assembly and disassembly manuals, specification manuals, testing and special instructions, adjustment guides, and system operation bulletins.

In the design of the authoring system, Caterpillar took a modular approach to information creation and automated where possible. The system designers built on top of industry standards and even utilized MIL-PRF-28001 for page composition. They utilized reusable ‘information elements’ capable of being used in multiple formats and forms. This approach drastically reduced the costs associated with creating, reviewing, revising, and translating information. Through automation of document formation and information elements, Caterpillar was able to achieve collaborative authoring that trimmed time-to-market and permitted increased focus by subject matter experts, which strengthened the quality of the product. The effort achieved staggering improvements in workflow and analysis, document development, style sheet design, and legacy conversion. In the end, Caterpillar experienced accuracy, timeliness, and availability of technical information that became of immense commercial value and competitive advantage.

Caterpillar’s copyrighted technical documentation is of such immense value that criminal elements have attempted to exploit this information. In May 2002, Caterpillar’s digital library of parts and product catalogues, service manuals, schematics, tooling data, and product bulletins was compromised. U.S. Customs reported that they had seized a half million dollars in counterfeit Caterpillar technical documents. This criminal activity demonstrates that well-designed intellectual assets can be of significant value as well as vulnerable.

Data Warehousing Efforts Consolidate Enterprise Data

Designing solid data management methods is critical to business success. MIS generally approaches decision making from the process side, such as a purchase order process, whereas decision support systems tend to focus on conduct and behavioral characteristics such as fuel consumption trends. This requires the data gathered to be stored, parsed, and analyzed in ways that support strategic decision making over operational management. The outcome of a well-designed data warehousing system is that equipment managers shift their focus from operational-level decision making to corporate-level strategic decision making regarding asset management.

Data marts are working subsets of larger primary database systems used to present unique views on subject matter topics. These data marts are then organized in a way that permits multi-dimensional modeling of the operations. This multi-dimensional model is called the data cube. Online Transaction Processing, OLTP, and Online Analytic Processing, OLAP, usher data routinely into the data cubes and conduct ongoing analytic evaluation of the data in support of on-demand or real-time review. These tools have also been advanced over the Internet, permitting authorized decision support system users to conduct the analysis they are seeking.
The benefits of data warehousing involve better end-user control of the data analysis, improved tooling for identification and investigation of problems, strengthened strategic decision making, and improved knowledge discovery. Data warehousing is the foundation of computer-aided construction equipment and asset management.
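As an aside, the data-cube idea is easy to illustrate. The sketch below is a minimal example, assuming hypothetical equipment telemetry columns, with pandas’ pivot_table standing in for an OLAP-style aggregation over two dimensions of the cube.

import pandas as pd

# Hypothetical telemetry rows of the kind a fleet subsystem might feed upward.
telemetry = pd.DataFrame({
    "region":   ["NA", "NA", "EU", "EU", "NA"],
    "model":    ["D6", "D8", "D6", "D8", "D6"],
    "quarter":  ["Q1", "Q1", "Q1", "Q2", "Q2"],
    "fuel_gal": [1200, 1900, 1100, 2050, 1250],
    "hours":    [300, 410, 280, 430, 310],
})

# Two faces of the cube: fuel consumption by region x model, and hours by quarter.
fuel_cube = pd.pivot_table(telemetry, values="fuel_gal",
                           index="region", columns="model", aggfunc="sum")
hours_by_quarter = telemetry.groupby("quarter")["hours"].sum()

print(fuel_cube)
print(hours_by_quarter)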

Caterpillar has sought a global data solution and chose TeraData Inc as its business partner in March of 2008. TeraData’s business decision support solution comprises component products built on top of the “Active Data Warehouse” product. The component products provide intelligence, analytics, and other support services for decision making.

The Active Data Warehouse product is the underpinning of their services and refers to the technical aspects required to achieve the desired objectives of the data warehouse. This database is designed to receive feeds from mature MIS subsystems such as Caterpillar’s VIMS, paperless reporting, and authoring subsystems. The result is a repository of data with high confidence in data accuracy. The database can be utilized in ordinary MIS support for ecommerce, portals, and other web applications but has greater impact when coupled with decision support applications. With confidence in the data accuracy, complex reporting and data mining can be generated for tactical or short-notice queries in near real time, making this solution a powerful tool. This capability originates from TeraData’s strategy built on the findings of a 2001 Gartner report that data marts cost 70% more per subject area than a comparable traditional data warehouse (Sweeney, 2007). TeraData seeks to consolidate data marts, reduce redundancy, and streamline the data loading process into a centralized analytic source, in effect creating a massive sole-source data mart equivalent to the enterprise-wide data set. This streamlining is consistent with Caterpillar’s desire to innovate through technology, resulting in the 2008 agreement to improve Caterpillar’s decision support.

Business Intelligence Products Strengthen Decision Support

TeraData’s component products include a suite of applications that utilize Caterpillar’s enterprise-wide data warehouse for analytic and intelligence reporting. Tools in this suite include strategic and operational intelligence applications, data mining tools, modeling software, and analytical tool sets that handle extremely large datasets looking for criminal conduct as well as emerging trends. Also included in the suite are maintenance and management tools.

Bringing Information Technology Projects in Focus

Caterpillar brings together disparate systems into a symbiotic global information presence through network operations centers, communication networks, and data processing methods and systems. The elements of good design are observed throughout the systems at Caterpillar and create a culture that promotes innovation, whether in technical publication, engineering, or field management of the equipment. With this foundation in place, Caterpillar began a process of increasing vertical accuracy across its systems into decision support systems. The disparate enterprise data is rolled up into the decision support systems’ data warehouse and requisite set of tooling, establishing a formidable competitive instrument. Agreements with TeraData in early 2008 led to solutions implementing near real-time reporting with increased accuracy. As an outcome, Caterpillar has been propelled to the forefront of heavy equipment manufacturers to become the industry leader, with growth projections that eclipse its competitors. Nonetheless, Caterpillar is restless. Becoming number one in the industry is simply not enough for this giant.

The Future is Bright

Caterpillar’s positioning in the industry as the leader is not the end state for this company. One concept of business is that no company makes a profit over the long term. The purpose of any business is to be a vehicle that provides income and dignity to human life. In executing this concept, principles and moral responsibilities are assigned to companies and governed through cooperation between government and industry. Caterpillar has taken on the next evolution of large corporations, corporate governance. They define their vision in a sustainability report called “Shape”. The term shape is a key notion that is inclusive of the forces that forge innovation in the shaping of knowledge into business plans. Caterpillar has identified the pillars of its “Shape” initiative as:
  • Energy and Climate: Caterpillar realizes the importance of energy security and the impact energy consumption by the equipment has on the ecology.
  • Growth and Trade: Expanding economies and international business are important to sustainable operations.
  • People and Planet: Caterpillar equipment builds economies and lifts people out of poverty.
  • Further and Faster: Shape takes form over time and then accelerates as the vision organizes. Caterpillar must be willing to drive the vision beyond what is currently known in order to embrace the future of sustainability.
Using Caterpillar's systems and technologies, the company is actively organizing a plan to reach for the moral high ground and is embracing corporate governance. Caterpillar's equipment is known to move mountains. In time, as corporate governance takes shape, Caterpillar will emerge as a social force that levels societal inequities while elevating human dignity around the globe. People will have jobs with disposable incomes, improved roads, hospitals, and strengthened economies built by Caterpillar's equipment and backed by Caterpillar's social conscience.

References:
  1. Bartlett PG, 1997, “Caterpillar Inc's New Authoring System”, SGML Conference Barcelona 1997, Retrieved October 15, 2008, http://www.infoloom.com/gcaconfs/WEB/barcelona97/bartlet8.HTM#
  2. Caterpillar Public Affairs Office, 2007, “2007 Caterpillar Annual Report”, Retrieved October 10, 2008, http://www.cat.com
  3. Caterpillar Public Affairs Office, 2007, “Shape: Sustainability Report”, Retrieved October 10, 2008, http://www.cat.com
  4. Caterpillar Public Affairs Office, 2008, “Caterpillar Logistic Services Inc Web Site”, Retrieved October 12, 2008, http://logistics.cat.com
  5. Christensen, Clayton M., (2003), “The Innovator's Solution”, (1st ed.), Boston, Massachusetts, HBS Press
  6. Copy Editor, ”Caterpillar moves Mountains in Speeding Time-To-Market using CrystalEyes and Stereo3D Visualizations”, Real D-3D, http://reald-corporate.com/news_caterpillar.asp
  7. Copy Editor, July 2007, “New-generation Product Link system from Caterpillar improves asset utilization and reduces operating costs”, HUB, Retrieved October 18, 2008, http://www.hub-4.com/news/633/newgeneration-product-link-system-from-caterpillar-improves-asset-utilization-and-reduces-operating-costs
  8. DeMuro, Mike, April 2005, “Michigan CAT Case Study”, Directions Media, Retrieved October 17, 2008, http://www.directionsmag.com/article.php?article_id=823&trv=1
  9. Eckerson, Wayne W., (2007), “Best Practices in Operational BI: Converging Analytical and Operational Processes”, TDWI Best Practices Report
  10. Hongqin Fan, 2006, “Data Warehousing for the Construction Industry”, NRC
  11. Schwartz, Evan I., (2004), “Juice: Creative Fuel that drives World Class Inventors”, (1st ed), Boston Massachusetts, HBS Press
  12. Sullivan, Patrick H., (1998), “Profiting from Intellectual Capital: Extracting value from Innovation”, (1st ed), New York, John Wiley & Sons, Inc.
  13. Sweeney, Robert J., (2007), “Case Study: TeraData Data Mart Consolidation ROI”, TeraData Corp.

ITIL Affects Measurable Organizational Value

Commentary: This posting has had over 3500 reads and is among the top all-time posts.


ITIL Affects Measurable Organizational Value
by
JT Bogden, PMP

ITIL, the Information Technology Infrastructure Library, is an emerging standard that congeals and stabilizes best practices for information technology implementations within an organization. The standard focuses on service delivery levels and operational guidance. Wrapped up in these two focus areas are activities that were already in practice in many organizations and are now consolidated under the ITIL standard. The standard offers a baseline from which to establish an organization's service levels and system performance. But what does that mean in terms of Measurable Organizational Value, MOV?

The first thing we must understand is that MOV is aligned with the organization's strategy and its ability to extract benefit from its efforts. MOV is not about how well the company or its staff do their jobs. Instead, MOV relates to the achievement of the strategic objectives the organization seeks. The strategic objectives could involve many aspects of the organization, including sustainability and profitability as well as corporate governance objectives.

The measurement of organizational value closely follows the Effects Based Outcomes, EBO, methodology. In EBO, objectives are established. Each objective may have two or three associated effects, and each effect may have one or two Measurements of Effectiveness, MOEs. For example, the following objective, effects, and MOEs might be typical of a company launching a social media campaign.

  O.1 Promote a social media campaign

         E.1.1 An increase/decrease in customer awareness of services

              MOE.1.1 The number of customer queries about the services

         E.1.2 An increase/decrease in customer participation in service design

              MOE.1.2 The number of service enhancement suggestions

An additional element, indicators, is used to establish decision points and refocus resources. For example, once customer suggestions reach 100 per week, a decision is made that the customer participation program is mature. Funding for this program is rolled back to sustainment levels and the money is refocused into other efforts. The MOE is then monitored continuously for the next decision point; when the rate falls to 50 customer suggestions per week, resources are allocated to increase awareness of the program. This is a typical MOV initiative to sustain the desired effects and achieve the objective. A minimal sketch of this indicator logic follows.
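To make the indicator mechanics concrete, here is a minimal sketch in Python. It is my own illustration rather than part of any ITIL or EBO toolset, and the MOE name and threshold values are simply the hypothetical figures from the example above.

# Minimal sketch of MOE indicator monitoring (hypothetical example values).
# An indicator compares a Measurement of Effectiveness (MOE) against decision
# thresholds and recommends whether to roll funding back or reinvest.

from dataclasses import dataclass

@dataclass
class Indicator:
    moe_name: str            # e.g. "MOE.1.2 service enhancement suggestions"
    mature_threshold: int    # suggestions/week at which the program is "mature"
    reinvest_threshold: int  # suggestions/week at which resources are re-added

    def decision(self, weekly_count: int) -> str:
        if weekly_count >= self.mature_threshold:
            return "Program mature: roll funding back to sustainment levels."
        if weekly_count <= self.reinvest_threshold:
            return "Participation falling: reallocate resources to raise awareness."
        return "Within expected range: continue monitoring."

if __name__ == "__main__":
    indicator = Indicator("MOE.1.2 suggestions per week", 100, 50)
    for count in (120, 75, 40):
        print(count, "->", indicator.decision(count))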

Throughout the organization, various factors affect the availability of resources used to achieve MOV. ITIL provides a framework for managing those resources, so ITIL impacts MOV in many ways. While MOV is focused on organizational objectives, how well the organization stays focused and executes its tasks contributes to improved or strengthened MOV. For example, activities that:
  • Reduce costs. 
  • Improve IT services through the use of proven best practice processes.
  • Improve customer satisfaction through a more professional approach to service delivery.
  • Support standards and guidance.
  • Improve productivity.
  • Improve use of skills and experience.
  • Improve delivery of third party services through the specification of ITIL or ISO 20000 as the standard for service delivery in services procurements.
ITIL has an impact on the resources available to MOV initiatives. Additionally, ITIL is linked to MOV through its service lifecycle, whose stages are tied to the strategic objectives of the organization:
  • Service Strategy. The service strategy aligns IT with the business objectives which are measured in terms of MOV.
  • Service Design. This structures the IT architecture in support of operations, creating policies that impact MOV. Attention must be given to ensure that the policies are not counterproductive but instead support the organizational objectives. Streamlined policies enculturate optimized processes and resource utilization, increasing resource availability for MOV initiatives.
  • Service Transition. This is focused on change management. The purpose of change management is to stay focused on the organizational objectives and avoid costly detours, which can impact MOV if not in alignment.
  • Service Operation. This covers delivery and control processes, ensuring stability. All too often during the operations and maintenance phase, environmental variations can lead operations away from the objectives. This stage keeps the focus on MOV.
  • Continual Service Improvement. This is concerned with 'tweaking' IT service management. It wraps up best practices and processes like Lean, Six Sigma, TQM, and others into an incremental, continuous improvement process. These are course corrections that emphasize MOV and increase the resources available for MOV initiatives.
Overall, Measurable Organizational Value, MOV, is closely coupled to ITIL in terms of strategy-to-task service levels. Systems-of-systems thinking ties all aspects of the organization into the productive achievement of MOV and its initiatives.

This posting has had over 700 reads and is among the top all-time posts.

Technical Brief Series

Comment: Many of the postings in this blog are of a technical nature but do not go too deep. I have compiled these postings into this series for quick reference. Most of these technical briefs were part of an overarching effort to quickly train staff on specific subject matter. I hope that you find them useful.


Short Read Archives
by
JT Bogden, PMP

Several years ago, while in a leadership role running an operationalized telecommunications cell, I was challenged with a variety of knowledge levels within the staff as well as high staff turnover. Every quarter I was transitioning about 25% of the staff, observing 100% turnover annually. The reason was that the cell was an interim duty in which staff got a check in the box before moving on to their primary duty. In addition, up to 35% of the staff were augmentees assigned during crisis events. I needed a method to bring the staff up to speed and continually increase knowledge. After looking around, I drew upon operations management practices and my own educational experiences, turning to McGraw-Hill's SRA program. SRA is the acronym for Science Research Associates, Inc., but has also become known as Short Read Archive.

The SRA program was simple. SRA card series were created for self-paced learning within a framework of benchmarks or milestones. The front side of each card was a short read and the back side was exercises and a quiz. Forms were completed for the quiz and turned in to the instructor for grading. Some SRA card sets were self-graded, while benchmark testing was graded by the instructor. I chose to employ the short read approach and coupled it to weekly scheduled training. Each staff member had a standard training plan and would draw the short briefs based on the schedule set for them. The goal was to cross-train everyone under a training management program.

The original short reads were drafted by the staff, then reviewed, smoothed, and published to the set. The idea was not to teach deep detail but to create increasing familiarity with the systems and technologies in use. In all there were about 50 short reads, with a cyclic monthly review of 4 of them.

In preparing these for posting, I updated a few and consolidated some cards. Most likely I will not post all 50 short reads, just the more interesting ones. Most discuss the technology at a high level. I will continue to add to this list over time.

Computer Attacks: Hackers

Commentary: This is the first in a series of four posts on computer attacks. This post focuses on hackers and their practices. The notion is that if you want to stop a hacker, then you need to think like a hacker. This post details the approach hackers use.

The Computer System Hacker
by
JT Bogden, PMP

Security is always a concern in software and operating systems, and the structure of the operating system is critical to security. Microsoft divided its early operating system into three components, the COMMAND.COM, MSDOS.SYS, and IO.SYS files, in order to make it easier for users to install peripherals and configure the system. Dividing the operating system this way created vulnerabilities that permitted viruses and hackers to bypass COMMAND.COM and access system resources directly. Hackers and malcontents wasted no time exploiting this vulnerability, and Microsoft has been chasing its proverbial tail on the problem ever since, with so many different viruses circulating today.

Hackers come in a variety of styles and flavors, from the teen re-coder to nation states battling on the World Wide Web. Even militant combatants, whether terrorists or individuals acting in their own self-interest, have found hacking to be an effective instrument in an asymmetric warfare campaign. The goal is, at relatively low cost and effort, to cause the target, usually a nation state or large corporation, to spend millions if not billions countering the threat. This has the potential to economically exhaust the target and distract it from other, more productive efforts. Thus, a basic understanding of how the threat operates yields insights into how to diminish its impact. However, keep a few things in mind:
  1. Almost all hacking requires physical access at some early point in the process in order to seed the system or to plant back doors. Therefore, physical security and monitoring is very important.
  2. Almost all advanced or experienced hackers go unnoticed for extended periods of time.
  3. Attacks are rarely temporally cohesive, meaning they can be low and slow, or erratic with respect to time, appearing as noise or buried in the noise.
Hackers employ a series of nine steps when attempting to exploit vulnerabilities throughout the networks (McClure, 2009). According to McClure, who wrote a series of Hacking Exposed books, these nine steps are:
  • Foot Printing: An information gathering process that targets a range of addresses and/or naming structures in order to map a network.
  • Scanning: A focused assessment of listening ports and services to seek the most promising avenue of attack.
  • Enumeration: More intense and intrusive probing begins as user accounts and vulnerable shared devices are discovered.
  • Gaining Access: Enough data has been collected to make an informed attempt to access the victim systems.
  • Escalating Privileges: If the initial access was gained by a user level account, attempts to gain administrative control within the immediate system are made.
  • Pilfering: The attacker seeks to gain complete control of a system by gaining greater access to trusted systems.
  • Covering Tracks: Once complete control is obtained, the attacker seeks to hide his work from the system administrators by clearing logs and hiding tools.
  • Creating Back Doors: At-will access is laid throughout the information system to ensure that privileged access is easily regained. This involves rogue user accounts, batch files, Trojans, remote control services, bots, other virus programs, and so on.
  • Denial of Service: The attacker may decide to disable the victim for any reason. Numerous DoS methods exist, and most of the data for such an attack is discovered in the foot printing, scanning, and enumeration steps.
McClure remarks that defending a system involves detection, identification, and suppression of the threat. Understanding the mind of the 'bad guys' is essential to countering their assault. Therefore, knowing the vulnerabilities, methods, and processes used is imperative to expose and stop attackers in the early stages of an assault.
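As a purely defensive illustration of the scanning step above, the short sketch below, my own example rather than anything from McClure, checks which common TCP ports are open on a host you administer; seeing your system the way a scanner does is the first step in closing unneeded services. Only run it against machines you own or are authorized to test.

# Minimal self-audit sketch: check a few common TCP ports on a host you own.
# This mirrors the attacker's "scanning" step so you can close unneeded services.
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https", 3389: "rdp"}

def check_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0   # 0 means the port accepted

if __name__ == "__main__":
    host = "127.0.0.1"   # audit your own machine
    for port, name in COMMON_PORTS.items():
        state = "OPEN" if check_port(host, port) else "closed"
        print(f"{host}:{port:5d} ({name:6s}) {state}")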

Commentary: The mindset of the hacker ranges from intellectually curious rebels to outright militant combatants seeking means to deter, deny, disrupt, and/or destroy their target's efforts. At one time, I tracked hacker groups. I also studied methods and technologies that could be exploited. In extreme malicious efforts, it is possible through code to shut down the CPU fan and then run a high load on the processor, causing it to overheat and simply burn out. Cooling of the CPU is extremely important. Fortunately, many manufacturers use large heat sinks that can keep the CPU from destroying itself without the fan running. However, in an over-clocked situation that may not be the case, since over-clocking will most likely require enhanced cooling. In order to over-clock a system, the malcontent would need access at the board level to set the CPU clock speed jumpers, so physical security is essential. A disgruntled employee or service technician may be a typical culprit of such an act. System security and integrity checks, both physical and virtual, are essential to defending information systems. Typically, a trilogy of administrators, technicians/coders, and security inspectors, each having limited access, is necessary for a system of checks and balances in the security program.


Note: Over-clocking is a technique to gain higher speed performance out of a system. It often involves more than over-clocking the processor; system over-clocking requires adjustments to RAM and board speeds in order to keep everything in sync with the timing diagrams. When over-clocking, precision and accuracy are diminished and the cooling requirements dramatically increase. Over-clocking saw its greatest gains when processors and boards were much slower than today; over-clocking earlier technology often saw exponential speed gains. Processor and board speeds at the time of this post are approaching the limit where gains in speed are becoming negligible, as processing achieves near real-time capability.

Reference:

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach. (3rd ed.). New York: John Wiley & Sons Inc.

McClure, Stuart, (2009). Hacking Exposed 6, McGraw-Hill, ISBN 9780071613743

Computer Attacks: Viruses

Discussion: It is important for everyone to better understand computer viruses since they have become a real part of our computing lives. Viruses are not fully autonomous; they are programs written by people. These people often have a point to make, want to test their diabolical skills, want to make a mark on the world, or have malicious intent against an entity or state. Like biological viruses, computer viruses require a “host” to infect. Once a virus program is executed, it is able to do its “dirty work” on the local system, network, and peripheral devices or create vulnerabilities to be exploited later. It is necessary to take precautions in order to ensure the integrity of the system. Technology staff should become intimate with virus technology and methodologies in order to understand the vulnerabilities and damage that these nuisance programs cause.

Computer Attacks: Viruses
by 
JT Bogden, PMP
DEFINITIONS:
Cross-Site Scripting: A technique used by attackers to insert malicious code into a web page in place of mobile code or an application module known as an applet. This is a delivery mechanism capable of delivering and executing a virus program as the web page loads. Defense against this kind of assault is difficult; a simple output-encoding sketch follows these definitions. The end user may observe a blanked-out area on the page where the applet or image should have appeared. Current virus checkers have heuristic algorithms that monitor for the behavior of cross-scripted programs, which aids in the identification of newer viruses.
Logic Bombs, Time Bombs, and Easter Eggs: These are focused viruses designed to trigger when a specific event occurs or after a specified amount of time. The result of this type of virus is almost always catastrophic. They are often scripted in login scripts or batch files, or buried in code placed by an individual on a target system, and are mostly the work of disgruntled employees. For example, computer coders have been known to place Easter eggs, malicious routines hidden deep inside code, in order to assure job security. Should they be fired or their user account deleted, the routine executes, wiping drives, shutting the system down, or beginning some other form of hostile action such as emailing sensitive information to competitors. As an illustration of hidden routines, typing the text =rand() in a Microsoft Word document and then pressing the Enter key will cause three paragraphs to appear in the document. Microsoft has officially commented that this is not an Easter egg, but the process of evoking one can be demonstrated.
Malicious Logic: This is hardware, software, or firmware intentionally included, installed, or delivered to an information system or network for unauthorized purposes.
Polymorphic Virus: A virus that infects each object differently in an attempt to fool virus scanners. These viruses cannot be detected with a simple pattern match, as is possible with most viruses. They impersonate a legitimate program such as format.com; the impersonating code can be prepended, embedded, or appended to the legitimate file.
Resident Virus: The virus is usually broken into multiple pieces, such as the main code and a small launcher application. This type of virus attaches itself to the operating system, hides the small launcher application, and loads into memory on boot. These viruses attempt to hide their main code between tracks or beyond the writeable area of the disk; in doing so, the code can survive either a low-level or high-level format of the drive. Low-level formats re-establish the cylinders, tracks, and sector locations. High-level formats zero out the file allocation tables and rewrite the addresses. The launcher may also be polymorphic, infecting a legitimate program like format.com, so when someone formats the drive to eliminate the virus, the main code is preserved and the small launcher is loaded again.
Robots (Bots): Rogue programs that, when placed on a network or the Internet, explore the system by simulating human activity and then communicate their findings to a host. These programs often migrate through networks, gaining access along the way to servers, routers, gateways, and workstations. They are most often used to gather intelligence data, aggregate information, or map pathways. On the Internet, the most common bots are the programs that access web sites to gather content for search engines; these are often called crawlers, aggregators, and spiders, each with a different purpose.
Trojan Horse: A program with embedded virus code designed to trigger on an event or at a date and time. These programs appear to benefit the user but instead gather intelligence, create back doors, or are designed to damage information stored on the system. The difference between a Trojan Horse and a logic bomb, time bomb, or Easter egg is the end user's involvement: end users unwittingly run a Trojan Horse thinking it is a legitimate application, whereas the end user has no awareness of or contact with a logic bomb, time bomb, or Easter egg.
Virus: Any program written with malicious intent that, when executed, causes damage in some form and reproduces its own code or attaches itself to another program to that end.
Worms: Independent, self-reproducing virus programs distinguished from other virus forms in that they are not attached to another program file but are able to propagate over a network, increasing their activity by gaining access to email contact lists or routing tables.
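Relating back to the cross-site scripting entry above, the short sketch below, my own illustration and not tied to any specific framework, shows the standard defensive habit of encoding untrusted input before it is written into a page, so injected markup renders as harmless text rather than executing.

# Minimal sketch of output encoding, the standard defense against cross-site
# scripting: untrusted input is HTML-escaped before being placed in a page.
from html import escape

def render_comment(untrusted_comment: str) -> str:
    """Return an HTML fragment with the user-supplied text safely encoded."""
    return f"<p class='comment'>{escape(untrusted_comment)}</p>"

if __name__ == "__main__":
    hostile = '<script>stealCookies()</script>'
    print(render_comment(hostile))
    # Output: <p class='comment'>&lt;script&gt;stealCookies()&lt;/script&gt;</p>
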
LESSON:
Computer viruses may seem mysterious, but they are easy to understand. Viruses are nothing more than destructive software that spreads from program to program or disk to disk. If you have a virus, you are no longer in control of your personal computer (PC). When you boot your PC or execute a program, the virus may also be executing and spreading its infection. Even though some viruses are not as malicious as others are, they are all disastrous in their own ways.
Characteristics of Viruses
There are different ways to categorize viruses depending on their characteristics. They can be slow, fast, sparse, companion, or overwriting.
Slow viruses - Viruses that take longer to detect because they spread very slowly and often do not cause havoc until they have proliferated in sufficient numbers. They often bury themselves in network noise and attempt to disguise any pattern of activity from intrusion detection or virus detection systems.
Fast viruses - Viruses that spread rapidly by aggressively infecting everything they can access. When active in memory, they infect not only the programs that are executed but also the programs that are merely opened.
Sparse viruses - Viruses that infect files only occasionally, for example infecting only files whose length falls within a certain range, in order to prevent detection.
Companion viruses - Viruses that create a new program. Companion viruses exploit the fact that two files can share the same filename with different extensions; you notice a problem when you intend to run an .EXE file and end up running a .COM file instead.
Overwriting viruses - Viruses that overwrite each file they infect with themselves, so the original program no longer functions. These files are considered impostors.
Existence of Viruses
The question on everyone's mind when there is a discussion of computer viruses is: how do I know when I have a virus? Viruses have different characteristics, but there are small changes you can look for that will let you know you have a virus. Some viruses display messages, music, or pictures. The main indicators are changes in the size and content of your programs. Once you realize that you have a computer virus, you must stop it. Viruses are written to deliberately invade a victim's computer, which makes them the most difficult threat to guard against.
Virus Behavior
Computer viruses come in different forms, but they all have two phases to their execution: the infection phase and the attack phase.
a. Infection phase - When a user executes a program carrying a virus, the virus infects other programs. Some infect programs each time they are executed; others infect only upon a certain trigger, such as a day or time. If a virus infects too soon, it can be discovered before it does its “dirty work”. Virus writers want their programs to spread as far as possible before detection, or before they begin to achieve their objective, at which time they become known.
Many viruses go resident in memory just like a terminate-and-stay-resident (TSR) program. This means that the virus can wait an extended period of time for something as simple as inserting a floppy before it actually infects a program. TSR viruses are very dangerous since it is hard to guess what trigger condition they use for their infection. Resident viruses occupy memory space and can cause the infamous Blue Screen of Death in Microsoft operating systems.
b. Attack phase - Many viruses do unpleasant things such as deleting files, simulating typing, garbling video displays, or slowing down your PC. Others do less harmful things such as creating messages or animations on your screen. Just as the infection phase can be triggered, the attack phase has its own trigger. Most viruses delay revealing their presence by launching their attack only after they have had time to spread; this could be delayed for days, weeks, or even years.
The attack phase is optional; anything that writes itself to your disk without permission is already stealing storage space. Many viruses simply reproduce without any trigger for an attack phase, yet still damage the programs or disks they infect. This is not intentional on the part of the virus writer but happens simply because the virus often contains very poor coding.
Classes of Viruses
There are four main classes of viruses: File Infectors, System or Boot Sector infectors, Macro viruses, and Stealth viruses.
File Infectors - Of all the known viruses, these are the most common types. File infectors attach themselves to files they know how to infect, usually .COM and .EXE files, and overwrite part of the program. When the program is executed, the virus executes and infects more files. Overwriting viruses do not tend to be very successful since the overwritten program rarely continues to function properly, and the virus is almost immediately discovered. More sophisticated file viruses modify the program so that the original instructions are saved and executed after the virus finishes. File infectors can also remain resident in memory and use “stealth” techniques to hide their presence.
System or Boot Sector Infectors- These types of viruses plant themselves in your system sectors. System sectors are special areas on your disk containing programs that are executed when you boot your PC. These sectors are invisible to normal programs but are vital in the operation of your PC. There are two types of system sectors found on DOS PCs: DOS boot sectors and partition sectors (also known as Master Boot Records or MBRs).
System sector viruses, commonly known as boot sector viruses, modify the program in either the DOS boot sector or the partition sector. One example would be receiving a floppy from a trusted source that carries a boot sector virus. While your operating system is running, files on the floppy can be read without triggering the virus. But if you leave the floppy in the drive and restart the computer, the computer will look in the floppy drive first, find the boot sector virus, load it, and can make it temporarily impossible to use your hard drive.
Macro Viruses - This particular class of virus seems to be the most misunderstood. It can also be classified as a file virus because it infects documents from Microsoft Office applications, which have their own macro languages built in. These viruses execute because Microsoft defined special macros that run automatically; the mere act of opening an infected Word document or Excel spreadsheet can allow the virus to execute. Macro viruses have been successful because most people regard documents as data and not as programs.
Stealth Viruses- These viruses attempt to hide their presence. Some techniques include hiding the change in date and time and the increase in file size. Others can prevent anti-virus software from reading the part of the file where the virus is loaded. They can also encrypt the virus code using variable encryption techniques.
Widespread Myths
Viruses are often misunderstood. They can only infect your computer if you execute an infected file or boot from an infected floppy disk. Here are a few other common myths about viruses.
You can get a virus from data. Data is not an executable program, so this is a myth. If someone sends you a data file that contains a virus, you would have to rename the file and execute it to become infected. In essence, the virus must be executable in order to be hostile. Data is inert and simply consumes space.

Viruses can infect your CMOS memory. CMOS stands for Complementary Metal Oxide Semiconductor. It is functionally different from the main system RAM used for executing programs. The CMOS memory is very small and is not designed to hold executable routines; it contains system configuration, time, and date information. Viruses can damage your CMOS, but the CMOS will not get infected. If your CMOS memory is corrupted, you may not be able to access your disks or boot your PC.

You can write-protect your hard drive. Some programs claim to write-protect your hard drive, but this can only be done in software. Write-protecting will stop some viruses and will protect your disk from someone inadvertently writing to it, but it also interferes with updates and normal operation of the computer, since swap files and other temporary caching cannot be written.

Viruses come from online systems. Online systems are indeed a principal vector in the spread of viruses. It is after downloading that there are innumerable methods of invoking a virus: through macros, automatic reposting of a web page, automatic previews of emails, and other methods. Even loading a plug-and-play DVD, CD-ROM, or memory stick can invoke a virus.

You can get a virus from graphic files. Graphic files, such as .JPG or .GIF, contain images that are simply displayed. In order to get a virus, a program has to be executed; since graphic files are nothing more than data files, they pose no executable threat by themselves. However, through steganography, text and data, including code, can be embedded in an image file. A launcher program may know this and look for the embedded code to extract and execute, but the launcher is separate from the image file and is the executable component.
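To illustrate why an image alone is not a threat but can still carry a payload, the sketch below, my own example, uses a simple heuristic to check whether a JPEG file contains extra bytes appended after its end-of-image marker, one common way data is smuggled inside an otherwise valid picture. The file name is hypothetical, and a separate launcher would still be needed to do anything with such a payload.

# Minimal sketch: detect data appended after a JPEG's end-of-image marker
# (FF D9). A valid picture still displays, but trailing bytes can hide a
# payload that only a separate launcher program could ever execute.
# Note: locating the last FF D9 is a simple heuristic, not a full parser.
import sys

JPEG_EOI = b"\xff\xd9"

def trailing_bytes(path: str) -> int:
    """Return the number of bytes stored after the last end-of-image marker."""
    with open(path, "rb") as f:
        data = f.read()
    eoi = data.rfind(JPEG_EOI)
    if eoi == -1:
        raise ValueError("No JPEG end-of-image marker found")
    return len(data) - (eoi + len(JPEG_EOI))

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "photo.jpg"  # hypothetical file
    extra = trailing_bytes(path)
    print(f"{path}: {extra} byte(s) after the JPEG end-of-image marker")
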
Virus Protection Software
There are many techniques that can be used to detect viruses on computers, each with its own strengths and weaknesses. It would be ideal to stop viruses from infecting your computer in the first place. Since that cannot be guaranteed, we can do the next best thing: use anti-virus software and attempt to detect viruses. If you detect a virus, you can remove it and prevent it from spreading.
Virus Scanners
Scanning is the only technique that can recognize a virus while it is still active. Once a virus has been detected, it is important to remove it as quickly as possible. Virus scanners look for code characteristic of a virus: the writer of a scanner extracts identifying byte patterns from the code the virus inserts, and the scanner uses these patterns to search memory, files, and system sectors. If a match is found, the virus scanner announces that a virus has been found and seeks to isolate it.
If scanning is your only defense against viruses, you can improve the odds of detecting a virus on your computer by using two or more scanners. You should also make sure that you always have the latest version of the virus signatures.
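As a simplified illustration of how signature scanning works, and not how any commercial scanner is actually implemented, the sketch below searches a file's bytes for known patterns. The "signatures" and the file name shown are made-up placeholders; real products add wildcards, heuristics, and unpacking.

# Simplified sketch of signature scanning: search a file's bytes for known
# patterns. The signatures below are made-up placeholders, not real ones.
SIGNATURES = {
    "Example.Virus.A": b"\xde\xad\xbe\xef\x90\x90",   # hypothetical pattern
    "Example.Virus.B": b"INFECTME",                   # hypothetical pattern
}

def scan_file(path: str) -> list[str]:
    """Return the names of any signatures found in the file at 'path'."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

if __name__ == "__main__":
    hits = scan_file("suspect.bin")   # hypothetical file name
    if hits:
        print("Possible infection:", ", ".join(hits))
    else:
        print("No known signatures found.")
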
Disinfector
Most vendors that sell scanners also offer a disinfector. A disinfector has the same limitations as a scanner, except that it must be current to be safe to use. A disinfector also has an even bigger disadvantage: many viruses cannot be removed without damaging the infected file. There have been many reports of files remaining damaged even when the program claims to have disinfected them. A disinfector is good to use, but use it with care.
Another disadvantage is that some of your programs may no longer work after being disinfected. Many disinfectors will not tell you when they have failed to correctly restore the original program. You can safely use a disinfector if you have the capability to check and make sure the original file has been restored.
Interceptors
Interceptors, also known as resident monitors, are particularly useful for deflecting logic bombs and Trojans. The interceptor monitors operating system requests that write to disk or do other things the program considers threatening. If such a request is found, it generally pops up and asks whether you want to allow the request to continue. There is no reliable way to intercept direct branches into low-level code or direct input and output instructions issued by the virus, and some viruses attempt to modify the interrupt vectors to disable any monitoring code, so it is important to realize that this style of monitoring has limits. An interception product is best used to complement another protection program. There are many ways to bypass interceptors, so you should not depend on them as a primary defense against viruses.
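To show the basic idea behind a resident monitor, the code below is a much simplified, polling-based sketch of my own rather than how real interceptors hook operating system requests: it watches a directory for new or modified files and flags them for review.

# Much simplified resident-monitor sketch: poll a directory and flag files
# whose modification time changes. Real interceptors hook OS write requests;
# this polling loop only illustrates the monitoring concept.
import os
import time

def snapshot(directory: str) -> dict[str, float]:
    """Map each file in 'directory' to its last-modification time."""
    return {
        name: os.path.getmtime(os.path.join(directory, name))
        for name in os.listdir(directory)
        if os.path.isfile(os.path.join(directory, name))
    }

def watch(directory: str, interval: float = 2.0) -> None:
    baseline = snapshot(directory)
    while True:
        time.sleep(interval)
        current = snapshot(directory)
        for name, mtime in current.items():
            if name not in baseline:
                print(f"New file detected: {name}")
            elif mtime != baseline[name]:
                print(f"File modified: {name} -- review before trusting it")
        baseline = current

if __name__ == "__main__":
    watch(".")   # watch the current directory; press Ctrl+C to stop
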
Inoculators
Inoculators are also known as immunizers, and there are two types. One type modifies your files or system sectors in an attempt to fool viruses into thinking the target is already infected. It does this by making the same changes the virus uses to identify a file or sector as infected. Presumably, the virus will not infect anything because it thinks everything is already infected. This works for only a small number of viruses and is considered unreliable today.
The second type attempts to make your programs self-checking by attaching a small section of check code to them. When the program executes, the check code first recalculates the check data and compares it to the stored data, warning you of any changes to the program. The disadvantage is that the self-checking code and check data can themselves be modified or disabled, and some programs refuse to run if they have been modified this way. This approach can also raise alarms from other anti-virus programs, since the self-check code changes the original program in much the same way a virus would. Some products use this technique to substantiate a claim to detect unknown viruses, but it is not a reliable way to deal with viruses.
Integrity Checker
An integrity checker reads your entire disk and records integrity data that acts as a signature for the files, boot sectors, and other areas. A virus must change something on your computer, and the integrity check identifies these changes and alerts you to a possible virus. This approach is the only one that can handle all the other threats to your data along with viruses, and it provides the only reliable way to find out what damage a virus has done. A well-written integrity checker should be able to detect any virus, not just known viruses.
An integrity checker will not identify a virus by name unless it includes a scanner component; many anti-virus products now incorporate both techniques. Some older integrity checkers were simply too slow or hard to use to be truly effective. A disadvantage of a bare-bones integrity checker is that it cannot differentiate file corruption caused by a bug from corruption caused by a virus. Make sure your product reads all files and system sectors in their entirety rather than just spot-checking. A hashing sketch illustrating the idea follows.
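The sketch below, my own simplification rather than a product implementation, shows the core of an integrity checker: record a cryptographic hash for each file, then re-hash later and report anything that changed. The baseline file name is an arbitrary choice for the example.

# Core of an integrity checker, much simplified: store a SHA-256 hash per
# file, then re-hash later and report any file whose contents have changed.
import hashlib
import json
import os

def hash_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(directory: str, baseline_path: str = "baseline.json") -> None:
    baseline = {
        name: hash_file(os.path.join(directory, name))
        for name in os.listdir(directory)
        if os.path.isfile(os.path.join(directory, name))
    }
    with open(baseline_path, "w") as f:
        json.dump(baseline, f, indent=2)

def verify(directory: str, baseline_path: str = "baseline.json") -> None:
    with open(baseline_path) as f:
        baseline = json.load(f)
    for name, stored_hash in baseline.items():
        path = os.path.join(directory, name)
        if not os.path.exists(path):
            print(f"MISSING : {name}")
        elif hash_file(path) != stored_hash:
            print(f"CHANGED : {name}")   # could be a virus or ordinary corruption

if __name__ == "__main__":
    build_baseline(".")   # first run: record the baseline
    verify(".")           # later runs: compare against it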

Other Threats to Computers

There are many other threats to your computer. Problems with hardware, software, and typos are more likely to cause undetected damage to your data and may appear virus-like. It is easy to understand the threat that disk failure represents. Even though viruses are a threat, we need to address these other threats as well by building fault tolerance into our systems, running multiple processor cores, and installing stable, quality RAM. Even driver updates can cause damage and loss of data; therefore, automatic updates should be turned off and all updates reviewed regularly.

Conclusion

There are many variants of viruses out in the real world today. No one is safe from infection, which is why it is so important to take precautions. If you receive anything from an unknown source, delete it. Always update your antivirus with the latest signature files. Most viruses do little damage, but others can delete important files from your hard drive, causing your PC to become inoperable. A few minutes of prevention is better than several hours of frustration and lost data.

Reference:

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach. (3rd ed.). New York: John Wiley & Sons Inc.

McClure, S. (2009). Hacking Exposed 6, McGraw-Hill, ISBN 9780071613743

Slade, R. (1994). Computer Viruses: How to Avoid Them, How to Get Rid of Them, and How to Get Help. New York: Springer-Verlag.