Wednesday, April 20, 2016

Human Centric Computing

Commentary: This post was originally written in December 2010 and was updated and reposted in December 2013. Chief Executive Officers fear commoditization of their products and services; commoditization is a leading indicator of attenuating profit margins and a mature market. Efforts such as "new and improved" are short-term attempts to extend a product life cycle. A better solution is to resource disruptive technologies that make standing products and services obsolete, shift the market, and create opportunities for profit. Human centric computing does just that, and project managers may find themselves implementing these projects at various levels of complexity. Thus, project managers should have a grasp of this technology and even seek solutions in their current projects.

Human Centric Computing
by 
JT Bogden, PMP

Human centric computing has been around for a long time. For decades, movies have fantasized about and romanticized sentient computers and machines that interface with human beings naturally. More recent movies have taken this to the ultimate end with characters such as Star Trek's Data, Artificial Intelligence: A.I.'s David, and I, Robot's Sonny. In all cases these machines developed self-awareness, or the essence of what is considered to be uniquely human, yet remained machines. The movie Bicentennial Man went the opposite direction, following a self-aware machine that became human. This is fantasy, but there is a practical side to it.

Michael Dertouzos, in his book The Unfinished Revolution, discusses early attempts at developing the technologies behind these machines. The current computational technologies are being challenged as the unfinished revolution plays out. I am not in full agreement with the common understanding of the Graphical User Interface, GUI, as "a mouse-driven, icon-based interface that replaced the command line interface". That is a technology-specific definition that is somewhat limiting and arcane in its thinking. A GUI is better understood as a visceral, human centric interface of which the mouse-and-icons arrangement is only one form. Other forms use touch screens, voice recognition, and holography. In the end state, the machine interfaces with humans as another human would.

Human Centric Computing

Humans possess sensory capabilities that are fully information based. Under the auspices of information theory, human sensory processing was shown during the 1960s to be consistent with Fourier transforms. These are mathematical operations that represent time-domain signals in terms of their frequency content. In lay terms, your senses pick up natural events and biologically convert each event into electrical signals in your nervous system. Those signals have information encoded in them and arrive at the brain, where they are processed holographically. The current computational experience touches three of the five senses. The visceral capability currently provides the greatest information to the user because the primary interface is visual and the visual system is effectively part of the brain. The palpable and auditory senses are the lesser utilized, through touch screens, tones, and command recognition. The only time smell or taste comes into play is when the machine is fried in a surge, which leaves a bad taste in one's mouth. However, all the senses can be utilized, since their biological processing is identical. The only need is for the correct collection devices or sensors.
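
For reference, the standard Fourier transform pair the paragraph refers to, relating a time-domain signal x(t) to its frequency-domain representation X(f), can be written as:

\[
X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-j 2\pi f t}\, dt,
\qquad
x(t) = \int_{-\infty}^{\infty} X(f)\, e^{j 2\pi f t}\, df
\]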

Technological Innovations Emerging

If innovations such as the device examples that follow are fully developed and combined with the visceral and palpable capabilities cited earlier, truly human centric machines will have emerged and the 'face' of the GUI will have changed forever.

Microsoft's Surface is literally a desktop that changes the fundamental way humans interact with digital systems by discarding the mouse and keyboard altogether. Bill Gates remarks that the old adage was to place a computer on every desktop; now, he says, Microsoft is replacing the desktop itself with a computational device. This product increases the utilization of the palpable combined with the visceral in order to sort and organize content, then transfer the content between systems with the relative ease of using the fingertips (Microsoft, 2008). For example, a digital camera is set on the surface, the stored images are downloaded, and they then appear as arrayed images on the surface for sorting with your fingertips. The TED Talk on multi-touch interfaces highlights the technology.

Another visceral and palpable product is the Heliodisplay. This device eliminates the keyboard and mouse as well. Images appear in three dimensions on a planar field cast into the air using a proprietary technology. Some models permit the use of one's hands and fingers to 'grab' holographic objects in mid-air and move them around (IO2, 2007). Another example of this concept is the TED Talk video Grab a Pixel.

On touch screens of various forms, virtual keyboards can be brought up if needed. However, speech software allows not only speech-to-text translation but also control and instruction. Loquendo's speech engines can provide high quality spoken instructions that replace error tones and help text, its telephony products are capable of interacting with callers, and its software comes in 25 languages (Loquendo, 2008).

There are innumerable human centric projects ongoing. In time, these products will increasingly make it to the market in various forms, where they will be further refined and combined with other emerging technologies. One such emerging trend and field is the blending of virtual reality and the natural world. The TED Talk video 'SixthSense' illustrates some of the ongoing projects and efforts to change how we interconnect with systems.

Combining sensory and collection technology with neural agents may increase the ability to evaluate information, bringing computer systems closer to self-awareness and true artificial intelligence. Imagine a machine capable of taking in an experience and then sharing that experience in a human manner.

Commentary: Project managers with objectives where information must be selected and collected quickly, without typing or swiping a mouse across the screen, should consider using these types of products whenever possible. Although costly now, the cost of these technologies will drop as the new economy sets in.

References:

Dertouzos, M. L. (2001). The Unfinished Revolution: Human-Centered Computers and What They Can Do for Us (1st ed.). HarperBusiness.

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach (3rd ed.). New York: John Wiley & Sons, Inc.

IO2 Staff Writer (2007). Heliodisplay. Retrieved February 25, 2009, from http://www.io2technology.com/

Loquendo Staff Writers (2008). Loquendo corporate site. Retrieved February 25, 2009, from http://www.loquendo.com/en/index.htm

Microsoft Staff Writer (2008). Microsoft Surface. Retrieved February 25, 2009, from http://www.microsoft.com/surface/product.html#section=The%20Product

Sunday, March 13, 2016

Hip Pocket Tools To Collect Quick Data


Hip Pocket Tools To Collect Quick Data
By
JT Bogden, PMP

For project managers working in IT, having a grasp of simple activities and practices goes a long way toward understanding the complexity behind many projects and how things are inter-related. Writing code to quickly gather browser or network information is one of those simple little things that can have a major impact on a project, because it provides a wide breadth of information about a web application's users and environment. This information can be useful across a breadth of activities within an organization as well.

Let us look at the use of mobile code that senses the environmental conditions. The information collected should be used to properly route the web application to code that accounts for those conditions. Sometimes that means simply adapting dynamically to qualities like screen width. At other times, code has to account for browser differences; for example, some browsers and versions support features like geolocation and others do not.

I have scripted some mobile code to detect the current environmental conditions, as far as Blogger permits, Table 1. The Blogger web application does not allow full-featured client-side JavaScript and removes or blocks some code statements. In the code, logic detects the environmental conditions in Internet Explorer, Safari, Chrome, Opera, and Firefox and routes the results to the tabular output; a sketch of this kind of detection script follows Table 1. Geolocation is tricky: it does not execute in all browser versions and may post no results, or error results, in some browsers and versions. Please try reviewing this post in multiple browsers and on multiple platforms (iPad, iPod, PC, MacBooks).

Table 1: Sniffer Results (MOBILE CODE RESULTS; the fields below are populated by the embedded script when it runs in the browser)
Device
Screen Resolution
Client Resolution
Java Enabled
Cookies Enabled
Colors
Full User Agent
Geolocation
(To collect geolocation, please approve the request in your browser.)
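
Below is a minimal sketch of the kind of client-side detection script described above. The property reads are standard browser APIs, but the element IDs it writes into are hypothetical placeholders for the cells of Table 1, and a platform like Blogger may strip some of these statements.

// Minimal environment sniffer sketch (hypothetical element IDs; standard browser APIs).
function collectEnvironment() {
  var results = {
    device: navigator.platform,
    screenResolution: screen.width + ' x ' + screen.height,
    clientResolution: document.documentElement.clientWidth + ' x ' +
                      document.documentElement.clientHeight,
    javaEnabled: navigator.javaEnabled ? navigator.javaEnabled() : false,
    cookiesEnabled: navigator.cookieEnabled,
    colors: screen.colorDepth + '-bit',
    userAgent: navigator.userAgent
  };
  return results;
}

function writeResults(results) {
  // Assumes a table cell exists for each key, e.g. an element with id "result-userAgent".
  for (var key in results) {
    var cell = document.getElementById('result-' + key);
    if (cell) { cell.textContent = String(results[key]); }
  }
}

function collectGeolocation() {
  // Geolocation requires user approval and is not supported in every browser or version.
  if (!navigator.geolocation) { return; }
  navigator.geolocation.getCurrentPosition(function (pos) {
    var cell = document.getElementById('result-geolocation');
    if (cell) {
      cell.textContent = pos.coords.latitude.toFixed(4) + ', ' +
                         pos.coords.longitude.toFixed(4);
    }
  }, function (err) {
    // Some browsers report an error instead of a position.
    console.log('Geolocation error: ' + err.message);
  });
}

writeResults(collectEnvironment());
collectGeolocation();

The same collected object could also be posted to a server endpoint and stored, which is what makes the histories discussed below possible.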

Embedding code into web applications and storing the results in a database can yield valuable histories. Project managers planning and coordinating projects, whether writing a web application or conducting some other IT-related project, must have an understanding of the environmental situation. Almost always there are anomalies. The use of mobile code can provide valuable information back to the project manager before issues arise.

Mobile code is one of the hip pocket tools that can provide important information. Installing it and tracking the data over time can show progress and effects and flush out anomalies before they become problems.

Neural Agents

Comment: Several years ago, I was the leader of an operationalized telecommunications cell. The purpose of the cell was to monitor the effectiveness and readiness of telecommunications in support of ongoing operations. The staff turned over regularly due to the operational tempo, and I had to train new staff quickly. I did so by preparing a series of technical briefs on topics the cell dealt with. This brief dealt with neural agents; I have updated it and provided additional postings that paint a picture of potential advanced systems.

Neural Agents
by
JT Bogden, PMP

Figure 1: Agent Smith
The Matrix movie franchise
Neural agents have a spooky air about them, as though they are sentient and have clandestine purpose. The movie franchise 'The Matrix', Figure 1, made use of Agent Smith as an artificial intelligence designed to eliminate zombie processes in the simulation and simulated humans who became rogue, such as Neo and Morpheus. In the end, Agent Smith is given freedom that results in him becoming rogue and rebellious, attempting to acquire increasing power over the simulation.

The notion of artificial intelligence has been around forever. Hollywood began capturing this idea in epic battles between man and machine in the early days of sci-fi. More recently, the movie "AI" highlighted a future where intelligent machines survive humans. Meanwhile, the Star Trek franchise advances intelligent ships using biological processing and has a race of cybernetic humanoids called the Borg. Given all the variations of neural technologies, the neural agent remains a promising technology emerging in the area of event monitoring, though it does not act quite as provocatively as Agent Smith. The latest development in support of artificial intelligence is the neural agent, or Neugent (no relation to Ted Nugent), which is becoming popular in some enterprise networks.

Companies can optimize their business and improve their analytical support capabilities as this technology enables a new generation of business applications that not only analyze conditions in business markets but also predict future conditions and suggest courses of action to take.

Inside the Neugent

Neural agents are small networked units containing hardware and software. Each agent has processors and a small amount of local memory. Communications channels (connections) between the units carry data that is usually encoded on independent low-bandwidth telemetry. These units operate solely on their local data and on input received over the connections to other agents. They transmit their processed information over telemetry to central monitoring software or to other agents.

The idea for Neugents came from the desire to produce artificial systems capable of "intelligent" computations similar to those of the human brain. Like the human brain, Neugents "learn" by example and observation, much as a child learns to recognize colors from examples of colors. By going through this self-learning process, Neugents can acquire more knowledge than any single expert in a field is capable of achieving.

Neugents improve the effectiveness of managing large environments by detecting complex or unseen patterns in data. They analyze the system for availability and performance. By doing this, Neugents can "predict" the likelihood of a problem and, over time, develop confidence about whether it will happen. Once a Neugent has "learned" the system's history, it can make predictions based on that analysis and generate an alert such as: "There is a 90% chance the system will experience a paging file error in the next 30 minutes."
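
The actual Neugent algorithms are proprietary, but a toy sketch in JavaScript conveys the idea of learning a baseline purely by observation and then flagging a likely problem. The metric name, the thresholds, and the mapping from deviation to a percentage are hypothetical simplifications, not the product's method.

// Toy monitoring agent: learns a running baseline of a metric by observation,
// then reports how unusual the latest reading is. Hypothetical metric and thresholds.
function createAgent(metricName) {
  var count = 0, mean = 0, m2 = 0; // Welford's online mean/variance

  return {
    observe: function (value) {
      count += 1;
      var delta = value - mean;
      mean += delta / count;
      m2 += delta * (value - mean);
    },
    assess: function (value) {
      if (count < 30) { return { metric: metricName, status: 'learning' }; }
      var stdDev = Math.sqrt(m2 / (count - 1));
      var deviations = stdDev > 0 ? Math.abs(value - mean) / stdDev : 0;
      // Crude mapping from deviation to a confidence-style statement.
      var likelihood = Math.min(99, Math.round(deviations * 30));
      return {
        metric: metricName,
        status: deviations > 2 ? 'alert' : 'normal',
        message: 'There is a ' + likelihood + '% chance of a ' + metricName +
                 ' problem developing.'
      };
    }
  };
}

// Example: feed the agent a run of hypothetical paging-rate samples, then assess a spike.
var agent = createAgent('paging file');
for (var i = 0; i < 1000; i++) { agent.observe(50 + Math.random() * 10); }
console.log(agent.assess(120)); // far above the learned baseline, so the agent alerts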

How Neugents Differ From Older Agents

Conventional or older agent technology requires someone to work out a step-by-step solution to a problem and then code that solution. Neugents, on the other hand, are designed to recognize patterns and to be trained. The logic behind the Neugent is not discrete but symbolic. Neugents assume responsibility for learning, then adapt or program themselves to the situation and even self-organize. This process of adaptive learning increases the Neugent's knowledge, enabling it to more accurately predict future system problems and even suggest changes. While these claims sound far reaching, progress has been made in many areas of adaptive systems.


Neugents get more powerful as you use them. The more data they collect, the more they learn; the more they learn, the more accurate their predictions become. This capability comes from two complementary technologies: the ability to perform multi-dimensional pattern recognition based on performance data and the power to monitor the IT environment from an end-to-end business perspective.

Systems Use of Neugents and Benefits

Genuine enterprise management is built on a foundation of sophisticated monitoring. Neugents apply to all areas. They can automatically generate lists for new services and products, identify unusual risks and fraudulent practices, and predict future demand for products, which enables businesses to produce the right amount of inventory at the right time. Neugents help reduce the complexity of the Information Technology (IT) infrastructure and applications by providing predictive capabilities and capacities.

Neugents have already made an impact on the operations of many Windows Server users who have tested the technology. The agents can take two weeks of data and, in a few minutes, train the neural network. Neugents can detect if something is wrong. They have become a ground-breaking solution that empowers IT to deliver the service today's digital enterprises require.

With business applications becoming more complex and mission-critical, the use of Neugents becomes more necessary to predict and then address performance and availability problems before downtime occurs. By providing true problem prevention, Neugents offer the ability to avoid the significant costs associated with downtime and poor performance. Neugents encapsulate performance data and compare it to previously observed profiles. Using parallel pattern matching and data modeling algorithms, the profiles are compared to identify deviations and calculate the probability of a system problem.

Conclusion

Early prediction and detection of critical system states gives administrators an invaluable tool for managing even the most complex systems. By predicting system failures before they happen, organizations can ensure optimal availability. Early predictions can help increase revenue-generating activities as well as minimize the costs associated with system downtime. Neugents alleviate the need to manually write policies to monitor these devices.

Neugents provide strong price/performance for managing large and complex systems. Organizations have discovered that defining an endless variety of event types can be exhausting, expensive, and difficult to maintain. By providing predictive management, Neugents help achieve application service levels by anticipating problems and avoiding unmanageable alarm traffic as well as onerous policy administration.

Monday, May 4, 2015

EDI Overview

Commentary: This post was originally published 10 March 2011. I have made a few updates and published it again. EDI is in pervasive use in manufacturing, logistics, banking, and general day-to-day website purchases, yet many people have little understanding of it. I want to highlight what it is, general implementations, challenges, and benefits.

EDI Overview
by
JT Bogden, PMP

Electronic Data Interchange, EDI, is the pervasive method of conducting business transactions electronically. It is not technology dependent nor driven by technology implementations. Instead, EDI is driven by business needs and is a standard means of communicating business transactions. The process centers on the notion of a document. This document contains a host of information relating to purchase orders, logistics, finance, design, and/or protected personal information (PPI). These informational documents are exchanged between business partners and/or customers conducting business transactions. Traditional methodologies used paper, which carries a lot of latency and is error prone. Replacing the paper-based and call-center systems with electronic systems does not change the processes at all, since the standard processes remain independent of the implementation or medium. When the processes are conducted via electronic media, the latency is compressed out of the system and errors are reduced, making electronic processing far more desirable given the critical need for speed and accuracy in external processes.

Implementing EDI is a strategy-to-task effort that must be managed well due to the complexities of the implementation. A seven-step phased process outlines an EDI implementation.

Step 1 - Conduct a strategic review of the business.
Step 2 - Study the internal and external business processes.
Step 3 - Design an EDI solution that supports the strategic structure and serves the business needs.
Step 4 - Establish a pilot project with selected business partners.
Step 5 - Flex and test the system.
Step 6 - Implement the designed solution across the company.
Step 7 - Deploy the EDI system to all business partners.

The technology used in EDI varies based on the business strategies, Figure 1. In general, EDI services operate through three general methods: 1) a Value Added Network (VAN), 2) a Virtual Private Network (VPN) or Point-to-Point (P2P) connection, or 3) Web EDI. The first two suit small to medium sized EDI installations where a direct association is known, established, and more secure. Web EDI is conducted through a web browser over the World Wide Web and is the simplest form of EDI for very broad-based, low-value purchases. The technology in use varies slightly and comes down to three forms of secure communication with a central gateway service: FTP Secure (FTPS), Hypertext Transfer Protocol Secure (HTTPS), or the AS2 protocol, which Walmart notably mandates for its trading partners. Web EDI requires no specialized software to be installed and works well across international boundaries. Web EDI is a hub-and-spoke approach that relays messages to multiple EDI systems through its gateway service. In addition, industry security organizations provide standards and oversight for data in motion and at rest; the PCI Security Standards Council, which maintains the PCI Data Security Standard, is one such body.

Figure 1: EDI Systems Architecture
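
As a concrete illustration of the HTTPS form of exchange, the sketch below posts an EDI document to a Web EDI gateway from Node.js (version 18 or later, which provides a global fetch). The gateway URL, the bearer token, and the document skeleton are hypothetical placeholders; a real exchange would follow the trading partner's agreed standard (such as X12 or EDIFACT) and security requirements.

// Hypothetical Web EDI exchange over HTTPS (Node.js 18+, global fetch).
// The gateway URL, token, and document content are placeholders, not a real service.
const ediDocument = 'ISA*...~GS*...~ST*850*0001~...~SE*...~GE*...~IEA*...~'; // skeleton of an X12 850 purchase order, not a valid document

async function sendToGateway() {
  const response = await fetch('https://edi-gateway.example.com/documents', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/edi-x12',
      'Authorization': 'Bearer <token issued by the gateway>'
    },
    body: ediDocument
  });
  // A production integration would also parse the functional acknowledgment returned by the partner.
  console.log('Gateway responded with HTTP status', response.status);
}

sendToGateway().catch(err => console.error('Transmission failed:', err.message));
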
Overall, EDI can reduce latency in business transactions and adhere tightly to the organizational strategies without significant adaptation of the organization. Organizations may outsource the people, the processes, and even the technology as part of their strategy and objectives, but the processes remain consistent. In conclusion, EDI is a standard that is applied in various forms, offering numerous advantages to an organization and its business transactions.

Wednesday, January 8, 2014

Personal Data Storage

This post is a departure from the beaten track, as we will discuss Redundant Arrays of Independent Disks (RAIDs). Having reliable data storage is essential to a personal data storage plan. Many folks may chuckle at the thought of a personal data storage plan, but there are good reasons for having one - especially if you have lost data in the past. If you store large volumes of digital information such as tax records, medical records, vehicle documents, digital music, movies, photo collections, graphics, and libraries of source code or articles, then a data storage plan is essential to reduce the risk of loss. Loss can occur due to disk drive failures, accidental deletions, or hardware controller failures that wipe drives. Online services offer affordable subscription plans to back up your PC or store data in a cloud, but you risk a loss of privacy when using these services. Having local, portable, and reliable data storage is the best approach, and a personal data management plan is the centerpiece of the effort.

Personal Data Storage 
by
JT Bogden, PMP

Such a plan should be designed around two points. First, there should be a portable, detachable, and reliable independent disk drive system. Second, there should be a backup system. We will focus on the first point in this post.

Figure 1: Completed RAID
After a lot of research, I settled on a barebones Sans Digital TR4UT+(B) RAID enclosure, Figure 1. The device has a maximum capacity of 16 TB, supports up to USB 3.0, and has an option to operate from a controller card, improving data transfer rates beyond USB 3.0. Fault tolerance methods of cloning, numerous RAID levels, and JBOD are supported as well. Thus, the unit is well poised for long-term durable use.

Since the device is barebones, I had to find drives that are compatible. Fortunately, the device was compatible with 11 different drives, ranging from 500 GB up to 4 TB, across three vendors. I had to figure out which characteristics mattered and determine which of the drives were optimal for my needs. The approach I used was a spreadsheet matrix, Figure 2. The illustrated matrix is a shortened form and did not consider the 4 TB drive, which was cost prohibitive from the start, as were several of the other drives. The 3 TB drive was used as a baseline to create a spread among the other options. I computed a coefficient of performance, CP, for each criterion and then averaged them for the overall performance. In the end, I selected the Hitachi UltraStar 1 TB in this example and purchased four of them. They are high-end server drives that are quiet and can sustain high data transfer rates for long periods of time.

Figure 2: Decision Matrix for Drive Selection and Purchase
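
The actual figures are in the spreadsheet of Figure 2, but the same idea can be sketched in a few lines of JavaScript: each drive's attributes are normalized against the baseline drive, and the ratios are averaged into a coefficient of performance. The criteria, drives, and numbers below are hypothetical placeholders, not the Figure 2 data.

// Rough decision-matrix sketch: normalize each criterion against a baseline drive
// and average the ratios into a coefficient of performance (CP). Hypothetical data.
const criteria = [
  { name: 'capacityTB',    higherIsBetter: true  },
  { name: 'sustainedMBps', higherIsBetter: true  },
  { name: 'priceUSD',      higherIsBetter: false },
  { name: 'noiseDb',       higherIsBetter: false }
];

const baseline = { capacityTB: 3, sustainedMBps: 150, priceUSD: 250, noiseDb: 30 };

const candidates = [
  { model: 'Drive A (1 TB server class)', capacityTB: 1, sustainedMBps: 170, priceUSD: 120, noiseDb: 26 },
  { model: 'Drive B (2 TB desktop)',      capacityTB: 2, sustainedMBps: 140, priceUSD: 160, noiseDb: 32 }
];

function coefficientOfPerformance(drive) {
  const ratios = criteria.map(c =>
    c.higherIsBetter ? drive[c.name] / baseline[c.name]
                     : baseline[c.name] / drive[c.name]);
  return ratios.reduce((sum, r) => sum + r, 0) / ratios.length;
}

candidates
  .map(d => ({ model: d.model, cp: coefficientOfPerformance(d) }))
  .sort((a, b) => b.cp - a.cp)
  .forEach(d => console.log(d.model, 'CP =', d.cp.toFixed(2)));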

Figure 3: Installed Drives 
After selecting, purchasing, and installing the drives, Figure 3, RAID 5 was chosen for the drive configuration. RAID 5 permits hot-swapping a drive should one fail and provides more usable disk space than the other redundant RAID modes. RAID 5 is a cost-effective mode providing good performance and redundancy, although writes are a little slow.
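
For the arithmetic behind that trade-off, here is a quick sketch of usable capacity for four 1 TB drives under the common modes: single-parity RAID 5 keeps n-1 drives of data, mirroring keeps half, and striping keeps everything but offers no redundancy.

// Usable capacity for 4 x 1 TB drives under common RAID modes.
const driveCount = 4;
const driveSizeTB = 1;

console.log({
  'RAID 0 (stripe, no redundancy)': driveCount * driveSizeTB + ' TB',                   // 4 TB
  'RAID 1/10 (mirror)': (driveCount / 2) * driveSizeTB + ' TB',                         // 2 TB
  'RAID 5 (single parity, hot-swap friendly)': (driveCount - 1) * driveSizeTB + ' TB'   // 3 TB
});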

The final part of the process was to initialize and format the drives. The File Allocation Table formats, FAT and FAT32, are not viable options as they provide little recovery support. The New Technology File System, NTFS, improves reliability and security among other features. In addition, the emergent GUID Partition Table, GPT, partitioning scheme breaks through the older Master Boot Record limits on partition and volume sizes. Current versions of Mac OS and MS Windows both support GPT-partitioned disks for reading and writing. Therefore, in a forward-looking expectation of future movement toward this scheme, the RAID was initialized with GPT and then formatted. The formatting process took a long time.

In the end, the RAID unit was accessible by both Windows and the MacBook Pro. All the data and personal information on disparate USB drives, memory sticks, and the local machines were consolidated onto the RAID device. For the first time, all my music, movies, professional files, and personal data were in one place with strong protection. The final cost was less than $650, and the cost can be kept down if you shop around for the components, for example on Amazon. It took about 8 hours of direct effort, although the formatting and file transfers ran while I did other things.

While I will still use my memory sticks and a 1 TB portable USB drive with my notebooks, the RAID is the primary storage device. It can be moved relatively easily if I change locations and/or swapped between computers if necessary. The device can also be installed as a serverless network drive and hung off of a wireless router. I prefer not to use it in that manner, as the risk of exposure or loss of privacy increases slightly.

Overall, the system is quiet, has a low power drain while in operation, and provides heightened data protection. I encourage others to rethink how they are storing their data and invest in a solid, reliable solution. As solid state drives come into increasing use, traditional spinning-platter drives will drop in price dramatically. This will enable more folks to build drive arrays like mine at lower cost and then convert them to solid state systems later as those prices drop.

Virtualizing Computational Workloads

Commentary: This is a general discussion into which I wrapped a unique use of virtualization. In the short term, companies can benefit from offloading heightened computational demands. They may prefer to purchase computational power for a limited time rather than make the capital expenditure of purchasing and expanding systems. The virtualized environment can also solve issues relating to geographically dispersed personnel. Overall, we are a long way from meaningfully and effectively using the excess computational power residing on the web or across an organization. This discussion, though, hopefully gives some insight into how to use that excess power.


Virtualizing Computational Workloads
by
JT Bogden, PMP  

Virtualized computing can occur over any internetwork, including the World Wide Web. The concept centers on distributing the use of excess system resources such as computational power, memory, and storage space in a service-oriented or utilitarian architecture - in simple terms, internet-based resource provisioning. Multiple virtualized environments can exist on the same network, although each piece of physical hardware can only be assigned to and managed by a single virtualization engine. Each virtualized environment, a cloud, encapsulates a unique group of participating hardware resources that is managed through virtualization, Figure 1. Demands for services are then sent out into the cloud to be processed, and the results are returned to the virtual machine.

Figure 1:  The Virtualized Concept


The virtual machine can be as simple as a browser, or it can be the complete set of applications, including the operating system, running on a terminal through a thin client such as Citrix. The cloud service can be as simple as a search service such as Google and/or database storage of information. Simple cloud examples are SkyDrive, MobileMe™, and now iCloud™. iCloud™ offers backup services, storage services, and platform synchronization services to its users over the World Wide Web.

Virtualization

The virtualization concept is one in which operating systems, servers, applications, management, networks, hardware, storage, and other services are emulated in software; to the end user, the result is completely independent of the underlying hardware or the unique technological nuances of system configurations. Examples of virtualization include software such as VMware Fusion, in which Microsoft's operating system and software run on an Apple MacBook. Another example of virtualization is the honeypot used in computer network defense: software runs on a desktop computer that gives the appearance of a real network inside the DMZ to a hacker attempting to penetrate the system. The idea is to decoy the hacker away from the real systems using a fake one emulated in software. An example of hardware virtualization is the soft modem. PC manufacturers found that it is cheaper to emulate some peripheral hardware in software; the problem with this is diminished system performance due to the processor being loaded with the emulation. The Java virtual machine is another example of virtualization. It is a platform-independent engine that permits Java coders to write the same code for every supported platform and allows the code to function as mobile code without accounting for each platform.

Provisioning In Virtualization

Once hardware resources are inventoried and made available for loading, provisioning in a virtualized environment occurs in several ways. First, physical resources are provisioned by management rules in the virtualization software, usually at the load management tier, Figure 1. Second, users of a virtual machine can schedule a number of processors, the amount of RAM required, the amount of disk space, and even the degree of precision required for their computational needs. This occurs in the administration of the virtualized environment tier, Figure 1. Thus, idle or excess resources can, in effect, be economically rationed to an end user who is willing to pay for the level of service desired. In this way the end user enters into an operating lease for the computational resources for a period of time. No longer does the end user need to make a capital purchase of his computational resources.
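
A minimal sketch of what such a request might look like from the end user's side, with a load-management check against the inventoried pool. The field names, capacities, and lease terms are hypothetical, not any particular vendor's API.

// Hypothetical provisioning sketch: an end user requests resources for a lease
// period, and a load-management tier checks the request against inventoried capacity.
const inventory = { processors: 128, ramGB: 512, diskTB: 40 };

const request = {
  processors: 16,
  ramGB: 64,
  diskTB: 2,
  precision: 'double',      // degree of precision required for the computation
  leaseHours: 72            // an operating lease instead of a capital purchase
};

function provision(req, pool) {
  const fits = req.processors <= pool.processors &&
               req.ramGB <= pool.ramGB &&
               req.diskTB <= pool.diskTB;
  if (!fits) { return { granted: false, reason: 'insufficient idle capacity' }; }

  // Reserve the resources for the lease period.
  pool.processors -= req.processors;
  pool.ramGB -= req.ramGB;
  pool.diskTB -= req.diskTB;
  return { granted: true, expiresInHours: req.leaseHours };
}

console.log(provision(request, inventory));
console.log('Remaining pool:', inventory);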

Computational Power Challenges

I have built machines with multiple processors and arrayed full machines to handle complex computing requirements. Multi-processor machines were used to solve processor-intensive problem sets such as Computer Aided Design, CAD, demands or high-transaction SQL servers. Not only were multiple processors necessary, but so were multiple buses and drive stacks in order to marginalize contention issues. The operating system typically ran on one bus while the application ran over several other buses accessing independent drive stacks. Vendor solutions have progressed with newer approaches to storage systems and servers in order to better support high availability and demand. In another application, arrayed machines were used to handle intensive animated graphics compilations involving solid modeling, ray tracing, and shadowing on animations running at 32 frames per second. This meant that a 3-minute animation had 5760 frames that needed to be crunched 3 different times. In solving this problem, the load was broken into sets. Parallel machines crunched through the solid model sets, handing off to ray tracing machines and then to shadowing machines. In the end the parallel tracks converged into a single machine where the sets were re-assembled into the finished product. System failures limited work stoppages to a small group of frames that could be 're-crunched' then injected back into the production flow.

These kinds of problem sets are becoming more common today as computational demands on computers become more pervasive in society. Unfortunately, software and hardware configurations remain somewhat unchanged and in many cases unable to handle the stresses of complex or high-demand computations. Many software packages cannot recognize more than one processor, or, if they do handle multiple processors, the loading is batched and prioritized using a convention like first in, first out (FIFO) or stacked job processing. This is fine for production use of the computational power, as in the examples given earlier. However, what if the computational demand is not production oriented but is instead sentient processing, or manufactures knowledge? I would like to explore an interesting concept in which computational power in the cloud is arrayed in a virtualized neural net.

Arraying for Computational Power in New Ways

Figure 2: Computational Node


One solution is to leverage arcane architectures in a new way. I begin with the creation of a virtual computational node in software, Figure 2, to handle an assigned information process. Then I organize hundreds or even tens of thousands of computational nodes on a virtualized backplane, Figure 3. The nodes communicate over the virtual backplane, listening for information being passed, processing it, and publishing the new information back to the backplane. A virtualized general manager administers the backplane and is capable of arraying the nodes dynamically in series or parallel to solve computational tasks. The node arrays should be designed using object-oriented concepts. Encapsulated in each node is memory, processor power, and its own virtual operating system and applications. The nodes are arrayed polymorphically, and each node inherits public information. In this way, software developers can design workflow management methods, like manufacturing flow, that array nodes and use queues to reduce crunch time, avoid bottlenecks, and distribute the workload. Mind you, this is not physical but virtual. The work packages are handed off to the load manager, which tasks the physical hardware in the cloud, Figure 3.

Figure 3:  Complex Computational Architecture
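
A toy sketch of that backplane idea in JavaScript: nodes subscribe to topics on a virtual backplane, process what they hear, and publish new information back, with the "general manager" role reduced to simply wiring output topics to input topics. The topics, node names, and in-process bus are hypothetical simplifications of Figures 2 and 3, not a distributed implementation.

// Toy virtual backplane: nodes listen for information on topics, process it,
// and publish new information back. An in-process simplification of the concept.
class Backplane {
  constructor() { this.subscribers = {}; }
  subscribe(topic, handler) {
    (this.subscribers[topic] = this.subscribers[topic] || []).push(handler);
  }
  publish(topic, message) {
    (this.subscribers[topic] || []).forEach(handler => handler(message));
  }
}

// The "general manager" arrays nodes in series by wiring output topics to input topics.
function createNode(bus, name, inTopic, outTopic, work) {
  bus.subscribe(inTopic, message => {
    const result = work(message); // the node's assigned information process
    console.log(name + ': ' + message + ' -> ' + result);
    if (outTopic) { bus.publish(outTopic, result); }
  });
}

const bus = new Backplane();
// Hypothetical three-stage pipeline echoing the animation example: model -> trace -> shadow.
createNode(bus, 'solid-model node', 'frames/raw', 'frames/modeled', f => f + '+model');
createNode(bus, 'ray-trace node', 'frames/modeled', 'frames/traced', f => f + '+trace');
createNode(bus, 'shadow node', 'frames/traced', null, f => f + '+shadow');

bus.publish('frames/raw', 'frame-0001');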


This concept is not new. The telecommunications industry uses a variation of it for specialized switching applications rather than general-use computing. There are also array processors used for parallel processing. Even the fictional story Digital Fortress by Dan Brown centered on a million-processor system. Unfortunately, none of these concepts were designed for general-use computing. If arrayed computational architectures were designed to solve complex and difficult information sets, the potential would be enormous. For example, nodes could be arrayed to monitor for complex conditions, make decisions on courses of action, and enact the solution.

The challenges of symbolic logic processing can be overcome by using arrayed processing to virtualize neural nets. A combination of sensory arrays for inputs, node-to-node processing, and valid pathways or lines of logic would provide the means to complete complex processing and output results that are otherwise difficult to achieve. If enough physical hardware participates over the World Wide Web, then the web could become an enormous neural processor solving some of the most incredibly complex computational problem sets.

The World Wide Web and Computational Limitations

This architecture within a cloud is limited to developing knowledge or lines of logic. Gaps or breaks in a line of logic may be inferred based on history, which is also known as a quantum leap in knowledge, or wisdom. Wisdom systems are different from knowledge systems. Knowledge is highly structured and its formation can be automated more easily, whereas wisdom is less structured, having gaps in knowledge and information. Wisdom relies on inference and intuition in order to select valid information from its absence or out of innuendo, ambiguity, or other noise. Wisdom is more of an art, whereas knowledge is more of a science.

Nonetheless, all the participating computers on the World Wide Web could enable a giant simulated brain. Of course, movies such as The Lawnmower Man, Demon Seed, Colossus: The Forbin Project, and WarGames go the extra mile, making the leap to self-aware machines that conquer the world. For now, though, let's just use them to solve work-related problems.

References:

Brown, D. (2000). Digital Fortress. St. Martin's Press. ISBN 9780312263126.

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach (3rd ed.). New York: John Wiley & Sons, Inc.

Impacts of Complexity on Project Success

Commentary: These are the relevant portions of an extensive paper written during my Master of Information Technology coursework. The paper highlights a common concern among many project managers: the lack of quality information early in a project, especially in complex projects. The overall paper proposed research into project complexity and early planning efforts.

Impacts of Complexity on Project Success
by
JT Bogden, PMP

Introduction

Project management practice and principles have been maturing and continue to mature. The general paradigm of planning well applies to early project planning and has a significant influence on the success or failure of a project. This research supports identifying the key relationships between influential factors affecting scope and risk in complex projects during early project planning. Attention to complexity is important since information technology (IT) projects are complex by nature, and complexity tends to increase risk. "Project abandonment will continue to occur -- the risks of technology implementation and the imperfect nature of our IT development practices make it inevitable" (Iacovou and Dexter, 2005, 84). Therefore, this study is focused on early information technology project planning practices, when the project is vague and the outcomes are unknown and unforeseen. The purpose is to better manage scope gap early.

Problem Statement. Poor scope formulation and risk identification in complex projects during early planning have led to lower project performance and weakened viability. Therefore, project managers are challenged to manage these issues early in order to increase the project's viability and success.

Argument. Project complexity influences performance, just as taking shortcuts in a rush for results produces an outcome with complexity-like characteristics. Lower performance outcomes may result from project-essential factors relating to scope and risk objectives that are overlooked or not properly managed, resulting in increased cost, delays, and/or quality issues that jeopardize the project's viability and success.

Body of Works Review

This effort intends to explore the significant body of works that has emerged to date. Literature research was conducted across a diversity of project types in support of the research problem statement that poor scope formulation and risk identification in a complex project during early planning affect project performance and project viability in relation to the complexity of the project. This is by no means the first time research of this nature has been explored in these three areas: scope definition, risk identification, and project complexity.

The common threads in the body of works span decades and include project management as a whole, risk and scope factors that affect project success, information and communications challenges, and complexity impacts on scope and risk. The works researched in other disciplines provide many transferable lessons learned. For example, construction and engineering projects share complexity issues, as well as information reporting and sharing concerns, with information technology projects. Other works from supporting disciplines contribute factors on education, intellect, and learning in support of competency influences on risk. A 2001 trade publication article indicated that the causes of failed projects tend to be universal. The article's author, John Murray, concludes that information technology projects fail because of a small set of problems rather than exotic causes (Murray, 2001, pp. 26-29).

In a 2008 construction project study, the researchers discussed the construction industry's front-end planning, which is explained as analogous to the project charter process. The work details a study of fourteen companies and their project planning processes and then presents a model process. The study results are summarized into critical criteria of success. In conclusion, fifty percent of the projects did not have the information required for front-end planning activities. Problem areas identified in a follow-on study include weak scope and risk identification as well as other basic issues (Bell and Back, 2008).

The problems of scope definition researched in the body of works indicate that cooperative planning and information sharing have been key factors in developing scope. A 2007 study on concurrent design addressed the complexities and risk of concurrent design projects. The researchers posed a model of interdependent project variables whose linkages illustrate the direction of the communications or information sharing between the variables. In their analysis, the researchers conclude that cooperative planning, in the form of coupling and cross-functional involvement, significantly reduces rework risk. Early uncertainty resolution depends on cross-functional participation (Mitchell and Nault, 2007).

The Technology Analysis and Strategic Management journal published an article in 2003 discussing outsourcing as a means of risk mitigation. The outcome of the case under review was project failure due to a lack of clear requirements and poor project management. This was attributed to conflict and a loss of mutual trust between the outsourced vendor and the information technology client. The result was one vendor cutting losses due to weak commitment when compared to in-house project support. The researcher suggested that shared risk may be effective in a partnership such as outsourcing but requires strong communication and some level of ownership (Natovich, 2003, p. 416). This article's case study illustrates that cooperation is critical in information technology projects. A 1997 study discussed mobilizing the partnering process in complex multinational engineering and construction projects. The researchers argued that developing project charters fostered stronger partnerships and reduced risk. In general, the article promotes a shared purpose supported by a method based on vision, key thrusts, actions, and communication. The work offers management practices and predictors for conflict resolution and successful projects. Among the best predictors of success in high-performance project managers are the abilities to reconcile views rather than differentiate them, to influence through knowledge, and to consider views over logic or preferences (Brooke and Litwin, 1997).

The literature has also indicated that the competencies of project members and conflict resolution have been key factors of interest. Northeastern University explored strengthening information technology project competencies, having conducted a survey of 190 employers and finding that employers considered hands-on experience, communications ability, and the behavioral mannerisms of the student, among other attributes. The researcher calls for a mixture of improvements to the student curriculum that involves project management skills, both visionary and hands-on, as well as group interaction (Kesner, 2008). The efforts to strengthen competencies have come not only from traditional education institutions but also from professional organizations such as the American Institute of Certified Public Accountants (AICPA). A 2008 article discussed the accounting industry's approach to identifying and correctly placing information technology staff based on assessed competency levels. The AICPA is using a competency set that applies across industries and skill levels ("IT Competency", 2008). Some dated literature also indicates that solving vague problem sets within complex projects centers on a willingness and ability to engage vague circumstances and to think abstractly. A 1999 psychology publication discussed typical intellectual engagement as involving a desire to engage and understand the world, interest in a wide variety of things, a preference for complete understandings of a complex problem, and a general need to know. The study associated intellect with typical intellectual engagement: those who engage their environment in an intellectual manner problem solve and believe they possess a greater locus of control over the events in their lives (Ferguson, 1999, pp. 557-558). Additional research is necessary in this area since this work is so dated.

In a 2006 article, researchers sought to understand methodologies for reporting to senior managers on software development projects. The work discusses reporting and governance in an organization, then breaks it into four functional areas and further refines the best practices into a common view. The researchers noted that little attention has been given to how senior managers and the board can be informed about project progress and offered several methods of informing them. The researchers reported that senior managers need information grouped into three classes: decision support, project management, and benefits realization assessments. They then discuss a variety of reports and their attributes, concluding that senior managers and board members need effective reporting if they are to offer oversight to the software development project (Oliver and Walker, 2006). Another 2006 study indicated that continuous reporting, or information sharing, builds the case for compelling board member involvement based on four factors: cost overrun history, material expenditures, [software] complexity, and any adverse effects on the company (Oliver and Walker, 2006, p. 58).

The challenges of project complexity management have made information technology governance a key factor in project success. Information technology governance has been sought as a framework to align organizational goals with project goals. In a 2009 qualitative study, researchers treated information technology governance, change management, and project management as closely related and then stated a premise that information technology governance must itself be governed to ensure that problems due to weak governance are corrected. They pose the question of how much information technology governance is required. They then organize information technology governance into three broad groups (corporate governance, scope economies, and absorptive capacity) and explore these groupings. The researchers finally relate information technology governance to the enterprise at all levels, discussing the results of a survey given to numerous actors in the organization's CRM [Customer Relationship Management] projects. They also found that most companies surveyed had risk and problem management programs that were mature rather than given lip service. The problem areas that stood out were communicating with senior management as well as with consultants and vendors. In conclusion, the researchers remark that information technology governance depends on senior management involvement and sound project management ability (Sharma, Stone, and Ekinci, 2009).

Given scope, risk, and project complexity, information technology governance offers a framework for unifying organizational objectives. Research completed in 2009 showed that information technology governance covers all the assets that may be involved in information technology, whether human, financial, physical, data, or intellectual property (Sharma, Stone, and Ekinci, 2009, p. 30). The same research has also shown that information technology governance requires top-down involvement, stating that successful implementation of information technology governance depends on senior management involvement, constancy, and positive project management abilities (Sharma, Stone, and Ekinci, 2009, p. 43). Senior management requires information to be shared, and a 2006 project journal publication supports this, remarking that continuous reporting builds the case for compelling board member involvement based on four factors: cost overrun history, material expenditures, [software] complexity, and any adverse effects on the company (Oliver and Walker, 2006, pp. 50-58).

The body of works, while much broader than sampled, demonstrates support and strength in a number of areas of the problem statement. The literature selected ranges in date from 1997 to 2010, with the greater portion of the works being more recent, 2007 or thereafter. Some of the areas of work are dated or sparse, which indicates a need for additional research, such as in the area of problem-solving abilities in vague or unclear circumstances. While much of the research spans several industries, drawn principally from industry and trade journals in information technology, general construction, or engineering, the project management principles and findings are transferable between project types. The works also included several academic studies and only two open-source articles. Most of the works were authoritative and peer reviewed. The dated works were cited more frequently than the more current works, as is to be expected.

The compelling thread in the body of works is that scope and risk concerns are influenced by project complexity, with cooperation, information sharing, conflict resolution, and competencies as significant factors in project success.

Discussion

Technology projects are challenged by a variety of factors that contribute to the performance of the project. The body of works indicates that risk and scope, complicated by project complexity, directly influence project success from the outset. Thus, early project planning is crucial to success. The body of works relating to the elemental aspects of competencies, information, cooperation, and conflict management offers historical support to risk and scope formulation. The one point that seemed to stand out is information sharing and flow at all levels. Additional research is necessary into the body of knowledge behind successful project managers and the relationship between the ability to reason through complex and obscure project problem sets and project-related competencies. Dated literature indicates a relationship between a positive locus of control and a willingness to engage abstract problems.

Commentary: I suggest that compartmentalizing a complex project into smaller projects should strengthen the locus of control and improve problem solving. In short, a smaller problem set is more easily grasped than an overwhelmingly large set of problems, thus reducing risk and strengthening scope definition. In breaking a complex project into smaller achievable projects, the organization will gain greater control over the entire process and gain incremental successes toward the ultimate goal. Continuous improvement would characterize such an evolution. The master project manager must assess the order in which the smaller projects are completed; some may be completed simultaneously while others must be completed sequentially.

A risk of scope creep may be introduced as an outcome of mitigating scope gap. To remain focused, all the projects must align with the organizational strategic objectives as they take strategy to task. New ideas need to be vetted in ways meaningful to the organization and aligned with the overall objectives in a comprehensive change management plan.


Communication is also essential in managing complex projects. The use of a wiki as a repository of foundational policies and information is often a best practice.

Large-scale, sudden disruptions of an organization are required under certain circumstances. In most circumstances, however, complex projects need to be broken into smaller, manageable efforts that then become part of a continuous improvement effort within the organization.

References

(2004). Skills shortage behind project failures. Manager: British Journal of Administrative Management, (39), 7. Retrieved from Business Source Complete database.

(2008). AICPA's IT competency tool takes you down the path to success!. CPA Technology Advisor, 18(6), 60. Retrieved from Business Source Complete database.

Brooke, K., & Litwin, G. (1997). Mobilizing the partnering process. Journal of Management in Engineering, 13(4), 42. Retrieved from Business Source Complete database.

Chua, A. (2009). Exhuming IT projects from their graves: an analysis of eight failure cases and their risk factors. Journal of Computer Information Systems, 49(3), 31-39. Retrieved from Business Source Complete database.

Ferguson, E. (1999). A facet and factor analysis of typical intellectual engagement (tie): associations with locus of control and the five factor model of personality. Social Behavior & Personality: An International Journal, 27(6), 545. Retrieved from SocINDEX with Full Text database.

Bell, G.R. & Back, E.W. (2008). Critical Activities in the Front-End Planning Process. Journal of Management in Engineering, 24(2), 66-74. doi:10.1061/(ASCE)0742-597X(2008)24:2(66).

Iacovou, C., & Dexter, A. (2005). Surviving IT project cancellations. Communications of the ACM, 48(4), 83-86. Retrieved from Business Source Complete database.

Kesner, R. (2008). Business school undergraduate information management competencies: a study of employer expectations and associated curricular recommendations. Communications of AIS, 2008(23), 633-654. Retrieved from Business Source Complete database.

Kutsch, E., & Hall, M. (2009). The rational choice of not applying project risk management in information technology projects. Project Management Journal, 40(3), 72-81. doi:10.1002/pmj.20112.

Mitchell, V., & Nault, B. (2007). Cooperative planning, uncertainty, and managerial control in concurrent design. Management Science, 53(3), 375-389. Retrieved from Business Source Complete database.

Murray, J. (2001). Recognizing the responsibility of a failed information technology project as a shared failure. Information Systems Management, 18(2), 25. Retrieved from Business Source Complete database.

Natovich, J. (2003). Vendor related risks in it development: a chronology of an outsourced project failure. Technology Analysis & Strategic Management, 15(4), 409-419. Retrieved from Business Source Complete database.

Oliver, G., & Walker, R. (2006). Reporting on software development projects to senior managers and the board. Abacus, 42(1), 43-65. doi:10.1111/j.1467-6281.2006.00188.x.

Seyedhoseini, S., Noori, S., & Hatefi, M. (2009). An integrated methodology for assessment and selection of the project risk response actions. Risk Analysis: An International Journal, 29(5), 752-763. doi:10.1111/j.1539-6924.2008.01187.x.

Sharma, D., Stone, M., & Ekinci, Y. (2009). IT governance and project management: A qualitative study. Journal of Database Marketing and Customer Strategy Management, 16(1), 29-50. doi:10.1057/dbm.2009.6.

Skilton, P., & Dooley, K. (2010). The effects of repeat collaboration on creative abrasion. Academy of Management Review, 35(1), 118-134. Retrieved from Business Source Complete database.

Sutcliffe, N., Chan, S., & Nakayama, M. (2005). A competency based MSIS curriculum. Journal of Information Systems Education, 16(3), 301-310. Retrieved from Business Source Complete database.

Vermeulen, F., & Barkema, H. (2002). Pace, rhythm, and scope: process dependence in building a profitable multinational corporation. Strategic Management Journal, 23(7), 637. doi:10.1002/smj.243.

Caterpillar Leverages Information Technologies for Sustainable Growth

Comment: This was a paper I wrote in 2008 on Caterpillar's use of technology. I thought it highlighted many interesting points. 

Caterpillar Leverages Information Technologies for Sustainable Growth
by
JT Bogden, PMP

Business is warfare based principally on the sage utilization of information, which is a key factor determining success. Caterpillar has long recognized that access to accurate information, in order to build actionable knowledge, is critical to business success. Caterpillar is a complex global enterprise based out of Peoria, Illinois, that through well-tuned information management is achieving incredible success. Sales revenues during 2007 exceeded forty-four billion dollars (Caterpillar, 2007, Annual Rpt, p. 33). Enterprise growth goals by 2010 are projected to exceed fifty billion dollars (Caterpillar, 2007, Annual Rpt, p. 27). This expansion of revenues is coming with solid vision and sage business design. Caterpillar's vision centers on sustainable development utilizing a strategy of innovation and technologies in support of the company's objectives (Caterpillar, 2007, Shape Rpt, p. 36). This means information and the requisite systems are principal to analysis, rapidity of decision making, and identification of actionable business opportunities.

Intellectual Capital Drives Innovation

Many business professionals incorrectly believe intellectual capital (IC) is simply good ideas that become proprietary because of the station at which the idea was conceived; as a consequence, they believe a company has a legal claim to a good idea. The reality is that good ideas are abundant. Nearly everyone has one, but most people lack the means to put it into effect.

Intellectual capital is better thought of as knowledge that can be converted into commercial value and competitive advantage, resulting in intellectual assets for the company. The conversion of knowledge into commercial value requires a means to codify that knowledge into an intellectual asset. To achieve this, companies provide structural capital in support of human capital in order to produce intellectual assets. Thus, IC results from human and structural capital operating in a unique relationship that forms intellectual assets. Companies distinguish their operations from the competition by combining knowledge and infrastructure in differing ways. The process of converting knowledge into intellectual assets produces the innovation that companies seek to commercialize (Sullivan, 1998, p. 23).

According to Clayton Christensen’s book The Innovator’s Solution, innovation in business means growth resulting from the introduction of something new that can be exploited for commercial value. Christensen further explains that sustaining growth focuses on delivering new and improved benefits to high-end customers, and he notes that companies are more interested in disruptive growth, which brings reduced cost, simplicity, and new directions. Introducing something new is often thought of as unpredictable, which most companies find undesirable. Christensen believes the key to innovative success is not predicting human conduct, since innovation rarely arrives fully developed from a single person; instead, companies must understand the forces that act on people. In other words, when innovation is managed through design, it becomes predictable, and companies are far more apt to embrace the change.

In the classic understanding of design, there are three characteristic aspects: the visceral, or how the design looks; the behavioral, relating to the design’s functionality; and the reflective, the qualities that provoke thought. Classic design also encompasses beauty; good designs demonstrate beauty through harmony and fluid execution. As companies grow in size and complexity, the problem of accessing knowledge becomes exponentially more difficult. Messages between top-level intent and bottom-level action can become confused and misdirected if not properly managed. Thus, reliance on finely tuned information technologies becomes an imperative.

Caterpillar has made deliberate efforts to employ information technologies that demonstrate good design. For example, the visual imaging company Real D-3D posted an article on its website about Caterpillar’s need to speed engineering projects to market by employing visualization technology in a project called “CrystalEyes”. According to the article, a key requirement of the CrystalEyes project was an information tool simple enough for engineers and clients alike, one that eliminated prototyping iterations while remaining cost effective, cross-platform, and easily integrated with existing systems. These requirements demonstrate the behavioral qualities of good design. Real D-3D described CrystalEyes as a stereographic imaging tool, an improvement over ghostly holographic effects, that met all the design criteria; for example, designs can be simulated in 3-D with the full effect of parallax and other phenomena related to stereoscopic imaging. Thus, CrystalEyes illustrates the visceral elements of good design. The benefit CrystalEyes delivered was a high-performance design visualization tool that eliminated physical builds until the very end (Copy Editor, Real D-3D). Using the CrystalEyes tool gave clients and engineers alike the ability to fully understand a design in work, provoking thought throughout the engineering iterations, the reflective quality of good system design.

Management Information Systems Build Decision Support SubSystems

Management information systems (MIS) are complex. These systems come in a variety of technologies and capabilities, and one size does not fit all operations. In general, an MIS involves at least three elements: a network or hardware laydown, the management concepts it supports, and integrated decision analysis and reporting. Through combinations of these elements, companies are able to position themselves competitively and provide the infrastructure for innovation.

Caterpillar leads the industry in decision support subsystems. Data collected from significant customer segments and from Caterpillar’s geographically dispersed operations is infused into the creation of products and services in support of growth. The systems span more than two hundred independent dealers globally and their proprietary networks. Caterpillar’s efforts include numerous projects and software tools that fuse these systems together, including but not limited to:
  • VIMS: Vital Information Management System, a vehicle-borne alert system that assesses the equipment’s safe and optimal operating condition. When a problem emerges or is discovered, the system alerts operators and owners and provides safe shut-down procedures if necessary. This enhances the service life of the equipment and is itself a decision support subsystem.
  • Product Link: A wireless system that simplifies the work of tracking the fleet by providing asset management information. Product Link couples with VIMS.
  • Paperless Reporting: A wireless project that integrates the Dealer Business System and Service Technician’s Workbench with field service technicians, reducing errors and streamlining data entry requirements.
  • EquipmentManager: Software designed to report fleet performance and manage assets. This application is the owner’s front end, presenting VIMS and Product Link performance information on demand in meaningful ways.
  • VIMS Supervisor: Vital Information Management System Supervisor software provides custom fleet production and maintenance reports by extracting data from a VIMS database.
  • Caterpillar’s authoring system: A system that is both an information consumer and producer, organized to streamline global technical publication operations.
VIMS, Product Link, Paperless Reporting, and the authoring system are of particular interest because they are subsystems that affect a sequence of other systems, ultimately feeding up to top-level decision support systems. A minimal sketch of this roll-up idea follows.
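
To make that roll-up concrete, here is a minimal Python sketch of the idea, not Caterpillar's actual software: invented records of the kind subsystems such as VIMS and Product Link might emit are merged into a single fleet-level summary that a decision support layer could consume. All field names, unit identifiers, and values are hypothetical.

# Hypothetical sketch: field subsystems rolling up into one decision-support view.
from collections import defaultdict

# Sample records as each subsystem might report them (invented fields).
vims_alerts = [
    {"unit": "D6-1042", "alert": "coolant_temp_high", "severity": 2},
    {"unit": "994K-0077", "alert": "hydraulic_pressure_low", "severity": 3},
]
product_link_usage = [
    {"unit": "D6-1042", "engine_hours": 412.5, "idle_hours": 96.0},
    {"unit": "994K-0077", "engine_hours": 1290.0, "idle_hours": 310.0},
]

def roll_up(alerts, usage):
    """Merge per-unit feeds into one fleet-level view for decision support."""
    fleet = defaultdict(lambda: {"alerts": [], "engine_hours": 0.0, "idle_hours": 0.0})
    for a in alerts:
        fleet[a["unit"]]["alerts"].append((a["alert"], a["severity"]))
    for u in usage:
        fleet[u["unit"]]["engine_hours"] += u["engine_hours"]
        fleet[u["unit"]]["idle_hours"] += u["idle_hours"]
    return dict(fleet)

if __name__ == "__main__":
    for unit, summary in roll_up(vims_alerts, product_link_usage).items():
        idle_pct = 100 * summary["idle_hours"] / summary["engine_hours"]
        print(f"{unit}: {len(summary['alerts'])} open alerts, {idle_pct:.0f}% idle time")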

Product Link Pools Global Equipment Performance Information

Caterpillar introduced a subsystem called “Product Link” that leverages equipment performance information collected by VIMS for decision support. Product Link is a management tool that tracks and gathers information about Caterpillar’s earthmoving equipment. An online HUB Magazine article written by Caterpillar’s Information Centre describes the subsystem as composed of two antennas, a data module, and interconnecting wiring. One antenna collects GPS data while the other provides bidirectional communication with the network operations center. The data module coordinates the collection of performance and GPS data as well as instructions from the network operations center. Collected information is transmitted wirelessly to a Caterpillar network operations center through low Earth orbit (LEO) satellites. At the network operations center the information is further evaluated, and reports are prepared and sent to the equipment owner. Equipment owners are able to access the information over the Internet using the “EquipmentManager” software.
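
As an illustration of that data flow, the following Python sketch models an invented telemetry record and a simple network-operations-center evaluation step that flags anomalies and prepares an owner-facing summary. The fields, thresholds, and unit identifiers are assumptions for illustration only, not Caterpillar's actual formats.

# Hypothetical sketch of the Product Link flow: on-board module -> NOC evaluation -> owner report.
from dataclasses import dataclass

@dataclass
class TelemetryRecord:
    unit_id: str
    lat: float
    lon: float
    engine_hours: float
    fuel_rate_lph: float          # liters per hour (invented field)
    coolant_temp_c: float

def evaluate_at_noc(record: TelemetryRecord) -> dict:
    """What a network operations center might do: flag anomalies and
    summarize the record for the owner-facing report."""
    flags = []
    if record.coolant_temp_c > 105:            # invented threshold
        flags.append("coolant temperature high")
    if record.fuel_rate_lph > 60:              # invented threshold
        flags.append("fuel burn above expected range")
    return {
        "unit": record.unit_id,
        "location": (record.lat, record.lon),
        "engine_hours": record.engine_hours,
        "flags": flags or ["normal operation"],
    }

print(evaluate_at_noc(TelemetryRecord("D6-1042", 42.7, -83.1, 4125.0, 71.3, 98.0)))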

According to Caterpillar, the benefit to both parties is essential to asset management: improved service life of the equipment, reduced downtime, and a strengthened return on investment. These have been principal reasons customers purchase Caterpillar equipment. Understanding equipment utilization, location, and performance data therefore helps Caterpillar design hardier equipment that meets owner expectations.

The subsystem operates seamlessly: the equipment reports to the network operations center, where the data is collated and eventually rolled up into top-level decision support systems, demonstrating beauty in the design’s fluidity. The information provided to the owner through EquipmentManager answers concerns about utilization, security, and uptime, according to Caterpillar, further illustrating the functional and reflective qualities of the design.

Paperless Reporting Links Field Service Technicians Into Global Systems

Mike DeMuro, Product Support Manager for Michigan Caterpillar, researched and published a case study in Directions Magazine regarding Michigan Caterpillar’s paperless project initiative. According to the article, Michigan Caterpillar field service technicians were dealing with a time-consuming and error-prone dispatch reporting process. Technicians used an antiquated system of paper forms transcribed into the system in the classic data-entry manner; in some cases information was passed verbally and transcribed days later, and it was often incomplete or erroneous. Caterpillar sought to streamline the process. A statewide centralized dispatch system forming a mobile office was in order, DeMuro assesses.

DeMuro explains that the system design utilized an enterprise data integration service offering both cellular and satellite coverage. Caterpillar’s Dealer Business System and Service Technician’s Workbench were integrated with the enterprise data integration service and Microsoft Outlook. After data was entered once into the system, technicians could drag and drop it into Outlook templates and distribute it without error-prone re-typing. The emails were received by servers, and scripts parsed the data into the other systems, further reducing errors and increasing productivity. This created a paperless culture of online forms that transmitted data wirelessly between system-equipped service vehicles and staff functions. DeMuro further claims this innovative approach radically improved billing cycles and the accuracy and timeliness of data reporting. Other first-order benefits included reduced overhead for data re-entry, increased productivity and revenue-generating hours, timely parts delivery, and seamless integration of systems, with secondary effects of improved cash flow and accounting for receivables, DeMuro explains.
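
The parsing step is easy to picture in code. The sketch below is a hypothetical stand-in for the scripts DeMuro describes: it takes a structured "field: value" service email and turns it into a record that downstream dispatch and billing systems could ingest, so data is keyed once and never re-typed. The template format and field names are invented.

# Hypothetical sketch: parse a structured service email into a single record.
import re

SAMPLE_EMAIL = """\
WorkOrder: 48211
Technician: J. Smith
Unit: 994K-0077
Hours: 3.5
Parts: 1R-0750 x2; 326-1644 x1
Notes: Replaced fuel filters; pressure back in spec.
"""

def parse_service_report(body: str) -> dict:
    """Turn 'Field: value' lines into a record for dispatch/billing systems."""
    record = {}
    for line in body.splitlines():
        match = re.match(r"^(\w+):\s*(.+)$", line)
        if match:
            record[match.group(1).lower()] = match.group(2).strip()
    return record

report = parse_service_report(SAMPLE_EMAIL)
print(report["workorder"], report["unit"], report["hours"])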

Again Caterpillar achieved beauty in its seamless design for field service technician reporting. Errors were confined to the initial data entry, and all additional entry was eliminated, giving the design highly productive functionality. The data gathered cascades through to higher-level systems for further evaluation.

Technical Authoring System Forms Intellectual Assets

Caterpillar was experiencing problems with the accuracy, timeliness, and availability of its technical publications. There were over 300 products, some with lifecycles as long as 50 years, and compounding this immense data requirement were operations in 35 languages. In the late 1990s Caterpillar therefore envisioned a need for a better way of managing the labor-intensive effort of technical documentation. The company pursued innovation by taking advantage of emerging Standard Generalized Markup Language (SGML) standards that overcame the limitations of existing methods. The new approach delivered levels of efficiency, based on reuse and automation, that had never before been observed.

Caterpillar began by creating a Technical Information Division (TID) with global responsibility for producing the documentation necessary to support operations. It expanded the technical documentation staff by 200% and then organized the automated publishing system, the structural capital, which enabled the staff to deliver the technical documentation, the intellectual assets. These assets included maintenance manuals, operations and troubleshooting guides, assembly and disassembly manuals, specification manuals, testing and special instructions, adjustment guides, and system operation bulletins.

In the design of the authoring system, Caterpillar took a modular approach to information creation and automated where possible. The system designers built on industry standards, even using MIL-PRF-28001 for page composition, and employed reusable ‘information elements’ capable of being used in multiple formats and forms. This approach drastically reduced the costs of creating, reviewing, revising, and translating information. Through automated document assembly from information elements, Caterpillar achieved collaborative authoring that trimmed time-to-market and allowed subject matter experts to focus more closely, strengthening the quality of the product. The efficiencies produced staggering improvements in workflow and analysis, document development, style sheet design, and legacy conversions. In the end, Caterpillar gained accuracy, timeliness, and availability of technical information that became of immense commercial value and a competitive advantage.
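
A small Python sketch can illustrate the reuse principle behind 'information elements', though it is only an analogy for the SGML-based system: content modules are written once and assembled into different publications, so a revision or translation happens in one place. The element IDs, content, and publication names are invented.

# Hypothetical sketch: reusable information elements assembled into publications.
ELEMENTS = {
    "warn-lockout": "Warning: apply lockout/tagout before servicing.",
    "step-drain-oil": "Drain engine oil into an approved container.",
    "step-replace-filter": "Replace the oil filter and torque to specification.",
}

PUBLICATIONS = {
    "operation_manual": ["warn-lockout", "step-drain-oil"],
    "service_manual": ["warn-lockout", "step-drain-oil", "step-replace-filter"],
}

def assemble(pub_name: str) -> str:
    """Build a publication by stitching together its referenced elements."""
    return "\n".join(ELEMENTS[eid] for eid in PUBLICATIONS[pub_name])

# Revising "warn-lockout" once would update every publication that references it.
print(assemble("service_manual"))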

Caterpillar’s copyrighted technical documentation is of such value that criminal elements have attempted to exploit it. In May 2002 Caterpillar’s digital library of parts and product catalogues, service manuals, schematics, tooling data, and product bulletins was compromised, and U.S. Customs reported seizing a half million dollars in counterfeit Caterpillar technical documents. This criminal activity demonstrates that well-designed intellectual assets can be both highly valuable and vulnerable.

Data Warehousing Efforts Consolidate Enterprise Data

Designing solid data management methods is critical to business success. MIS generally approaches decision making from the process side, such as a purchase-order process, whereas decision support systems tend to focus on conduct and behavioral characteristics, such as fuel consumption trends. This requires that gathered data be stored, parsed, and analyzed in ways that support strategic decision making rather than just operational management. The outcome of a well-designed data warehousing system is that equipment managers shift their focus from operational-level decision making to corporate-level strategic decision making about asset management.

Data marts are working subsets of larger primary database systems used to present unique views of subject-matter topics. These data marts are organized to permit multi-dimensional modeling of the operations; this multi-dimensional model is called the data cube. Online Transaction Processing (OLTP) and Online Analytic Processing (OLAP) routinely usher data into the data cubes and conduct ongoing analytic evaluation of the data in support of on-demand or real-time review. These tools have also been extended over the Internet, permitting authorized decision support system users to conduct the analysis they are seeking.

The benefits of data warehousing include better end-user control of data analysis, improved tooling for identifying and investigating problems, strengthened strategic decision making, and improved knowledge discovery. Data warehousing is the foundation of computer-aided construction equipment and asset management.
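
A toy example helps show what a data cube slice and roll-up look like in practice. The Python sketch below uses pandas as a stand-in for the warehouse's analytic engine and builds a tiny region-by-month cube from invented fleet data; the columns and figures are assumptions for illustration only.

# Hypothetical sketch of a tiny "data cube" with an OLAP-style roll-up.
import pandas as pd

facts = pd.DataFrame({
    "region":       ["NA", "NA", "EU", "EU", "NA"],
    "month":        ["Jan", "Feb", "Jan", "Feb", "Feb"],
    "fuel_liters":  [1200, 1350, 900, 980, 400],
    "engine_hours": [310, 342, 220, 240, 95],
})

# Slice of the cube: total fuel consumption by region and month.
cube = facts.pivot_table(values="fuel_liters", index="region",
                         columns="month", aggfunc="sum")
print(cube)

# Roll-up along the month dimension: per-region totals for strategic review.
print(cube.sum(axis=1))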

Caterpillar sought a global data solution and chose TeraData Inc. as its business partner in March of 2008. TeraData’s business decision support solution comprises component products built on top of the “Active Data Warehouse” product. These component products provide intelligence, analytics, and other support services for decision making.

The Active Data Warehouse product underpins these services and refers to the technical capabilities required to achieve the objectives of the data warehouse. The database is designed to receive feeds from mature MIS subsystems such as Caterpillar’s VIMS, paperless reporting, and authoring subsystems, resulting in a data repository with high confidence in data accuracy. The database can support ordinary MIS functions such as e-commerce, portals, and other web applications, but it has greater impact when coupled with decision support applications. With confidence in data accuracy, complex reporting and data mining can be generated from tactical or short-notice queries in near real time, making this solution a powerful tool. The capability follows from TeraData’s strategy, which builds on a 2001 Gartner finding that data marts cost 70% more per subject area than a comparable traditional data warehouse (Sweeney, 2007). TeraData seeks to consolidate data marts, reduce redundancy, and streamline the data loading process into a centralized analytic source, in effect creating a massive sole-source data mart equivalent to the enterprise-wide data set. This streamlining is consistent with Caterpillar’s desire to innovate through technology and led to the 2008 agreement to improve Caterpillar’s decision support.
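
The consolidation strategy can be sketched in miniature: rather than loading the same facts into several subject-area marts, records land once in a single warehouse table and each "mart" becomes a view over it. The schema and data below are invented, and SQLite merely stands in for the actual warehouse platform.

# Hypothetical sketch: subject-area marts as views over one consolidated warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE warehouse_facts (
    unit TEXT, region TEXT, metric TEXT, value REAL)""")
conn.executemany(
    "INSERT INTO warehouse_facts VALUES (?, ?, ?, ?)",
    [("D6-1042", "NA", "engine_hours", 412.5),
     ("D6-1042", "NA", "fuel_liters", 1350.0),
     ("994K-0077", "EU", "engine_hours", 1290.0)],
)

# A subject-area 'mart' defined as a view, so the data is loaded only once.
conn.execute("""CREATE VIEW fuel_mart AS
    SELECT unit, region, value AS fuel_liters
    FROM warehouse_facts WHERE metric = 'fuel_liters'""")

for row in conn.execute("SELECT * FROM fuel_mart"):
    print(row)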

Business Intelligence Products Strengthen Decision Support

TeraData’s component products include a suite of applications that use Caterpillar’s enterprise-wide data warehouse for analytic and intelligence reporting. Tools in this suite include strategic and operational intelligence applications, data mining tools, modeling software, and analytical tool sets that handle extremely large datasets in search of criminal conduct as well as emerging trends. The suite also includes maintenance and management tools.

Bringing Information Technology Projects in Focus

Caterpillar brings disparate systems together into a symbiotic global information presence through network operations centers, communication networks, and data processing methods and systems. The elements of good design are observed throughout Caterpillar’s systems and create a culture that promotes innovation, whether in technical publication, engineering, or field management of the equipment. With this foundation in place, Caterpillar began increasing vertical accuracy across its systems and into its decision support systems. The disparate enterprise data is rolled up into the decision support data warehouse and its requisite tooling, establishing a formidable competitive instrument. Agreements with TeraData in early 2008 led to solutions implementing near-real-time reporting with increased accuracy. As an outcome, Caterpillar has moved to the forefront of heavy equipment manufacturers, becoming the industry leader with growth projections that eclipse its competitors. Nonetheless, Caterpillar is restless; becoming number one in the industry is simply not enough for this giant.

The Future is Bright

Caterpillar’s position as the industry leader is not the end state for this company. One concept of business holds that no company makes a profit over the long term; the purpose of any business is to be a vehicle that provides income and dignity to human life. In executing this concept, principles and moral responsibilities are assigned to companies and governed through cooperation between government and industry. Caterpillar has taken on the next evolution of large corporations: corporate governance. The company defines its vision in a sustainability report called “Shape”. The term is a key notion inclusive of the forces that forge innovation in the shaping of knowledge into business plans. Caterpillar has identified the pillars of its “Shape” initiative as:
  • Energy and Climate: Caterpillar realizes the importance of energy security and the impact energy consumption by the equipment has on the ecology.
  • Growth and Trade: Expanding economies and international business are important to sustainable operations.
  • People and Planet: Caterpillar equipment builds economies and lifts people out of poverty.
  • Further and Faster: Shape takes form over time and then accelerates as the vision organizes. Caterpillar must be willing to drive the vision beyond what is currently known in order to embrace the future of sustainability.
Using its systems and technologies, the company is actively organizing a plan to reach for the moral high ground and is embracing corporate governance. Caterpillar’s equipment is known to move mountains. In time, as corporate governance takes shape, Caterpillar will emerge as a social force that levels societal inequities while elevating human dignity around the globe. People will have jobs with disposable incomes, improved roads, hospitals, and strengthened economies built by Caterpillar’s equipment and backed by Caterpillar’s social conscience.

References:
  1. Bartlett, P. G. (1997). “Caterpillar Inc.’s New Authoring System”. SGML Conference Barcelona 1997. Retrieved October 15, 2008, from http://www.infoloom.com/gcaconfs/WEB/barcelona97/bartlet8.HTM#
  2. Caterpillar Public Affairs Office. (2007). “2007 Caterpillar Annual Report”. Retrieved October 10, 2008, from http://www.cat.com
  3. Caterpillar Public Affairs Office. (2007). “Shape: Sustainability Report”. Retrieved October 10, 2008, from http://www.cat.com
  4. Caterpillar Public Affairs Office. (2008). “Caterpillar Logistics Services Inc. Web Site”. Retrieved October 12, 2008, from http://logistics.cat.com
  5. Christensen, Clayton M. (2003). The Innovator’s Solution (1st ed.). Boston, MA: HBS Press.
  6. Copy Editor. “Caterpillar Moves Mountains in Speeding Time-to-Market Using CrystalEyes and Stereo3D Visualizations”. Real D-3D. http://reald-corporate.com/news_caterpillar.asp
  7. Copy Editor. (July 2007). “New-generation Product Link system from Caterpillar improves asset utilization and reduces operating costs”. HUB. Retrieved October 18, 2008, from http://www.hub-4.com/news/633/newgeneration-product-link-system-from-caterpillar-improves-asset-utilization-and-reduces-operating-costs
  8. DeMuro, Mike. (April 2005). “Michigan CAT Case Study”. Directions Media. Retrieved October 17, 2008, from http://www.directionsmag.com/article.php?article_id=823&trv=1
  9. Eckerson, Wayne W. (2007). “Best Practices in Operational BI: Converging Analytical and Operational Processes”. TDWI Best Practices Report.
  10. Fan, Hongqin. (2006). “Data Warehousing for the Construction Industry”. NRC.
  11. Schwartz, Evan I. (2004). Juice: Creative Fuel That Drives World-Class Inventors (1st ed.). Boston, MA: HBS Press.
  12. Sullivan, Patrick H. (1998). Profiting from Intellectual Capital: Extracting Value from Innovation (1st ed.). New York: John Wiley & Sons, Inc.
  13. Sweeney, Robert J. (2007). “Case Study: TeraData Data Mart Consolidation ROI”. TeraData Corp.