Wednesday, July 20, 2016

Using Artificial Intelligence In Business

Comment:  This brief was originally written in Oct 2009, was updated, and was posted again on December 21, 2013. Many movies and books have dramatized the use of Artificial Intelligence (AI), delivering social messages of various sorts. However, the use of AI methods and techniques is often overlooked. This post will discuss AI basics. I may expand this post into a series on AI since the world has entered a time of increasing use of AI as well as biological and quantum computing, among other emerging technologies. My estimation is that many technologies will converge rapidly in the near term. Ray Kurzweil, famed author of the book "The Age of Spiritual Machines", speaks to a technological singularity that he believes will occur in the next 10 years. The technological singularity he discusses is the convergence of principally three technologies that results in machines and/or humanoids having greater-than-human intelligence. That technology field is called trans-humanism, Image 1.


Image 1: Trans-humanism Symbol
Artificial Intelligence In Business
by 
JT Bogden, PMP

When most people think of AI, they think of an all-knowing computer system. However, realistic designs must be formally constrained, or have boundary limits set, due to the processing power, memory, and temporal limitations of traditional computational systems. Also, decision-making is often multi-layered, having some sort of fail-safe or fallback. Hence, there is a simplicity to the design that is wrapped around a Theory of Mind (ToM), which is a humanization of the system. Ultimately, the system interacts and behaves in an intelligent, human-centric manner.

An AI engine, Figure 1, is object oriented and receives inputs from its environment or other sources, such as a database of project or operational data. Any AI engine is limited in its ability to make inferences by the information delivered to its inputs from sensory devices and other input equipage. Thus, as part of the design that constrains the engine, the data requirements need to be carefully considered.
Figure 1:  AI Engine Model
In coding the elements of the engine, the concept of a finite-state machine (FSM) is used as an organizational tool to break the problem set into manageable sub-problems. An FSM has a data structure reflecting the states inherent to the machine, the input conditions, and the transition functions. Another type of machine used is the fuzzy-state machine (FuSM), which handles partial truths. A FuSM does not maintain a single defined state but instead computes activation levels, and the overall state is determined by the combination of activated states. Skeletal code wraps the object classes of FSMs and FuSMs into a management system unique to the engine. FSMs and FuSMs can be message, event, data, or inertially driven. State machines lend flexibility and scalability to the intelligence system and are more complex than this brief paragraph suggests.
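
As a minimal sketch only, with hypothetical states and inputs rather than those of any particular engine, a finite-state machine reduces to a set of states, an input alphabet, and a transition function:

```typescript
// Minimal finite-state machine sketch for a hypothetical monitoring agent.
// States, inputs, and transitions are illustrative only.
type State = "idle" | "patrol" | "pursue";
type Input = "noise" | "target_seen" | "target_lost" | "all_clear";

// Transition function: current state + input -> next state.
const transitions: Record<State, Partial<Record<Input, State>>> = {
  idle:   { noise: "patrol" },
  patrol: { target_seen: "pursue", all_clear: "idle" },
  pursue: { target_lost: "patrol" },
};

class FiniteStateMachine {
  constructor(private state: State) {}

  // Apply an input; remain in the current state if no transition is defined.
  handle(input: Input): State {
    this.state = transitions[this.state][input] ?? this.state;
    return this.state;
  }
}

// Usage: drive the machine with a stream of inputs (e.g. sensor events).
const fsm = new FiniteStateMachine("idle");
(["noise", "target_seen", "target_lost"] as Input[]).forEach((i) =>
  console.log(fsm.handle(i))
); // patrol, pursue, patrol
```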

Another framework is an algorithmic model. In the computational theory of the universe, the natural universe is an irreversible algorithmic computation reflecting time's arrow. Algorithmic approaches are useful for rule-based and probabilistic processing with limited sets of outcomes, such as the roll of a die. There are only six possible outcomes using one die, and the algorithm is a probabilistic computation of which of the six sides will show.
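
A deliberately trivial sketch of such a probabilistic rule with a fixed outcome set, the single die, might look like this:

```typescript
// Probabilistic rule with a fixed outcome set: one six-sided die.
const sides = [1, 2, 3, 4, 5, 6];

// Each outcome carries the same probability, 1/6.
const probability = (outcome: number): number =>
  sides.includes(outcome) ? 1 / sides.length : 0;

// Sample the rule: roll the die once.
const roll = (): number => sides[Math.floor(Math.random() * sides.length)];

console.log(probability(4)); // 0.1666...
console.log(roll());         // some value 1..6
```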

The ultimate AI framework is the neural network (neural net). Natural neural nets are the structure of animal and human brains. An artificial neural net reflects this brain activity, using input, hidden, and output nodes to process information. Input nodes gather information with some basic processing; thus, an input node can be some sort of sensory device or even a neugent or agent of some kind. Hidden, networked nodes do more advanced processing, routing the processed information and forming lines or pathways of logic. Output nodes usually format the processed information or result in some sort of action; they can be displays, motors, servos, actuators, voice, or a host of other devices. A node is said to fire when its inputs match the states and circumstances for processing. Outputs from several nodes can be the inputs to another node or set of nodes. A complex pathway through a neural net that results in an output is a line of logic. A valid line of logic is said to be a truth, whereas an invalid line of logic is considered untrue. Neural nets have their place in complex, real-time problem-solving situations because the nodal network conducts parallel processing, evaluating multiple outcomes simultaneously.
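
As a sketch only, with arbitrary weights rather than a trained model, a single feed-forward pass through input, hidden, and output nodes can be written as:

```typescript
// Tiny feed-forward neural net sketch: 2 inputs -> 2 hidden nodes -> 1 output.
// Weights and biases are invented illustration values, not a trained model.
const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

// One layer: weighted sum of inputs plus bias, passed through the activation.
function layer(inputs: number[], weights: number[][], biases: number[]): number[] {
  return weights.map((row, j) =>
    sigmoid(row.reduce((sum, w, i) => sum + w * inputs[i], biases[j]))
  );
}

const hiddenWeights = [[0.5, -0.2], [0.8, 0.3]]; // 2 hidden nodes x 2 inputs
const hiddenBiases = [0.1, -0.1];
const outputWeights = [[1.2, -0.7]];             // 1 output node x 2 hidden
const outputBiases = [0.05];

// A node "fires" strongly when its activation approaches 1.
const inputs = [0.9, 0.4];                       // e.g. two sensor readings
const hidden = layer(inputs, hiddenWeights, hiddenBiases);
const output = layer(hidden, outputWeights, outputBiases);
console.log(output); // a single activation between 0 and 1
```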

Using these techniques in an operation to compute outcomes and discern knowledge is exceptionally useful in marketplace competition. Using a complex adaptive architecture, the organizational structure itself can form a neural net, processing knowledge and lines of logic if properly designed. In this case, the World Wide Web becomes a medium for processing lines of logic across geospatially dispersed nodes. Some nodes can be human supervised while other nodes are automated. In essence, the entire organization becomes a brain exhibiting life-like qualities.

AI constructs can be employed in projects to monitor for emerging outcomes and to assist by algorithmically computing outcomes, then correcting for issues before they emerge. Data feeds into the AI engine can monitor schedules and resources, as well as change requests, for adherence to the project portfolio. AI is particularly useful in identifying emerging issues, or partial truths, before they become real issues.

AI will not become fully useful to organizations and businesses until they have evolved and formed structures supportive of AI. Only if organizations adopt constructs that reflect the natural environment, rather than human-imposed structures, will they be able to adapt to change and responsibly leverage emerging conditions with greater simplicity.

Wednesday, June 22, 2016

Organizational Computational Architecture

Commentary:  This brief was originally written in Sep 2012 and was updated and reposted on Dec 21, 2013. Many people think of computational capability as something that a box full of silicon and circuits on their desk performs. Moreover, they tend to think of computations in linear steps. Rarely does anyone think about the human mind-brain as a processor. If thought of in this way, professionals and industry may be able to take advantage of the combination of mind-brain and machine processing in interesting ways.

Organizational Computational Architecture
by 
JT Bogden, PMP

Organizational computational power is the cumulative processing capability of an organization, including both machines and - humans? Machine computational power is well accepted. It is expressed in terms of instructions processed per unit time, such as cycles per second (Hz, MHz, GHz) or millions of instructions per second (MIPS). But human computational power is something of a different genre - or is it?

Human computational power has been somewhat awkward to pin down because the study of the brain precedes computer and information science. Thus, the information science paradigms were never part of the early research. Historical attempts have centered on intelligence and emotional quotients. These are related to learning rates or adaptability rates of the mind-brain based on eight characteristic factors from the groundbreaking book 'Frames of Mind' by Howard Gardner. However, looking at brain-based and machine-based processing, there are many interesting commonalities. The brain has up to 10 quadrillion synapses, or transistor-like switches, and processes at rates up to about 1,680 GHz. The brain's storage capacity varies from 1 TB to 10 TB, or about 5 TB of information on average. The brain serves the function of circuitry. The mind has been an enigma until it is thought of as software, or as possessing method. Arguably, the character of the mind, or the essence of the mind, is the individual human being or, in some circles, the soul. Thus, the mind-brain combination parallels the software-hardware paradigm.

Diet and exercise are keys to optimizing performance of the brain. Brain-food diets affect the function of the brain, and brain exercises affect the method of the mind. For the purpose of this discussion, assume all humans process information at an equal average rate with equal storage capacity. Therefore, human computational power in organizations can be determined much like machine computational power, using a mathematical formula based on how the human brains are related to each other, resulting in a throughput metric for the organization.

Organizational Throughput

A sage organization seeks to understand the full computational power on hand, then seeks to align and enhance that power according to its operations. Approaches such as knowledge management have been employed but often have very different meanings to different organizations. In general, knowledge management has become more or less a catalog of what is known and where to store the knowledge once it becomes known. Processing power, on the other hand, is the horsepower to solve problems and determine the knowledge. Thus, an operational view of processing power can be utilized to determine the horsepower on hand.

Elemental computational power is organized in various ways: parallel, series, arrayed, and/or distributed. Parallel processing improves throughput, while processors in series improve dependent process performance. Arrayed processors are organized non-linearly in order to enhance complex decision-making performance. Distributed processing is designed to offload processing demand to localized processors, conserving bandwidth, time, and load. An organization must determine how to organize not only the computers but also the human staff in order to achieve optimal throughput, also known as the operational tempo.
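
A back-of-the-envelope sketch of the idea, with invented rates that apply equally to human or machine elements, is that independent parallel elements add throughput while a series pipeline is bound by its slowest stage:

```typescript
// Illustrative throughput arithmetic; the rates are invented for the example.
// Rates are in "work units per hour" for either human or machine elements.
const parallelThroughput = (rates: number[]): number =>
  rates.reduce((sum, r) => sum + r, 0);          // independent elements add up

const seriesThroughput = (rates: number[]): number =>
  Math.min(...rates);                            // a pipeline is bound by its slowest stage

const team = [4, 6, 5];        // three analysts working independent problems
const pipeline = [10, 3, 8];   // intake -> review -> publish stages

console.log(parallelThroughput(team));    // 15 units/hour
console.log(seriesThroughput(pipeline));  // 3 units/hour
```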

Ideally, an organization should have a virtual backplane against which everything sits. In the ethereal space of this backplane, staff and systems can be organized to achieve optimized throughput and adapt to emergent conditions without disrupting the physical plane. In the virtual backplane, information exchanges occur and can be redirected and rearranged without causing disruptions in the physical realm. This has the effect of creating the stability necessary for humans to become self-actualizing and empowered. Humans who dynamically perform their work without interference are empowered. Self-actualization is the achievement of one's full potential through creativity, independence, spontaneity, and a grasp of the real world. In short, humans, while acting like processors, are granted the freedom to utilize their talents in solving business problems.

Final Thoughts

Organizations have generally paid little attention to the human mind-brain in the past, with the exception of human resource testing. This testing is generally not comprehensive and is temporally bound to the hiring process. Human resources have generally sought people with specific character traits. For example, the US Postal exam is actually an IQ test in which they are seeking people who can perform repetitive tasks. Organizations can benefit by better managing the human computational element.

Organizations should encourage better diets and offer brain exercises with incentives for heightened brain function. Employers should baseline and then seek to improve staff brain function. Through tracked exercises and regular assessments, organizations can observe their computational posture over time. This should feed the organization's ability to cope with and manage emergent conditions or change, which results in organizational experience.

The formation of experience starts with data. Data becomes information with context, and complex information relationships become knowledge. Wisdom is an outgrowth of experience and results in quantum leaps of judgment when there is missing information and/or knowledge. Managing the transformation of data into knowledge is knowledge management, which in turn seeks to convey the organizational experience to individuals without retraining. The computational architecture in which data is ultimately transformed into knowledge is based on the organization of computational elements, both human and machine. Dynamically organized computational elements are ideal when loosely coupled in the system and easily re-organized with emergent conditions. This construct facilitates experiences which can be tracked and recorded.

As the organization conducts routine business, the experiential history can act as a pattern match and alert humans when experiential conditions emerge again. Humans can then assess the circumstances and avoid making errant decisions again, or attempt an alternate solution to avoid a recursive trap that costs time and money. Two episodes of Star Trek lend themselves well to this discussion: Cause and Effect and Time Squared.

In Cause and Effect, the Enterprise is trapped in a recursive time loop. With each iteration of the loop, the crew recalls more about the experience until they finally recall enough to break the loop; it takes them 17 days to break the pattern. In Time Squared, the Enterprise is destroyed 6 hours into the future, and Captain Picard is cast back in time and out of phase. The Enterprise recursively experiences the same event until Captain Picard breaks the loop by preventing his temporally displaced self from completing it; he injects an alternative path in the form of surprise. In both episodes, the common theme is repeating the same experience. However, in one episode a little something is retained each time until enough is learned to break the recursive experience. In the other, little is retained, but through evidence and observation a decision is made to choose an alternate path and prevent a repeat.

What if, through a combination of machine and human processing, recurrent events would not repeat in an organization? Classically, governmental organizations cycle people every 3 years, and different people repeat the same issues every 3 years. History repeats itself unless the historical experience is known and somehow recalled. Perhaps combined machine and human computational systems can provide the experience and decision making if designed to recognize and act on experience. If not, they can create the circumstances in which surprise is possible, breaking recurring trends.

References:

Gardner, H. (1993). Frames of mind: The theory of multiple intelligences (10th ed.). New York: Basic Books.

Information Theory Overview

Comment: Originally published August 2014. Some time ago, I became interested in information theory, partly due to my career and mostly because I began seeing elements of the theory popping up everywhere: in movies, theological commentaries, warfighting, and so on. I studied the theory off and on, purchasing books, watching movies, reading essays, and in general following wherever I caught a wisp of the theory. The interesting thing about truth is that it is self-evident and reveals itself in nature, so I did not have to look far. Although a curious thing about information is noise, that which is distracting, like a red herring, and there is plenty of noise out there. Anyhow, the point of this post is an information theory overview. I would like to share basic information theory and relate it to the world around us. I will be coming back to this post, updating and refining it with more citations.

Information Theory
by
JT Bogden, PMP

Information theory is relatively new and is part of probability theory. Like the core disciplines of mathematics and the sciences, information theory has a physical origin with broad-spectrum applications. The theory has captured the attention of researchers, spawning hundreds of research papers since its inception during the late 1940's. This new discovery has generated interest in deeper research involving biological systems, the genome, warfare, and many other topical arenas. Claude E. Shannon, Ph.D., is the father of generalized information theory, which he developed during 1948. He theorized:

If the receiver is influenced by information from the transmitter then reproducing the influencing information at the receiver is limited to a probabilistic outcome based on entropy. 
Figure 1: Mathematical Formulation of Entropy (H) in a system
There are several terms in the thesis statement that may be difficult to grasp, and the mathematical formulation, Figure 1, may be overwhelming for some people who wonder how entropy and information are linked. Entropy is an operative concept behind diminishing returns, or the rate at which a system dies, decays, or falls apart. Entropy operates under the order formulated in Figure 1; thus, the phenomenon is not random. Within the context of information theory, entropy is the minimum size of a message before its meaning or value is lost. The notion of probabilistic outcomes involves multiple possible results in which each result has a degree of uncertainty, or a possibility that the result may or may not occur. For example, a roll of a die is limited to only six possible outcomes or results. The probability of any one outcome occurring is 1 in 6. The uncertainty in rolling the die is high, being 5 in 6 that any specific outcome will not occur. As for the mathematical formulation, I will just leave that for general knowledge of what it looks like.
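
Written out, the entropy in Figure 1 takes the standard Shannon form for a discrete source whose outcomes occur with probabilities p_i:

```latex
H = -\sum_{i=1}^{n} p_i \log_2 p_i
```

For the fair die above, every outcome has probability 1/6, so H = log2(6), roughly 2.58 bits, the average information carried by one roll.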

The thesis is pointing towards a 'lossy' system and promotes a simplistic communication model, Figure 2.
Figure 2: Simple Information Theory Model
From the thesis, formula, and model, more complex related theories and models spawn, coupling information theory to biology, quantum physics, electronic communications, crowds, and many other subject areas. All fall back on entropy, or the smallest message before it loses its meaning. The big question is: so what? We will explore the 'so what' in the next section.

Information Theory Around Us

Most people fail to realize that information theory impacts us on an everyday basis. Aspects of the theory appear in movies, underpin all biological sensory capabilities, and appear in information networks in many ways. Many people philosophize that human and natural existence is purely information based. Let us explore information theory as it is exposed to many people. Most people have some familiarity with the sciences at some level, with movies, and with religion. Let us begin with a survey of the sciences.

Atom-smashing experiments during the 1970's led to the discovery that the universe has boundary limits. Physicist Richard Feynman, the father of quantum computing, concluded that matter ceases to exist at 10⁻³² meters. When matter ceases to exist, so does space-time: matter has dimensions, time's arrow spans dimensionality, and when matter no longer exists, neither does dimensionality nor time, which are mutually inclusive. What remain are non-local waveforms, or electromagnetic waves, which are illustrated as strings that vibrate. The region where this occurs is the Planckian realm, where matter is quantized or discrete, having the qualities of a bit of information. Matter and energy are interchangeable based on the Theory of Relativity, Figure 3, and the wave-particle theory of light. Those vibrating waveforms in the Planckian realm slam together in a process of compaction that is not fully understood, forming a particle having discrete size and weight and possessing a positive (+), neutral (0), or negative (-) charge. These particles then begin to assemble, in a computational, algorithmic manner based on charge and tri-state logic, into more complex particles, from the sub-atomic into the physical realm. In the physical realm, complex molecules form, such as DNA, from which biological life emerges.
Figure 3:  Theory of Relativity Formula
Energy = Mass × (Speed of Light)²
DNA is somewhat unique, according to microbiologist Dr. Matt Ridley. This is because not only did a computational information process arrive at the DNA molecule, but injected into the DNA molecule are four bits of information (G, C, A, and T) which are used by nanites to endow biological life. Nanites are intelligent molecular machines that perform work and are made of amino acids and proteins. These molecular machines have claws, impellers, and other instruments. They communicate, travel, and perform work based on DNA informational instructions. The information process continues as even more information is applied to the DNA strand, such as variations in the timing, sequencing, and duration under which a gene fires. By varying the timing, sequencing, and duration of a firing gene, specific features are managed on the life form under gestation. Dr. Ridley quips that the genome is not a blueprint for life but instead a pattern maker's template having some sort of Genome Operating Device, a G.O.D. (Ridley, 2003). The point here is that there is some sort of intelligent communication ongoing during the endowment of life and the development of the natural universe, all of which are the outcome of computational processes and information.

During the 1960's, extensive research was conducted into the operation of human biological sensory processes in relation to information theory. This research concluded that the senses of sight, sound, touch, smell, and taste undergo an electrochemical process in which information is encoded, using Fourier transforms, into electrical waveforms. The base Fourier equations are somewhat imposing, Figure 4.
Figure 4: Fourier Transforms Equations
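
One common convention for the transform pair depicted in Figure 4, relating a time-domain signal f(t) to its frequency-domain representation F(ω), is:

```latex
F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt,
\qquad
f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega
```
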
The equations are shown simply so the reader can see what they look like; extensive mathematical philosophy and practical understanding are necessary to appreciate how they perform. In lay terms, Fourier transform equations encode and extract information embedded in a waveform. These waveforms are constructed from the biological sensory devices: eyes, ears, nose, tongue, and skin. Once the information is encoded into a waveform, the human brain stores the information holographically. Consider the operation of the eyes, attached as part of the brain. The reason for two eyes is that they act symbiotically: one eye is a data source while the other eye acts as a reference source. When the waveforms from these two sources collide, information is encoded in the constructive and destructive patterns that result. These patterns are then imprinted into the brain material to be recalled on demand, as humans think in terms of images and not attributes. The human brain is capable of storing up to 6 terabytes of information. The eye has a curious tie to the quantum realm, detecting a photon of light coincidental with the smallest instance of time, Planck's time, which is of the order of 10⁻⁴³ second. This leads to the concept of quantum reality, or the idea that human perception is limited to the boundaries of the natural universe.

The human experience is said to be a digital simulation, and the universe computationally organized. This lends credence to the creative license of writers and authors who imagine storylines such as The Matrix, Timeline, The Lawnmower Man, and countless others.

References:

Knuth, D. (2001). Things a computer scientist rarely talks about. Stanford: CSLI Publications.

Moser, S., & Chen, P. (2012). A student's guide to coding and information theory. United Kingdom: Cambridge University Press.

Reza, F. (1994). Introduction to information theory. Dover Books: New York.

Ridley, M. (2003). Nature via Nurture. New York: Harper Collins Publishers.

Wednesday, May 18, 2016

Spiritual Machines Create Challenges for Project Managers

Comment: This post was originally written in August 2014. I have made a few updates and posted it again in May 2016. What I am going to talk about originated as a discussion in my Masters in Information Technology program. This may seem far-fetched to many people but is an upcoming debate in the not-so-distant future. Holographic technologies have the potential to cause moral dilemmas for project managers who must implement these systems when they arrive. The early technology will be inanimate and mechanical in nature. As time passes, this technology will combine with neural nets and biological computing to create life-like machines that could potentially develop self-awareness. It is never too early to debate the questions and challenges these systems pose.

Spiritual Machines Create Challenges for Project Managers
by
JT Bogden, PMP

Holography was commercially exploited as early as the 1960's with the GAF viewfinder. As a young boy, I recall placing reels with images into a stereographic viewfinder, looking at the comic book world of Snoopy and other stories of dinosaurs. Later, I explored holography more deeply in technical books, learning how data is encoded in the collision patterns between reference and data beams. Science philosophy books explored the holographic universe and how the human eye-brain organ is a holographic system that interprets our world.

Scientists have struggled with the eye-brain to mind dilemma in humans. The brain is the mechanical operation while the mind is spiritual in character. Holographic systems store information in terms of ghostly images, unlike conventional storage systems that store information in terms of attributes. According to Michael Talbot's book "The Holographic Universe", holography's ethereal images reflect the way the human mind processes reality. The human brain can suffer trauma, losing large areas of tissue, but somehow retains unfettered memories and even character. Likewise, a curious quality of holography is that all the information is stored ubiquitously throughout the storage medium, defeating divisibility short of catastrophic loss; any divisible piece contains the complete information set (Talbot, 1991). Thus, holography has the appearance of retaining the character or essence of the information stored despite failures and imperfections in where the data is embodied.

Current robotic research is developing systems that mimic human sensory and motor capabilities. Software and processing hardware emulate human neural circuitry to produce human-like actions, including emotional ones, or to make human-like decisions. Both kinds of actions are mechanical in character, operating based on local action; for example, tracking and catching a baseball in flight, or performing specific emotional responses if the baseball hits the robot instead. The elements of surprise and creativity are more or less spiritual in character and have not yet been mastered by science, since they are not the local actions that science deals with. For example, reflecting on the flight of the baseball and describing it as screaming through the air is creative and not a local action. In fact, self-awareness may be a requirement to achieve surprise and creativity.

Holography creates theological concerns since its resilient retention of information is not mechanical. Instead, holographic data storage is based on waveforms, or electromagnetic energy patterns, also known as light waves. These are often equated to spirituality. There are theological implications; the Judeo-Christian Bible, for example, draws parallels between light, and the absence of light, and spiritual existence. Genesis 1:4 reads, "God saw that the light was good, and he separated the light from the darkness." Holographic ghostly images in storage and computational processing could depart silicon wafers and mechanical storage systems for the amino acids and proteins found in biological processing. Human tinkering could result in challenges posed by truly spiritual machines. If we are not careful, these biological machines could develop a conscience and become annoyed with natural biological computers, also known as humans. In the end, mankind's technological conduct could potentially manufacture a nemesis. If for all the good in the world there is evil, then the human responsibility is to dispense the good and forsake the evil. Holographic storage is the beginning of a computational era that has the potential to elevate or degrade mankind.

"The development of every new technology raises questions that we, as individuals and as a society, need to address. Will this technology help me become the person that I want to be? Or that I should be? How will it affect the principle of treating everyone in our society fairly? Does the technology help our community advance our shared values?" (Shanks, 2005).

The possibility of computational systems based not on silicon but on amino acids and proteins, the building blocks of life, is clearly on the horizon and presents some puzzling questions. As these systems advance, project managers implementing them could be faced with significant ethical and moral decisions. Literally, actions such as killing the 'power' on a living machine raise questions about life and the right to exist. Will man-made biological computers, perhaps through genetic engineering, develop self-awareness, spirituality, and a moral code of their own? How far will this go? What other moral and ethical issues could arise from the advent of this technology?

Please feel free to comment. I would enjoy hearing from you.

References:

Lewis, C. S. (2002). The four loves. Houghton Mifflin Harcourt. ISBN: 9780156329309

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An information Technology Approach. (3rd ed.). New York: John Wiley & Sons Inc.

Kurzweil, R. (1999). The age of spiritual machines: When computers exceed human intelligence. Penguin Books. ISBN: 97801402822023

Shanks, T. (2005). Machines and man: Ethics and robotics in the 21st century. The Tech Museum of Innovation website. Retrieved 21FEB09 from http://www.thetech.org/exhibits/online/robotics/ethics/index.html

Talbot, M. (1991). The holographic universe. Harper Collins Publishers. ISBN: 9780060922580

Wednesday, April 20, 2016

Human Centric Computing

Commentary: This post was originally written in Dec 2010 and was updated then reposted in Dec 2013. Chief Executive Officers fear commoditization of their products and services; it is a major indicator of attenuating profit margins and a mature market. Efforts such as "new and improved" are short-term efforts to extend a product life cycle. A better solution is to resource disruptive technologies that cause obsolescence of standing products and services, shift the market, and create opportunities for profit. Human centric computing does just that, and project managers may find they are involved in implementing these projects at various levels of complexity. Thus, project managers should have a grasp of this technology and even seek solutions in their current projects.

Human Centric Computing
by 
JT Bogden, PMP

Human centric computing has been around for a long time. Movies have fantasized and romanticized for decades about sentient computers and machines that interface with human beings naturally. More recent movies have taken this to the ultimate end with characters such as Star Trek's Data, David from A.I. Artificial Intelligence, or Sonny from I, Robot. In all cases these machines developed self-awareness, or the essence of what is considered to be uniquely human, but remained machines. The movie Bicentennial Man went the opposite direction, with a self-aware machine who became human. That is fantasy, but there is a practical side to this.

Michael Dertouzos, in his book 'The Unfinished Revolution', discusses early attempts at developing the technologies behind these machines. The current computational technologies are being challenged as the unfinished revolution plays out. I am not in full agreement with the common understanding of the Graphical User Interface, GUI, as "a mouse-driven, icon-based interface that replaced the command line interface". This is a technology-specific definition that is somewhat limiting and arcane in thinking. A GUI is more akin to a visceral, human centric interface of which one form utilizes a mouse and icons. Other forms use touch screens, voice recognition, and holography. In the end state, the machine interfaces with humans as another human would.

Human Centric Computing

Humans possess sensory capabilities that are fully information based. Under the auspices of information theory, human sensory processing was shown during the 1960's to be consistent with Fourier transforms. These are mathematical formulas in which information is represented by signals in terms of time-domain frequencies. In lay terms, your senses pick up natural events and biologically convert each event to electrical signals in your nervous system. Those signals have information encoded in them and arrive at the brain, where they are processed holographically. The current computational experience touches three of the five senses. The visceral capability currently provides the greatest information to the user because the primary interface is visual and actually part of the brain. The palpable and auditory are the lesser utilized, with touch screens, tones, and command recognition. The only reason smell or taste is used is if the machine is fried in a surge, which leaves a bad taste in one's mouth. However, all the senses can be utilized, since their biological processing is identical; the only need is for the correct collection devices or sensors.

Technological Innovations Emerging

If innovations such as the device examples below are fully developed and combined with the visceral and palpable capabilities cited earlier, truly human centric machines will have emerged and the 'face' of the GUI will have changed forever.

Microsoft's new Surface is literally a desktop that changes the fundamental way humans interact with digital systems by discarding the mouse and keyboard altogether. Bill Gates remarks that the old adage was to place a computer on every desktop; now, he says, Microsoft is replacing the desktop completely with a computational device. This product increases the utilization of the palpable combined with the visceral in order to sort and organize content, then transfer the content between systems, with the relative ease of using the fingertips (Microsoft, 2008). For example, a digital camera set on the surface has its stored images downloaded, which then appear as arrayed images on the surface for sorting with your fingertips. The TED Talk 'Multi-touch Interface' highlights the technology.

Another visceral and palpable product is the Helio Display. This device eliminates the keyboard and mouse as well. Images appear in three dimensions on a planar field cast into the air using a proprietary technology. Some models permit the use of one's hands and fingers in order to 'grab' holographic objects in mid-air and move them around (IO2, 2007). Another example of this concept is the TED Talk video 'Grab a Pixel'.

On touch screens of various forms, virtual keyboards can be brought up if needed. However, speech software allows for not only speech-to-text translation but also control and instructions. Loquendo, for example, offers speech engines that can provide high-quality spoken instructions, replacing error tones and help text. Its telephony products are capable of interaction with callers, and its software comes in 25 languages (Loquendo, 2008).

There are innumerable human centric projects ongoing. In time, these products will increasingly make it to the market in various forms, where they will be further refined and combined with other emerging technologies. One such emerging trend and field is the blending of virtual reality and the natural. The TED Talk video 'SixthSense' illustrates some of the ongoing projects and efforts to change how we interconnect with systems.

Combining sensory and collection technology with neural agents may increase the ability to evaluate information, bringing computer systems closer to self-awareness and true artificial intelligence. Imagine a machine capable of taking in an experience and then sharing that experience in a human manner.

Commentary:  Project managers seeking to improve objectives where selection and collection of information can be quickly gathered without typing or swiping a mouse across the screen should consider use of these types of products whenever possible. Although costly now, the cost for these technologies will drop as the new economy sets in.

References:

Dertouzos, M.L. (2001). The unfinished revolution: human-centered computers and what they can do for us.  (1st ed.), HarperBusiness 

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An information Technology Approach. (3rd ed.). New York: John Wiley & Sons Inc.

IO2 Staff Writer, 2007. HelioDisplay, Retrieved 25FEB09 from http://www.io2technology.com/

Microsoft Staff Writer, 2008. Microsoft Surface, 2008, Retrieved 25FEB09 from
 http://www.microsoft.com/surface/product.html#section=The%20Product

Loquendo Staff Writers, 2008. Loquendo Corporate Site, Retrieved 25FEB09 from
 http://www.loquendo.com/en/index.htm

Sunday, March 13, 2016

Hip Pocket Tools To Collect Quick Data


Hip Pocket Tools To Collect Quick Data
By
JT Bogden, PMP

As project managers working in IT, having a grasp of simple activities and practices goes a long way toward understanding the complexity behind many projects and how things are inter-related. Writing code to quickly gather browser or network information is one of those simple little things that can have a major impact on a project, as it provides a wide breadth of information about a web application's users and environment. This information can be useful across a breadth of activities within an organization as well.

Let us look at the use of writing mobile code which senses the environmental conditions. The information collected should be used to properly route the web application to code that accounts for those conditions. Sometimes that means simply adapting dynamically to qualities like screen width. At other times, code has to account for browser differences. For example, some browsers and versions support features like Geolocation and others do not.

I have scripted some mobile code to detect the current environmental conditions, as far as Blogger would permit, Table 1. The Blogger web application does not allow full-featured client-side JavaScript and removes or blocks some code statements. In the code, logic detects the environmental conditions in Internet Explorer, Safari, Chrome, Opera, and Firefox, routing the results to the tabular output. Geolocation is tricky; it does not execute in all browser versions and may not post results, or may post error results, in some browsers and versions. Please try reviewing this post in multiple browsers and on multiple platforms (iPad, iPod, PC, MacBooks).
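
The original script is not reproduced here; a minimal sketch of the same idea in browser script, with fields chosen to mirror Table 1, might look like this:

```typescript
// Minimal environment sniffer sketch mirroring the fields in Table 1.
// Intended to run in a browser; Blogger may strip or block some statements.
interface EnvironmentReport {
  screenResolution: string;
  clientResolution: string;
  javaEnabled: boolean;
  cookiesEnabled: boolean;
  colorDepth: number;
  userAgent: string;
}

function sniffEnvironment(): EnvironmentReport {
  return {
    screenResolution: `${screen.width} x ${screen.height}`,
    clientResolution: `${window.innerWidth} x ${window.innerHeight}`,
    javaEnabled: navigator.javaEnabled ? navigator.javaEnabled() : false,
    cookiesEnabled: navigator.cookieEnabled,
    colorDepth: screen.colorDepth,
    userAgent: navigator.userAgent,
  };
}

// Geolocation is asynchronous and requires the user's approval in the browser.
function sniffLocation(onResult: (text: string) => void): void {
  if (!("geolocation" in navigator)) {
    onResult("Geolocation not supported");
    return;
  }
  navigator.geolocation.getCurrentPosition(
    (pos) => onResult(`${pos.coords.latitude}, ${pos.coords.longitude}`),
    (err) => onResult(`Geolocation error: ${err.message}`)
  );
}

console.table(sniffEnvironment());
sniffLocation((text) => console.log("GEOLOCATION:", text));
```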

MOBILE CODE RESULTS
Device
Screen Resolution
Client Resolution
Java Enabled
Cookies Enabled
Colors
Full User Agent
GEOLOCATION
Table 1: Sniffer Results
(To collect geolocation, please approve in browser)

Embedding code into web applications and storing the results in a database can yield valuable histories. Project managers planning and coordinating projects, whether writing a web application or conducting some other IT-related project, must have an understanding of the environmental situation. There are almost always anomalies. The use of mobile code can provide valuable information back to the project manager before issues arise.

Mobile code is one of the hip pocket tools that can provide important information. Installing it and tracking data over time can show progress and effects, and flush out anomalies before they become problems.

Neural Agents

Comment: Several years ago, I was the leader of an operationalized telecommunications cell. The purpose of the cell was to monitor the effectiveness and readiness of the telecommunications supporting ongoing operations. The staff regularly turned over due to the operational tempo, and I had to train new staff quickly. I did so by preparing a series of technical briefs on topics the cell dealt with. This brief dealt with neural agents; I have updated it and provided additional postings that paint a picture of potential advanced systems.

Neural Agents
by
JT Bogden, PMP

Figure 1: Agent Smith
The Matrix movie franchise
Neural agents have this spooky air about them, as though they are sentient and have clandestine purpose. The movie franchise 'The Matrix', Figure 1, made use of Agent Smith as an artificial intelligence designed to eliminate zombie processes in the simulation and human simulations that became rogue, such as Neo and Morpheus. In the end, Agent Smith is given freedom that results in him becoming rogue and rebellious, attempting to acquire increasing power over the simulation.

The notion of artificial intelligence has been around forever. Hollywood began capturing this idea in epic battles between man and machines in the early days of sci-fi. More recently, the movie "AI" highlighted a future where intelligent machines survive humans. Meanwhile, the Star Trek franchise advances intelligent ships using biological processing and has a race of humanoid machines called the Borg. Given all the variations of neural technologies, the neural agent remains a promising technology emerging in the area of event monitoring, though not acting quite as provocatively as Agent Smith. The latest development is the neural agent in support of artificial intelligence. Neural agents, or neugents (no relation to Ted Nugent), are becoming popular in some enterprise networks.

Companies can optimize their business and improve their analytical support capabilities as this technology enables a new generation of business applications that can not only analyze conditions in business markets but also predict future conditions and suggest courses of action to take.

Inside the Neugent

Neural agents are small units, or agents, containing hardware and software, that are networked. Each agent has processors and a small amount of local memory. Communications channels (connections) between the units carry data that is usually encoded on independent, low-bandwidth telemetry. These units operate solely on their local data and on input received over the connections to other agents. They transmit their processed information over telemetry to central monitoring software or to other agents.

The idea for neugents came from the desire to produce artificial systems capable of "intelligent" computations similar to those of the human brain. Like the human brain, neugents "learn" by example or observation. For example, a child recognizes colors by examples of colors. Neugents work in a similar way: they learn by observation. By going through this self-learning process, neugents can acquire more knowledge than any expert in a field is capable of achieving.

Neugents improve the effectiveness of managing large environments by detecting complex or unseen patterns in data. They analyze the system for availability and performance. By doing this, neugents can accurately "predict" the likelihood of a problem and even develop, over time, enough confidence that it will happen. Once a neugent has "learned" the system's history, it can make predictions based on its analysis and generate an alert such as: "There is a 90% chance the system will experience a paging file error in the next 30 minutes."

How Neugents Differ From Older Agents

Conventional or older agent technology requires someone to work out a step-by-step solution to a problem and then code the solution. Neugents, on the other hand, are designed to understand and see patterns, that is, to train. The logic behind the neugent is not discrete but symbolic. Neugents assume responsibility for learning, then adapt or program themselves to the situation and even self-organize. This process of adaptive learning increases the neugent's knowledge, enabling it to more accurately predict future system problems and even suggest changes. While these claims sound far reaching, progress has been made in many areas, improving adaptive systems.


Neugents get more powerful as you use them. The more data a neugent collects, the more it learns; the more it learns, the more accurate its predictions. This capability comes from two complementary technologies: the ability to perform multi-dimensional pattern recognition based on performance data and the power to monitor the IT environment from an end-to-end business perspective.

Systems Use of Neugents and Benefits

Genuine enterprise management is built on a foundation of sophisticated monitoring. Neugents apply to all areas. They can automatically generate lists for new services and products, detect unusual risks and fraudulent practices, and predict future demand for products, which enables businesses to produce the right amount of inventory at the right time. Neugents help reduce the complexity of the Information Technology (IT) infrastructure and applications by providing predictive capabilities and capacities.

Neugents have already made an impact on the operations of many Windows Server users who have tested the technology. They can take two weeks of data and, in a few minutes, train the neural network. Neugents can detect if something is wrong. They have become a ground-breaking solution that will empower IT to deliver the service that today's digital enterprises require.

With business applications becoming more complex and mission-critical, the use of neugents is increasingly necessary to predict and then address performance and availability problems before downtime occurs. By providing true problem prevention, neugents offer the ability to avoid the significant costs associated with downtime and poor performance. Neugents encapsulate performance data and compare it to previously observed profiles. Using parallel pattern matching and data modeling algorithms, the profiles are compared to identify deviations and calculate the probability of a system problem.
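
As a toy sketch only, with invented thresholds and an invented alerting rule rather than the vendors' actual algorithms, comparing current performance data to a previously observed profile might look like this:

```typescript
// Toy profile-comparison sketch: flag deviation from an observed baseline.
// Thresholds, metrics, and the alerting rule are invented for illustration.
interface Profile {
  mean: number;    // baseline mean of a metric (e.g. paging rate)
  stdDev: number;  // baseline spread of the same metric
}

// Score how far a current sample sits from the learned profile (z-score).
const deviation = (sample: number, profile: Profile): number =>
  profile.stdDev === 0 ? 0 : Math.abs(sample - profile.mean) / profile.stdDev;

// Turn a deviation into a rough "chance of trouble" for an operator alert.
function alertFor(metric: string, sample: number, profile: Profile): string | null {
  const score = deviation(sample, profile);
  if (score < 2) return null;                       // within normal variation
  const chance = Math.min(99, Math.round(50 + 10 * score));
  return `There is a ${chance}% chance of a ${metric} problem soon.`;
}

// Baseline "learned" from two weeks of data (values are made up).
const pagingProfile: Profile = { mean: 120, stdDev: 15 };
console.log(alertFor("paging file", 180, pagingProfile));
// "There is a 90% chance of a paging file problem soon."
```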

Conclusion

Early prediction and detection of critical system states provide administrators an invaluable tool for managing even the most complex systems. By predicting system failures before they happen, organizations can ensure optimal availability. Early predictions can help increase revenue-generating activities as well as minimize the costs associated with system downtime. Neugents alleviate the need to manually write policies to monitor these devices.

Neugents provide the best price/performance for managing large and complex systems. Organizations have discovered that defining an endless variety of event types can be exhausting, expensive, and difficult to fix. By providing predictive management, neugents help achieve application service levels by anticipating problems and avoiding unmanageable alarm traffic as well as onerous policy administration.

Monday, May 4, 2015

EDI Overview

Commentary: This post was originally posted 10 Mar 2011. I have made a few updates and published the post again. EDI is in pervasive use in manufacturing, logistics, banking, and general day-to-day website purchases, yet many people have little understanding of EDI. I want to highlight what it is, general implementations, challenges, and benefits.

EDI Overview
by
JT Bogden, PMP

Electronic Data Interchange, EDI, is the pervasive method of conducting business transactions electronically. It is not technology dependent nor driven by technology implementations. Instead, EDI is driven by business needs and is a standard means for communicating business transactions. The process centers on the notion of a document. This document contains a host of information relating to purchase orders, logistics, finances, design, and/or Personal Protected Information (PPI). These informational documents are exchanged between business partners and/or customers conducting business transactions. Traditional methodologies used paper, which introduces a lot of latency into the system and is error prone. Replacing the paper-based and call-center systems with electronic systems does not change the processes at all, since the standard processes remain independent of the implementation or medium. When the processes are conducted via electronic media, the latency is compressed out of the system and errors are reduced, making electronic processing far more desirable given the critical need for speed and accuracy in external processes.

Implementing EDI is a strategy-to-task effort that must be managed well due to some of the complexities of the implementation. A seven step phased process highlights an EDI implementation.

Step 1 - Conduct a strategic review of the business.
Step 2 - Study the internal and external business processes.
Step 3 - Design an EDI solution that supports the strategic structure and serves the business needs.
Step 4 - Establish a pilot project with selected business partners.
Step 5 - Flex and test the system.
Step 6 - Implement the designed solution across the company.
Step 7 - Deploy the EDI system to all business partners.

The technology used in EDI varies based on the business strategies, Figure 1. In general, EDI services can operate through three general methods: 1) a Value Added Network (VAN) or Virtual Private Network (VPN), 2) Point-to-Point (P2P), or 3) Web EDI. The first two are for small to medium sized EDI installations where a direct association is known, established, and more secure. Web EDI is conducted through a web browser over the World Wide Web and is the simplest form of EDI for very broad-based and low-value purchases. The technology in use varies slightly and comes down to three forms of secure communications at a central gateway service: Secure File Transfer Protocol (FTPS), Hypertext Transport Protocol Secure (HTTPS), or the AS2 protocol, which is used almost exclusively by Walmart. Web EDI requires no specialized software to be installed and works well across international boundaries. Web EDI is a hub-and-spoke approach and routes messages to multiple EDI systems through its gateway service. In addition, industry security organizations provide standards and oversight for data in motion and at rest; the PCI Data Security Standards council is one such organization.

Figure 1: EDI Systems Architecture
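
To make the "document" idea concrete, here is a toy sketch of composing a simplified, X12-style purchase order as delimited segments. It is illustrative only, not a conformant X12 850 implementation, and the segment layout is abbreviated for the example:

```typescript
// Toy sketch of a simplified, X12-style purchase order document.
// Illustrative only; not a conformant EDI X12 850 implementation.
interface LineItem { quantity: number; unit: string; price: number; partNumber: string; }

const elementSep = "*";   // separates data elements within a segment
const segmentTerm = "~";  // terminates each segment

function purchaseOrderSegments(poNumber: string, items: LineItem[]): string {
  const segments = [
    ["BEG", "00", "NE", poNumber, "", "20160504"],            // begin purchase order
    ...items.map((it, i) => [
      "PO1", String(i + 1), String(it.quantity), it.unit,
      it.price.toFixed(2), "", "VP", it.partNumber,
    ]),
    ["CTT", String(items.length)],                            // transaction totals
  ];
  return segments.map((s) => s.join(elementSep) + segmentTerm).join("\n");
}

console.log(
  purchaseOrderSegments("PO-1001", [
    { quantity: 10, unit: "EA", price: 4.5, partNumber: "WIDGET-7" },
  ])
);
// BEG*00*NE*PO-1001**20160504~
// PO1*1*10*EA*4.50**VP*WIDGET-7~
// CTT*1~
```
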
Overall, EDI can reduce latency in business transactions and adhere tightly to the organizational strategies without significant adaptation of the organization. Organizations may try to outsource the people, processes, and even the technology as part of their strategy and objectives, but the processes remain consistent. In conclusion, EDI is a standard that is applied in various forms, offering numerous advantages to an organization and its business transactions.

Wednesday, January 8, 2014

Personal Data Storage

This post is a departure from the beaten track, as we will discuss Redundant Arrays of Independent Disks, RAIDs. Having reliable data storage is essential to a personal data storage plan. Many folks may chuckle at the thought of a personal data storage plan, but there are good reasons for having one, especially if you have lost data in the past. If you store large volumes of digital information such as tax records, medical records, vehicle documents, digital music, movies, photo collections, graphics, and libraries of source code or articles, then a data storage plan is essential to reduce the risk of loss. Loss can occur due to disk drive failures, accidental deletions, or hardware controller failures that wipe drives. Online services offer affordable subscription plans to back up your PC or store data in a cloud, but you risk the loss of privacy when using these services. Having local, portable, and reliable data storage is the best approach, and a personal data management plan is the centerpiece of the effort.

Personal Data Storage 
by
JT Bogden, PMP

Such a plan should be designed around two points. First, there should be a portable detachable and reliable independent disk drive system. Second, there should be a backup system. We will focus on the first point in this post. 

Figure 1: Completed RAID
After a lot of research, I settled on a barebones SANS Disk RAID TR4UT+(B) model, Figure 1. The device has a maximum capacity of 16 TB, supports up to USB 3.0, and has an option to operate from a controller card, improving data transfer rates over USB 3.0. Fault tolerance methods of cloning, numerous RAID levels, and JBOD are supported as well. Thus, the unit is well poised for long-term durable use.

Since the device is barebones, I had to find drives that are compatible. Fortunately, the device was compatible with 11 different drives ranging from 500 GB up to 4 TB across three vendors. I had to figure out what characteristics mattered and determine which of the drives were optimal for my needs. The approach I used was a spreadsheet matrix, Figure 2. The illustrated matrix is a shortened form and did not consider the 4 TB drive, as it was cost prohibitive from the start, as were several of the other drives. The 3 TB drive was used to break out, or create a spread for, the other options. I computed the coefficient of performance, CP, for each characteristic, then averaged them for the overall performance. In the end, I selected the Hitachi UltraStar 1 TB in this example and purchased 4 of them. They are high-end server drives that are quiet and can sustain high data transfer rates for long periods of time.

Figure 2: Decision Matrix for Drive Selection and Purchase
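
A generic sketch of this kind of scoring matrix is shown below; the criteria, weights, and scores are invented for illustration and are not the exact coefficients used in Figure 2:

```typescript
// Generic weighted scoring sketch for comparing candidate drives.
// Criteria, weights, and scores are invented for illustration.
interface Candidate { name: string; scores: Record<string, number>; } // scores: 1 (poor) .. 5 (best)

const weights: Record<string, number> = {
  costPerTB: 0.4, transferRate: 0.3, noise: 0.15, reliability: 0.15,
};

const overallScore = (c: Candidate): number =>
  Object.entries(weights).reduce((sum, [k, w]) => sum + w * (c.scores[k] ?? 0), 0);

const candidates: Candidate[] = [
  { name: "Drive A 1TB", scores: { costPerTB: 4, transferRate: 5, noise: 5, reliability: 5 } },
  { name: "Drive B 2TB", scores: { costPerTB: 5, transferRate: 3, noise: 3, reliability: 4 } },
];

candidates
  .map((c) => ({ name: c.name, score: overallScore(c) }))
  .sort((a, b) => b.score - a.score)
  .forEach((r) => console.log(`${r.name}: ${r.score.toFixed(2)}`));
```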

Figure 3: Installed Drives 
After selecting, purchasing, and installing the drives, Figure 3, RAID 5 was chosen for the drive configuration. RAID 5 permits hot-swapping a drive should one fail and provides more disk space than the other redundant RAID modes. RAID 5 is a cost-effective mode providing good performance and redundancy, although writes are a little slow.
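
For context, usable RAID 5 capacity is roughly one drive's worth of space less than the raw total, since parity consumes the equivalent of one drive. A quick arithmetic sketch:

```typescript
// Rough RAID capacity arithmetic for n identical drives (illustrative only).
const raid5Usable = (driveCount: number, driveTB: number): number =>
  (driveCount - 1) * driveTB;   // one drive's worth of space holds parity

const raid1Usable = (driveCount: number, driveTB: number): number =>
  (driveCount / 2) * driveTB;   // mirrored pairs keep half the raw space

console.log(raid5Usable(4, 1)); // 4 x 1 TB drives -> ~3 TB usable
console.log(raid1Usable(4, 1)); // same drives mirrored -> ~2 TB usable
```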

The final part of the process was to initialize and format the drives. File Allocation Tables, FAT and FAT32, are not viable options, as they provide little recovery support. The New Technology File System, NTFS, improves reliability and security, among other features. There is also the emergent GUID Partition Table, GPT, partitioning scheme, which breaks through older limitations. Current versions of Mac OS and MS Windows support it on a read and write level. Therefore, in a forward-looking expectation of future movement toward this approach, the RAID was initialized and then formatted using GPT. The formatting process was slow and took a long time.

In the end, the RAID unit was accessible by both Windows and the MacBook Pro. All the data and personal information on disparate USB drives, memory sticks, and the local machines were consolidated onto the RAID device. For the first time, all my music, movies, professional files, and personal data were in one place with the strongest protection. The final cost was less than $650, and the cost can be kept down if you shop around for the components: Amazon. It took about 8 hours of direct effort, although the formatting and file transfers occurred while I did other things.

While I will still use my memory sticks and a 1 TB portable USB drive with my notebooks, the RAID is the primary storage device. It can be moved relatively easily if I change locations and/or swapped between computers if necessary. The device can also be installed as a serverless network drive and hung off of a wireless router. I prefer not to use it in that manner, as the risk of exposure or loss of privacy increases slightly.

Overall, the system is quiet and has a low power drain while in operation, with heightened data protection. I encourage others to rethink how they are storing their data and invest in a solid, reliable solution. As solid state drives come into increasing use, traditional platter drives will drop in price dramatically. This will enable more folks to build drive arrays like mine at lower cost, then convert them later to solid state systems as those prices drop.

Virtualizing Computational Workloads

Commentary:  This is a general discussion into which I wrapped a unique use of virtualization. In the short term, companies can benefit from offloading heightened computational demands. They may desire to purchase computational power for a limited time versus making the capital expenditure of purchasing and expanding their own systems. The virtualized environment can also solve issues relating to geographically dispersed personnel. Overall, we are a long way from meaningfully and effectively using the excess computational power residing on the web or across an organization. Hopefully, though, this discussion gives some insight into how to use that excess power.


Virtualizing Computational Workloads
by
JT Bogden, PMP  

Virtualized computing can occur over any internetwork, including the World Wide Web. The concept centers on distributing the use of excess system resources, such as computational power, memory, and storage space, in a service-oriented or utilitarian architecture; in simple terms, internet-based resource provisioning. Multiple virtualized environments can exist on the same network, although a given piece of physical hardware can only be assigned to and managed by a single virtualization engine. Each virtualized environment, a cloud, encapsulates a unique group of participating hardware resources managed through virtualization, Figure 1. Demands for services are sent out into the cloud to be processed, and the results are returned to the virtual machine.

Figure 1:  The Virtualized Concept


The virtual machine can be as simple as a browser, or it can be a complete set of applications, including the operating system, running on a terminal through thin clients such as Citrix. The cloud service can be as simple as a search service such as Google and/or database storage of information. Simple cloud examples include SkyDrive, MobileMe, and now iCloud. iCloud offers backup, storage, and platform synchronization services to its users over the World Wide Web.

Virtualization

The virtualization concept is one in which operating systems, servers, applications, management, networks, hardware, storage, and other services are emulated in software, yet to the end user the experience is completely independent of the underlying hardware or the unique technological nuances of system configurations. Examples include software such as VMware Fusion, in which Microsoft's operating system and software run on an Apple MacBook. Another example is the honeypot used in computer network defense: software runs on a desktop computer that gives a hacker attempting to penetrate the system the appearance of a real network inside the DMZ. The idea is to decoy the hacker away from the real systems with a fake one emulated in software. An example of hardware virtualization is the soft modem; PC manufacturers found that it is cheaper to emulate some peripheral hardware in software, though the trade-off is diminished system performance because the processor carries the load of the emulation. The Java virtual machine is yet another example. It is a platform-independent engine that lets Java coders write the same code for every supported platform and run it as mobile code without accounting for each platform individually.

Provisioning In Virtualization

Provisioning in a virtualized environment occurs in several ways once hardware resources are inventoried and made available for loading. First, physical resources are provisioned by management rules in the virtualization software, usually at the load management tier, Figure 1. Second, users of a virtual machine can schedule a number of processors, the amount of RAM, the amount of disk space, and even the degree of precision required for their computational needs. This occurs at the administration of the virtualized environment tier, Figure 1. Thus, idle or excess resources can, in effect, be economically rationed by an end user who is willing to pay for the level of service desired. In this way the end user enters into an operating lease for computational resources for a period of time rather than making a capital purchase of those resources.
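As a rough illustration of the second path, a user-scheduled resource request might look something like the sketch below. The field names, units, and pricing are assumptions for illustration only, not any particular vendor's interface.

# Hypothetical provisioning request checked against an inventoried resource pool.
# Field names, units, limits, and rates are illustrative assumptions only.

pool = {"cpus": 128, "ram_gb": 512, "disk_gb": 20000}

request = {
    "cpus": 16,          # number of processors scheduled by the user
    "ram_gb": 64,        # memory required
    "disk_gb": 2000,     # working storage
    "hours": 72,         # length of the operating lease
    "rate_per_cpu_hour": 0.12,
}

def provision(pool, request):
    """Grant the lease if the pool can cover it; return the estimated cost."""
    for key in ("cpus", "ram_gb", "disk_gb"):
        if request[key] > pool[key]:
            raise ValueError(f"insufficient {key} in the pool")
        pool[key] -= request[key]
    return request["cpus"] * request["hours"] * request["rate_per_cpu_hour"]

cost = provision(pool, request)
print(f"lease granted, estimated cost: ${cost:.2f}")

The point is simply that the lease is checked against the inventoried pool and priced for the period requested, rather than purchased outright as capital equipment.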

Computational Power Challenges

I have built machines with multiple processors and arrayed full machines to handle complex computing requirements. Multi-processor machines were used to solve processor-intensive problem sets such as Computer Aided Design, CAD, workloads or high-transaction SQL servers. Not only were multiple processors necessary, but so were multiple buses and drive stacks in order to marginalize contention issues. The operating system typically ran on one bus while the applications ran over several other buses accessing independent drive stacks. Vendor solutions have since progressed with newer approaches to storage systems and servers in order to better support high availability and demand. In another application, arrayed machines were used to handle intensive animated graphics compilations involving solid modeling, ray tracing, and shadowing on animations running at 32 frames per second. A 3-minute animation therefore had 5,760 frames that needed to be crunched three different times. To solve this problem, the load was broken into sets. Parallel machines crunched through the solid model sets, handed off to ray tracing machines, and then to shadowing machines. In the end the parallel tracks converged into a single machine where the sets were reassembled into the finished product. System failures limited work stoppages to a small group of frames that could be re-crunched and injected back into the production flow.
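A minimal sketch of that batching approach is shown below, assuming a hypothetical set size and stand-in stage functions for the solid modeling, ray tracing, and shadowing passes. It only illustrates how 5,760 frames can be split into sets that flow through the stages and can be re-crunched independently if a set fails.

# Illustrative partitioning of a 3-minute, 32 fps animation into work sets
# that flow through three processing stages. The set size is an assumption.

FRAMES = 3 * 60 * 32          # 5760 frames
SET_SIZE = 120                # frames per work set (hypothetical)

def make_sets(total_frames, set_size):
    return [range(start, min(start + set_size, total_frames))
            for start in range(0, total_frames, set_size)]

# Stand-ins for the real rendering passes; each hands its output to the next.
def solid_model(frames): return [f"sm_{f}" for f in frames]
def ray_trace(frames):   return [f"rt_{f}" for f in frames]
def shadow(frames):      return [f"sh_{f}" for f in frames]

finished = []
for work_set in make_sets(FRAMES, SET_SIZE):
    # A failed set can simply be re-crunched and re-injected
    # without stopping the whole production run.
    finished.extend(shadow(ray_trace(solid_model(work_set))))

print(len(finished), "frames assembled")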

These kinds of problem sets are becoming more common as computational demands become more pervasive in society. Unfortunately, software and hardware configurations remain somewhat unchanged and in many cases are unable to handle the stresses of complex or high-demand computation. Many software packages cannot recognize more than one processor, or if they do, the loading is batched and prioritized using a convention like first in, first out (FIFO) or stacked job processing. This is fine for production uses of computational power, as in the examples above. However, what if the computational demand is not production oriented but instead involves sentient processing or manufactures knowledge? I would like to explore an interesting concept in which computational power in the cloud is arrayed in a virtualized neural net.

Arraying for Computational Power in New Ways

Figure 2: Computational Node


One solution is to leverage arcane architectures in a new way. I begin by creating a virtual computational node in software, Figure 2, to handle an assigned information process, then organize hundreds or even tens of thousands of computational nodes on a virtualized backplane, Figure 3. The nodes communicate over the virtual backplane, listening for information being passed, processing it, and publishing the new information back to the backplane. A virtualized general manager administers the backplane and is capable of arraying the nodes dynamically in series or parallel to solve computational tasks. The node arrays should be designed using object-oriented concepts: each node encapsulates memory, processor power, and its own virtual operating system and applications; the nodes are arrayed polymorphically, and each node inherits public information. In this way, software developers can design workflow management methods, like manufacturing flow, that array nodes and use queues to reduce crunch time, avoid bottlenecks, and distribute the workload, as sketched below. Mind you, this is not physical but virtual. The work packages are handed off to the load manager, which tasks the physical hardware in the cloud, Figure 3.

Figure 3:  Complex Computational Architecture
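The sketch below is one possible reading of the node-and-backplane idea, not a reference implementation. A virtual backplane carries published information, nodes subscribe to the topics they listen for, and a general manager can array nodes in series by wiring one node's output topic to another's input. All class, topic, and function names are assumptions for illustration.

# Hypothetical virtual backplane with publish/subscribe computational nodes.
# Names and structure are illustrative assumptions, not an existing framework.

class Backplane:
    def __init__(self):
        self.subscribers = {}          # topic -> list of listening nodes

    def subscribe(self, topic, node):
        self.subscribers.setdefault(topic, []).append(node)

    def publish(self, topic, payload):
        for node in self.subscribers.get(topic, []):
            node.receive(topic, payload)

class ComputationalNode:
    """Encapsulates its own processing; publishes new information to the backplane."""
    def __init__(self, name, backplane, in_topic, out_topic, fn):
        self.name, self.bp, self.out_topic, self.fn = name, backplane, out_topic, fn
        backplane.subscribe(in_topic, self)

    def receive(self, topic, payload):
        self.bp.publish(self.out_topic, self.fn(payload))

class Sink:
    def receive(self, topic, payload):
        print("result:", payload)

# A "general manager" arraying two nodes in series: square the inputs, then sum them.
bp = Backplane()
ComputationalNode("square", bp, "raw", "squared", lambda xs: [x * x for x in xs])
ComputationalNode("sum", bp, "squared", "result", sum)
bp.subscribe("result", Sink())

bp.publish("raw", [1, 2, 3, 4])        # prints: result: 30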


This concept is not new. The telecommunications industry uses a variation of it for specialized switching applications rather than general-purpose computing. There are also array processors used for parallel processing. Even the fictional story Digital Fortress by Dan Brown centered on a million-processor system. None of these, however, were designed for general-purpose computing. If arrayed computational architectures were designed to solve complex and difficult information sets, the possibilities would be enormous; for example, arraying nodes to monitor for complex conditions, decide on courses of action, and enact the solution.

The challenges of symbolic logic processing can be overcome by using arrayed processing to virtualize neural nets. A combination of sensory arrays for inputs, node-to-node processing, and valid pathways or lines of logic would provide the means to complete complex processing and output results that are otherwise difficult to achieve. If enough physical hardware participates on the World Wide Web, the web could become an enormous neural processor solving some of the most incredibly complex computational problem sets.
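As a hedged sketch of how arrayed nodes could behave like a small neural net, the fragment below wires a few threshold nodes so that each one fires only when its weighted inputs cross a threshold, and an output node treats a particular firing pattern as a valid line of logic. The weights, thresholds, and wiring are invented for illustration.

# Tiny threshold-node network: each node fires when its weighted inputs
# reach a threshold. Weights, thresholds, and wiring are illustrative only.

def node(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Sensory layer: two hypothetical sensor readings, normalized to 0..1.
sensors = [0.9, 0.2]

# Hidden layer: two nodes listening for different input patterns.
hidden = [
    node(sensors, [1.0, 0.0], 0.5),    # fires on a strong first sensor
    node(sensors, [0.5, 0.5], 0.7),    # fires only when both sensors are strong
]

# Output node: the line of logic is valid only if the right hidden nodes fired.
decision = node(hidden, [1.0, -1.0], 1.0)
print("line of logic valid:", bool(decision))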

The World Wide Web and Computational Limitations

This architecture within a cloud is limited to developing knowledge, or lines of logic. Gaps or breaks in a line of logic may be inferred from history, which is also known as a quantum leap in knowledge, or wisdom. Wisdom systems are different from knowledge systems. Knowledge is highly structured, and its formation can be automated more easily, whereas wisdom is less structured, having gaps in knowledge and information. Wisdom relies on inference and intuition to select valid information out of its absence, innuendo, ambiguity, or otherwise noise. Wisdom is more of an art, whereas knowledge is more of a science.

Nonetheless, all the participating computers on the World Wide Web could enable a giant simulated brain. Of course, movies such as The Lawnmower Man, Demon Seed, Colossus: The Forbin Project, and WarGames go the extra mile, making the leap to self-aware machines that conquer the world. For now, though, let's just use them to solve work-related problems.

References:

Brown, D. (2000). Digital Fortress. New York: St. Martin's Press. ISBN 9780312263126.

Englander, I. (2003). The Architecture of Computer Hardware and Systems Software: An Information Technology Approach (3rd ed.). New York: John Wiley & Sons.