MILLS•DAVIS weblog

What is pattern recognition?

Techniques that distinguish signal from noise.

Sources: Gary Larson, Barbara Catania and Anna Maddalena

Pattern recognition techniques distinguish signal from noise through statistical analysis, Bayesian analysis, classification, cluster analysis, and analysis of texture and edges. These techniques apply to sensors, data, imagery, sound, speech, and language.

Automated classification tools distinguish, characterize and categorize data based on a set of observed features. For example, one might determine whether a particular mushroom is “poisonous” or “edible” based on its color, size, and gill size. Classifiers can be trained automatically from a set of examples through supervised learning. Classification rules discriminate between different contents of a document or partitions of a database based on various attributes within the repository.
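
As a rough sketch of how such a classifier might be trained through supervised learning, the following uses scikit-learn's DecisionTreeClassifier on a handful of invented mushroom examples; the feature encoding, data, and labels are made up purely for illustration.

```python
# A minimal supervised-learning sketch of the mushroom example above.
# The feature encoding and training data are invented for illustration;
# a real classifier would be trained on a large labeled dataset.
from sklearn.tree import DecisionTreeClassifier

# Features: [color (0=brown, 1=red), cap size in cm, gill size (0=narrow, 1=broad)]
X = [
    [0, 6.0, 1],
    [0, 4.5, 1],
    [1, 2.5, 0],
    [1, 3.0, 0],
]
y = ["edible", "edible", "poisonous", "poisonous"]  # observed labels

clf = DecisionTreeClassifier().fit(X, y)   # train the classifier from examples
print(clf.predict([[1, 2.8, 0]]))          # classify a new mushroom -> ['poisonous']
```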

Statistical learning techniques construct quantitative models of an entity based on surface features drawn from a large corpus of examples. In the domain of natural language, for example, statistics of language usage (e.g., word trigram frequencies) are compiled from large collections of input documents and are used to categorize or make predictions about new text.
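
Here is a minimal sketch of the trigram idea in plain Python; the toy corpus is invented, and a real system would compile counts from large document collections and apply smoothing.

```python
# Compile word-trigram frequencies from a corpus and use them to
# predict a likely next word. Toy corpus for illustration; real
# systems train on large collections of documents.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

trigrams = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    trigrams[(w1, w2)][w3] += 1   # count continuations of each two-word context

# Predict the most frequent continuation of a two-word context.
context = ("the", "cat")
print(trigrams[context].most_common(1))   # e.g. [('sat', 1)]
```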

Statistical techniques can have high precision within a domain at the cost of generality across domains. Systems trained through statistical learning do not require human-engineered domain modeling. However, they require access to large corpora of examples and a retraining step for each new domain of interest.

What is a thesaurus?

A compendium of synonyms and related terms.

A thesaurus lists words in groups of synonyms and related concepts.

A thesaurus organizes terms based on concepts and relationships between them. Relationships commonly expressed in a thesaurus include hierarchy, equivalence, and associative (or related). These relationships are generally represented by the notation BT (broader term), NT (narrower term), SY (synonym), and RT (associative or related). Associative relationships may be more granular in some schemes.

For example, the Unified Medical Language System (UMLS) from the National Library of Medicine has defined over 40 relationships across more than 80 vocabularies, many of which are associative in nature. Preferred terms for indexing and retrieval are identified. Entry terms (or non-preferred terms) point to the preferred terms that are to be used for each concept.
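
As a hedged illustration of how such relationships might be held in code, the snippet below encodes a tiny thesaurus as a Python dictionary using the BT/NT/RT notation above, with a USE pointer from an entry term to its preferred term; the terms are invented, not drawn from UMLS.

```python
# A tiny thesaurus encoded as a dictionary. BT/NT/RT follow the
# notation above; "USE" points an entry (non-preferred) term at its
# preferred term. Terms are illustrative, not taken from UMLS.
thesaurus = {
    "myocardial infarction": {
        "BT": ["heart disease"],                    # broader term
        "NT": ["anterior myocardial infarction"],   # narrower term
        "RT": ["chest pain"],                       # associative / related term
    },
    "heart attack": {
        "USE": "myocardial infarction",             # entry term -> preferred term
    },
}

def preferred(term):
    """Resolve an entry term to the preferred term used for indexing."""
    entry = thesaurus.get(term, {})
    return entry.get("USE", term)

print(preferred("heart attack"))  # -> myocardial infarction
```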

What is taxonomy?

A hierarchical or associative ordering of terms.

Examples of types of taxonomy

A taxonomy is a hierarchical or associative ordering of terms representing categories. A taxonomy takes the form of a tree or a graph in the mathematical sense. It typically has minimal nodes, representing the lowest or most specific categories, which contain no sub-categories of their own, as well as a top-most or maximal node (or lattice), representing the most general category.
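
A small sketch of this structure in Python, with invented category names: the taxonomy is held as a parent-to-children mapping, the maximal node is the one that appears as no node's child, and the minimal nodes are those with no children.

```python
# A taxonomy as a parent -> children mapping. The top-most (maximal)
# node has no parent; minimal nodes have no sub-categories.
taxonomy = {
    "product": ["hardware", "software"],
    "hardware": ["laptop", "server"],
    "software": ["database", "browser"],
}

children = {c for kids in taxonomy.values() for c in kids}
all_nodes = set(taxonomy) | children

maximal = all_nodes - children        # nodes that are nobody's child
minimal = all_nodes - set(taxonomy)   # nodes with no children

print(maximal)  # {'product'}
print(minimal)  # {'laptop', 'server', 'database', 'browser'}
```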

What are folk taxonomies?

A category hierarchy with 5-6 levels that has its most cognitively basic categories in the middle.

Source: George Lakoff

In folk taxonomies, categories are not merely organized in a hierarchy from the most general to the most specific, but are also organized so that the categories that are most cognitively basic are “in the middle” of a general-to-specific hierarchy. Generalization proceeds upward from the basic level and specialization proceeds down.

A basic-level category sits somewhere in the middle of a hierarchy and is cognitively basic. It is the level that is learned earliest, usually has a short name, and is used frequently. It is the highest level at which a single mental image can reflect the category. There is no definitive basic level for a hierarchy; it depends on the audience. Most of our knowledge is organized around basic-level categories.

What is the Watson Ecosystem?

IBM launches cognitive computing cloud platform.

Cognitive computing is going mainstream

IBM is taking Watson and cognitive computing into the mainstream

The Watson Ecosystem empowers development of “Powered by IBM Watson” applications. Partners are building a community of organizations that share a vision for shaping the future of their industry through the power of cognitive computing. IBM’s cognitive computing cloud platform will help drive innovation and creative solutions to some of life’s most challenging problems. The ecosystem combines business partners’ experience, offerings, domain knowledge, and presence with IBM’s technology, tools, brand, and marketing.

IBM offers a single source for developers to conceive and produce their Powered by Watson applications:

  • Watson Developer Cloud — will offer the technology, tools, and APIs to ISVs for self-service training, development, and testing of their cognitive applications. The Developer Cloud is expected to help jump-start and accelerate creation of Powered by IBM Watson applications.
  • Content Store — will bring together unique and varying sources of data, including general knowledge, industry-specific content, and subject matter expertise to inform, educate, and help create an actionable experience for the user. The store is intended to be a clearinghouse of information, presenting a unique opportunity for content providers to engage a new channel and bring their data to life in a whole new way.
  • Network — staffing and talent organizations with access to in-demand skills like linguistics, natural language processing, machine learning, user experience design, and analytics will help bridge any skill gaps to facilitate the delivery of cognitive applications. These talent hubs and their respective agents are expected to work directly with members of the ecosystem on a fee or project basis.

How does cognitive computing differ from earlier artificial intelligence (AI)?

Cognitive computing systems learn and interact naturally with people to extend what either humans or machines could do on their own. In traditional AI, humans are not part of the equation. In cognitive computing, humans and machines work together. Rather than being programmed to anticipate every possible answer or action needed to perform a function or set of tasks, cognitive computing systems are trained using AI and machine learning algorithms to sense, predict, infer and, in some ways, think.

Cognitive computing systems get better over time as they build knowledge and learn a domain: its language and terminology, its processes, and its preferred methods of interacting. Unlike expert systems of the past, which required rules to be hard-coded by a human expert, cognitive computers can process natural language and unstructured data and learn by experience, much as humans do. And while they will have deep domain expertise, cognitive computers will not replace human experts; they will act as decision support systems, helping people make better decisions based on the best available data, whether in healthcare, finance, or customer service.

Smart solutions demand strong design thinking

IBM unveils new Design Studio to transform the way we interact with software and emerging technologies

The era of smart systems and cognitive computing is upon us. IBM’s product design studio in Austin, Texas will focus on how a new era of software will be designed, developed and consumed by organizations around the globe.

In addition to actively educating existing IBM team leads from engineering, design, and product management on new approaches to design, IBM is recruiting design experts and engaging with leading design schools across the country to bring designers on board, including the d.school (Institute of Design at Stanford University), Rhode Island School of Design, Carnegie Mellon University, North Carolina State University, and Savannah College of Art & Design. Leading skill sets at the IBM Design Studio include visual design, graphic art, user experience design, design development (including mobile development), and industrial design.

What is machine learning?

Any process by which a system improves performance from experience — Herbert Simon

Machine learning overview

Machine learning refers to the ability of computers to acquire new knowledge automatically: learning from past cases, from their own experience and exploration, or from direct input of knowledge by system users. Machine learning enables computer software to adapt to changing circumstances, allowing it to make better decisions than non-AI software.

The diagram presents an overview of machine learning. The center-most area depicts techniques and algorithms for pattern detection. The middle region lists reasoning methodologies. The outer-most part of the diagram identifies application areas and activities where machine learning applies today.
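
In the spirit of Simon's definition above, here is a minimal, self-contained sketch of a system that improves performance from experience: a perceptron that adjusts its weights whenever it misclassifies an example. The data and learning rate are invented for illustration.

```python
# A minimal example of "improving performance from experience":
# a perceptron that nudges its weights whenever it misclassifies.
# Training data and learning rate are invented for illustration.

examples = [([1.0, 1.0], 1), ([2.0, 2.0], 1),
            ([-1.0, -1.0], -1), ([-2.0, -1.0], -1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(10):                       # several passes over past cases
    for x, label in examples:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
        if pred != label:                 # learn only from mistakes
            w = [wi + lr * label * xi for wi, xi in zip(w, x)]
            b += lr * label

print(w, b)  # weights adapted through experience
```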

It’s not magic

Don Estes

Don Estes is an IT management and technical consultant with special expertise in large scale legacy modernization projects.

An automated modernization project, also referred to as a “conversion”, “migration”, “legacy revitalization” or “legacy renewal” project, is inherently different from most projects in which IT professionals will participate during their careers, and in several ways. When this specialized type of project goes awry, it is almost always from a failure to appreciate these differences and to allow for them in the project plan.

Properly controlled, an automated modernization project should be the least risky of any major project, but a failure to implement the proper controls can make it a very risky project indeed. Automated modernization projects obtain their substantial cost savings and short delivery schedules by extracting highly leveraged results from the automation. However, it is easy to forget that a lever has two arms; improperly implemented, leverage can work against you rather than for you in your project.

When there is residual value in a legacy application, an automated modernization project can extract and use that value in a highly cost-effective manner. Of course, in some cases this is futile, but in many if not most projects it has significant technical and financial merit. There are three important technical strategies:

  1. When the business rules expressed in a legacy system still fit the business process, but the system has a problem with software infrastructure (e.g., database, “green screen” interface, language, hardware platform, etc.), there is usually a fast, cheap, and low-risk way to deal with the problem: applying technology to renovate the code base so that it supports the target configuration.
  2. When legacy systems partially fit the current business process but need significant functional expansion or modification, a re-engineering approach may make more sense. Here the original system is reproduced identically in totally new technology, then refactored according to agile principles to meet the new requirements. Though counterintuitive to some, this approach is faster, cheaper, and lower risk than taking a blank sheet of paper and starting over, because at every point in the project you have a fully functional system.
  3. When maintenance costs are high in a legacy application, it is possible to logically restructure the application to reduce the effort of maintenance programming. This is usually very cost-effective. Depending on how bad the code is, maintenance cost reductions of as much as 40% are possible, though this approach has the best results for the worst systems.

Anyone considering a modernization in isolation, and particularly anyone considering a modernization versus a replacement, should carefully weigh the risks. In the projects we have seen, the success rate is very high even for large projects, far higher than for the replacement approach. It is our firm conviction that if the issues discussed in this essay are adequately taken into account, the success rate of modernization projects will be 100%.

For more information, see Don’s essay on automated modernization: It’s Not Magic

Governance, Risk and Compliance

Playing Jazz in the GRC Club

John Coyne is a preeminent innovator in technology for financial services. He holds patents in transactional AI, object-oriented systems, and semantics-based systems. As a global lead for Governance, Risk and Compliance (GRC), John architects innovative transformations of financial services businesses.

Some problems are so important to business, and so complex and burdensome, that solving them even in part can yield huge benefits. This is the case with regulation. Compliance consumes trillions of dollars in cost and labor, and regulation is growing faster than the economy. For large companies, this nets out to hundreds of millions of dollars of non-value-added expense yearly. What if it were possible to reduce the burden and cost of regulation by 50 to 90 percent?

Playing Jazz in the GRC Club

In this book, John Coyne and Thei Geurts describe the underlying principles, actionable framework, and solution patterns for shrinking compliance costs and burden. They outline how semantic GRC approaches have the potential to turn governance, risk, and compliance from a costly cul-de-sac into a proactive, profit-enhancing business outcome.

What is a model?

A representation of an actual or conceptual system.

Examples of models

A model is a representation of an actual or conceptual system that involves mathematics, logical expressions, or computer simulations that can be used to predict how the system might perform or survive under various conditions or in a range of hostile environments.

A simulation is a method for implementing a model. It is the process of conducting experiments with a model for the purpose of understanding the behavior of the system modeled under selected conditions or of evaluating various strategies for the operation of the system within the limits imposed by developmental or operational criteria. Simulation may include the use of analog or digital devices, laboratory models, or “testbed” sites.
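
As a hedged sketch of simulation in this sense, the Monte Carlo experiment below repeatedly exercises a toy model of a component under random stress to estimate its survival probability; the distributions and thresholds are invented for illustration.

```python
# A Monte Carlo simulation of a simple model: a component survives
# if random environmental stress stays below its strength. The
# distributions and thresholds are invented for illustration.
import random

def survives(strength=100.0):
    stress = random.gauss(80.0, 15.0)   # model of the environment
    return stress < strength            # model of the system

trials = 100_000
survival_rate = sum(survives() for _ in range(trials)) / trials
print(f"estimated survival probability: {survival_rate:.3f}")
```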

Semantic networks are building blocks of knowledge models

A semantic network consists of three basic elements:

  • Concepts are any ideas or thoughts that have meaning.
  • Relations describe specific kinds of links or relationships between two concepts.
  • Instances (of a relationship) consist of two concepts linked by a specific relation.

Relationships in a semantic network go beyond the standard broader-than, narrower-than, or related terms of thesauri. They may include specific whole-part relationships, cause-effect, parent-child, and many others.
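
A small sketch of these three elements as data, with invented concepts and relation names: each instance is a triple of two concepts linked by a relation, and richer relations such as whole-part and cause-effect fit the same shape.

```python
# A semantic network as a list of instances (triples): two concepts
# linked by a relation. Concepts and relation names are illustrative.
instances = [
    ("engine", "part-of", "car"),
    ("overheating", "causes", "engine failure"),
    ("car", "is-a", "vehicle"),
]

def related(concept, relation):
    """Find concepts linked to `concept` by `relation`."""
    return [obj for subj, rel, obj in instances
            if subj == concept and rel == relation]

print(related("engine", "part-of"))       # -> ['car']
print(related("overheating", "causes"))   # -> ['engine failure']
```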

Knowledge models in direct model execution systems have strongly typed concepts and relationships.

Why is visualization important?

Patterns provide up to a 60% faster way to locate, navigate, and grasp meanings.

Examples of information visualization. Source: VisualComplexity

Information visualization technologies can enable most users to locate specific information they are looking for as much as 60 percent faster than with standard navigation methods.

Visualization techniques exploit multiple dimensions, e.g.:

  • 1D — Links, keyword lists, audio.
  • 2D — Taxonomies, facets, thesauri, trees, tables, charts, maps, diagrams, graphs, schematics, typography, images.
  • 2.5D — Layers, overlays, builds, multi-spaces, 2D animation, 2D navigation in time.
  • 3D/4D — 3-dimensional models, characters, scenes, 3D animation, virtual worlds, synthetic worlds, and reality browsing.

What is visual language?

Words, images and shapes, tightly integrated into communication units.

Source: Robert Horn

Visual language is the tight integration of words, images, and shapes to produce a unified communication. It is a tool for creative problem solving and problem analysis, and a way of conveying ideas and communicating about the complexities of our technology and social institutions.

Visual language can be displayed on different media and in communication units of different sizes. Visual language is being created by the merger of vocabularies from many different fields, as shown in the diagram above from Robert Horn.

As the world increases in complexity, as the speed at which we need to solve business and social problems increases, as it becomes increasingly critical to have the “big picture” as well as multiple levels of detail immediately accessible, visual language will become more and more prevalent in our lives.

What’s coming next are semantic, knowledge-enabled tools for visual language. Computers will cease being mere electronic pencils and will be used to author, manage, and generate shared executable knowledge by means of patterns expressed through visual language.