Semantic technology

What is augmented reality?

Anything we can do to improve our knowledge of our immediate physical, cognitive, social, and informational context, our personal state, threat awareness, and opportunity awareness, without unduly complicating matters — Clark Dodsworth.

Clark Dodsworth


Clark Dodsworth is a product strategist for next-wave smart devices. According to Clark, any product is a tool, and any tool is an interface, whether it's software or hardware, and whether it involves interaction design, industrial design, visual design, public-space design, design management, or all of them. A tool should fit the user. A tool should adapt to the user.

Clark works with companies to improve and evolve tools, add and revise their features, position them in the marketplace, and find the sweet spot for IP to become new tools.

Context is king

In 2010, Clark presented at the Augmented Reality Event in Santa Clara, CA. His topic was Context is King — AR, Salience, and the Constant Next Scenario. Highlights from this talk follow.

Eight layers of context awareness


Context is the interrelated conditions in which something exists or occurs (Merriam-Webster). It is the situation within which something exists or happens and that can help explain it (Cambridge Dictionary). It is any information that can be used to characterize the situation of an entity (Dey, 1999). It is the set of environmental states and settings that either determines an application's behavior, or in which an application event occurs and is interesting to the user (Chen, Kotz 2000).

This diagram (above) depicts eight categories of context for smart services. Context-aware applications look at the who, where, what, and when of entities to determine why and how a situation is occurring.
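To make this concrete, here is a minimal, hypothetical sketch (the names and the rule are invented for illustration, not taken from the talk) of how an application might represent the who/where/what/when of an entity and derive a why/how interpretation:

```python
# Illustrative only: a context record plus one hand-written inference rule.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    who: str        # identity of the entity
    where: str      # location
    what: str       # current activity
    when: datetime  # time of observation

def interpret(ctx: Context) -> str:
    """Derive a plausible 'why/how' from the raw who/where/what/when."""
    if ctx.what == "walking" and ctx.where == "office" and ctx.when.hour >= 17:
        return "probably heading home; defer non-urgent notifications"
    return "no inference; fall back to default behavior"

print(interpret(Context("alice", "office", "walking", datetime(2013, 5, 1, 17, 30))))
```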

What happens when there’s a software layer that learns you?

Augmented reality to cope with your constant next scenario


Augmented reality (AR) on smartphones, tablets, and other platforms is a new and highly malleable set of tools and substrates. It is the path to managing our daily work and life amid the tsunami of information we receive all the time, not to mention the daily social contacts we manage. The next steps for AR are about helping each user with the constant next scenario in their day, minute by minute, anticipating and supporting their process without interrupting it. While we'll get there step by step, we need to keep this goal in mind, or we will undershoot the opportunity and delay very real efficiencies in our daily lives and work.

The preceding diagram is a data flow concept model of the information you generate as you proceed through your day, as well as the informational data you normally receive (text, voice, email, alerts, etc.), and how those streams should be processed by the three segments of hyperpersonalization services (App, OS, and Cloud).

Also, it shows the step of selecting the available sensory modes for delivery, which includes dynamically formatting the information for display in whichever mode is most appropriate.

It also indicates that device sensors will tend to be in constant contact with either cloud-based sensor datastreams or peer-to-peer ad-hoc (mesh) sensor nets, or both, to adequately inform you of relevant changes and states of your physical environment. They will not only receive but also contribute to both, in part as a way to pay for access to the aggregate sensor data feeds.

Once these three methods of analytics (salience, live context, and the actual decision path) have learned enough about you, they begin to do probabilistic data staging and delivery for your next decision(s).

Augmented Context + Information Salience = Value.  

Constantly supporting the next scenario of my daily life


Augmented reality means dynamic, context-driven hyperpersonalization, salience filtering, and scenario modeling in order to adaptively deliver information, highly personalized via N dimensions of user states, with high space/time precision, and without spiking your cognitive load. There is constant auto-discovery of data feeds and sensor feeds. There is constant markerless 2D and 3D feature identification and object recognition linked to motion analysis and evaluation. There is indoor position mapping seamlessly linked with outdoor GPS position sensing. There is nuanced gestural interaction integrated with voice, as well as constant audio and visual awareness.

Allostatic control delivers only the data needed, when needed. “Allostasis” is the maintenance of control through constant change. This means avoiding user distraction by constantly, contextually evaluating what should be displayed to the user, if anything. Without sophisticated software “awareness” to drive what information is inserted into the user's awareness, we quickly hit the complexity wall. Providing the information we need, when we need it, in only the amount we need, and in the sensory mode that's most appropriate at the moment, is the doable goal.
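A hedged sketch of what such salience filtering might look like in code; the scoring rule, weights, and threshold are invented for illustration:

```python
# Illustrative salience filter: score each incoming item against live context,
# and surface it only when it clears a context-dependent threshold.
def salience(item, context):
    score = item["urgency"]
    if item["sender"] in context["current_task_contacts"]:
        score += 0.3   # relevant to what the user is doing right now
    if context["cognitive_load"] > 0.7:
        score -= 0.4   # user is busy; raise the bar for interruption
    return score

def should_display(item, context, threshold=0.5):
    return salience(item, context) >= threshold

context = {"current_task_contacts": {"bob"}, "cognitive_load": 0.8}
msg = {"sender": "bob", "urgency": 0.6}
print(should_display(msg, context))  # True: urgent and task-relevant despite high load
```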

Smart tools fit the user and the task. They adapt to — i.e. learn — the user over time, becoming more useful to the individual the more they’re used. Good augmented reality tools adapt dynamically. Otherwise ongoing management of their services is too distracting. Smart tools must provide their value to us without the amount of user configuring we’ve become accustomed to.

When systems record, analyze, and thus learn from user choices over time, they learn what scenarios you select at each moment. They can then begin to provide information services for the next impending choice situation – a set of scenarios you’ll choose from that may need information-services support. Every day.
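One plausible, simplified way to implement this kind of learning is a first-order frequency model over observed scenario transitions; the sketch below is illustrative only:

```python
# Record which scenario the user chose after each situation, then pre-stage
# information for the most likely next scenario ("probabilistic data staging").
from collections import defaultdict, Counter

class ScenarioPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def record(self, current_scenario, chosen_next):
        self.transitions[current_scenario][chosen_next] += 1

    def predict_next(self, current_scenario):
        choices = self.transitions[current_scenario]
        return choices.most_common(1)[0][0] if choices else None

p = ScenarioPredictor()
p.record("leave_office", "commute_by_train")
p.record("leave_office", "commute_by_train")
p.record("leave_office", "gym")
print(p.predict_next("leave_office"))  # commute_by_train: stage the timetable now
```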

The Context War


Information vendors and access providers can create new revenue streams by becoming trusted, high-security vendors of tracking and analysis of users' paths through location, time, people, tasks, decisions, interests, alerts, ads, priorities, and the publicly shared priorities of their friends and associates. Providing a set of analytical services that can be selected and turned on and off by the user, and whose value accretes over time, will come to be viewed as the most effective way for individuals to parse the information they need in their day-to-day work and life. This is closely related to just-in-time learning, constant on-the-job training, and lifelong learning.

Smart systems and devices, context-aware computing, and augmented reality define the next battlefield for managed device platforms. It's all about long-term, sustaining, contextually dynamic, hyper-personalized relationships between users and network-delivered services. As shown in the following diagram, five vendor camps have formed along lines of research and are competing to create the dominant smart platform ecosystem. The contending camps are Google (Android), Apple (iOS), Facebook, Amazon, and Microsoft.


Thanks, Clark.

What is the addressable market for enterprise-situated knowledge workers?

Potential exists to increase knowledge worker productivity 50X by 2030.

Market for knowledge computing solutions


This table provides a conservative, order-of-magnitude estimate of the addressable market (number of individuals and organizations) for five categories of knowledge computing solutions. Markets for the smart mobile internet of services and things will be orders of magnitude larger.

What are some principles for multi-agent systems?

Environment is an active process. A flock is not a big bird. Emergent behavior is distributed. Think small.

Hieronymus Bosch


Software technologies have evolved from assemblers, to procedural programming, to object-oriented software. Goal-oriented, direct model-driven services, semantic agents, and multi-agent systems are the next software paradigm.

Some principles for multi-agent systems include (a code sketch after the list illustrates a few of them):

  1. Agents should correspond to “things” in a problem domain rather than to abstract functions
  2. Agents should be small in mass, time (able to forget), and scope (avoid global knowledge action)
  3. Multi-agent systems should be decentralized (with no single point of control/failure)
  4. Agents should be neither homogeneous nor incompatible but diverse
  5. Agent communities should include a dissipative mechanism (entropy leak)
  6. Agents should have ways of caching and sharing what they learn about their environment
  7. Agents should plan and execute concurrently rather than sequentially.
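As a rough illustration of principles 2, 3, 5, and 6 (small agents that forget, decentralized choice with no global knowledge, a dissipative entropy leak, and knowledge cached in a shared environment), consider this sketch; all names and parameters are invented:

```python
# Agents deposit what they learn into a shared environment, and the
# environment dissipates it over time, so stale knowledge fades away.
import random

class Environment:
    def __init__(self, decay=0.9):
        self.marks = {}     # location -> accumulated signal strength
        self.decay = decay

    def deposit(self, location, amount=1.0):
        self.marks[location] = self.marks.get(location, 0.0) + amount

    def dissipate(self):
        # The entropy leak: every signal fades each tick unless reinforced.
        self.marks = {k: v * self.decay
                      for k, v in self.marks.items() if v * self.decay > 0.01}

class Agent:
    def step(self, env, locations):
        # Prefer locations other agents have marked; no global knowledge needed.
        weights = [1.0 + env.marks.get(loc, 0.0) for loc in locations]
        choice = random.choices(locations, weights=weights)[0]
        env.deposit(choice, 0.5)
        return choice

env = Environment()
agents = [Agent() for _ in range(10)]
for tick in range(20):
    for a in agents:
        a.step(env, ["A", "B", "C"])
    env.dissipate()
print(sorted(env.marks.items(), key=lambda kv: -kv[1]))  # emergent preference
```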

What is goal-oriented software engineering?


Declaring knowledge about data, processes, rules, services and goals separately from application code to enable sharable, adaptive, autonomic, and autonomous solutions.

Almost Human


Goal-oriented software engineering separates knowledge (i.e., semantic models) about what to do from how to do it. Goals identify targets for a process, or portion of a process. Plans get selected dynamically to achieve goals. Self management within the business process logic allows dynamic restructuring of goals and plans at run time as well as adaptation of process behavior to prevailing run-time conditions and business contexts.

Autonomous solutions have the ability to independently perform some processing toward achieving or maintaining one or more goals. They are self-contained systems that function independently of other components or systems by acquiring, processing, and acting on environmental information.

Autonomic computing systems have the ability to self-regulate a goal/plan hierarchy, making automatic adjustments in real time in accordance with a changing environment using closed control loops, without the need for direct intervention.
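A minimal sketch of these ideas, with goals declared separately from plans and a loop that selects plans at run time; the domain and function names are hypothetical:

```python
# "What" (the goal) is declared as data; "how" (the plans) are selected
# dynamically against the current context until the goal is achieved.
def ship_by_air(order, ctx):
    return ctx["air_available"]          # feasible only when air freight is up

def ship_by_road(order, ctx):
    return True                          # slower fallback, always feasible

GOAL_PLANS = {"deliver_order": [ship_by_air, ship_by_road]}  # declarative knowledge

def achieve(goal, order, ctx):
    # Closed control loop in miniature: try applicable plans in order.
    for plan in GOAL_PLANS[goal]:
        if plan(order, ctx):
            return plan.__name__
    raise RuntimeError(f"no plan achieved goal {goal!r}")

print(achieve("deliver_order", {"id": 42}, {"air_available": False}))  # ship_by_road
```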

What makes concept computing different?


Webs of meanings and knowledge. Constraint-based, direct model execution systems that know and can share what they know. Architectures of learning that perform better with use and scale.

Three stages of semantic solution envisioning


The diagram above depicts three stages of semantic solution envisioning. Business capability exploration focuses on reaching agreement about goals, constraints, and the basic concepts and terms that different groups use. Different stakeholders express their requirements in their own language. Solution exploration defines needed capabilities and concept computing solution patterns. Semantic software design is knowledge-centric, iterative, and incremental. Changes in ontology, data, rules, workflow, user experience, etc., can automatically update system functionality without impacting underlying legacy systems and databases.

The process of semantic solution development is declarative, direct-execution model-driven, and knowledge-centric, rather than procedural, specification-waterfall translation-driven, and document-centric. Build cycles are fast, iterative, and non-invasive. Semantic solution development typically entails less time, cost, and risk to deploy, maintain, and upgrade.

Knowledge is extracted and modeled separately from documents, schemas, or program code so it can be managed integrally across these different forms of expression, shared easily between applications (knowledge-as-a-service), aligned and harmonized across boundaries, and evolved. Requirements, policies, and solution patterns expressed as semantic models that execute directly can be updated with new knowledge as the application runs.
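For instance, a policy expressed as data rather than code can be extended while the application runs; this toy sketch (with invented rules) illustrates the idea, not any particular semantic toolchain:

```python
# Policies live as data, separate from program code, and can be replaced or
# extended at run time without redeploying the application.
RULES = [
    {"if": {"role": "clinician"}, "then": "show_full_record"},
    {"if": {"role": "billing"},   "then": "show_billing_summary"},
]

def decide(facts):
    for rule in RULES:
        if all(facts.get(k) == v for k, v in rule["if"].items()):
            return rule["then"]
    return "deny"

print(decide({"role": "billing"}))       # show_billing_summary
RULES.append({"if": {"role": "auditor"}, "then": "show_audit_trail"})  # live update
print(decide({"role": "auditor"}))       # show_audit_trail, no rebuild needed
```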

Semantic solutions emerge from a techno-social collaboration that also supports do-it-yourself development. The development process is business- and user-driven rather than IT- and developer-driven. The collective knowledge developed is interpretable by different human stakeholders as well as machine-executable. Different skills are key, including domain expertise, concept computing architecture, knowledge modeling, and (smart) user experience design.

How is software’s space vs. time trade-off evolving?

Data-centric, constraint-based declarative processes trump procedural algorithms.

Complex reasoning at scale demands declarative processing


In software, we can trade processing cycles (time) for storage (space). For example, imagine an algorithm where every possible input combination, processing sequence, and output variation had been pre-computed and that the results of all these possible states had been stored. Next time there would be no need to (re)execute the program because all possible inputs and results have already been computed and declared. You simply look up the result by specifying the variables that constrain the space of possible outcomes. Nothing could be faster.
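A toy illustration of this space-for-time trade, with a deliberately small input domain:

```python
# Precompute every result once; answer later queries by lookup alone.
from itertools import product

def expensive(a, b):
    return sum(i * j for i in range(a) for j in range(b))  # stand-in computation

# Precompute the full input space (feasible here because the domain is small).
TABLE = {(a, b): expensive(a, b) for a, b in product(range(50), repeat=2)}

# At query time there is no computation: constrain the space and look up.
print(TABLE[(12, 34)])  # same answer as expensive(12, 34), with zero re-execution
```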

Historically, the cost of computation favored executing algorithms over and over, rather than providing the semantic bandwidth and memory to access and store all possible permutations in a declarative knowledge mesh. That was then.

Today, computing economics have reversed. Massive memory and parallel processing is cheap and getting cheaper. Massive declarative knowledge-based computing is now economical.

But there's an even more compelling reason why the shift is happening now: new capabilities. Declarative systems allow applications that can learn, evolve, and use knowledge in ways the system designer could not foresee.

How is this possible?

  • Declarative languages specify “what” is to be done. Declarative representations are like a glass box with knowledge visible in a format that can be accessed, manipulated, decomposed, analyzed, and added to by independent reasoners.
  • Imperative (procedural) languages, on the other hand, encode “how” to achieve a particular result, step by step. Procedural representations are like a black box with knowledge fixed and locked inside an algorithm. Procedural systems do the same thing every time and cannot learn from experience. Making changes requires rebuilding data stores and rewriting algorithms off-line. (The sketch after this list contrasts the two styles.)
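Here is the contrast in miniature; the family-relations domain is invented for illustration:

```python
# Imperative: the knowledge is locked inside the control flow.
def is_grandparent(gp, gc, parent_of):
    for child in parent_of.get(gp, []):
        if gc in parent_of.get(child, []):
            return True
    return False

# Declarative: facts are data; the "rule" is a query over them, so new facts
# can be added while the system runs, with no rebuild.
FACTS = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

def query_grandparent(gp, gc):
    people = {s for (_, s, _) in FACTS} | {o for (_, _, o) in FACTS}
    return any(("parent", gp, x) in FACTS and ("parent", x, gc) in FACTS
               for x in people)

print(is_grandparent("ann", "cal", {"ann": ["bob"], "bob": ["cal"]}))  # True
print(query_grandparent("ann", "cal"))                                 # True
FACTS.add(("parent", "cal", "dee"))    # knowledge added live
print(query_grandparent("bob", "dee"))                                 # True
```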

What is scale invariance?

The ability of a system to scale-down, scale-up, scale-out, and scale-in.

Four dimensions of scalability


Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, and its ability to be further empowered to accommodate that growth: for example, to handle more users, more data and sources, greater knowledge intensivity of decision making, increasing process dynamics, infrastructure expansion, and system adaptivity.

Architectures for the next stage of the internet will evolve along four key dimensions of scalability:

Scale-down
As Richard Feynman said, “There is plenty of room at the bottom!” Scale-down architecture is about maximizing feature density and performance, and minimizing power consumption. Approaches include nano-technology, atomic transistors, atomic storage, and quantum computing.

Scale-up
Scale-up architecture (or vertical scaling) is about maximizing device capacity. It adds more processing power, memory, storage, and bandwidth on top of a centralized computing architecture. Typically, scaling up uses more expensive, platform-specific, symmetric multiprocessing hardware. It's like using a forklift to increase capacity and performance.

Scale-out
Scale-out architecture (or horizontal scaling) is about maximizing system capacity. It adds multiple instances of the process architecture, enabling the system to address vastly more subjects, services, and things. Processors perform essentially the same task but address different portions of the total workload, typically using commodity Intel/AMD hardware and platform-independent open source software.

Scale-in
Scale-in architecture is about maximizing system density and minimizing end-to-end latency. It differs from the old compute-centric model, where data lives on disk in a deep storage hierarchy and gets moved to and from the CPU as needed. The new scale-in architecture is data-centric. Multi-threading within and across multi-core processors enables massive parallelism. Data lives in persistent memory. Many CPUs surround and use in-memory data in a shallow, flat storage hierarchy where each memory location is addressable by every process thread.
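The following sketch gestures at the scale-in idea: data resident in memory, addressed directly by many threads. (CPython's global interpreter lock limits true parallelism here; the point is the shared, flat address space, not the speedup.)

```python
# Data stays resident in shared memory and many threads address it directly,
# instead of shipping data to and from a deep storage hierarchy.
from concurrent.futures import ThreadPoolExecutor

DATA = list(range(1_000_000))  # lives in memory; every thread can address it

def partial_sum(lo, hi):
    return sum(DATA[lo:hi])    # each thread works on its slice of shared data

with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    total = sum(pool.map(lambda c: partial_sum(*c), chunks))

print(total == sum(DATA))  # True
```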

Scale-in architecture is data-centric


Scale-in architecture empowers concept computing — a dimension of increasing knowledge intensivity of process, decision-making, and user experience. From a big data perspective, it allows addition of new data elements, aggregation of context, and on-the-fly modification of schemas without the off-line rebuilds needed to modify data warehouses and NoSQL processes at scale. From a business perspective, scaling in enables “systems that know,” where changes to business requirements, policies, laws, logic, etc., can be managed flexibly and separately from data sources and operations. Scale-in architecture is a prerequisite for cognitive computing and for systems that can learn and improve performance with use and scale.

These four dimensions of scalability are key to building smart, high-performing processes of any size, any complexity, any level of knowledge intensivity, and which can learn and grow live to quickly adapt to changing requirements and implement scale-free (or scale-invariant) business models.

Transformational change demands architecture that is different.

Transformation demands different architecture.


Here is a little story to illustrate the point:

A scientist discovered a way to grow the size of a flea by several orders of magnitude. She was terribly excited. After all, a flea can jump vertically more than 30 times its body size. She reasoned that a flea this big would be able to leap over a tall building. Perhaps there could be a Nobel Prize in this.

When the day came to show the world, she pushed the button and, sure enough, out came this giant flea, over two meters high. But rather than leaping a tall building, it took one look around and promptly fell over dead. It turns out it couldn't breathe. No lungs. Passive air holes that worked fine for oxygen exchange in a tiny flea were useless for a creature so big.

What is pattern recognition?

Techniques that distinguish signal from noise.

Sources: Gary Larson, Barbara Catania and Anna Maddalena


Pattern recognition techniques distinguish signal from noise through statistical analyses, Bayesian analysis, classification, cluster analysis, and analysis of texture and edges. They apply to sensors, data, imagery, sound, speech, and language.

Automated classification tools distinguish, characterize and categorize data based on a set of observed features. For example, one might determine whether a particular mushroom is “poisonous” or “edible” based on its color, size, and gill size. Classifiers can be trained automatically from a set of examples through supervised learning. Classification rules discriminate between different contents of a document or partitions of a database based on various attributes within the repository.
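A minimal supervised-learning sketch of the mushroom example, using a one-nearest-neighbor classifier over invented feature values:

```python
# Train from labeled examples, then classify a new observation by its features.
def distance(a, b):
    return sum(x != y for x, y in zip(a, b))  # count mismatched features

# (color, size, gill_size) -> label; data values are made up for illustration
training = [
    (("red",   "small", "narrow"), "poisonous"),
    (("brown", "large", "broad"),  "edible"),
    (("red",   "large", "narrow"), "poisonous"),
    (("white", "small", "broad"),  "edible"),
]

def classify(features):
    # Label of the closest training example (1-nearest neighbor).
    return min(training, key=lambda ex: distance(ex[0], features))[1]

print(classify(("brown", "small", "broad")))  # edible
```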

Statistical learning techniques construct quantitative models of an entity based on surface features drawn from a large corpus of examples. In the domain of natural language, for example, statistics of language usage (e.g., word trigram frequencies) are compiled from large collections of input documents and are used to categorize or make predictions about new text.

Statistical techniques can have high precision within a domain at the cost of generality across domains. Systems trained through statistical learning do not require human-engineered domain modeling. However, they require access to large corpora of examples and a retraining step for each new domain of interest.
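The trigram idea in miniature, with a made-up corpus:

```python
# Compile word-trigram frequencies from a corpus, then score new text by how
# many of its trigrams were observed. Corpus and numbers are illustrative.
from collections import Counter

def trigrams(words):
    return zip(words, words[1:], words[2:])

corpus = "the cat sat on the mat the cat ate the fish".split()
counts = Counter(trigrams(corpus))

def score(sentence):
    return sum(counts[t] for t in trigrams(sentence.split()))

print(score("the cat sat"))   # 1: this trigram appears in the corpus
print(score("sat the fish"))  # 0: never observed
```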

What is a thesaurus?

A compendium of synonyms and related terms.

A thesaurus lists words in groups of synonyms and related concepts.


A thesaurus organizes terms based on concepts and relationships between them. Relationships commonly expressed in a thesaurus include hierarchy, equivalence, and associative (or related). These relationships are generally represented by the notation BT (broader term), NT (narrower term), SY (synonym), and RT (associative or related). Associative relationships may be more granular in some schemes.
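A minimal sketch of how the BT/NT/SY/RT relations might be represented and used for query expansion; the terms are illustrative, and real schemes such as UMLS are far richer:

```python
# One thesaurus entry with the four relation types named in the text.
thesaurus = {
    "heart disease": {
        "BT": ["cardiovascular disease"],   # broader term
        "NT": ["myocardial infarction"],    # narrower term
        "SY": ["cardiac disease"],          # synonym
        "RT": ["hypertension"],             # associative / related term
    }
}

def expand_query(term):
    """Expand a search term with its synonyms and narrower terms."""
    entry = thesaurus.get(term, {})
    return [term] + entry.get("SY", []) + entry.get("NT", [])

print(expand_query("heart disease"))
# ['heart disease', 'cardiac disease', 'myocardial infarction']
```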

For example, the Unified Medical Language System (UMLS) from the National Library of Medicine has defined over 40 relationships across more than 80 vocabularies, many of which are associative in nature. Preferred terms for indexing and retrieval are identified. Entry terms (or non-preferred terms) point to the preferred terms that are to be used for each concept.