What’s Next for AI

Artificial intelligence is in its infancy. The term implies a human-like experience, but in practice we encounter narrow applications like movie recommendations, or moderately effective uses like chatbots that rarely work as well as a person. Even the term “data scientist” implies research rather than practical engineering.

To make a big difference for business, artificial intelligence needs to become simpler to develop. Organizations should view artificial intelligence as the substrate for business processes rather than part of IT. Algorithms need to learn from fewer examples and use less power. Software needs to communicate conversationally like a person. Finally, artificial intelligence must move from improving specific decisions to understanding and enhancing entire business processes, or even entire businesses.

Artificial Intelligence as a Business Capability

Our senior team members have built predictive models, machine learning systems, and now deep learning applications for about twenty years. Each generation has increased quantitative, software, and data center complexity.

Much complexity stems from approaching analytics as technology rather than a business capability. As more businesses compete on AI, data and analytics will increasingly become a business imperative shared by all functional areas. This will force AI researchers, technologists and vendors to simplify rather than create technical tools usable only by engineers. CFOs will regard data as a strategic asset managed as carefully as a balance sheet item. Product managers will select and compete on algorithms. Sales and marketing will actively manage and source data.

AI models are generally trained to replicate how humans categorize or act on data. Depending on many factors, models can achieve perhaps 70-90% accuracy compared to human decisions in training data. Because accuracy is measured against those human judgments, models are exquisitely sensitive to training and validation data. Building the best training data is therefore often a cross-functional effort involving customer service, marketing, and finance rather than a task relegated to IT.

Treating data as a competitive differentiator will accelerate existing trends to consolidate and standardize data. Business analytics started with reporting on operational data stores tied to a specific task or process. Data then came together in data warehouses, which organize data across a business in highly structured but often complex and brittle ways. As applications and types of data proliferated, businesses began to build unstructured stores termed data lakes, which often hold information before a use is identified. The latest generation of enterprise data tools provides categorization and workflow to manage data lakes.

The imperative to standardize data is beginning to extend across organizations. Many benefits of standardizing across an industry accrue to customers and regulators rather than to individual companies, so mandates are often driven by government. Examples include electronic healthcare records, post-trade clearing and settlement in securities trading, and sensitive supply chains such as food and medical devices. Recently, the National Institute of Standards and Technology (NIST), a US government agency, issued guidelines to standardize data architecture across the US government to spur artificial intelligence.

Easier to Build AI

There is some irony that building automated intelligence requires a manual, labor-intensive process. Building AI can be distilled to sourcing data, selecting algorithms, training models, and plugging into business processes. Currently, each step is manual.

Tools are evolving rapidly. Some, like Azure ML and Alteryx, stress visual assembly to reduce coding. Orange and RapidMiner move toward automating workflows. DataRobot can select algorithms, though only for specific use cases and from a narrow range of options.

However, we need intelligent tools that replicate human tasks. Software should recognize and integrate data types; for example, US zip codes and UK postal codes often play similar roles in models. Similarly, software should relate the “shape” of the data to the type of business question being answered in order to select and tailor algorithms.
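
To make the idea concrete, here is a minimal sketch of what such recognition might look like: a heuristic that flags whether a column holds US zip codes or UK postcodes, so both can be treated as the same kind of geographic feature. The column names, sample data, and 90% threshold are illustrative assumptions, not features of any particular product.

```python
import pandas as pd

def infer_semantic_type(series: pd.Series, sample_size: int = 100) -> str:
    """Guess whether a column holds postal codes so that US and UK data
    can be mapped to the same kind of geographic feature."""
    sample = series.dropna().astype(str).str.strip().head(sample_size)
    if sample.empty:
        return "unknown"
    # Illustrative patterns: 5-digit US zip (optionally +4) and UK postcode.
    if sample.str.match(r"^\d{5}(?:-\d{4})?$").mean() > 0.9:
        return "postal_code_us"
    if sample.str.match(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$", case=False).mean() > 0.9:
        return "postal_code_uk"
    return "unknown"

df = pd.DataFrame({"zip": ["60614", "10001-1234"], "postcode": ["SW1A 1AA", "EC2V 7HH"]})
print({col: infer_semantic_type(df[col]) for col in df.columns})
# expected: {'zip': 'postal_code_us', 'postcode': 'postal_code_uk'}
```

In practice this kind of inference would sit inside the tooling itself, covering far more data types than postal codes.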

As an example, the author’s former team provided consumer analytics to a leading housewares and furniture company. The client sent 1300 (!) variables every time a consumer abandoned an online shopping cart. Data included consumer attributes such as presence of children in the household, behaviors such as their responses to prior campaigns, and even some third-party data such as weather and fuel prices thought to correlate with buying behaviors. The reality was that most variables had no predictive power, while a few mattered but only for a limited period of time. We built tools to visualize relationships in the data to show the client why we discarded nearly all of it. Businesses need AI tools smart enough to automatically understand data and perform lower-value data science tasks.
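
A sketch of the kind of automation that anecdote implies, using scikit-learn’s mutual information scores to discard variables with essentially no predictive power. The synthetic data, the 50 variables, and the 0.01 cutoff are stand-ins for illustration; they are not the client’s actual feed.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for the cart-abandonment feed: a few informative
# columns buried among many that carry no signal.
rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame(rng.normal(size=(n, 50)),
                 columns=[f"var_{i}" for i in range(50)])
y = (X["var_3"] + 0.5 * X["var_17"] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Score every variable against the outcome (did the consumer return and buy?).
scores = pd.Series(mutual_info_classif(X, y, random_state=0), index=X.columns)

# Keep only variables with meaningful signal; the 0.01 cutoff is arbitrary.
kept = scores[scores > 0.01].sort_values(ascending=False)
print(f"kept {len(kept)} of {X.shape[1]} variables")
print(kept.head())
```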

Improved Deep Learning

Most artificial intelligence today is implemented using deep learning, which uses neural nets that loosely replicate how nervous systems operate. A neural net can be thought of like a spreadsheet: an input, such as a picture, is broken into many parts and fed into a series of cells at the top of the spreadsheet. At each layer (hence “deep”), each cell multiplies data from the prior layer by a weight. For a picture, this process may first recognize edges, then eyes and a mouth, and then a face.
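
A minimal NumPy sketch of the spreadsheet analogy: each layer multiplies the previous layer’s outputs by a weight matrix and applies a simple nonlinearity. The weights here are random, so nothing is actually learned; the point is only the layer-by-layer mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One 'column of spreadsheet cells': multiply the previous layer's
    outputs by weights, add a bias, and apply a nonlinearity."""
    w = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
    b = np.zeros(n_out)
    return np.maximum(0.0, x @ w + b)  # ReLU activation

# A tiny "image": 8x8 pixels flattened into 64 inputs.
pixels = rng.random(64)

h1 = layer(pixels, 32)   # in a trained net, early layers tend to pick up edges
h2 = layer(h1, 16)       # middle layers combine them into parts (eyes, mouth)
out = layer(h2, 2)       # final layer scores higher-level outcomes (face / not face)
print(out)
```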

To date, developers have started with the equivalent of empty spreadsheets. Our brains, however, pass inputs through complex maps of relationships. Data scientists are now using a similar technique with neural nets: transfer learning pre-trains a neural net with maps of the data used in a business decision.

For example, alva pre-trains neural nets with a map of how words relate to each other (specifically, the ULMFiT language model fine-tuning approach). We use Wikipedia as the corpus, or body of text, for this initial training.

Next, we use human-scored examples to train the neural nets to assess sentiment in the forms of text relevant to corporate reputation intelligence. Benefits of transfer learning include better precision and learning from a smaller number of examples.
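
The general pattern, sketched in PyTorch below: freeze an encoder that has already been pre-trained on a large corpus, then train a small classification head on human-scored sentiment examples. The toy LSTM encoder, the checkpoint path, and the three sentiment classes are illustrative assumptions, not alva’s production model.

```python
import torch
import torch.nn as nn

# Stand-in for a language-model encoder already pre-trained on Wikipedia;
# this toy LSTM is only a placeholder for the real pre-trained network.
class Encoder(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)
        _, (h, _) = self.lstm(x)
        return h[-1]                      # one vector per document

encoder = Encoder()
# encoder.load_state_dict(torch.load("wikipedia_pretrained.pt"))  # hypothetical checkpoint

# Freeze the pre-trained layers; only the small sentiment head will learn.
for p in encoder.parameters():
    p.requires_grad = False

head = nn.Linear(256, 3)                  # negative / neutral / positive
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a toy batch of human-scored examples.
token_ids = torch.randint(0, 30000, (8, 40))   # 8 documents, 40 tokens each
labels = torch.randint(0, 3, (8,))
logits = head(encoder(token_ids))
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(loss.item())
```

Because only the small head is trained, far fewer labeled examples are needed than when training a network from scratch.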

Using human-scored examples to train a neural net is termed supervised learning. Creating training data can be an expensive and somewhat unpredictable process, particularly for natural language applications, because people do not wholly agree on the meaning of language. An alternative approach, called unsupervised learning, does not use examples scored by people; instead, it finds patterns within the data itself. Eliminating human scoring greatly simplifies development, but the resulting models require very large data sets.
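
The contrast in miniature, using scikit-learn: a supervised classifier learns to reproduce provided labels, while an unsupervised clustering algorithm is given no labels at all and simply looks for structure. The synthetic data set is only for illustration.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data: 500 examples with human-style labels attached.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Supervised: learn to reproduce the human-provided labels.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy vs. labels:", clf.score(X, y))

# Unsupervised: ignore the labels entirely and look for structure in the data.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes found without labels:", [int((clusters == c).sum()) for c in (0, 1)])
```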

A notable recent example is OpenAI, an AI research institute co-founded by Elon Musk and now backed by a $1B investment from Microsoft. OpenAI used unsupervised learning to predict, in essence, the words, phrases, and sentences that tend to follow a given word or phrase. The system could create false news items of such high quality that the organization initially declined to release the full model. (OpenAI also offers commercially available software that pairs this kind of large-scale pre-training with smaller supervised training sets.)

Alternatives to Deep Learning

Reinforcement learning is also gaining commercial adoption. Reinforcement learning models one or more agents that each seek to maximize rewards within a changing environment. It progresses through iterations in which agents make a series of decisions, interacting with their environment, to optimize progress toward a goal. Initial applications center on robotics and finance, but we can expect RL applications to spread.
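
A minimal sketch of that loop: a single agent on a five-position line learns, through trial, error, and reward, that moving right reaches the goal. The tabular Q-learning update shown here is one classic formulation; the environment and hyperparameters are toy values.

```python
import numpy as np

# Toy environment: an agent on positions 0..4 earns a reward for reaching position 4.
n_states, n_actions = 5, 2            # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))   # learned value of each state/action pair
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(300):
    state = 0
    while state != n_states - 1:
        # Explore occasionally; otherwise pick a best-known action (ties broken at random).
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(rng.choice(np.flatnonzero(Q[state] == Q[state].max())))
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Nudge the estimate toward the reward plus the discounted value of what follows.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q.round(2))  # the learned values favor stepping right in every state
```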

Deep learning and reinforcement learning are improving but still need to learn from fewer examples. Many studies, and everyday experience, show that toddlers can learn lessons from just a few examples. AI algorithms need to evolve similarly.

Custom Computer Chips

Artificial intelligence models need simpler data center infrastructure. In the last decade, model training workloads have moved away from computers’ CPUs to graphics processing units (GPUs), which began as video cards. GPUs have thousands of small, simple logical units acting in parallel, which fits many data science workloads well. However, not all models run well on GPUs, and GPU instances require special configuration in AWS and Azure. Developers creating other forms of software generally don’t target specific infrastructure; this should apply to artificial intelligence as well.
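
Today that targeting is explicit in the code. A small PyTorch example of the kind of device bookkeeping developers currently handle themselves, and which simpler infrastructure should eventually hide:

```python
import torch
import torch.nn as nn

# Today the developer, not the tooling, decides where the model runs.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
batch = torch.randn(32, 128, device=device)  # data must be moved to the same device

with torch.no_grad():
    predictions = model(batch)
print(predictions.shape, "computed on", device)
```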

Businesses will be more affected by the trend toward custom processors for artificial intelligence. These hold the potential for massive scalability to address very complex needs like autonomous driving or real-time voice translation. Companies including Google and Tesla have created proprietary ASICs (application-specific integrated circuits), but this is a very capital-intensive effort. Microsoft and Intel have taken an intermediate route, using chips called FPGAs (field-programmable gate arrays) that can be configured by software to act as custom chips. FPGAs operate more slowly than ASICs but are substantially less expensive and can be reprogrammed. We can expect more mid-sized businesses to use FPGAs.

AI needs to consume less power. As a thought experiment, our brains use about 20 watts to power about 100 billion neurons. A laptop computer typically uses 90 watts for a system with about 1.8 billion transistors in a CPU. Our brains operate somewhat slower but with millions of parallel threads, while computers operate more quickly on perhaps 8-12 threads. Our brains consume less power while operating with many orders of magnitude more parallelism.
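
Putting rough numbers on the thought experiment (taking “millions of threads” as one million and a 10-thread laptop CPU, both loose assumptions):

```python
# Rough back-of-the-envelope figures from the paragraph above.
brain_watts, brain_threads = 20, 1_000_000      # "millions" of parallel threads, taken as 1M
laptop_watts, laptop_threads = 90, 10           # roughly 8-12 threads

print(f"brain:  {brain_watts / brain_threads:.6f} watts per parallel thread")
print(f"laptop: {laptop_watts / laptop_threads:.1f} watts per parallel thread")
# The gap per thread is several orders of magnitude in the brain's favor.
```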

Quantum computing may offer the ultimate disruption. It essentially provides enormous power to solve complex optimization problems, such as routing a fleet of trucks through multiple destinations. More broadly, a very powerful quantum computer could optimize an entire neural network.

Digital Twins for Business Processes

GE Digital, Formula 1 teams, and other organizations have built digital twins of complex machines in order to model performance. This technique will become standard as essentially all machines begin to emit data, and the concept extends further to processes and even entire businesses. Today, business analytics generally track results, such as finance and sales reporting. A digital twin differs by comprehensively capturing the inputs to a business and identifying how they translate into results.

For example, rather than solely capturing sales activities in CRM, a digital twin quantifies how email, calls, meetings, and digital interactions have prompted prior prospects to progress through a sales funnel. The digital twin then applies this logic to current prospects.
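
A deliberately small sketch of that idea: fit a model on historical interaction counts and whether each prospect advanced, then score a current prospect. The column names, the handful of records, and the logistic regression are all illustrative choices, not a prescribed design.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical history: interaction counts per prospect and whether they
# advanced to the next funnel stage. Column names are illustrative only.
history = pd.DataFrame({
    "emails":     [3, 1, 5, 0, 4, 2, 6, 1],
    "calls":      [1, 0, 2, 0, 1, 1, 3, 0],
    "meetings":   [0, 0, 1, 0, 1, 0, 2, 0],
    "web_visits": [8, 2, 12, 1, 9, 4, 15, 3],
    "advanced":   [1, 0, 1, 0, 1, 0, 1, 0],
})

# Quantify how past interactions translated into funnel progression...
twin = LogisticRegression(max_iter=1000).fit(history.drop(columns="advanced"),
                                             history["advanced"])

# ...then apply that logic to a current prospect.
current = pd.DataFrame({"emails": [2], "calls": [1], "meetings": [0], "web_visits": [6]})
print("probability of advancing:", twin.predict_proba(current)[0, 1].round(2))
```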

Pervasive Natural Language Interfaces

We already use natural language interfaces with our phones and smart speakers. These work by recognizing language and mapping it to commands sent to a specific service or object. Similarly, an AI that understands how data relates within a digital twin can provide a conversational interface to business processes. In effect, we’ll soon speak to Sales or have a conversation with a business.
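
In its simplest form, that mapping from language to commands might look like the sketch below, where recognized phrases are routed to handlers that would query the digital twin. The intents, patterns, and canned answers are hypothetical; production systems use trained intent models rather than regular expressions.

```python
import re

# Hypothetical handlers that would query the sales digital twin.
def pipeline_summary():
    return "42 open opportunities worth $3.1M"   # dummy value

def forecast(quarter):
    return f"forecast for {quarter}: $1.2M"      # dummy value

# Map recognized phrases to commands.
INTENTS = [
    (re.compile(r"how.*pipeline|pipeline.*look", re.I), lambda m: pipeline_summary()),
    (re.compile(r"forecast.*(q[1-4])", re.I), lambda m: forecast(m.group(1).upper())),
]

def ask(question: str) -> str:
    for pattern, handler in INTENTS:
        match = pattern.search(question)
        if match:
            return handler(match)
    return "Sorry, I don't understand that yet."

print(ask("How does the pipeline look this week?"))
print(ask("What is the forecast for Q3?"))
```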

In conclusion, we can expect businesses to increasingly view AI as a cross-functional competitive imperative rather than part of IT. Manual development processes will accelerate through tools that automatically recognize and find relationships in data, as well as more powerful algorithms that learn from fewer examples. Integrating business data will evolve from data warehouses and data lakes to models that continually quantify how inputs to business processes generate results, forming digital twins. The practically foreseeable horizon for business artificial intelligence involves human-like conversations with business processes.

Author: Jeremy Lehman.
