AI in the workplace needs a lot more accountability, less secrecy

Artificial intelligence has had more than a few false dawns. Academic research in the field began in the 1950s, and an AI villain helped catapult the movie 2001: A Space Odyssey to fame in 1968. But while many valuable AI computing methods have been conceived and demonstrated over the past 60 years, until very recently AI was essentially a marketing buzzword rather than a tool in everyday corporate use.

But things have changed. AI now plays a genuinely significant role in business and government IT systems. Elements of AI technology have become viable largely because of the vastly expanded datasets amassed by large organisations.

Machine learning is an important element of the current leap forward in AI. It is not a new idea, and it encompasses a number of technology approaches. Machine learning is, however, fundamentally different from the traditional software methods used to implement decision making.

Put simply, traditional computer programs can be documented and designed using logic-based flowcharts: they specify exactly how a decision is reached and how outcomes are calculated. These programs (or software components) can be readily understood by humans, and their reasoning is reproducible and auditable.
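As a rough illustration only, here is a minimal sketch of that kind of explicit decision logic. The loan-approval scenario, the rules and the thresholds are all invented for the example:

    # A minimal, hypothetical example of a traditional rule-based decision:
    # every condition is explicit, so the logic can be read, reproduced and audited.

    def approve_loan(income: float, existing_debt: float, missed_payments: int) -> bool:
        """Return True if the loan application is approved."""
        if missed_payments > 2:
            return False                      # rule 1: poor repayment history
        if existing_debt > 0.5 * income:
            return False                      # rule 2: debt-to-income ratio too high
        return income >= 30_000               # rule 3: minimum income threshold

    # The same inputs always produce the same, explainable outcome.
    print(approve_loan(income=45_000, existing_debt=10_000, missed_payments=0))  # True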

Machine learning, on the other hand, usually sets up an AI “black box” that accepts specific information as inputs and provides specific outputs. The black box needs to be trained: it is fed large amounts of data, and its answers are checked against the desired answers. In effect, it learns by trial and error, taught with externally determined carrots and sticks.
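For contrast, here is a minimal sketch of the black-box approach, assuming the scikit-learn library is available and using synthetic data as a stand-in for historical decision records:

    # Instead of writing rules, we hand the model labelled examples and let it learn.

    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.datasets import make_classification

    # Synthetic stand-in for historical decision data: features X, known outcomes y.
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)        # "training": trial and error against known answers

    print(model.predict(X_test[:5]))   # outputs, but no human-readable reasoning
    print(model.score(X_test, y_test)) # the only check we get: measured accuracy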

This machine learning approach is already used in parts of some very popular consumer services, most notably in products offered by Google and Facebook. The scale and pervasiveness of the data collected by these enterprises have clearly influenced their decision to implement elements of machine learning.

This is because the amount of training effort and input data required to teach a machine learning system can be very significant. The AI black box may also require a very large number of computations to process each decision, and for many tasks it is computationally less efficient (slower, more energy-hungry, more memory-intensive) than traditional human-written code.

Our pop culture portrays AI as something akin to human reasoning. In reality, machine learning in particular depends entirely on its input dataset and the quality of its training. It is entirely possible to train a system badly, through poor or skewed data, and such systems can produce unexpected results or brittle decisions. If the training datasets are poorly designed, or have not been exposed to inputs that reflect current real-world demographics, decisions can be catastrophically impaired.
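A hypothetical sketch of the problem, again using invented synthetic data: one group dominates the training set, another group follows a different pattern but is barely represented, and the model effectively learns only the majority's rule.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, group):
        x = rng.normal(0, 1, n)
        # The true relationship is reversed between the two (invented) groups.
        y = (x > 0).astype(int) if group == "A" else (x < 0).astype(int)
        return x.reshape(-1, 1), y

    X_a, y_a = make_group(1900, "A")        # heavily over-represented
    X_b, y_b = make_group(100, "B")         # barely represented

    model = LogisticRegression()
    model.fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

    X_a_test, y_a_test = make_group(500, "A")
    X_b_test, y_b_test = make_group(500, "B")
    print("accuracy on group A:", model.score(X_a_test, y_a_test))   # close to 1.0
    print("accuracy on group B:", model.score(X_b_test, y_b_test))   # close to 0.0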

It is also not usually possible for a human to simply examine a machine learning black box and verify the integrity of its decision making. To confirm that the black box is functioning correctly, you essentially need to check the AI's homework across every plausible combination of inputs. And if the system is set up to learn continuously, it could work correctly for a time and then, at some future point, drift into producing erroneous decisions without any warning.
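One plausible (and entirely hypothetical) response is to keep a human-curated set of test cases and re-run it against the live model on a schedule, flagging any drop in accuracy for human review:

    def audit_model(model, test_cases, minimum_accuracy=0.95):
        """test_cases is a list of (inputs, expected_decision) pairs reviewed by humans."""
        correct = sum(
            1 for inputs, expected in test_cases
            if model.predict([inputs])[0] == expected
        )
        accuracy = correct / len(test_cases)
        if accuracy < minimum_accuracy:
            # In a real deployment this would raise an alert for human review.
            print(f"WARNING: model accuracy fell to {accuracy:.1%} on the audit set")
        return accuracy

    # Run on a schedule (e.g. daily) so that a continuously learning system that
    # drifts into bad decisions is noticed, rather than discovered by its victims.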

On the other hand, machine learning can be very useful when it is focused on a narrow task with tightly constrained inputs and the validity of the outcomes can easily be evaluated, particularly when the steps needed to solve the problem cannot easily be expressed as explicit logic. Image analysis is a good example of a class of problem that is already being widely solved with machine learning techniques. Decisions that involve forecasting based on statistical probabilities are also good candidates for machine learning.

This creates some very significant governance and commissioning concerns for any system that incorporates machine learning into decision making.

In commissioning scenarios in particular, ownership of (or access to) the training dataset, and responsibility for ongoing training of the black box, are going to be critically important considerations. In the same way that we rely on educational curricula and standardised testing to ensure a known level of competency in people, each AI component that handles critical decisions may end up needing careful ongoing oversight.

Traditional computer software applications have shaped workflows in business and government for decades, but very few organisations have experience with workflows that incorporate machine learning. Where AI has been embedded, significant community concerns have already been raised around privacy.

2019 has already seen multiple whistle-blower reports relating to corporate use of AI technologies. Human contractors have reportedly been tasked with listening to recordings of private conversations captured from users of widely used AI-powered services, including Google Assistant, Apple Siri, Amazon Alexa, and Facebook Messenger Chat. In each case, the companies involved have essentially indicated that the contractors were verifying the quality of AI decisions.

It is clear that verification of AI work will be a major ongoing issue for large organisations that implement these technologies. The recent AI scandals have been discussed primarily in relation to privacy concerns, without much consideration of the outcomes of the decisions those AI systems are actually making.

Stakeholder and public trust can easily be shattered, particularly when decision-making processes cannot be properly explained. If people are going to trust AI-powered organisations, leaders will need to improve managerial visibility and governance of their AI-dependent technology systems.