
Opinions of Sunday, 16 July 2023

Columnist: Bright Simons

Run a social org? You have a rendezvous with AI

Bright Simons is the author

Michael Porter, a famous thinker on 20th-century business strategy, has argued that generating economic outcomes through the pursuit of social change is becoming the new norm for business.

Whilst waiting for capitalism to reinvent itself and for more social value to be created and shared alongside each output of economic production, today’s socially oriented organisations must survive and hold the fort.

Around the world, donations are falling, from Malaysia to Hungary; the “capacity crunch” is biting; talent is decamping, with a whopping 45% of employees in the US and 51% of UK third sector fundraisers looking to jump ship; and missions are drifting.

It is called the “starvation cycle”, a phenomenon whereby many non-profits, charities, NGOs, social enterprises and other social organisations are denied the core resources to build capacity, and are thus also denied a path to sustainability. Even in rich, progressive societies like New Zealand, the result has been a 45% decline in survival rates over the past six years.

The root cause of most of these challenges is constrained capacity. The Innovation Network, a best practices research body, conducted a series of studies into the capacity of non-profits and other social organisations to manage program evaluation between 2010 and 2016. The findings revealed very worrying resource constraints.

For instance, only 8% of organisations can afford dedicated evaluation staff, down from 25% in 2012. Yet, for their funders and other major stakeholders, impact evaluation is the most critical yardstick for trust-building and an absolute necessity to sustain a supporting relationship. But when 84% of nonprofits spend less than the recommended levels on evaluation (up from 76% in 2010), corner-cutting is inevitable.

The inability of social organisations to retain the right staff and invest in the systems needed for critical functions like evaluation feeds into severe fundraising limitations, which perpetuate persistent funding shortfalls. For example, 87% of nonprofits surveyed by the Innovation Network do not benefit from pay-for-performance opportunities, with 43% lacking the capacity altogether. Some 79% of respondent organisations say lack of personnel time is the biggest barrier.

Meanwhile, competition for grants and other flexible funding is intensifying rapidly. According to the Society for Non-profits, only 7.5% of grant applications succeed.

At first glance, capacity crunch and personnel time constraints of this nature seem perfectly made for digital intervention. A whole software industry has been built on the productivity needs of business. Social organisations, it would seem, only need to reach for it. Unfortunately, only 16% of such organisations can be classified as “digitally mature”. A Hewlett Foundation “field scanning” report on social organisations’ capacity issues reinforced this fact by placing at the very top of its list of critical gaps: “Technology access, digital security, and overcoming the digital divide.”

The spectacular rise of ChatGPT in the public consciousness in the last few months heralds a new era of digital technology, one where the significant barriers to classical technology adoption have been substantially lowered, paving the way for social organisations to transcend many of their current limitations.

That ChatGPT promises to democratise access to Artificial Intelligence (AI), the most productivity-focused segment of the digital spectrum, is an added bonanza. Imagine the prospect of evaluation reports, fundraising pitches, annual statement drafts and complex project financial analysis all generated in a few minutes by non-specialist interns. Imagine the impact on non-profit capacity.

Even more intriguingly, ChatGPT may offer capabilities for actual service delivery, not just for internal organisational development. For example, a recent report by the global consultancy McKinsey claimed that “[t]hrough an analysis of about 160 AI social impact use cases,” they have “identified and characterized ten domains where adding AI to the solution mix could have large-scale social impact. These range across all 17 of the United Nations Sustainable Development Goals and could potentially help hundreds of millions of people worldwide.”

Consider the case, for instance, of Liberia, which has just 300 doctors at home for a population of 5.3 million (up from 25 at home in 2000 when the population was ~3 million). There are two psychiatrists, six ophthalmologists, eleven pediatricians, and zero – yes a grand zero – urologists. Imagine the good that an organisation like Last Mile Health can do if an AI system could equip its community health workers to perform at near specialist-level. It is at this point that it gets a bit more complicated.

The idea of getting an AI system to perform specialist-level tasks, hopefully at lower cost and with greater productivity, is an old pursuit, most prominently in a branch of AI called “expert systems”. The government of Japan, for instance, tried for two decades (from 1982 onwards) to take expert systems mainstream by dramatically improving their interface with ordinary humans to thoroughly lacklustre effect. Their use has been confined to enterprise ever since.

However, in 1992, a Management Information Systems expert called Sean Eom surveyed enterprise deployments of expert systems and found strong concentration in operations optimisation and finance, the same areas of emphasis presented above for social organisations today.


Results of Sean Eom’s Survey (1992)

Today, the original expert systems model is no longer very fashionable. Competing branches of AI like machine learning and natural language processing (NLP) have all but taken over. ChatGPT belongs to the NLP branch of AI, in particular a domain known as “large language models” (LLMs), which is fast catching up with machine learning as the dominant form of AI-based productivity enhancement.


The positioning of Expert Systems vs the NLP boom that led to LLMs, Transformers and eventually ChatGPT.
Source: Chethan Kumar (2018)


Many of the most promising AI toolkits for addressing the capacity crunch of social organisations, and by implication the starvation cycle, are LLM-based, and the signs suggest that this trend will intensify because of ChatGPT’s cultural influence.

It is thus critical for social organisation leaders to understand the limitations of the LLM-paradigm and develop their early-stage AI strategies cognisant of those limitations. Below we outline a simple high-level rubric to help leaders think through their organisational situation.

Do you have an “Expert Network”?

In a recent essay, I emphasised the point that LLMs and similar big data-driven systems are about statistical averages. They take snapshots of internet-scale caches of data and then make safe bets as to the most likely answer to a query. The best experts are, however, top-notch for the very reason that they generate insights that are often skewed against the average. They exhibit positive bias.

A social organisation must thus identify the most critical knowledge domains underlying their expertise and unique contributions to society, and adopt a low-cost means to curate a wide enough digital network of experts whose work touches on those domains. Web-crawlers and other simple data miners can be used to track the bets, predictions, assessments and scenarios regularly generated by this network.
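
As an illustration, a minimal sketch of such a tracker is shown below. It assumes the curated experts publish through blogs or journals with RSS/Atom feeds and uses the third-party feedparser package; the feed URLs and keyword list are hypothetical placeholders, not a recommendation of specific sources.

```python
# Minimal sketch of tracking an expert network's public output.
# Assumes the curated experts publish via RSS/Atom feeds; the feed URLs and
# keyword list below are purely illustrative.
# Requires the third-party "feedparser" package (pip install feedparser).

import feedparser

EXPERT_FEEDS = [
    "https://example-health-economist.org/feed.xml",  # hypothetical expert blog
    "https://example-evaluation-lab.org/rss",         # hypothetical research group
]

DOMAIN_KEYWORDS = {"community health", "impact evaluation", "primary care"}

def harvest_expert_claims(feeds=EXPERT_FEEDS, keywords=DOMAIN_KEYWORDS):
    """Collect recent posts from the expert network that touch the
    organisation's critical knowledge domains."""
    matches = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
            if any(kw in text for kw in keywords):
                matches.append({
                    "source": url,
                    "title": entry.get("title", ""),
                    "link": entry.get("link", ""),
                    "published": entry.get("published", ""),
                })
    return matches

if __name__ == "__main__":
    for item in harvest_expert_claims():
        print(item["published"], "|", item["title"], "|", item["link"])
```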

Leaders should, however, bear in mind that this is not about an “advisory council” and there is no need for a formal relationship to exist with these experts at this stage.

Are you training “Knowledge Analysts”?

All LLMs are prompt-dependent. For ChatGPT or similar systems to generate the right sequence of answers and then piece them together, someone skilled in conversing with bots is required. A good knowledge analyst enjoys the “insight loop” game where learning grows by subtle shifts in repeated articulations.

For example, a detailed review of ChatGPT’s recent performance on a Wharton MBA test emphasised the critical importance of “hints” from a human expert in refining the bot’s responses. Because these are uncharted waters, organisations must experiment to find the right personnel fits and alignments.
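
To make the “insight loop” concrete, here is a minimal sketch of how a knowledge analyst might fold expert hints back into repeated prompts. It assumes the pre-1.0 openai Python package with an OPENAI_API_KEY set in the environment; the model name, task and hints are illustrative only, not a prescription.

```python
# Minimal sketch of a knowledge analyst's "insight loop": ask, inspect,
# add an expert hint, and ask again. Assumes the pre-1.0 "openai" Python
# package and OPENAI_API_KEY in the environment; model, task and hints
# are illustrative placeholders.

import openai

def _complete(messages):
    """Send the running conversation to the model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response["choices"][0]["message"]["content"]

def refine_with_hints(task, hints):
    """Generate a first draft, then revise it once per human-expert hint."""
    messages = [{"role": "user", "content": task}]
    draft = _complete(messages)
    for hint in hints:
        messages.append({"role": "assistant", "content": draft})
        messages.append({"role": "user", "content": f"Revise the draft. Hint: {hint}"})
        draft = _complete(messages)
    return draft

final_draft = refine_with_hints(
    task="Draft a two-paragraph impact summary of our community health programme.",
    hints=[
        "Quantify outcomes using the baseline figures from the 2022 evaluation.",
        "Flag any claim that is not backed by our own monitoring data.",
    ],
)
print(final_draft)
```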

Do you have a “Knowledge Base”?

Systems like ChatGPT are all about presentation. They are trying to write like an averagely educated human. Their trainers scour the open internet and other easily accessible databases to assemble large samples of human writing to drive this meta-mimicry effort.

For LLMs to become truly useful, beyond spouting plain-vanilla generics (with the occasional giant blooper), they will need access to more specialised sources of data with built-in reinforcement loops to emphasise what matters most.

In a 2019 working paper, I broke down the generic structure of modern computing into “data”, “algorithms” and “integrations”, and explained why integrations are the real driver of value.

Integrating with all manner of random data troves will, however, eventually constrain automation efficiency because of vastly differing rules of access.

Social organisations can address this issue by formalising the stock of data they and their partners carry, specifying rules of access, and enabling secure integration to create trusted knowledge bases on which open-source LLM algorithms can train to deliver bespoke content for services. By so doing, they address the ancient “knowledge bottleneck” problem in expert-system-mimicking platforms like ChatGPT.
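
What “formalising the stock of data” and “specifying rules of access” could look like in practice is sketched below, purely as an illustration: a small registry that records each dataset’s owner, licence and permitted uses, which an LLM pipeline would consult before retrieval or training. All dataset names, partners and licences here are hypothetical.

```python
# Minimal sketch of a "trusted knowledge base" registry: each dataset an
# organisation (or partner) holds is recorded with explicit access rules
# before any LLM pipeline is allowed to touch it. All names, licences and
# partners are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    owner: str                     # organisation or partner holding the data
    licence: str                   # e.g. "CC-BY-4.0", "internal-only"
    contains_personal_data: bool
    allowed_uses: set = field(default_factory=set)  # e.g. {"retrieval", "fine-tuning"}

class KnowledgeBase:
    def __init__(self):
        self._records = {}

    def register(self, record: DatasetRecord):
        self._records[record.name] = record

    def usable_for(self, purpose: str):
        """Return datasets whose access rules permit the given purpose,
        excluding anything with personal data from model training."""
        return [
            r for r in self._records.values()
            if purpose in r.allowed_uses
            and not (purpose == "fine-tuning" and r.contains_personal_data)
        ]

kb = KnowledgeBase()
kb.register(DatasetRecord("field_survey_2022", "OurOrg", "internal-only",
                          contains_personal_data=True, allowed_uses={"retrieval"}))
kb.register(DatasetRecord("partner_eval_reports", "PartnerNGO", "CC-BY-4.0",
                          contains_personal_data=False,
                          allowed_uses={"retrieval", "fine-tuning"}))

print([r.name for r in kb.usable_for("fine-tuning")])  # -> ['partner_eval_reports']
```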

Are you open to Architectural Redesign?

Effective use of LLM-type AI does require the organisation to increase its knowledge-centricity, that is, its overall vigilance about where unique insights, real expertise and learning loops are embedded in its processes.

Doing this effectively may see generalist knowledge analysts acquire greater importance than some hallowed specialists, even threatening certain hierarchies essential to the integrity of the organisation.

Such power shifts and socio-political imperatives require serious strategic commitment from the most senior management.

Are you conscious of the emerging Service Provision Network?

The current approach to building LLM training sets is to take data from anywhere and mash it together without regard for intellectual property rights. Already, several Big Data-AI companies, such as Stability AI (the maker of Stable Diffusion), are facing lawsuits for trying to externalize their costs. And OpenAI, the maker of ChatGPT, has been excoriated for relying on poorly paid data-labelling workers in Africa.

Going back to the Data-Algorithm-Integrations (DAI) framework introduced earlier, leaders need to take care not to source systems without carefully disaggregating those three nodes.

Emerging AI platforms targeting the social sector, like FundWriter.ai, arrive on organisational premises with pre-set models from which they generate outputs like reports. Apart from the embarrassment of potential plagiarism, IP infringement is possible. The DAI framework favours the approach being taken by platforms like Levity, which provide more tools for organisations to develop their own proprietary knowledge bases.

Creating knowledge-pools with like-minded organisations around the world under trusted licensing frameworks, and implementing IP risk screening layers, should consume far fewer resources than would have been the case just a few years ago. This is due to the fast-dropping costs of Integration-as-a-Service platforms, which now enable large organisational networks to integrate data resources and even the workflows that generate data.

The data privacy, cybersecurity, and infrastructure concerns arising in these contexts are also often taken care of by these systems out of the box, though organisations should always engage the vendors at every step of deployment.
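
As a purely illustrative sketch, an IP risk screening layer for such a knowledge-pool could start as simply as checking each contribution’s licence and provenance against whatever the participating organisations have agreed under their licensing framework; the licence whitelist and record format below are hypothetical.

```python
# Minimal sketch of an IP risk screening layer for a shared knowledge-pool.
# The licence whitelist and contribution format are hypothetical placeholders
# for whatever the participating organisations agree under their framework.

APPROVED_LICENCES = {"CC-BY-4.0", "CC0-1.0", "pool-members-only"}

def screen_contribution(contribution):
    """Return (accepted, reason) for a candidate addition to the pool."""
    licence = contribution.get("licence")
    if licence not in APPROVED_LICENCES:
        return False, f"licence '{licence}' not covered by the pool's framework"
    if not contribution.get("provenance"):
        return False, "missing provenance: cannot verify the contributor owns the rights"
    return True, "accepted"

accepted, reason = screen_contribution({
    "title": "Community health worker training manual",
    "licence": "CC-BY-4.0",
    "provenance": "authored in-house by PartnerNGO, 2021",
})
print(accepted, reason)
```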

Final Word

Whilst LLM-based, expert-system-mimicking platforms like ChatGPT are still in their infancy, they offer strong hints of how AI will transform organisations. Due to strong capacity constraints, social organisations have not taken full advantage of the current digital boom. They have a duty to their tens of millions of beneficiaries not to let the coming AI bonanza pass them by.