By Laurie Styron, Executive Director of CharityWatch, as originally published in Taxation of Exempts (July/August 2021), a Thomson Reuters journal.
Criticizing the use of financial ratios as a measure of a charity’s performance, and thereby its worthiness for donations, is de rigueur among nonprofit fundraisers, trade associations, and some of the charities they represent. Overhead should not be a dirty word for donors, many argue, because investing in things like technology and staff training is essential for effectively carrying out a charity’s mission and maximizing its impact.
While
this seems like a reasonable argument on the surface, it is not a totally
honest one. A wide chasm exists between what the word “overhead” is commonly understood to mean in layperson’s terms and how overhead is defined, allocated, and reported in a charity’s IRS Form 990—the document from which charity financial efficiency ratios are often derived. Many expenses
cited in arguments against overhead ratios are not even considered overhead for
purposes of these computations. In addition, measuring program effectiveness in
a meaningful way is a notoriously difficult, time-consuming, and expensive
task.
While
an individual charity might possess the high level of expertise and other
resources necessary to invest in a substantive, ongoing review of its own
impact results, no third-party charity rater or trade association has the
expertise or resources necessary to verify the self-reported impact claims of
even a small fraction of the hundreds of thousands of public charities
registered with the IRS today. For this reason, financial efficiency measurements continue to play an
important role in helping donors identify worthy charities to support.
Before
donating to charity, many donors want to know what portion of their donation
the nonprofit will spend on programs versus overhead. And this makes sense.
Unsurprisingly, those who most want to deemphasize financial efficiency ratios
as a means of identifying worthy charities to support are not the donors who
are giving the money, but the nonprofits that are asking for it.
What is “The Overhead Myth”?
Charities
and nonprofit fundraisers often reference two industry-endorsed public letters,
collectively entitled The Overhead Myth, when arguing that donors should place less emphasis on how efficiently their contributions will be spent. The letter addressed to nonprofits criticizes the “spiral of donor demands” that helps to perpetuate a “Nonprofit Starvation Cycle” in which nonprofits underinvest in core costs.
Nonprofits should stop reinforcing “funders’ confusion,” according to The
Overhead Myth, by employing “effective performance management systems”
instead of highlighting their financial efficiency ratios as a core
accomplishment in their fundraising pitches.
Busting the Myths
While
it is unquestionably true that a charity’s financial efficiency is not the only
variable a donor should consider when making giving decisions, suggesting that
donors who want to know how efficiently their money will be spent are in a
state of “confusion” sends a pretty clear message that the industry would
prefer for donors to stop asking about it altogether. It is time to bust the
myths of The Overhead Myth.
Myth that Charity Overhead Includes Any Expense Other Than Grants or Direct Program Costs
One
myth that needs busting is the common misconception that charity overhead
includes all spending on things like salaries and benefits, training of program
staff, rent and mortgage payments, utilities, conferences, travel, technology,
and basically any expense other than grants or direct program costs. Nonprofit
executives and their accounting staff know this is not true. Donors, the general
public, and many other nonprofit staff members do not.
Charities
exploit this misunderstanding when they argue that overhead ratios are a
reductive, Machiavellian tool designed to prevent them from effectively
carrying out their missions. A charity will describe, for example, how a
competent and well-trained program staff, or upgraded computers and software
used in its programs, have significantly improved its ability to effectively
deliver on its goals.
Of
course, these charities conveniently fail to mention that direct and indirect program spending is not considered overhead and is already included in program expense, not overhead expense, in their financial reporting. Meaning, the
overhead ratios charities are telling donors to largely ignore do not even
include most of the expenses they are citing as being mission critical.
When
a charity falsely claims it is being unfairly judged on the basis that common
program expenses are considered overhead, this is at best a reflection of
ignorance about very basic nonprofit financial reporting rules. At worst, it is
a bait-and-switch tactic intended to manipulate donors into
thinking that material amounts of its program spending are reported as
overhead, and that donors should ignore a charity’s financial efficiency ratios
on this basis.
In
their annual tax Forms 990, charities are required to allocate operating
expenses among the three categories of program, management and general
(M&G), and fundraising. Direct cost reporting is straightforward. For
example, grant expenses and program staff salaries are allocated 100% to
program, Directors & Officers (D&O) insurance 100% to M&G, and
professional fundraising fees 100% to fundraising. Indirect costs are allocated
based on which of the three functions they serve, with employee time used as a
typical allocation base for many types of expenses.
For
example, if a charity executive spends 50% of their time carrying out the
charity’s programs, 20% on accounting and management functions, and 30% on
fundraising activities, their salary and benefits will be allocated among those
three functions commensurately. Of the expenses cited, the ones that would be counted as overhead are the D&O insurance, the professional fundraising fees, and the 50% of the executive’s salary and benefits attributable to M&G and fundraising. None of the salaries and benefits of
the program staff would be included in the charity’s reported overhead
spending.
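To make these allocation mechanics concrete, below is a minimal Python sketch of the functional expense allocation described above. The expense amounts and the 50/20/30 time split are hypothetical and chosen only to mirror the example in the text; actual Form 990 reporting involves many more expense types and allocation bases.

```python
# Illustrative functional expense allocation (all figures hypothetical).
# Direct costs map 100% to a single function; indirect costs (here, an
# executive's compensation) are split by the share of time spent on each
# function, mirroring the 50/20/30 example in the text.

TIME_SPLIT = {"program": 0.50, "management_general": 0.20, "fundraising": 0.30}

direct_expenses = {
    "grants": ("program", 500_000),
    "program_staff_salaries": ("program", 300_000),
    "d_and_o_insurance": ("management_general", 10_000),
    "professional_fundraising_fees": ("fundraising", 90_000),
}

executive_compensation = 200_000  # indirect cost, allocated by time spent

totals = {function: 0.0 for function in TIME_SPLIT}

for function, amount in direct_expenses.values():
    totals[function] += amount

for function, share in TIME_SPLIT.items():
    totals[function] += executive_compensation * share

total_spending = sum(totals.values())
overhead = totals["management_general"] + totals["fundraising"]

print(f"Program spending:  ${totals['program']:,.0f}")
print(f"Overhead spending: ${overhead:,.0f}")
print(f"Overhead ratio:    {overhead / total_spending:.1%}")
```

Under these assumed figures, the overhead ratio works out to roughly 18%, because the program staff salaries and the program share of the executive’s time never enter the overhead number at all.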
Myth that a Charity’s Financial Efficiency is of Little Value
Another
myth that needs busting is the idea that a charity’s financial efficiency is of
little value because its ability to achieve its end goals is ultimately what
matters. If a charity spends 65% of its budget on fundraising and management
expenses and only 35% on its programs, this high overhead spending should not
matter, some charity fundraisers argue, if this gives it the ability to raise
enough funds to ultimately cure cancer or eliminate world hunger.
This
line of reasoning has three major flaws. First, citing the success of an
extreme outlier (such as a charity with high overhead curing cancer) and
suggesting that this outlier is statistically representative of the success
that will occur for the entire data set (all charities) is an extrapolation
error. A charity achieving an impact goal that drastically improves life as we
know it is an outlier event, so suggesting that all charities can justify
unreasonably high overhead costs on the basis that such an event might occur is
logically flawed. Charitable giving in 2019 amounted to about $450 billion. If nonprofits in aggregate maintained a 40% to 65% overhead ratio when spending
these funds, for example, that would amount to between $180 billion and $292.5
billion spent on fundraising and management expenses.
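For readers who want to trace the arithmetic, a short Python sketch using the approximate 2019 giving figure cited above reproduces the dollar range:

```python
# Rough arithmetic on aggregate giving (approximate 2019 figure cited above).
total_giving = 450e9  # about $450 billion

for overhead_ratio in (0.40, 0.65):
    print(f"At {overhead_ratio:.0%} overhead: "
          f"${total_giving * overhead_ratio / 1e9:,.1f} billion "
          f"on fundraising and management")
```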
This
leads us to the second flaw in this reasoning, which is that if charities with
high overhead were up front with donors about how little of their donations
will be spent on the charity’s programs, many donors would refuse to give. Any
system that relies on either intentionally misleading donors or withholding
decision-critical information from them is not an ethical system.
Finally,
a charity’s ability to maintain reasonable overhead spending is an important variable
that affects the balance of resources it has available to spend on maximizing
its program impact. Financial efficiency may not guarantee any specific outcome
for any individual charity, but neither does financial inefficiency. Of the
two, investing the majority of the $450 billion in annual giving in program
activities and encouraging charities to keep their overhead spending reasonable
certainly makes achieving outcome goals more likely.
Some
for-profit charity consultants and professional fundraisers are particularly
fond of engaging in mental gymnastics to try to convince donors that near-unlimited spending on fundraising, a component of overhead, is good for charities and for the nonprofit sector as a whole. One way they do this is by
citing return on investment (ROI) ratios commonly used to measure the
performance of for-profit companies. Stock market returns historically average
only about 10% per year, so a charity spending $50, $60, or even $70 to raise each $100 in public
support, they posit, is an ROI that should be cause for celebration. When this
position is challenged on the grounds that improving fundraising efficiency
would free up more resources to be used on programs, the common response from
fundraisers is that growing the total pie, rather than using the existing pie
more efficiently, is a better solution.
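To see why the stock-market comparison misleads, consider the short sketch below. The cost-to-raise figures are hypothetical, drawn from the ranges mentioned above, and simply show how little of each $100 raised remains for anything other than fundraising:

```python
# Hypothetical dollars spent to raise each $100 in public support.
for cost_to_raise in (10, 50, 60, 70):
    remaining = 100 - cost_to_raise
    print(f"Spend ${cost_to_raise} to raise $100 -> "
          f"${remaining} of every $100 left for programs and other costs")
```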
However,
the nonprofit sector is not the stock market, and nonprofit organizations are
not for-profit businesses. Giving has remained steady at about 2% of gross
domestic product since the mid-twentieth century. Because charitable giving is a relatively
fixed pie, fundraising and overhead costs necessarily eat into the resources
available to be spent on charities’ programs. Growing the giving pie enough to
make up for the unnecessary waste many self-interested fundraisers advocate for
is not only highly unlikely; counting on it amounts to a morally bankrupt breach of donors’ trust.
An
investor measures the success of their investment based only on how much money
it generates, and those returns inure to the benefit of the investor. A donor
measures the success of their donation based on the extent to which it is
efficiently and effectively used to forward the cause the donor is intending to
support. The psychological benefit of money well spent by a charity may inure
to the donor, but the real benefit inures to the cause, which might be
expressed in terms of animals rescued, homeless people housed, scholarships
awarded, or environment protected.
A
charity may need to steadily increase its revenue to account for inflation or
engage in capital campaigns to expand the scope and scale of its programs, but
the mere act of generating as much revenue as possible from one year to the
next is not how the success of a charity is measured. Those charities that are
unable to keep their fundraising costs reasonable should get out of the way and
let more efficient charities working in the same cause put the nonprofit
sector’s limited resources to better use.
Donors
should not ignore fundraising overhead and be happy with low returns on their
investments based on a pipe dream some fundraisers have of one day growing the
total giving pie large enough to compensate for unnecessary inefficiencies. These inefficiencies, despite what many fundraisers may say, are designed to benefit the fundraisers, not charities.
Deemphasizing
the importance of quantitative measurements like financial efficiency and
replacing them with amorphous and largely qualitative ones that attempt to
measure impact is attractive to charities with high overhead, in part, because
impact measurements are so much easier to manipulate. When a charity hires consultants with
expertise in its specific cause area or invests in other research for the
purpose of evaluating and improving the impact of its programs, this can help
the charity to operate more effectively.
Alternatively,
when impact evaluations of questionable quality and objectivity are instead
conducted primarily for the purpose of circulating the results to funders or
promoting them in online charity databases for public view, they quickly lose
their value as self-evaluation tools and become little more than an extension
of a charity’s marketing strategy. Many charities are unlikely to broadly
circulate reports reflecting that they utterly failed at meeting their impact
goals when it is so easy to simply move the goalposts and instead claim that
goals were met or exceeded.
Unlike
financial efficiency, program impact is notoriously difficult to measure and
objectively convey for purposes of comparing the effectiveness of one charity
against another working in the same cause.
For example, one charity committed to addressing world hunger and food
insecurity may help fewer total people and distribute fewer pounds of food each
year compared to a different hunger charity because it primarily operates in
war-torn regions or those with extremely limited infrastructure. A second
charity may be able to help twice as many people and distribute twice as much
food due to working in parts of the world with more reliable food supply and
distribution channels. Either charity could use the quantifiers of number of
people served or pounds of food distributed to compare how its own impact has
changed over time, but a donor could not fairly use these measures to compare
the two charities against one another for purposes of deciding which charity is
more worthy of their contributions.
Deeply
analyzing the programs of a number of nonprofits working in the same cause and
determining which ones are having the most impact can be done, but doing this
in a meaningful way typically requires consulting with top experts in the cause
area and conducting in-depth reviews of the charities’ programs over many years.
Academic and research institutions, large foundations, or independent impact
evaluation organizations like GiveWell, for example, may be equipped to invest
the time and resources necessary to periodically produce high quality white
papers, effectiveness studies, or thoughtful recommendations on a limited
number of nonprofits or causes.
Scaling
up this process to provide ongoing, high quality impact reports on tens of
thousands of charities would require resources far beyond what any of these
institutions could provide. Even spending as little as three hours of analysis
time per charity in a cursory attempt to verify the accuracy and completeness
of the impact data charities report about themselves would require 30,000 hours
of analysis time if a charity rater wanted to publish data on as few as 10,000
charities.
Online Databases and Their Limitations
A
number of large, online databases and crowdsourcing websites exist that
encourage charities to upload information about themselves, such as descriptions
of their programs or self-conducted impact evaluations, as a means of improving
their ratings—practices more in line with those of an industry trade
association than those of an independent rating or watchdog organization. In
some cases, a charity’s simple act of adding data about itself to these
websites results in a near immediate rating improvement. Meaning, the adding of
data is essentially treated as an end unto itself—the data in many cases has
not been adequately scrutinized by the charity rater before being published and
incorporated into a charity’s rating profile.
While
these websites add an element of convenience for donors by housing large
volumes of charity information in one centralized place, this data is of
limited use to donors if what they are seeking is independently vetted
information that goes beyond what is already available on the websites of most
charities. Incorporating a rigorous vetting of charities’ self-reported impact
reports and other information into this process would be very difficult given
the volume of data involved, which may include profiles on tens of thousands,
if not hundreds of thousands of nonprofits.
Databases
of this size have historically been used in statistical modeling for purposes
of identifying trends, making predictions, or drawing conclusions about data
subsets within a relevant range and prescribed margins of error. Data
clearinghouses exist as a mechanism for supplying large amounts of raw data to
academic or research institutions with the time and expertise necessary to
analyze and convert that raw data into useful information. In more recent
history, online aggregator and crowdsourcing websites have provided the public
with a way to organize and share information that users generally understand
has not been vetted for accuracy, completeness, or comparability.
Each
of these methods of providing data to the public has its own strengths, drawbacks, and fitness for a particular purpose. Unfortunately, some
publishers of online charity databases seem unclear about exactly which type of
information source they are trying to be at any given moment and are not
inclined to clearly communicate the contextual limitations of the data being
presented. Some simultaneously present themselves as charity watchdogs and
independent raters for donors on the one hand, and additional marketing
avenues, fundraising vehicles, and trade associations for charities on the
other. Donors are encouraged to disregard overhead ratios at the same time
nonprofits are encouraged to spend more on overhead and are actively taught how
to game the very rating systems that purport to be overseeing them.
It
is understandable that charities like having the ability to quickly improve
their own ratings without having to change much, if anything, about how they
are actually operating. Even for efficient and effective organizations with
great reputations, competition for donations is fierce. Not taking every
opportunity to improve donor-facing data, especially when that data could rank
high in search engine results, could almost be considered a sign of
incompetence in this information era.
However,
a tool in the wrong hands quickly becomes a weapon. Charity rating systems that
are too easy to game can be misused by bad actors within the nonprofit sector
to give donors a false sense of security that their donations will be used
efficiently and effectively when this may not be the case.
The
hidden danger is that as more and more charities get in on this ratings game,
the good charities will start to become indistinguishable from the bad ones.
The nonprofit sector suffers from painfully little oversight and lacks practical mechanisms to quickly weed out bad actors before they are able, in some cases, to bilk tens of millions of dollars from unsuspecting donors. The financial data charities report in their annual tax filings is easy to manipulate and, if not properly analyzed in conjunction with audited financial statements before being incorporated into charity ratings, only exacerbates a donor’s inability to understand which charities are worthy of their support. If everybody gets a trophy, having a trophy
does not make you special anymore. The bad players become indistinguishable
from the good ones.
The
nonprofit sector’s constant promotion of the idea that overhead ratios should
practically be ignored threatens to diminish one of the most effective tools
the average donor has to avoid charity scams and predatory fundraisers. The
Overhead Myth letter addressed to donors does concede that bad actors
within the sector exist when it states, “At the extremes the overhead ratio can
offer insight: it can be a valid data point for rooting out fraud and poor
financial management.” But this letter does a disservice to donors by failing
to convey the scale at which these “extremes” exist.
For
example, the Federal Trade Commission (FTC), in conjunction with 46 agencies in
39 jurisdictions, filed a Complaint (Federal Trade Commission, et al. v.
Associated Community Services, Inc., et al.) in 2021 in which it accused a number of for-profit charity fundraisers of
making “abusive, unsolicited, [and] deceptive fundraising calls to hundreds of
millions of Americans. Through more than 1.3 billion fundraising calls to more
than 67 million unique telephone numbers, Defendants sought to extract money
from donors by making deceptive claims about practically nonexistent charitable
programs. Defendants knowingly duped generous Americans into donating tens of
millions of dollars to nonprofit organizations…” An FTC press release citing
the Complaint states that “the defendants conducted an invasive robocall
onslaught and kept the lion’s share of the more than $110 million of consumers’
contributions – as much as 90 cents out of every dollar donated.” If there is a lesson to be taken from this, it is that donors should be
encouraged to pay more attention to overhead ratios before donating to
charities, not less.
We
should all be willing to concede that some individual donors and institutional
funders do focus too much on overhead ratios. Donating to one charity over
another solely because one spends 21% of its budget on overhead and the other
spends 23% is not a reliable method for identifying which charities are making
the biggest impact in forwarding their respective causes. Foundation and
corporate donors may be inclined to legally restrict their donations for a
specific program purpose, sometimes forcing grant recipients to scramble for
unrestricted funding from other sources to cover the overhead costs related to
fulfilling the conditions of these restricted grants.
Conclusion
The
Overhead Myth letter addressed to donors gets it right when it says that
“focusing on overhead without considering other critical dimensions of a
charity’s financial and organizational performance does more damage than good.”
But overhead ratios do serve as an important starting point for donors as a
means of narrowing down which charities working in a particular cause are using
their resources efficiently. Deemphasizing the importance of ratios used for
this purpose also does more damage than good given that the more subjective
measures of effectiveness cannot replace the function these ratios serve in
weeding out bad actors and poorly performing charities.
Foundations
and wealthy individuals looking to make large grants to improve or solve a very
specific problem tend to treat their funding decisions more like business
plans. Such donors may rightly fund their own impact studies of charities
working in a particular cause prior to making any major grant decisions. Others
may set aside funding within their grants so that grantees can continuously
evaluate and improve the effectiveness with which they carry out related
programs.
While this is a good practice for large funders, the average donor
does not have the time, interest, or financial resources to approach their
giving decisions this way. The public is bombarded with large volumes of
charity solicitations that often contain emotionally charged messages and
images designed to elicit visceral responses from potential donors that cause
them to give quickly and generously.
While
large grantmaking foundations may have a full-time staff whose entire jobs
consist of reviewing grant proposals and screening potential grant recipients,
the average individual donor does not have the ability to vet charities in a
similar way each time they encounter a charity solicitation. High-quality
financial efficiency ratios can play a significant role in helping these donors
avoid contributing to predatory fundraisers and charities that will misuse
their donations.
What
could have been a meaningful public discourse on understanding the limitations
of charity financial efficiency ratios and mitigating their use as the sole
deciding factor in donors’ giving decisions has instead been appropriated by
nonprofit trade associations and big data websites in ways that prioritize the
desires of nonprofits over the needs of donors. A charity’s desire to give the
appearance of being effective and making an impact has been centered at the
expense of the average donor’s ability to understand how to incorporate
properly analyzed financial data into the overall picture of how well a charity
is operating. And that is no myth.