Free speech and a free press have long been recognized as hallmarks of a functioning democracy; in fact, both are enshrined in the First Amendment of the U.S. Constitution. That said, it’s quite unlikely that James Madison and his collaborators foresaw an era in which the widespread dissemination of information — factually accurate or otherwise — could be achieved on a near-global scale and almost instantaneously. The challenges this reality presents are complex and formidable.
While the internet has often been heralded as the “great democratizer,” offering the potential to spread democratic ideals and expand intellectual enlightenment, more nefarious use cases have emerged. Social media platforms have been exploited by political operatives, who have used divisive and misleading content to sow seeds of discord and influence election outcomes. The Cambridge Analytica scandal showed that illicit data harvesting techniques had been employed to psychologically profile and target social media users with often-incendiary political advertising. Meanwhile, some state authorities have used social media platforms to engage in mass surveillance, or drastically influence political discourse by restricting, filtering, banning or censoring online networks.
Of course, we can’t lay the blame entirely at the feet of social media. A recent working paper by University of Chicago economists highlighted the role of traditional media in spreading potentially dangerous misinformation, too: it found that areas with greater exposure to a popular cable news show that had downplayed the threat of the novel coronavirus subsequently experienced a greater number of COVID-19 cases and deaths. News reporters also face new challenges in the digital age: a 2018 report from the Data & Society Research Institute explored the conundrum journalists face in deciding how best to cover “bigoted, dehumanizing and manipulative” messages circulating online. Amplifying such ideas, even in an effort to debunk or dismiss them, can fuel their adoption.
As we’ve seen in the context of the COVID-19 pandemic, a lack of exposure to coherent and trustworthy information can yield chaotic results. A survey of respondents across all 50 U.S. states found that individuals who get their news from social media are more inclined to believe inaccurate information about coronavirus conspiracies, risk factors, death rates, and preventative treatments. Such beliefs can erode faith in global public health organizations, making it harder to contain the disease and, in some cases, producing tragic consequences.
Meanwhile, the spread of misinformation surrounding other topics has led to violence, unrest, and even a resurgence of previously eradicated infectious diseases. As misinformation goes unchecked, and inflammatory posts deliberately pit certain groups against one another, Americans grow more divided than ever. Quite aside from making for less cohesive and harmonious societies, such misinformation can actually lead to material harms — take, for example, the role of hateful and misleading social media content in precipitating serious human rights abuses in Myanmar.
Perhaps unsurprisingly, there has been growing public unease in recent years over the spread of misinformation online and its negative effects on society. A Pew Research survey, released earlier this year, revealed that most Americans think social media companies wield excessive power and influence in our political landscape, with around half believing that major technology firms should be subject to greater regulation. While content sharing and social media platforms have come under mounting pressure to tackle the spread of misinformation, and have responded with a series of measures designed to flag or remove inaccurate or hateful posts, the debate around their culpability (are they impartial forums for free speech, or content publishers?) continues. This poses a regulatory risk for these firms that doesn’t appear to be going away anytime soon.
In the absence of an imminent regulatory solution, investors are expressing growing interest in the ethical and financial implications of harmful online content, and which companies may be complicit in its distribution. Recently, the nonprofit Sustainability Accounting Standards Board (SASB), which works with businesses to establish reporting standards around financially material sustainability issues, indicated forthcoming scrutiny of companies’ content moderation practices. A press release announced that — in part owing to investor concern, and indicators that content moderation practices can yield significant financial impact — it will be evaluating technology companies’ efforts to manage pernicious user-generated content, political advertisements and other third-party content that’s hosted on their platforms.
If you are keen to divert money away from companies that could be doing more to curb misinformation and preserve democracy, there are data points you can turn to. First, consider a company’s known media ethics events: incidents involving breaches of editorial or general media ethics, or violations of content restrictions. Such breaches can present financially material risks, from reputational damage and fines to advertiser and consumer boycotts and legal trouble that can weigh on a company’s balance sheet for years to come.
Investors might also evaluate the quality of a firm’s media ethics program and its corresponding editorial guidelines, which can shed light on a company’s commitment to upholding ethical standards as content is created and disseminated. Robust measures should emphasize transparency, fairness, independence, and a plurality and inclusiveness of viewpoints, and should be promoted from the top down. Organizations tend to have fewer ethical lapses when leadership actively champions ethics programs and provides opportunities for education and problem-solving exercises around ethical best practices.
Many companies still have a long way to go when it comes to instituting strong content governance measures, though. A 2017 analysis of 74 conventional and social media firms found that just 16% had adequate or strong content governance measures in place, while 61% failed to provide satisfactory disclosures about relevant policies. With a record number of Americans now expressing zero confidence in traditional news media, and with distrust of internet platforms precipitating a public “techlash,” content governance metrics will take on greater financial importance.
By enhancing awareness of media ethics issues, and the steps that companies are taking to address them, we can collectively make more informed decisions about where to invest our time and money. With great power comes great responsibility, and media companies of all kinds have both the power and the responsibility to ensure that the content they disseminate or host is fair, accurate, and inclusive. What’s at stake? Potentially, democracy as we know it.
Originally published on MarketWatch
Jay Lipman, a co-founder of Ethic, is driven by the need to address climate and environmental risks with the resources to which we each have unique access. He has been ranked among the Forbes 30 Under 30: Social Entrepreneurs. Born in the UK, he now lives in San Francisco. Previously, he managed the capital of ultra-high-net-worth investors in Deutsche Bank's cross-asset capital markets structuring and sales team.