The open information ecosystem

Media are no longer the deliverers of information. The information has already been delivered. So the question now for journalists is how — and whether — we add value to that stream of information.

In this matter, as in our current crisis, we have much to learn from medicine.

In microcosm, the impact of the new, open information ecosystem is evident in the COVID-19 pandemic as scientists grapple with an avalanche of brand new research papers, which appear — prior to peer review and publication — on so-called preprint servers, followed by much expert discussion on social media. Note that the servers carry the important caveat that their contents “should not be reported in news media as established information.”

Almost to a scientist, the experts I’ve been following on my COVID Twitter list welcome this new availability of open information, for it gives them more data more quickly with more opportunity to discuss the quality and importance of researchers’ findings with their colleagues — and often to provide explanation and context for the public. So far, I’ve seen only one scientist suggest putting preprints behind a wall — and then I saw other scientists argue the point.

Clearly, low-quality information presents a problem. There is the case of the hydroxychloroquine paper with a tiny number of patients and no controls that got into the head and out of the mouth of Donald Trump. But many, many scientists objected to that paper and pointed out its problems, as they should. The weak link in that chain, as in many, is Trump.

A better example of what’s occurring today is the reaction to a SARS-CoV-2 antibody study in Santa Clara County, California — which matters because we still do not know how reliable our counts of infected patients are. I am not nearly qualified to understand it. But as soon as the paper was posted, I saw a string of thoughtful, informed threads from scientists in the field pointing out issues with the study: see Drs. Natalie Dean, Howard Forman, Trevor Bedford, and John Cherian, and grateful reaction to all of them from a scientist all the others respect, Dr. Marc Lipsitch. All of them responded within one day. That is peer review at the speed of the internet.

The tone of their criticism is respectful and backed up with reasoning and citations. Because science. One example, about the paper’s conclusion regarding the infection fatality rate (IFR):

This paper has NOT been peer-reviewed. The data seem robust; the analysis seems appropriate (leave to others to judge); the conclusions re: IFR are a bridge too far.

Credit to @medrxivpreprint for providing access to preprints.

2/x https://t.co/TuAVdVyYey

— (((Howard Forman))) (@thehowie) April 18, 2020

To make this open, rapid system of information functional, scientists are, with admirable dispatch, adapting new methods and models, which brings many requirements:

First: The information needs to be open, of course, and that is happening as SARS-CoV-2 papers are being published by journals outside their high and pricey paywalls. Preprint servers are free. Note also that the EU just announced the establishment of a Europe-wide platform for open sharing of both papers and data on the pandemic. #OpenScience is a movement.

Second: There needs to be some means to sort and discover all this work. Seeing that need, up popped this index to preprints that clusters and maps them around topics; and the COVID-19 Open Research Dataset, which tries to provide organization; and a writer who summarizes 87 pieces of original research published in a week. The volume is heaven-sent but crushing. As a delightfully wry medical blogger named Richard Lehman writes: “Five weeks ago, when I began writing these reviews, everyone was aghast at the challenge of covid-19 and thrilled how it was dynamizing all the usual slow processes of medical knowledge exchange. In the intervening century, we have become more weary and circumspect.”
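To make the clustering idea concrete, here is a minimal sketch of one common approach (TF-IDF vectors plus k-means) applied to a handful of made-up preprint titles. It is not how that index actually works; the titles and the cluster count are invented purely for illustration.

```python
# A minimal sketch of topic clustering for preprint titles.
# This is NOT how any real preprint index works; it is just one common
# approach (TF-IDF + k-means) applied to made-up example titles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

titles = [
    "Estimating the infection fatality rate of SARS-CoV-2",
    "Seroprevalence of SARS-CoV-2 antibodies in a county sample",
    "Hydroxychloroquine in hospitalized COVID-19 patients",
    "Remdesivir treatment outcomes in severe COVID-19",
    "Modeling the effect of social distancing on transmission",
    "Projecting ICU demand under alternative lockdown policies",
]

# Turn each title into a weighted bag-of-words vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(titles)

# Group the titles into three rough topic clusters.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

for label, title in sorted(zip(kmeans.labels_, titles)):
    print(label, title)
```

A real index would work from full abstracts, far better text representations, and far more papers, but the basic move is the same: turn text into vectors, then group the vectors.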

Secondly a lot of papers are coming out, I mean a lot ! To date > 4000 papers from Pubmed search and 1885 #preprint on #bioRxiv and #medRxiv on a virus we knew nothing about 5 months ago is truly incredible. So there are a lot of research going on and it is a challenge to keep up pic.twitter.com/ywoLcHRdd1

— Dr Gaetan Burgio, MD, PhD. (@GaetanBurgio) April 19, 2020

Third: Of course, there need to be mechanisms to review and monitor the quality of the papers. That’s happening almost instantaneously through medical social media, as illustrated above. And there are papers about the papers, cited by Dr. Gaetan Burgio in the thread above. One analyzes the 239 papers on COVID-19 released in the first 30 days of the crisis, separating research science, basic science, and clinical reports. “It is very much like everyone would like to have a go at #COVID19 & we end up with a massive ‘publication pollution,’” Burgio tweets. Some are good, he says, some atrocious; some come from relevant researchers, some not. That is why swift and clear peer review and some level of vetting are important. And that leads to…

Fourth: There needs to be a means for experts to judge experts, for credentialing and review of those rendering judgment of the research. That, too, is happening in the public conversation. As I maintain my own COVID Twitter list of epidemiologists, virologists, infectious-disease physicians, and researchers, it becomes clear from their citations and comments whom they respect. It also becomes clear whom many respect less. This is the question: Whom do you trust? On what basis? For which questions?
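As a toy illustration of how repeated citation can signal respect within such a list, one could simply tally how often the people on a curated list point to one another. This is my own gloss, not any formal method, and the accounts and mentions below are hypothetical.

```python
# A toy sketch of "experts judging experts": tally how often people on a
# curated list cite or amplify one another. Accounts and mentions are
# hypothetical; a real analysis would need far more care.
from collections import Counter

curated_list = {"@epi_alice", "@virology_bob", "@iddoc_carol"}

# (author, account they cited or amplified) -- made-up observations.
mentions = [
    ("@epi_alice", "@virology_bob"),
    ("@iddoc_carol", "@virology_bob"),
    ("@virology_bob", "@epi_alice"),
    ("@epi_alice", "@iddoc_carol"),
    ("@random_pundit", "@virology_bob"),  # ignored: author not on the list
]

# Count endorsements only when both parties are on the curated list.
endorsements = Counter(
    cited for author, cited in mentions
    if author in curated_list and cited in curated_list
)

for account, count in endorsements.most_common():
    print(account, count)
```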

But when the question moves from science to personality, things can get uneasy. One case: Dr. Eric Feigl-Ding has been getting much Twitter traffic and TV airtime for his tweets. Some scientists made a point of telling me that he does not have credentials and experience as relevant as others’. Then followed a deftly critical Chronicle of Higher Education piece about him, which, in a DM to me, he called a hit piece. I’ll leave this to others, more qualified than I am, to judge.

True, there is an ever-present risk of credentialed disciplines endorsing only the members of their tribe. But that credentialing is an institution that has long been central to the academy and to science, necessary to certify credibility in an educated and enlightened society. The granting of degrees and appointments is the best system we have for determining expertise. Especially in these anti-intellectual, science-denying, cognition-impaired times, it is vital that we maintain and support it.

I have been arguing to editors, producers, bookers, and reporters that they should be doing a better job asking the right questions of the most relevant and experienced experts — not, for example, asking a spine surgeon about virology, not giving op-ed space to armchair epidemiologists. This means that journalists — and internet platforms, too — need to make judgments about whom to quote and promote and whom not to. To quote my friend Siva Vaidhyanathan: “I wish journalists were more discriminating when assessing expertise worthy of informing the public. Knowing academic ranks, positions, journals would help. More scientific expertise in the newsroom would be best.” This gets us to:

Fifth: Both scientists and journalists must do a better job explaining science. I’m working with Connie Moon Sehat on our NewsQ project (funded — full disclosure — by Facebook) to formulate definitions of quality in news, starting with science news, so those definitions can be used by platforms to make better judgments in their promotion of content. This should yield measurable standards — for example, whether reporting on a preprint includes views from multiple scientists who are not its authors and whether the credentials of the quoted experts are stated.
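To show what measurable standards might look like in practice, here is a hypothetical rubric of my own, not NewsQ’s, sketched as a check of a story about a preprint against the two criteria just named: independent expert voices and stated credentials.

```python
# A hypothetical sketch of measurable standards for preprint coverage.
# These are NOT the NewsQ criteria, just an illustration of the two
# checks named above: independent expert voices and stated credentials.
from dataclasses import dataclass

@dataclass
class QuotedExpert:
    name: str
    credentials: str       # e.g. "epidemiologist, University X"
    is_paper_author: bool

def meets_standards(experts: list[QuotedExpert], min_independent: int = 2) -> dict:
    """Score a story's sourcing against two simple, checkable criteria."""
    independent = [e for e in experts if not e.is_paper_author]
    return {
        "independent_experts_quoted": len(independent) >= min_independent,
        "credentials_stated": all(e.credentials.strip() for e in experts),
    }

story_experts = [
    QuotedExpert("Dr. A", "author of the preprint", True),
    QuotedExpert("Dr. B", "epidemiologist, not involved in the study", False),
    QuotedExpert("Dr. C", "biostatistician, not involved in the study", False),
]

print(meets_standards(story_experts))
# {'independent_experts_quoted': True, 'credentials_stated': True}
```

The point is not the code but the shift it represents: once criteria are written down this plainly, platforms and editors can actually be held to them.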

I hear scientists worry about how well they communicate with the public. That’s why they are sent to take training in science communication (“scicomm”). But I tell them it’s not the scientists who should change, but the journalists, who must learn how to grapple with open information themselves.

Before I explore some of the lessons for journalism and media, let me make clear that — as ever — none of this is new. In science, says a paper by Mark Hooper, the accepted historical narrative has been that peer review began with the first science journals in 1665. Or when the Royal Society “published a collection of refereed medical articles for the first time” in 1731. Or when the Royal Society formalized the process of using independent referees in 1832. Or during the Cold War when peer review became “a requirement for scientific legitimacy.” The term “peer review” was not used until the 1960s and 1970s.

But Hooper contends that the practice of peer review — which he defines as “1) organized systems for facilitating review by peers; 2) in the context of publishing practices; 3) to improve academic works; 4) to provide quality control for academic works” — began much, much earlier. Cicero received editorial review from Atticus, his publisher and editor, in the first century B.C. (A lovely full circle, as Petrarch’s rediscovery of Cicero’s letters to Atticus is marked as a foundational moment of the Renaissance.) Scholia — “comments inscribed in the margins of ancient and medieval works” — were so valuable that scribes made margins larger to accommodate them. Pre-publication censorship in the sixteenth to eighteenth centuries was also a form of academic review, Hooper says.

All of which is to say that every form of media is an adaptation of another. Peer review by experts has been a need long fulfilled by different, available means. As I wrote the other day, news was poetry, song, official decrees, cheap broadsides, single-subject pamphlets, and handwritten newsletters before it became newspapers.

So what does this open information ecosystem portend for news? Well, again, we don’t deliver news or information anymore. It is delivered already: via blogs, social media, direct connection from officialdom and companies to the public, scientific papers, open databases, and means yet unimagined in the vast public conversation opened up by the net. Nobody depends on us to bring them information. We are no longer the deliverer or the gatekeeper.

But this open information ecosystem does bring many demands — just as it does in medicine — and therein lie many opportunities.

First: How do we help make information open, breaking the seals that governments, companies, and other sources place on information that belongs in public? How do we aid transparency? Maybe that’s one of our new jobs: transparency-as-a-service (more on that idea another day). And we must ask: Can we function in an open information ecosystem when our information is ever more frequently closed behind paywalls?

Second: How do we make information more discoverable and organized? Google, of course, did that with search, but that’s only a beginning, as I’m sure Google itself would agree: a miraculous but still-crude layer of automated organization it is constantly improving. Who will master the challenge of sorting the wheat from the chaff?
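For a sense of the mechanics under any such layer of organization, here is a deliberately crude sketch of an inverted index over a few invented documents. It is nothing like what Google actually does, but it shows the basic move of mapping words to the documents that contain them.

```python
# A deliberately crude sketch of the basic mechanism under any search
# layer: an inverted index mapping words to the documents containing them.
# Documents are made up; real search adds ranking, stemming, and scale.
from collections import defaultdict

docs = {
    "doc1": "antibody study questions county infection estimates",
    "doc2": "preprint on infection fatality rate draws expert criticism",
    "doc3": "open data platform launched for pandemic research",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query: str) -> set:
    """Return the documents containing every word of the query."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

print(search("infection"))       # {'doc1', 'doc2'}
print(search("infection rate"))  # {'doc2'}
```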

Third: How do we create mechanisms to review the credibility and quality of information — and to identify disinformation? Note how Medium is, to its credit, grappling with making judgments on the credibility of COVID-19 content while other platforms all but throw up their hands at the impossibility of judgment at scale. Ultimately, this task will depend upon:

Fourth: How do we judge the expertise of those we call upon for judgment?

Fifth: How do we amplify their expertise, adding context, explanation, verification, and perspective in the public conversation?

Expertise is key. The problem here is that experts are much easier to find, certify, and judge in medicine than in other fields. Because science. Is there such a thing as an expert in politics? Half the world thinks it’s them. Another problem is that academically certified experts are becoming scarcer — and less heed is paid to them — in this era that elevates idiocy. One more problem is the methodology of journalism, which is built to regurgitate events and opinions around those already in power without accountability for outcomes. Imagine — as one of my former students, Elisabetta Tola, does — journalism in the scientific method, beginning with a hypothesis, seeking data to test it, calling on experts to challenge it, and recognizing — as scientists do and journalists do not — that knowledge does not come in the form of a final word but instead as a process, a conversation.

It is no longer our job to tell finished stories. In the economic aftermath of this crisis, that is a legacy luxury that will die along with the old business models that supported it. Get over it. Adapt. Survive by adding value to the free flow of information in the open ecosystem that is our new normal. Or die.


UPDATE: The day after writing this came an all-too-perfect example of what I’m trying to warn against in this post. The New York Times gave space to a controversial and contrarian preprint without getting differing views from scientists, without providing the context that this scientist’s views have been used by the it’s-just-flu, open-up-now COVID deniers. Shameful editing. In my thread, see also the last link to an example of good reporting from the San Jose Mercury News.

🧵: An unfortunately terrible example of COVID reporting in The New York Times, in which they give practically unhindered voice to a preprint paper by a controversial scientist who's being exploited by the it's-just-flu denialists, Fox News, Coulter… 1/ https://t.co/L7MDnAeaz7

— Jeff Jarvis (@jeffjarvis) April 22, 2020

I want to thank Drs. Gregg Gonsalves, Krutika Kuppalli, Angela Rasmussen, and Emma Hodcroft for talking with me about their experience with this new information ecosystem — preprints and social media — when I interviewed them for their guidance on how journalism should cover the crisis. You can watch those interviews here.
