
Content analysis of fact checking articles in the UK: data

Dataset posted on 2024-09-18, 11:38, authored by Stephen Cushion

To understand the types of fact-checking articles published during the 2019 UK general election campaign, we conducted a content analysis of election fact-checking articles on BBC Reality Check, Channel 4 FactCheck, and Full Fact.

The content analysis examined these sites from the official start of the 2019 election campaign, 6 November 2019, to Polling Day, 12 December 2019, including weekends. A research team of two coders coded all articles within the election campaign timeframe, generating 238 items from Reality Check, FactCheck and Full Fact. In total, Reality Check published 112 articles, 93 of which were election-related; FactCheck published 23 articles, all of which were election-related; and Full Fact published 123 articles, 97 of which were election-related. Each full article served as the unit of analysis and was coded according to a number of variables, including whether an item was election or non-election related, the election topic category (NHS, Economy, Brexit/EU, etc.), the format/type of article (fact-checking, analysis, explainer, etc.) and the overall outcome of the fact-checking/analysis (challenged, verified, unclear). These variables achieved high inter-coder reliability scores according to Cohen’s Kappa coefficient.
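For illustration only, a single coded record under this scheme might look like the Python sketch below; the field names and values are assumptions chosen for readability, not the dataset's actual column headings.

    # Hypothetical example of one coded item under the scheme described above;
    # field names and values are illustrative assumptions, not the dataset's
    # actual column headings.
    coded_item = {
        "outlet": "BBC Reality Check",  # BBC Reality Check, Channel 4 FactCheck or Full Fact
        "election_related": True,       # election vs non-election item
        "topic": "NHS",                 # e.g. NHS, Economy, Brexit/EU
        "format": "fact-checking",      # e.g. fact-checking, analysis, explainer
        "outcome": "challenged",        # challenged, verified or unclear
    }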

The content analysis study was designed to develop a nuanced analytical framework that critically examined whether any claims were challenged in each fact-check article. This included investigating the individual claims scrutinised in the articles. However, our analysis went further by considering the kinds of sources cited when examining these claims and evaluating the degree to which the claims were challenged. Variables included whether each article investigated one or multiple claims, the author of the claim (i.e., the source making the claim under scrutiny, largely political sources), the sources used to scrutinise/challenge the claim (e.g., politician, government department, think tank, etc.) and the extent to which a claim was challenged (explicit, partial/implicit, validation, no challenge). Again, these variables achieved credible to robust inter-coder reliability scores according to Cohen’s Kappa coefficient.
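As a minimal sketch of how such reliability scores are computed, Cohen's kappa compares the observed agreement p_o between two coders with the agreement p_e expected by chance: kappa = (p_o - p_e) / (1 - p_e). The labels below are hypothetical and not drawn from the dataset.

    # Minimal sketch of Cohen's kappa for two coders rating the same items on
    # the "outcome" variable; the labels are hypothetical, not from the dataset.
    from collections import Counter

    from sklearn.metrics import cohen_kappa_score  # optional cross-check

    coder_a = ["challenged", "verified", "challenged", "unclear", "challenged", "verified"]
    coder_b = ["challenged", "verified", "unclear", "unclear", "challenged", "verified"]
    n = len(coder_a)

    # Observed agreement: proportion of items on which the coders agree.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Chance agreement: for each label, the product of the two coders'
    # marginal proportions, summed over all labels.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n)
              for label in set(coder_a) | set(coder_b))

    kappa = (p_o - p_e) / (1 - p_e)
    print(f"Cohen's kappa: {kappa:.2f}")        # 0.75 for this toy data
    print(cohen_kappa_score(coder_a, coder_b))  # library result matches

By the commonly cited Landis and Koch convention, values between 0.61 and 0.80 indicate substantial agreement.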

Funding

Countering disinformation: enhancing journalistic legitimacy in public service media

Arts and Humanities Research Council

Specialist software required to view data files

Excel
