Better Definition of Terms Is Key to Rating Online News

October 3, 2018

What percentage of the online news that you post each week delivers “high-impact” information? How many of the articles you produce reflect “enterprise”? Getting accurate answers to these questions requires realistic definitions of the terms “high impact” and “enterprise.”

When scoring these factors, I usually end up criticizing current effort and settling for descriptive evaluations such as “high,” “medium,” and “low.” To some degree that makes me guilty of “management by adjectives,” a practice deplored in articles and workshops addressing editorial performance issues.

My goal in this article is to make a case for more accurate definitions of the terms “high impact” and “enterprise.” These are the most heavily weighted items in the eight-factor scoring system I currently use. But first, let’s consider the nature of the challenge. Our delivery of high-impact content is actually acceptable: we are pretty good at gathering articles that reflect universal interest, but we could be better.

Phase VII was the first study in which I attempted to arrive at a realistic definition of “high impact.” For each site examined, I reviewed 10 articles posted on the date of review. The tentative standard I used was that at least seven of the articles, or 70%, would pass muster as being “high” in universality. Of 50 sites I analyzed, just 12 measured up to this standard. Fourteen managed only bottom-of-the-barrel impact, scoring less than 40%.
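As a minimal illustration, the pass/fail check behind that standard reduces to simple arithmetic. The Python sketch below is my own, not part of the study’s tooling, and the function name is hypothetical:

```python
def meets_high_impact_standard(passing: int, reviewed: int = 10) -> bool:
    """True if at least 70% of the reviewed articles rate 'high' in universality."""
    return passing / reviewed >= 0.70

print(meets_high_impact_standard(7))  # True: 7 of 10 clears the 70% bar
print(meets_high_impact_standard(6))  # False: 60% falls short
```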

My revised e-news scoring system establishes “universality” as a reasonable yardstick for “high” achievement. Instead of offering high, medium, and low categories, I am taking a yes/no approach: “yes” earns a full score of 20 points, and “no” earns zero. (For more about “universality,” see this earlier article on the topic.)
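Expressed as a sketch (again my own, with a hypothetical function name), the universality rule is a single binary check:

```python
def universality_score(is_universal: bool) -> int:
    """Yes/no scoring: 'yes' earns the full 20 points, 'no' earns zero."""
    return 20 if is_universal else 0
```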

Defining “enterprise” is much more complicated. For now, my revised e-news scoring will award a “high” enterprise rating to articles that combine direct quotes from end-users with excerpts from curated sources. Scoring will be three-tier (high/medium/low), with points awarded based on the number of end-user direct quotes (as opposed to quotes lifted from PR announcements). In most cases, an article cannot earn a maximum score if none of its direct quotes came from a personal interview.
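Here is how that three-tier rule might look in code. This is a sketch under stated assumptions: the article defines the tiers but not the exact quote counts or point values, so the cutoffs and points below are illustrative guesses of my own:

```python
def enterprise_score(end_user_quotes: int, has_interview_quote: bool) -> int:
    """Three-tier enterprise scoring keyed to end-user direct quotes.

    Tier cutoffs and point values are illustrative assumptions. The rule
    that the top score requires at least one quote from a personal
    interview comes from the article itself.
    """
    if end_user_quotes >= 2 and has_interview_quote:
        return 20  # high: multiple end-user quotes, at least one from an interview
    if end_user_quotes >= 1:
        return 10  # medium: some end-user quoting
    return 0       # low: no end-user quotes (e.g., only PR-announcement quotes)
```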

Still to be worked out is how to allocate points depending on the category of source interviewed. As the vice president of editorial at a multi-title publishing firm, I scheduled periodic in-house training sessions on understanding sources. The key point: sources fall into industry and non-industry categories. Many writers rely entirely on industry sources because they tend to be more approachable. But non-industry sources, though often ignored, can be gold mines of information for an editor covering a specific industry.

I am still working on the best definitions to apply to evaluating e-news enterprise. Meanwhile, as was true of previous studies, Phase VII results suggest that B2B efforts to connect with end-user sources leave much to be desired. For example, of the 500 articles reviewed in Phase VII, only 46 (9%) earned “high” ratings. At the other end of the scale, 241 articles (48%) earned zero ratings.

For further information about the revised eight-factor scoring system, see this recent article.
