When Algorithms Fail
Lest you doubt that companies rely too heavily on technology to generate automated content on the Internet, consider this screen scrape from today’s Daily Beast:
Press releases on natural menopause relief and Rebekah Brooks and news stories on Cathie Black and a new Ambassador to Egypt appear to me to have only one thing in common: photos of women.
Really? In 2011 it’s okay to lump together any mention of women into a related category?
Obviously, the Daily Beast doesn’t care if the stories it suggests you click on are related. They are simply shoving more material at you in the hopes that you’ll see something — anything — you like and click. Perhaps their model includes getting money if you click on one of the “Paid Distribution” articles. I assumed that “paid distribution” meant that the writer published the story through PRWeb or other press release service, but maybe the payment to the Beast is more direct.
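The failure mode is easy to reproduce. If a related-content engine scores stories by overlap of coarse tags, a single generic tag like “women” is enough to link a menopause press release to a hard-news story. A minimal sketch of that kind of matcher (the tag names, story data, and scoring rule here are my assumptions for illustration, not the Daily Beast’s actual system):

```python
# Hypothetical naive tag-overlap matcher: two stories count as "related"
# if they share at least `min_shared` tags. All tags and titles below are
# made up to illustrate the failure mode.

def related(a, b, min_shared=1):
    """True if stories a and b share at least min_shared tags."""
    return len(a["tags"] & b["tags"]) >= min_shared

press_release = {"title": "Natural menopause relief", "tags": {"women", "health"}}
news_story = {"title": "Cathie Black steps down", "tags": {"women", "education"}}

# One generic tag ("women") is enough to link a press release to hard news.
print(related(press_release, news_story))  # True
```

With `min_shared=1` and tags that broad, almost anything featuring a photo of a woman clusters together, which is exactly the behavior on display above.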
Whatever the reason, the Daily Beast sure cheapens its brand in my opinion. Employing bad algorithms to trick your readers — whether by intention or by sloppy quality control — says bad things about you. If you cut corners in your content display, how about in your reporting?
Two final snaps:
- If you refresh the page, you get different related stories, but they are equally awful. On my second visit, the memory-loss woman’s photo was particularly gruesome, IMHO!
- The “Related Stories” appeared at the bottom of a news article I found through Google News. The topic and headline of the master story that these women are supposedly related to: “Former Joint Chiefs Chairman Dies,” on the passing of General John Shalikashvili.