Feedreader.com issue creating duplicate content from your site and blog


Why does Feedreader.com create duplicate posts, and why does Google kick your site out of the search results?


More Google search updates have rolled out recently, and yet more problems have appeared for webmasters all over the world. A massive problem has been spotted with one of the most popular feed readers - feedreader.com. It seems that Google treats the RSS-burned copies of articles republished on feedreader.com as the primary source of the content and pushes the actual sites, where the content was first submitted and published, out of the search results.



What is actually happening?

We did some further research on this: the issue was first reported to Google more than a year ago, and, believe it or not, the most popular search engine has done nothing about it.

The problem is made worse by the lack of support from the feedreader.com side, who seem to ignore all feed removal requests. That could be considered quite normal, as they are making huge money on someone else's efforts. It is also quite annoying that Google support seems to be ignoring this as well, even though it has been reported by hundreds of webmasters and site owners across the world.

What happens as a consequence is that your website content is treated as a duplicate of the copy that shows up on feedreader.com (pulled from your FeedBurner feed), and your pages are pushed way back in the search results.

Our blog testproductreview.com also suffered from this: most of our top-performing articles with original content either disappeared from Google's search results or dropped significantly, so they no longer bring in any traffic.


So, is there a way to block feedreader.com from showing your own content and restore your site's ranking?

Well, we are still not quite sure if there is a complete resolution to this, but we promise to get to the bottom of it and update you with all the info we have. We recently reported this to Google, but it seems very hard to get any of Google's support vendors to do anything about the feedreader.com issue.

Currently we are using a workaround that does not completely resolve the duplicate content issue. What we do for our blog is insert a small piece of code that redirects visitors from all iframe-based feed readers directly to the original article. The code, which you can insert anywhere between the opening <head> and closing </head> tags, is:

<script language='javascript' type='text/javascript'>
if (window != top) top.location.href = location.href;
</script>
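
For reference, here is a minimal sketch of how the snippet might sit in a page template; the document structure, title and comments below are placeholders rather than our actual blog markup:

<!DOCTYPE html>
<html>
<head>
<title>Your original article title (placeholder)</title>
<!-- Frame-busting snippet: when this page is loaded inside an iframe
     (window is not the top-level window), send the visitor to the
     original article URL instead of keeping them in the reader frame. -->
<script type='text/javascript'>
if (window != top) top.location.href = location.href;
</script>
</head>
<body>
<!-- Your article content goes here -->
</body>
</html>

Keep in mind that this only affects human visitors whose browsers actually execute the JavaScript inside the reader's iframe; crawlers that fetch the feed directly never run it, which is why the duplicate pages can still end up indexed.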

As I said, this is not a complete resolution, as search engines like Google will still index the duplicate feedreader.com pages, but we will do our best to find a solution that helps all webmasters affected by this copyright and search engine indexing problem.
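
In the meantime, one extra signal that is often suggested for scraped or republished content in general (we have not confirmed that it resolves the feedreader.com case specifically) is a self-referencing canonical tag on each post, telling search engines which URL you consider the original. The URL below is only a placeholder:

<head>
<!-- Self-referencing canonical: declares this page's own permalink as the
     preferred (original) version of the content. Replace the href with
     the post's real URL. -->
<link rel='canonical' href='https://example.com/original-article/' />
</head>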

Until then, we wish you good luck, and don't give up on your blogs.
