Monday, 11 July 2016

Content Scrapers – How to Find Out Who is Stealing Your Content & What to Do About It

If you have been blogging for a while, chances are you are familiar with content scrapers. Content scrapers are websites that steal your content for their own blogs without your permission. Some content scrapers will simply copy the content directly from your blog, but most use automated software that takes the content from your RSS feed and posts it to their site as if it were a new post.

In this post, we are going to look at some potential link building benefits of content scrapers, how to find out which sites are scraping your content, and what you can do if you want to either benefit from the links or have the stolen content taken down.

Linking Benefits of Content Scrapers

Last week, I was happy to see that I was listed in ProBlogger’s 20 Bloggers to Watch in 2012. Within 24 hours, I received a notification in my WordPress dashboard that a page on my blog had been linked to in the post on ProBlogger’s site.

After receiving the original notification from the ProBlogger post, I received another 18 trackbacks from sites that had copied the post verbatim. Trackbacks are WordPress’ way of letting you know that another website has linked to a post on your blog. In this case, these 18 sites had republished the content exactly as it appeared in the original post – with the links back to my blog still intact.

It was then that I started contemplating the potential link building benefits of content scrapers. These are not by any means quality links – the highest Google PageRank was a PR 2 domain, many were stealing content in a variety of languages, and one even had the nerve to use some kind of redirection script to take away the link juice of outgoing links! So while these links didn’t have the same authority that the original post had, they still count as links.

How to Catch Content Scrapers

Unfortunately, unless you want to continuously search for your post titles in Google, you’ll only be able to easily track down sites that keep your in-content links active. If you want to know what websites are scraping your content, here are a few tips to sniff them out.

Copyscape

Copyscape is a simple search engine that allows you to enter the URL of your content to find out if there are duplicates of it on the Internet. You can get a few results using their free search, or you can pay for a premium account to check up to 10,000 pages on your site and more.
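If you have a Copyscape Premium account, checks like this can also be scripted. The sketch below is a minimal illustration rather than official sample code: the endpoint and the parameter names (u, k, o, q) are assumptions based on my reading of Copyscape's API documentation, so verify them against the current docs before relying on it.

import requests
import xml.etree.ElementTree as ET

def copyscape_check(page_url, username, api_key):
    # Parameter names below are assumed from Copyscape's Premium API docs.
    params = {
        "u": username,    # account username (assumed parameter name)
        "k": api_key,     # Premium API key (assumed parameter name)
        "o": "csearch",   # "search for copies of this URL" operation (assumed)
        "q": page_url,    # the page you want to check
    }
    response = requests.get("https://www.copyscape.com/api/", params=params, timeout=30)
    response.raise_for_status()
    results = ET.fromstring(response.text)
    # Each <result> element should contain the URL and title of a suspected copy.
    return [(r.findtext("url"), r.findtext("title")) for r in results.iter("result")]

for copy_url, title in copyscape_check("https://example.com/my-post/", "myuser", "MY-KEY"):
    print(copy_url, "-", title)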

Trackbacks

Another way to catch scrapers is through your trackbacks in WordPress. Many of these will show up in the spam folder if you use Akismet. The key to getting trackbacks from content scrapers is to always include links to your other posts in your content. Be sure those links have strong anchor text too if you’re going for a little extra link juice. And even if you are not, internal linking with strong anchor text is good for your on-site optimization anyway!

Anyone thinking about link building benefits at this point will probably note the sheer volume of links coming from these sites, some of which are content scrapers. Essentially, any site that links to a lot of your posts and isn’t a social network, a social bookmarking site, or a die-hard fan who just loves linking to you is potentially a content scraper. You’ll have to visit their website to be sure. To find your links on their site, click on one of the domains to see exactly which pages on your site they are linking to.

You can see here that they are just blatantly copying my post titles. When I visited one of the links, sure enough, they were copying my entire posts, in their full glory, onto their site.

Google Alerts

If you don’t post often, or you want to keep up with mentions of your top blog posts on other websites, you can create a Google Alert for the exact match of your post’s title by putting the title in quotation marks.

I deliver all of my Google Alerts to an RSS feed so I can manage them in Google Reader, but you can also have them delivered regularly by email. You’ll even get an instant preview of the types of results you will get.
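If you prefer to handle the alerts programmatically, here is a minimal sketch (an assumption about your setup, not part of the original post) that polls a Google Alerts RSS feed with the feedparser library and prints any mention that isn't hosted on your own domain. The feed URL and domain are placeholders.

import feedparser  # pip install feedparser

ALERT_FEED_URL = "https://www.google.com/alerts/feeds/YOUR_FEED_ID"  # placeholder: use the URL Google Alerts gives you
MY_DOMAIN = "example.com"  # placeholder: your own site, so self-mentions can be skipped

feed = feedparser.parse(ALERT_FEED_URL)
for entry in feed.entries:
    if MY_DOMAIN not in entry.link:
        # Anything not on your own domain is a candidate scraper or mention worth checking.
        print(entry.get("published", "unknown date"), entry.link, entry.title)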

How to Get Credit for Scraped Posts

If you use WordPress, then you definitely want to try out the RSS footer plugin. This plugin allows you to place a custom piece of text at the top or bottom of your RSS feed content.
Even if you aren’t using it to get credit back to your posts when content thieves steal them, you can still use it for a little extra advertising, with the possible benefit of RSS subscribers clicking through to your website or social profiles. And when someone does scrape your content from your RSS feed, the footer text shows up there too.

So in the event that someone finds your scraped content, they will hopefully notice the credit before assuming it was created by the blog that stole it. If you don’t have WordPress, you can simply include a note at the top or bottom of your content that includes the same information.

How to Stop Content Scrapers

If you’re not interested in anyone copying your content, then you have a few options to choose from. You can start by contacting the site that is stealing your content and sending them a notice that you want all of your content removed immediately. You can do this through the site’s contact form, email address, or post it to any social accounts they list.

If there is no contact information on the website stealing your content, you can do a Whois Lookup to (hopefully) find out who owns the domain.

If the domain is not privately registered, you should find an administrative contact’s email address. If not, you should at least see the domain registrar (in this example, GoDaddy) and/or the hosting company for the website (in this case, HostGator). You can try to contact both companies (HostGator has a DMCA form and GoDaddy has an email address) and let them know that the domain in question is stealing copyrighted content, in the hope that the website will be suspended or removed.
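If you would rather look up the registrar and hosting details from the command line, a quick way is the standard whois tool. The sketch below simply shells out to whois (assumed to be installed on your system) and prints the lines most likely to identify who to contact; it is an illustration, not a complete abuse-reporting workflow.

import subprocess

def whois_summary(domain):
    output = subprocess.run(
        ["whois", domain], capture_output=True, text=True, timeout=30
    ).stdout
    interesting = ("registrar", "abuse", "email", "name server")
    for line in output.splitlines():
        # Keep only the lines that hint at who owns or hosts the domain.
        if any(key in line.lower() for key in interesting):
            print(line.strip())

whois_summary("example.com")  # replace with the scraper's domain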

You can also visit DMCA.com and use their takedown services to remove anyone who is copying your photos, video, audio, blog, or other content. They even offer a WordPress plugin that adds a DMCA protected badge to your site to warn potential thieves.

Have you ever dealt with content scrapers and thieves? Do you leave it alone for the link benefits, or do you fight back? What other tools, services, or other preventative tactics do you use to block content scrapers? Please share your thoughts and experiences in the comments!

Source URL: https://blog.kissmetrics.com/content-scrapers/

Web Scraping Best Practices

Extracting data from the web comes with several challenges, as more and more webmasters work day and night to limit the scraping and crawling of their data in order to survive in a competitive world. There are various other problems you may face when web scraping, and most of them can be avoided by adopting the web scraping best practices discussed in this article.

Have knowledge of the scraping tools

By acquiring adequate knowledge of the hurdles you may encounter during web scraping, you will have a smoother scraping experience and stay on the safe side of the law. Conduct thorough research on the types of tools you will use for scraping and crawling; firsthand knowledge of these tools will help you find the data you need without being blocked.

Proper proxy software acting as an intermediary works well when you know how to work with HTTP and HTML. Use tools that can vary crawling patterns, URLs, and the data retrieved, even when you are crawling a single domain. This will help you abide by the rules and regulations that apply to web scraping and avoid legal issues.
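As a rough illustration of varying your crawl pattern, the sketch below rotates user agents, randomises the delay between requests, and can optionally route traffic through a proxy. The proxy address and user-agent strings are placeholders, and whether any of this is appropriate depends on the target site's terms.

import random
import time
import requests

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]
PROXIES = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}  # placeholder proxy

def polite_get(url, use_proxy=False):
    headers = {"User-Agent": random.choice(USER_AGENTS)}  # rotate the user agent
    time.sleep(random.uniform(2, 6))                      # irregular gaps between requests
    return requests.get(url, headers=headers,
                        proxies=PROXIES if use_proxy else None, timeout=30)

response = polite_get("https://example.com/page-1/")
print(response.status_code, len(response.text))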

Conduct your scraping activities during off-peak hours

You may opt to extract data during times when fewer people are accessing the site: over the weekend, during late-night hours, or on public holidays, for instance. Visiting a website several times to retrieve the same data is a waste of bandwidth; it is usually better to download the site content once to your computer and then access the local copy whenever the need arises.
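A simple way to enforce this is to have the scraper check the clock before it starts. The sketch below treats weekends and late nights as off-peak; the exact hours are an assumption you would tune to the target site's audience.

from datetime import datetime

def is_off_peak(now=None):
    now = now or datetime.now()
    weekend = now.weekday() >= 5                # Saturday or Sunday
    late_night = now.hour < 6 or now.hour >= 23
    return weekend or late_night

if is_off_peak():
    print("Off-peak: safe to start the crawl.")
else:
    print("Peak hours: defer the crawl until later.")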

Hide your scraping activities

There is a thin line between ethical and unethical crawling, so you should avoid showing up at the top of a website’s list of heaviest users. Cover your tracks as best you can by using proxy IPs to avoid legal problems. You can also use multiple IP addresses or VPN services to conceal your scraping activities and lower the chances of landing on a website’s blacklist.

Website owners today are very protective of their data and any other information published under their URLs. Be careful when going through the terms and conditions set out by websites, as their owners may consider crawling an infringement of their privacy. Simple etiquette goes a long way: your web scraping efforts will be far more fruitful if the site owner supports the idea of sharing data.
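For the multiple-IP approach, one common pattern is to rotate requests through a pool of proxies. The sketch below is illustrative only: the proxy addresses are placeholders, and you should only use proxies you control or are authorised to use.

import itertools
import requests

PROXY_POOL = itertools.cycle([
    "http://10.0.0.1:3128",   # placeholder proxy 1
    "http://10.0.0.2:3128",   # placeholder proxy 2
    "http://10.0.0.3:3128",   # placeholder proxy 3
])

def fetch_via_next_proxy(url):
    proxy = next(PROXY_POOL)  # each request goes out through the next proxy in the pool
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=30)

print(fetch_via_next_proxy("https://example.com/").status_code)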

Keep records of your activities

Web scraping involves large amounts of data. Because of this, you may not always remember every piece of information you have acquired; keeping records and gathering statistics will help you monitor your activities.
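A lightweight way to keep such records is Python's built-in logging module. The sketch below writes one line per request to a log file; the field names are just an example of the kind of statistics worth keeping.

import logging

logging.basicConfig(
    filename="scrape.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def record_fetch(url, status_code, n_items):
    # One log line per page: what was fetched, when, and with what result.
    logging.info("url=%s status=%s items=%s", url, status_code, n_items)

record_fetch("https://example.com/page-1/", 200, 25)
record_fetch("https://example.com/page-2/", 404, 0)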

Load data in phases

Web scraping demands a lot of patience when using crawlers to get the information you need. Take the process slowly and load data one piece at a time. Several parallel requests to the same domain can crash the entire site, or allow the scraping attempts to be traced back to your local machine.

Loading data in small batches also saves you the hassle of scraping afresh if your run is interrupted, because you will already have stored part of the required data. You can reduce the load you place on an individual domain with techniques such as caching pages you have already scraped to avoid redundant requests. Use auto-throttling mechanisms to regulate the rate of traffic you send to the website, and pause between requests to avoid getting banned.
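Here is a minimal sketch of phased loading, assuming a generic requests-based scraper: each page is cached to disk so an interrupted run can resume where it left off, and a fixed delay throttles the request rate. The URLs are placeholders.

import hashlib
import pathlib
import time
import requests

CACHE_DIR = pathlib.Path("page_cache")
CACHE_DIR.mkdir(exist_ok=True)

def fetch_cached(url, delay_seconds=5):
    cache_file = CACHE_DIR / (hashlib.sha1(url.encode()).hexdigest() + ".html")
    if cache_file.exists():                        # already scraped: no new request needed
        return cache_file.read_text(encoding="utf-8")
    time.sleep(delay_seconds)                      # throttle before hitting the site again
    html = requests.get(url, timeout=30).text
    cache_file.write_text(html, encoding="utf-8")
    return html

for page_number in range(1, 4):
    page = fetch_cached(f"https://example.com/listing?page={page_number}")
    print(len(page), "characters")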

Conclusion

By following these web scraping best practices, you will be able to work with websites and gather the data your clients require without major hurdles along the way. The ultimate goal of every web scraper is to access vital information while staying on the right side of the law.

Source URL: http://nocodewebscraping.com/web-scraping-best-practices/

Sunday, 10 July 2016

How to Avoid the Most Common Traps in Web Scraping?

Many industries are successfully using web scraping to create massive banks of relevant, actionable data that can be used on a day-to-day basis to further their business interests and to offer better services to customers. However, web scraping does have its own roadblocks and problems.

With automated scraping, you can run into many common problems. Web scraping spiders and programs present a distinctive pattern of behaviour to the websites they target, and those websites use that behaviour to distinguish human users from scraping spiders. Based on those signals, a website can deploy web scraping traps to stop your efforts.

How Can You Avoid These Traps?

Some measures you can take to make sure you avoid the most common web scraping traps include:

• Begin by caching pages you have already crawled, so that you are not required to load them again.
• Find out whether the website you are trying to scrape has any stated objections to web scraping tools; its robots.txt file is a good place to check (see the sketch after this list).
• Handle scraping in moderate phases and take only the content you require.
• Take things slowly and do not flood the website with many parallel requests that put a strain on its resources.
• Try to minimize the load on every single website you visit to scrape.
• Use a capable web scraping tool that can save and test data, patterns, and URLs.
• Spread your scraping efforts across several IP addresses, or take advantage of VPN services and proxy servers. This will help reduce the risk of being trapped or blacklisted by a website.
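As promised above, here is a minimal sketch of the robots.txt check, using Python's standard urllib.robotparser; the URLs and the user-agent name are placeholders.

from urllib import robotparser

USER_AGENT = "my-research-bot"  # placeholder name for your scraper

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/members/list"
if rp.can_fetch(USER_AGENT, url):
    print("robots.txt allows fetching", url)
else:
    print("robots.txt disallows fetching", url, "- skip it")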

Source URL: http://www.3idatascraping.com/category/web-data-scraping

Thursday, 7 July 2016

Scraping the Royal Society membership list

To a data scientist, any data is fair game. Through my interest in the history of science I came across the membership records of the Royal Society from 1660 to 2007, which are available as a single PDF file. I’ve scraped the membership list before: the first time around I wrote a C# application which parsed a plain text file I had made from the original PDF using an online conversion service. Looking back at that code, it is fiendishly complicated and cluttered with the boilerplate required to build a GUI. ScraperWiki includes a pdftoxml function, so I thought I’d see if this would make the process of parsing easier, and compare the ScraperWiki experience more widely with my earlier scraper.

The membership list is laid out quite simply. Each member (or Fellow) record spans two lines, with the member’s name in the leftmost column on the first line, and information on their birth and death dates, the class of their Fellowship, and their election date on the second line.

Later in the document, information on the Presidents of the Royal Society appears on the same line as the Fellow’s name, and Royal Patrons are formatted a little differently. There are also alias records, where the second line points to the primary record for the name on the first line.

pdftoxml converts a PDF into an XML file in which each piece of text is located on the page using spatial coordinates. An individual line looks like this:

<text top="243" left="135" width="221" height="14" font="2">Abbot, Charles, 1st Baron Colchester </text>

This makes parsing columnar data straightforward: you simply need to select elements with particular values of the “left” attribute. It turns out that the columns are not in exactly the same positions throughout the whole document, which appears to have been constructed by tacking together the membership list for A-J with that for K-Z, but this can easily be resolved by accepting a small range of positions for each column.
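To illustrate the idea (this is a simplified sketch, not the author's actual scraper), the snippet below walks the pdftoxml output with Python's standard ElementTree and assigns each <text> element to a column based on a small range of acceptable "left" values. The column positions and filename are made up for the example.

import xml.etree.ElementTree as ET

COLUMNS = {"name": (130, 145), "details": (360, 380)}  # illustrative left-coordinate ranges

def column_for(left):
    for name, (low, high) in COLUMNS.items():
        if low <= left <= high:
            return name
    return None

tree = ET.parse("royal_society.xml")  # placeholder: the pdftoxml output file
for text_el in tree.iter("text"):
    column = column_for(int(text_el.get("left", "0")))
    if column:
        content = "".join(text_el.itertext()).strip()  # text may contain child elements such as <b>
        print(column, ":", content)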

Attempting to automatically parse all 395 pages of the document reveals some transcription errors: one Fellow was apparently elected on 16th March 197 – a bit of Googling reveals that the real date is 16th March 1978. Another Fellow is classed as a “Felllow”, and whilst most of the dates of birth and death are separated by a hyphen, some are separated by an en dash, which as far as the code is concerned is something completely different, and so on. In my earlier iteration I missed some of these quirks, or fixed them by editing the converted text file. These variations suggest that the source document was typed manually rather than being output from a pre-existing database. Since I couldn’t edit the source document, I was obliged to code around these quirks.
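One way to code around quirks like these (not necessarily how the original scraper did it) is to normalise each line before parsing: convert en dashes to plain hyphens and patch the handful of known typos.

import re

def clean_line(line):
    line = line.replace("\u2013", "-")              # en dash -> plain hyphen
    line = line.replace("Felllow", "Fellow")        # misspelled Fellowship class
    line = re.sub(r"16th March 197(?!\d)", "16th March 1978", line)  # truncated election year
    return line

print(clean_line("Felllow, elected 16th March 197"))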

ScraperWiki helpfully makes putting data into a SQLite database the simplest option for a scraper. My handling of dates in this version of the scraper is a little unsatisfactory: presidential terms are described in terms of a start and end year but are rendered as 1st January of those years in the database. Furthermore, in historical documents dates may not be known accurately, so someone may have a birth date described as “circa 1782” or “c 1782”; even more vaguely, they may be described as having “flourished 1663-1778” or “fl. 1663-1778”. Python’s default datetime module does not capture this subtlety, and if it did, the database used to store the dates would need to support it too in order to be useful. I’ve addressed this by storing the original life span data as text so that it can be analysed should the need arise. Storing dates as proper dates in the database, rather than as text strings, means we can query the database using date-based queries.

ScraperWiki provides an API to my dataset so that I can query it using SQL, and since it is public, anyone else can do this too. So, for example, it’s easy to write queries that tell you that the database contains 8019 Fellows, 56 Presidents, 387 Fellows born before 1700, 3657 with no birth date, 2360 with no death date, 204 described as “flourished”, and 450 with birth dates given as “circa” some year.

I can count the number of classes of fellows:

select distinct class,count(*) from `RoyalSocietyFellows` group by class

Make a table of all of the Presidents of the Royal Society:

select * from `RoyalSocietyFellows` where StartPresident not null order by StartPresident desc

…and so on. These illustrations just use the ScraperWiki htmltable export option to display the data as a table, but equally I could use similar queries to pull data into a visualisation.
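These queries are plain SQLite, so if you download the scraper's SQLite file you can run the same thing locally with Python's sqlite3 module. The filename here is a placeholder, and the table name is taken from the queries above.

import sqlite3

conn = sqlite3.connect("royal_society.sqlite")  # placeholder filename for the downloaded database
for fellowship_class, count in conn.execute(
    "select distinct class, count(*) from RoyalSocietyFellows group by class"
):
    print(fellowship_class, count)
conn.close()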

Comparing this to my earlier experience, the benefits of using ScraperWiki are:

•    Nice traceable code to provide a provenance for the dataset;

•    Access to the pdftoxml library;

•    Strong encouragement to “do the right thing” and put the data into a database;

•    Publication of the data;

•    A simple API giving access to the data for reuse by all.

My next target for ScraperWiki may well be the membership lists for the French Academie des Sciences, a task which proved too complex for a simple plain text scraper…

Source URL: http://yellowpagesdatascraping.blogspot.in/2015/06/scraping-royal-society-membership-list.html