Wednesday, 27 August 2014

Extract data from Web Scraping C#


I am an ASP.NET MVC developer.

I can retrieve the contents of any URL (http, https, etc.) using the WebRequest class.

I have fetched all the content of a particular URL (for now I used http://google.com).

My next step is to extract the buttons, header, footer, colors, text, etc.

Here is my code for now:

public ActionResult GetContent(UrlModel model)
{
    // model.URL is the URL string entered in a text box; this action is invoked by the submit button.
    WebRequest request = WebRequest.Create(model.URL);
    request.Credentials = CredentialCache.DefaultCredentials;

    WebResponse response = request.GetResponse();
    Stream dataStream = response.GetResponseStream();
    StreamReader reader = new StreamReader(dataStream);

    string responseFromServer = reader.ReadToEnd();
    ViewBag.Response = responseFromServer;

    reader.Close();
    response.Close();
    return View();
}

Can someone help me with writing the code?

Could you also suggest some techniques for data extraction in C#?
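For the extraction step, a common technique is to load the downloaded HTML into a DOM parser and query it for the elements you want, rather than using string operations or regular expressions; in C#, HtmlAgilityPack is a popular choice for this. As a rough, language-agnostic sketch of the idea (shown here in Python with BeautifulSoup; the markup and tag names are only examples), the parsing step looks something like this:

from bs4 import BeautifulSoup

# responseFromServer corresponds to the HTML string fetched above (example markup only).
responseFromServer = "<html><body><button>Send</button><div id='footer'>Contact us</div></body></html>"

soup = BeautifulSoup(responseFromServer, "html.parser")

buttons = soup.find_all("button")                 # every <button> element
footer = soup.find(id="footer")                   # the element with id="footer", if any
text = soup.get_text(separator=" ", strip=True)   # all visible text on the page

print(len(buttons), footer.get_text(), text)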



Source: http://stackoverflow.com/questions/21901162/extract-data-from-web-scraping-c-sharp

Scrapy, scraping price data from StubHub


I've been having a difficult time with this one.

I want to scrape all the prices listed for this Bruno Mars concert at the Hollywood Bowl so I can get the average price.

http://www.stubhub.com/bruno-mars-tickets/bruno-mars-hollywood-hollywood-bowl-31-5-2014-4449604/

I've located the prices in the HTML, and the XPath is pretty straightforward, but I cannot get any values to return.

I think it has something to do with the content being generated via JavaScript or AJAX, but I can't figure out how to send the correct request to make the code work.

Here's what I have:

from scrapy.spider import BaseSpider
from scrapy.selector import Selector

from deeptix.items import DeeptixItem

class TicketSpider(BaseSpider):
    name = "deeptix"
    allowed_domains = ["stubhub.com"]
    start_urls = ["http://www.stubhub.com/bruno-mars-tickets/bruno-mars-hollywood-hollywood-bowl-31-5-2014-4449604/"]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//div[contains(@class, "q_cont")]')
        items = []
        for site in sites:
            item = DeeptixItem()
            item['price'] = site.xpath('span[contains(@class, "q")]/text()').extract()
            items.append(item)
        return items

Any help would be greatly appreciated; I've been struggling with this one for quite some time now. Thank you in advance!
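When a page fills in its prices with JavaScript, the HTML that Scrapy downloads usually does not contain them; the data typically arrives via a separate XHR request that you can spot in the browser's network tab and call directly. A rough sketch of that approach in Python, where the endpoint URL, parameter, and field names are all placeholders that would have to be read out of the actual network traffic:

import requests

# Placeholder endpoint -- open the event page with the browser dev tools, watch the
# network tab, and copy the request that returns the listing/price data (often JSON).
url = "http://www.stubhub.com/some/listing/endpoint"
resp = requests.get(url, params={"eventId": "4449604"})  # assumed parameter name
data = resp.json()

# Assumed response shape: a list of listings, each with a "price" field.
prices = [float(item["price"]) for item in data.get("listings", [])]
if prices:
    print("average price:", sum(prices) / len(prices))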


Source: http://stackoverflow.com/questions/22770917/scrapy-scraping-price-data-from-stubhub

How do you scrape AJAX pages?


Overview:

All screen scraping first requires a manual review of the page you want to extract resources from. When dealing with AJAX, you usually just need to analyze a bit more than simply the HTML.

With AJAX, the value you want is not in the initial HTML document that you requested; instead, JavaScript is executed which asks the server for the extra information you want.

You can therefore usually just analyze the JavaScript, see which request it makes, and call that URL yourself from the start.

Example:

Take this as an example: assume the page you want to scrape has the following script:

<script type="text/javascript">
function ajaxFunction()
{
var xmlHttp;
try
  {
  // Firefox, Opera 8.0+, Safari
  xmlHttp=new XMLHttpRequest();
  }
catch (e)
  {
  // Internet Explorer
  try
    {
    xmlHttp=new ActiveXObject("Msxml2.XMLHTTP");
    }
  catch (e)
    {
    try
      {
      xmlHttp=new ActiveXObject("Microsoft.XMLHTTP");
      }
    catch (e)
      {
      alert("Your browser does not support AJAX!");
      return false;
      }
    }
  }
  xmlHttp.onreadystatechange=function()
    {
    if(xmlHttp.readyState==4)
      {
      document.myForm.time.value=xmlHttp.responseText;
      }
    }
  xmlHttp.open("GET","time.asp",true);
  xmlHttp.send(null);
  }
</script>

Then all you need to do is make an HTTP request to time.asp on the same server instead. (Example from w3schools.)
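A minimal sketch of that last step in Python (the host name is an assumption; use whatever server the page itself came from):

import requests

# Call the AJAX endpoint directly instead of the page that embeds the script.
response = requests.get("http://example.com/time.asp")  # placeholder host
print(response.text)  # the same value the JavaScript would have written into the form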


Source: http://stackoverflow.com/questions/260540/how-do-you-scrape-ajax-pages

using Perl to scrape a website


I am interested in writing a Perl script that goes to the following link and extracts the number 1975: https://familysearch.org/search/collection/results#count=20&query=%2Bevent_place_level_1%3ACalifornia%20%2Bevent_place_level_2%3A%22San%20Diego%22%20%2Bbirth_year%3A1923-1923~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219

That page shows the number of white men born in 1923 who were living in San Diego County, California in 1940. I am trying to do this in a loop structure to generalize over multiple counties and birth years.

In the file, locations.txt, I put the list of counties, such as San Diego County.

The current code runs, but instead of the number 1975 it displays "unknown". The number 1975 should end up in $val.

I would very much appreciate any help!

#!/usr/bin/perl

use strict;

use LWP::Simple;

open(L, "locations26.txt");

my $url = 'https://familysearch.org/search/collection/results#count=20&query=%2Bevent_place_level_1%3A%22California%22%20%2Bevent_place_level_2%3A%22%LOCATION%%22%20%2Bbirth_year%3A%YEAR%-%YEAR%~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219';

open(O, ">out26.txt");
my $oldh = select(O);
$| = 1;
select($oldh);

while (my $location = <L>) {
    chomp($location);
    $location =~ s/ /+/g;
    foreach my $year (1923..1923) {
        my $u = $url;
        $u =~ s/%LOCATION%/$location/;
        $u =~ s/%YEAR%/$year/;
        #print "$u\n";
        my $content = get($u);
        my $val = 'unknown';
        if ($content =~ / of .strong.([0-9,]+)..strong. /) {
            $val = $1;
        }
        $val =~ s/,//g;
        $location =~ s/\+/ /g;
        print "'$location',$year,$val\n";
        print O "'$location',$year,$val\n";
    }
}

Update: An API is not a viable solution. I have been in contact with the site developer; the API does not apply to that part of the webpage. Hence, any solution involving JSON will not be applicable.



Source: http://stackoverflow.com/questions/14654288/using-perl-to-scrape-a-website

Tuesday, 26 August 2014

Data Scraping using php


Here is my code:

    $ip = $_SERVER['REMOTE_ADDR'];
    $url = file_get_contents("http://whatismyipaddress.com/ip/$ip");
    preg_match_all('/<th>(.*?)<\/th><td>(.*?)<\/td>/s', $url, $output, PREG_SET_ORDER);
    $isp = $output[1][2];
    $city = $output[9][2];
    $state = $output[8][2];
    $zipcode = $output[12][2];
    $country = $output[7][2];
    ?>
    <body>
    <table align="center">
    <tr><td>ISP :</td><td><?php echo $isp;?></td></tr>
    <tr><td>City :</td><td><?php echo $city;?></td></tr>
    <tr><td>State :</td><td><?php echo $state;?></td></tr>
    <tr><td>Zipcode :</td><td><?php echo $zipcode;?></td></tr>
    <tr><td>Country :</td><td><?php echo $country;?></td></tr>
    </table>
    </body>

How do I find out the ISP of a person viewing a PHP page?

Is it possible to use PHP to track or reveal it?

Error: http://i.imgur.com/LGWI8.png

Curl Scraping

<?php
$curl_handle=curl_init();
curl_setopt( $curl_handle, CURLOPT_FOLLOWLOCATION, true );
$url='http://www.whatismyipaddress.com/ip/132.123.23.23';
curl_setopt($curl_handle, CURLOPT_URL,$url);
curl_setopt($curl_handle, CURLOPT_HTTPHEADER, Array("User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.15) Gecko/20080623 Firefox/2.0.0.15") );
curl_setopt($curl_handle, CURLOPT_CONNECTTIMEOUT, 2);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl_handle, CURLOPT_USERAGENT, 'Your application name');
$query = curl_exec($curl_handle);

curl_close($curl_handle);
preg_match_all('/<th>(.*?)<\/th><td>(.*?)<\/td>/s',$url,$output,PREG_SET_ORDER);
echo $query;
$isp = $output[1][2];
$city = $output[9][2];
$state = $output[8][2];
$zipcode = $output[12][2];
$country = $output[7][2];
?>
<body>
<table align="center">
<tr><td>ISP :</td><td><?php echo $isp;?></td></tr>
<tr><td>City :</td><td><?php echo $city;?></td></tr>
<tr><td>State :</td><td><?php echo $state;?></td></tr>
<tr><td>Zipcode :</td><td><?php echo $zipcode;?></td></tr>
<tr><td>Country :</td><td><?php echo $country;?></td></tr>
</table>
</body>

Error: http://i.imgur.com/FJIq6.png

What is wrong with my code here? Is there any alternative code that I can use?

I am not able to scrape that data as described here. http://i.imgur.com/FJIq6.png

P.S. Please post full code. It would be easier for me to understand.



Source: http://stackoverflow.com/questions/10461088/data-scraping-using-php

PDF scraping using R


I have been using the XML package successfully for extracting HTML tables but want to extend to PDFs. From previous questions it does not appear that there is a simple R solution, but I wondered if there had been any recent developments.

Failing that, is there some way in Python (in which I am a complete novice) to obtain and manipulate PDFs so that I could finish the job off with the R XML package?

Extracting text from PDFs is hard, and nearly always requires lots of care.

I'd start with the command-line tools such as pdftotext and see what they spit out. The problem is that PDFs can store the text in any order, can use awkward font encodings, and can do things like use ligature characters (the joined-up 'ff' and 'ij' that you see in proper typesetting) to throw you off.

pdftotext is installable on any Linux system.
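If you want to drive that first step from Python before handing the text to R, here is a minimal sketch (assuming Poppler's pdftotext is on the PATH; the file names are placeholders):

import subprocess

# -layout tries to preserve the original column layout, which helps with tables.
subprocess.run(["pdftotext", "-layout", "report.pdf", "report.txt"], check=True)

with open("report.txt") as f:
    print(f.read()[:500])  # inspect the raw extraction before writing any parsing code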



Source: http://stackoverflow.com/questions/7918718/pdf-scraping-using-r

Monday, 25 August 2014

Php Scraping data from a website

I am very new to programming and need a little help with getting data from a website and passing it into my PHP script.

The website is http://www.birthdatabase.com/.

I would like to plug in a name (First and Last) and retrieve the result. I know you can query the site by passing the name in the URL, but I am having problems scraping the results.

http://www.birthdatabase.com/cgi-bin/query.pl?textfield=FIRST&textfield2=LAST&age=&affid=

I am using the file_get_contents($URL) function to get the page but need help after that. Specifically, I would like to scrape only the results from a certain state if there are multiple results for that name.



You need the awesome simple_html_dom class.

With this class you can query the webpage's DOM in a similar way to jQuery.

First include the class in your page, then get the page content with this snippet:

$html = file_get_html('http://www.birthdatabase.com/cgi-bin/query.pl?textfield=' . $first . '&textfield2=' . $last . '&age=&affid=');

Then you can use CSS selectors to scrape your data (something like this):

$n = 0;
foreach($html->find('table tbody tr td div font b table tbody') as $element) {
    @$row[$n]['tr']  = $element->find('tr')->text;
    $n++;
}

// output your data
print_r($row);



Source: http://stackoverflow.com/questions/15601584/php-scraping-data-from-a-website

Obtaining reddit data

I am interested in obtaining data from different reddit subreddits. Does anyone know if there is a reddit (or other) API, similar to what Twitter provides, that can be used to crawl all the pages?


Yes, reddit has an API that can be used for a variety of purposes such as data collection, automatic commenting bots, or even to assist in subreddit moderation.

There are a few places to discover information on reddit's API:

    github reddit wiki -- provides the overview and rules for using reddit's API (follow the rules)
    automatically generated API docs -- provides information on the requests needed to access most of the API endpoints
    /r/redditdev -- the reddit community dedicated to answering questions both about reddit's source code and about reddit's API

If there is a particular programming language you are already familiar with, you should check out the existing set of API wrappers for various languages. Despite my bias (I am the package maintainer), I am quite certain that PRAW, for Python, has support for the largest number of reddit API features.
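For example, a minimal PRAW sketch (the exact constructor arguments depend on the PRAW version; recent versions require registering a script app on reddit to obtain a client id and secret, and the subreddit name below is just an example):

import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",  # placeholder credentials
    user_agent="data collection script by /u/yourname",
)

# Fetch the 25 hottest submissions from a subreddit and print score and title.
for submission in reddit.subreddit("datasets").hot(limit=25):
    print(submission.score, submission.title)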



Source: http://stackoverflow.com/questions/14322834/obtaining-reddit-data

Saturday, 23 August 2014

Scraping data in dynamic sites

I'm trying to scrape data from our local government. What I want are the addresses of children's adoption offices. Here in Brazil, all adoptions go through the government. I have the URL of one office; there are two or three thousand more, but if I can manage to get one, the others will be easy. I made many attempts; below I show three.

The problem could be related to JavaScript (maybe AJAX) that refreshes the page.

Note: I am not a PHP developer.

First attempt

echo '<html><head></head><body>';
echo '<h1>Scraper PHP GET 1</h1>';

echo ini_get("allow_url_fopen");
echo ini_get("allow_url_fopen");

// I used this url for test
//$url = 'http://www.portaldaadocao.com.br';

//This is the URL that I really want
$url = 'http://www.cnj.jus.br/cna/Controle/ConsultaPublicaBuscaControle.php?transacao=CONSULTA&vara=2673';

$html = file_get_contents($url);
var_dump($html);

echo '</body></html>';

// Output
// 11
// Warning:
file_get_contents(http://www.cnj.jus.br/cna/Controle/ConsultaPublicaBuscaControle.php?
transacao=CONSULTA&vara=2673) [function.file-get-contents]: failed to open stream: HTTP
request failed! HTTP/1.1 404 Not Found in /home/rsl/www/sc01_get.php on line 14
// bool(false)

Second attempt

echo '<html><head></head><body>';
echo '<h1>Scraper PHP CURL 3</h1>';

// I used this url for test
//$url = 'http://www.portaldaadocao.com.br';

//This is the URL that I really want
$url = 'http://www.cnj.jus.br/cna/Controle/ConsultaPublicaBuscaControle.php?transacao=CONSULTA&vara=2673';

$curl = curl_init($url);
@curl_setopt($curl, CURLOPT_POSTFIELDS, "foo");
@curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
@curl_setopt($curl, CURLOPT_CUSTOMREQUEST, "POST");;

$html=@curl_exec($curl);

if (!$html) {
    echo "<br />cURL error number:" .curl_errno($curl);
    echo "<br />cURL error:" . curl_error($curl);
    exit;
}
else{
   echo '<br>begin HTML[';
    echo  $html;
   echo '<br>]end html ';
}
echo '</body></html>';

// Output
// 1

Third attempt

function curl($url){
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER,1);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.6 (KHTML, like Gecko) Chrome/16.0.897.0 Safari/535.6');
    curl_setopt($ch, CURLOPT_HEADER, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
    curl_setopt($ch, CURLOPT_COOKIEJAR, "cookie.txt");
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
    curl_setopt($ch, CURLOPT_REFERER, "http://www.windowsphone.com");

    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

echo '<html><head></head><body>';
echo '<h1>Scraper PHP CURL 5</h1>';

// I used this url for test
//$url = 'http://www.portaldaadocao.com.br';

//This is the URL that I really want
$url = 'http://www.cnj.jus.br/cna/Controle/ConsultaPublicaBuscaControle.php?transacao=CONSULTA&vara=2673';

$curl = curl_init($url);
@curl_setopt($curl, CURLOPT_POSTFIELDS, "foo");
@curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
@curl_setopt($curl, CURLOPT_CUSTOMREQUEST, "POST");;

$html=@curl($curl);


if (!$html) {
    echo "<br />cURL error number:" .curl_errno($curl);
    echo "<br />cURL error:" . curl_error($curl);
    exit;
}
else{
    echo '<br>begin HTML[';
    echo  $html;
    echo '<br>]end html ';
}
echo '</body></html>';

// Output
// cURL error number:0
// cURL error:

If the pages are really AJAX-based, meaning the information you need to scrape is loaded or shown through JavaScript execution, you will need another approach: automating a real browser. You can go the Selenium route, which can be driven from a number of languages, or use CasperJS with JavaScript as the programming language.
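A minimal Selenium sketch of that approach in Python (assuming a WebDriver such as Firefox/geckodriver is installed; once the browser has rendered the page, its JavaScript-generated content is available in the page source):

from selenium import webdriver

driver = webdriver.Firefox()  # any installed WebDriver will do
driver.get("http://www.cnj.jus.br/cna/Controle/ConsultaPublicaBuscaControle.php"
           "?transacao=CONSULTA&vara=2673")

# The browser has executed the page's JavaScript, so the rendered HTML
# (including anything loaded via AJAX) can now be scraped:
html = driver.page_source
print(html)

driver.quit()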



Source: http://stackoverflow.com/questions/24611046/scraping-data-in-dynamic-sites

Friday, 22 August 2014

What is the right way of storing screen-scraping data?

I'm working on a web site. It scrapes product details (names, features, prices, etc.) from various web sites, then processes and displays them. I'm considering running an update script each day to keep the data fresh.

    scrape data
    process it
    store it in a database
    read it (from the db) and display it

I'm already storing all the data in an SQL schema, but I'm not sure about this approach. After each update, all the old records vanish, so if the newly scraped data comes in corrupted somehow, there is nothing to show.

So, is there any common way to archive the old data? Which is more convenient: separate SQL schemas or XML files? Or something else?

Source: http://stackoverflow.com/questions/13686474/what-is-the-right-way-of-storing-screen-scraping-data

Scraping dynamic data

I am scraping profiles on ask.fm for a research question. The problem is that only the most recent questions are viewable, and I have to click "View more" to see the next 15.

The source code for clicking view more looks like this:

<input class="submit-button-more submit-button-more-active" name="commit" onclick="return Forms.More.allowSubmit(this)" type="submit" value="View more" />

What is an easy way of clicking this 4 times before scraping the page? I want the most recent 60 posts on the site. Python is preferable.

You could probably use Selenium to browse to the website and click on the button/link a few times (see the sketch after the links below). You can get it here:

    https://pypi.python.org/pypi/selenium

Or you might be able to do it with mechanize:

    http://wwwsearch.sourceforge.net/mechanize/

I have also heard good things about twill, but never used it myself:

    http://twill.idyll.org/
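A rough Selenium sketch of that idea (the profile URL is a placeholder, and the locator is guessed from the class name in the snippet above, so it may need adjusting):

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://ask.fm/some_profile")  # placeholder profile URL

# Click "View more" four times, pausing so each new batch of posts can load.
for _ in range(4):
    button = driver.find_element(By.CSS_SELECTOR, "input.submit-button-more")
    button.click()
    time.sleep(2)  # crude wait; an explicit WebDriverWait would be more robust

html = driver.page_source  # should now contain roughly the 60 most recent posts
driver.quit()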



Source: http://stackoverflow.com/questions/19437782/scraping-dynamic-data

Monday, 18 August 2014

Scraping data from National weather services

Hi, I want to get the latest weather data from the link below, meaning the very first row of data. I have been trying to scrape it but have not been successful. Can anyone please help, or, if you know a better way, point me in that direction? Thanks!

This is what I have:

$data = file_get_contents('http://w1.weather.gov/obhistory/KIKV.html');
$regex = '/<td>(.+?) </td>/';
preg_match($regex, $data, $match);
var_dump($match);
echo $match[1];

I get an error on the '/<td>(.+?) </td>/' line saying "Unknown modifier 't'". Any suggestions?

1 Answer

Although parsing HTML with a regex is usually a very bad idea, the error you are receiving is caused by the / character. You are using that character as the delimiter for your pattern, so you need to escape it inside the pattern or use a different delimiter:

$regex = '/<td>(.+?) <\/td>/';

or
$regex = '#<td>(.+?) </td>#';

Source: http://stackoverflow.com/questions/11943420/scraping-data-from-national-weather-services

Tuesday, 12 August 2014

Digging Up Dollars With Data Mining - An Executive's Guide

Introduction

Traditionally, organizations use data tactically - to manage operations. For a competitive edge, strong organizations use data strategically - to expand the business, to improve profitability, to reduce costs, and to market more effectively. Data mining (DM) creates information assets that an organization can leverage to achieve these strategic objectives.

In this article, we address some of the key questions executives have about data mining. These include:
  •     What is data mining?
  •     What can it do for my organization?
  •     How can my organization get started?

Business Definition of Data Mining

Data mining is a new component in an enterprise's decision support system (DSS) architecture. It complements and interlocks with other DSS capabilities such as query and reporting, on-line analytical processing (OLAP), data visualization, and traditional statistical analysis. These other DSS technologies are generally retrospective. They provide reports, tables, and graphs of what happened in the past. A user who knows what she's looking for can answer specific questions like: "How many new accounts were opened in the Midwest region last quarter," "Which stores had the largest change in revenues compared to the same month last year," or "Did we meet our goal of a ten-percent increase in holiday sales?"

We define data mining as "the data-driven discovery and modeling of hidden patterns in large volumes of data." Data mining differs from the retrospective technologies above because it produces models - models that capture and represent the hidden patterns in the data. With it, a user can discover patterns and build models automatically, without knowing exactly what she's looking for. The models are both descriptive and prospective. They address why things happened and what is likely to happen next. A user can pose "what-if" questions to a data-mining model that can not be queried directly from the database or warehouse. Examples include: "What is the expected lifetime value of every customer account," "Which customers are likely to open a money market account," or "Will this customer cancel our service if we introduce fees?"

The information technologies associated with DM are neural networks, genetic algorithms, fuzzy logic, and rule induction. It is outside the scope of this article to elaborate on all of these technologies. Instead, we will focus on business needs and how data mining solutions for these needs can translate into dollars.

Mapping Business Needs to Solutions and Profits

What can data mining do for your organization? In the introduction, we described several strategic opportunities for an organization to use data for advantage: business expansion, profitability, cost reduction, and sales and marketing. Let's consider these opportunities very concretely through several examples where companies successfully applied DM.

Expanding your business: Keystone Financial of Williamsport, PA, wanted to expand their customer base and attract new accounts through a LoanCheck offer. To initiate a loan, a recipient just had to go to a Keystone branch and cash the LoanCheck. Keystone introduced the $5000 LoanCheck by mailing a promotion to existing customers.

The Keystone database tracks about 300 characteristics for each customer. These characteristics include whether the person had already opened loans in the past two years, the number of active credit cards, the balance levels on those cards, and finally whether or not they responded to the $5000 LoanCheck offer. Keystone used data mining to sift through the 300 customer characteristics, find the most significant ones, and build a model of response to the LoanCheck offer. Then, they applied the model to a list of 400,000 prospects obtained from a credit bureau.

By selectively mailing to the best-rated prospects determined by the DM model, Keystone generated $1.6M in additional net income from 12,000 new customers.

Reducing costs: Empire Blue Cross/Blue Shield is New York State's largest health insurer. To compete with other healthcare companies, Empire must provide quality service and minimize costs. Attacking costs in the form of fraud and abuse is a cornerstone of Empire's strategy, and it requires considerable investigative skill as well as sophisticated information technology.

The latter includes a data mining application that profiles each physician in the Empire network based on patient claim records in their database. From the profile, the application detects subtle deviations in physician behavior relative to her/his peer group. These deviations are reported to fraud investigators as a "suspicion index." A physician who performs a high number of procedures per visit, charges 40% more per patient, or sees many patients on the weekend would be flagged immediately from the suspicion index score.

What has this DM effort returned to Empire? In the first three years, they realized fraud-and-abuse savings of $29M, $36M, and $39M respectively.

Improving sales effectiveness and profitability: Pharmaceutical sales representatives have a broad assortment of tools for promoting products to physicians. These tools include clinical literature, product samples, dinner meetings, teleconferences, golf outings, and more. Knowing which promotions will be most effective with which doctors is extremely valuable since wrong decisions can cost the company hundreds of dollars for the sales call and even more in lost revenue.

The reps for a large pharmaceutical company collectively make tens of thousands of sales calls. One drug maker linked six months of promotional activity with corresponding sales figures in a database, which they then used to build a predictive model for each doctor. The data-mining models revealed, for instance, that among six different promotional alternatives, only two had a significant impact on the prescribing behavior of physicians. Using all the knowledge embedded in the data-mining models, the promotional mix for each doctor was customized to maximize ROI.

Although this new program was rolled out just recently, early responses indicate that the drug maker will exceed the $1.4M sales increase originally projected. Given that this increase is generated with no new promotional spending, profits are expected to increase by a similar amount.

Looking back at this set of examples, we must ask, "Why was data mining necessary?" For Keystone, response to the loan offer did not exist in the new credit bureau database of 400,000 potential customers. The model predicted the response given the other available customer characteristics. For Empire, the suspicion index quantified the differences between physician practices and peer (model) behavior. Appropriate physician behavior was a multi-variable aggregate produced by data mining - once again, not available in the database. For the drug maker, the promotion and sales databases contained the historical record of activity. An automated data mining method was necessary to model each doctor and determine the best combination of promotions to increase future sales.

Getting Started

In each case presented above, data mining yielded significant benefits to the business. Some were top-line results that increased revenues or expanded the customer base. Others were bottom-line improvements resulting from cost-savings and enhanced productivity. The natural next question is, "How can my organization get started and begin to realize the competitive advantages of DM?"

In our experience, pilot projects are the most successful vehicles for introducing data mining. A pilot project is a short, well-planned effort to bring DM into an organization. Good pilot projects focus on one very specific business need, and they involve business users up front and throughout the project. The duration of a typical pilot project is one to three months, and it generally requires 4 to 10 people part-time.

The role of the executive in such pilot projects is two-pronged. At the outset, the executive participates in setting the strategic goals and objectives for the project. During the project and prior to roll out, the executive takes part by supervising the measurement and evaluation of results. Lack of executive sponsorship and failure to involve business users are two primary reasons DM initiatives stall or fall short.

In reading this article, perhaps you've developed a vision and want to proceed - to address a pressing business problem by sponsoring a data mining pilot project. Twisting the old adage, we say "just because you should doesn't mean you can." Be aware that a capability assessment needs to be an integral component of a DM pilot project. The assessment takes a critical look at data and data access, personnel and their skills, equipment, and software. Organizations typically underestimate the impact of data mining (and information technology in general) on their people, their processes, and their corporate culture. The pilot project provides a relatively high-reward, low-cost, and low-risk opportunity to quantify the potential impact of DM.

Another stumbling block for an organization is deciding to defer any data mining activity until a data warehouse is built. Our experience indicates that, oftentimes, DM could and should come first. The purpose of the data warehouse is to provide users the opportunity to study customer and market behavior both retrospectively and prospectively. A data mining pilot project can provide important insight into the fields and aggregates that need to be designed into the warehouse to make it really valuable. Further, the cost savings or revenue generation provided by DM can provide bootstrap funding for a data warehouse or related initiatives.

Source: http://ezinearticles.com/?Digging-Up-Dollars-With-Data-Mining---An-Executives-Guide&id=6052872

Saturday, 2 August 2014

Automated SEO Tools Can Keep You Out of the SERPs

Every day, thousands of newcomers enter the world of internet marketing. They join all of the popular forums, follow popular advice, and purchase a bunch of tools to automate the process. While all three of these things are full of potential problems and pitfalls, it is the automated tools that have the potential to cause the greatest harm.

Why do People Buy Automated SEO Tools?

Automated SEO tools make some very grand promises. First, they promise to eliminate all of the hard work and effort that is needed to succeed with making money online. Next, they promise to give you an "edge" over your competitors. Finally, they promise to do it all for less than outsourcing.

But most of these tools don't work properly, meaning that any money you spent is money wasted. If you can't use it, you can't get any sort of return on your investment.

The few that do work properly, though, typically use techniques that are frowned upon by the search engines. They violate the terms of service, as well as commonly accepted web etiquette.

For instance, Scrape Box is a tool that is designed to find blogs on which you can comment and generate back links. It's a valuable tool if you're using it to automate the finding of relevant blogs. After that, though, it can only get you in trouble.

Just like most of these tools, Scrape Box is going to guide you through the process of creating a generic comment template. It will then assist you with "spinning" that comment, so that you can have hundreds (or even thousands) of "unique" versions.

What ends up happening, though, is that you end up with comments that resemble gibberish more than anything else. The tool then pushes these comments to blogs with links back to your website. And that's when everything goes downhill.

Search Engine Algorithms vs Automated SEO Tools

The most recent batch of algorithm updates, like Panda and Penguin, are designed to help Google better identify spam that is used for nothing other than search engine optimization. When you use Scrape Box to build and publish your content, you're spamming the web.

When Google's spiders crawl the content on these blogs and identify them as spam, they'll also note the fact that a link was sent back to your site. If you have too many of these, it will raise a red flag and your site may end up deindexed.

If you're lucky, you won't be penalized that harshly. Instead, all of your links will end up devalued. That isn't much better, though, because then the original cash investment in the tool, as well as all of the time you spent setting everything up, will be for nothing.

All of the popular SEO automation tools focus on spamming the web to build back links. This includes:

    Tools designed to automate blog comments

    Tools designed to automate article directory submissions

    Tools designed to submit your site to social bookmarking directories

    Tools designed to spin your content for uniqueness

The end results are disappointing, at best. You spend money on a tool that pushes out thousands of back links that never really help your SEO campaign. Ultimately, your site develops a back link profile that is so damaged that no amount of hard, legitimate work can ever overcome it.

Slow and Steady Wins the Race

The allure and appeal of these tools is simple to understand: they eliminate all of the hard work that would go into building a real back link profile. The problem, though, is just as easy to see.

If you were to perform these tasks manually, it would take a long time. But the quality of the back links you receive will far outweigh anything the automated tools could do for you.

Sure, you might not be able to manually generate a thousand back links in one day. But Google knows that, and when you do so you immediately set off a red flag. That's not how you build a solid, long lasting ranking.

By handling each step in your SEO campaign manually you will build a more natural and diverse back link footprint over time. Your rankings won't come quickly, but they'll stick for a very long time.

Source: http://ezinearticles.com/?Automated-SEO-Tools-Can-Keep-You-Out-of-the-SERPs&id=7427357