Package | Description |
---|---|
org.apache.nutch.scoring | |
org.apache.nutch.scoring.link | |
org.apache.nutch.scoring.opic | |
org.apache.nutch.scoring.tld | Top Level Domain scoring plugin. |
org.apache.nutch.scoring.urlmeta | URL meta tag scoring plugin. |
Modifier and Type | Method and Description |
---|---|
CrawlDatum | ScoringFilter.distributeScoreToOutlinks(org.apache.hadoop.io.Text fromUrl, ParseData parseData, Collection<Map.Entry<org.apache.hadoop.io.Text,CrawlDatum>> targets, CrawlDatum adjust, int allCount) Distributes the score of the current page across all of its outlinked pages. |
CrawlDatum | AbstractScoringFilter.distributeScoreToOutlinks(org.apache.hadoop.io.Text fromUrl, ParseData parseData, Collection<Map.Entry<org.apache.hadoop.io.Text,CrawlDatum>> targets, CrawlDatum adjust, int allCount) |
CrawlDatum | ScoringFilters.distributeScoreToOutlinks(org.apache.hadoop.io.Text fromUrl, ParseData parseData, Collection<Map.Entry<org.apache.hadoop.io.Text,CrawlDatum>> targets, CrawlDatum adjust, int allCount) |
float | ScoringFilter.generatorSortValue(org.apache.hadoop.io.Text url, CrawlDatum datum, float initSort) Prepares a sort value used to sort and select the top N scoring pages during fetchlist generation. |
float | AbstractScoringFilter.generatorSortValue(org.apache.hadoop.io.Text url, CrawlDatum datum, float initSort) |
float | ScoringFilters.generatorSortValue(org.apache.hadoop.io.Text url, CrawlDatum datum, float initSort) Calculates a sort value for Generate. |
float | ScoringFilter.indexerScore(org.apache.hadoop.io.Text url, NutchDocument doc, CrawlDatum dbDatum, CrawlDatum fetchDatum, Parse parse, Inlinks inlinks, float initScore) Calculates a Lucene document boost. |
float | AbstractScoringFilter.indexerScore(org.apache.hadoop.io.Text url, NutchDocument doc, CrawlDatum dbDatum, CrawlDatum fetchDatum, Parse parse, Inlinks inlinks, float initScore) |
float | ScoringFilters.indexerScore(org.apache.hadoop.io.Text url, NutchDocument doc, CrawlDatum dbDatum, CrawlDatum fetchDatum, Parse parse, Inlinks inlinks, float initScore) |
void | ScoringFilter.initialScore(org.apache.hadoop.io.Text url, CrawlDatum datum) Sets an initial score for newly discovered pages. |
void | AbstractScoringFilter.initialScore(org.apache.hadoop.io.Text url, CrawlDatum datum) |
void | ScoringFilters.initialScore(org.apache.hadoop.io.Text url, CrawlDatum datum) Calculates a new initial score, used when adding newly discovered pages. |
void | ScoringFilter.injectedScore(org.apache.hadoop.io.Text url, CrawlDatum datum) Sets an initial score for newly injected pages. |
void | AbstractScoringFilter.injectedScore(org.apache.hadoop.io.Text url, CrawlDatum datum) |
void | ScoringFilters.injectedScore(org.apache.hadoop.io.Text url, CrawlDatum datum) Calculates a new initial score, used when injecting new pages. |
void | ScoringFilter.passScoreAfterParsing(org.apache.hadoop.io.Text url, Content content, Parse parse) Currently, part of the score distribution is performed using only data produced by the parsing process. |
void | AbstractScoringFilter.passScoreAfterParsing(org.apache.hadoop.io.Text url, Content content, Parse parse) |
void | ScoringFilters.passScoreAfterParsing(org.apache.hadoop.io.Text url, Content content, Parse parse) |
void | ScoringFilter.passScoreBeforeParsing(org.apache.hadoop.io.Text url, CrawlDatum datum, Content content) Takes all relevant score information from the current datum (coming from a generated fetchlist) and stores it in the Content metadata. |
void | AbstractScoringFilter.passScoreBeforeParsing(org.apache.hadoop.io.Text url, CrawlDatum datum, Content content) |
void | ScoringFilters.passScoreBeforeParsing(org.apache.hadoop.io.Text url, CrawlDatum datum, Content content) |
void | ScoringFilter.updateDbScore(org.apache.hadoop.io.Text url, CrawlDatum old, CrawlDatum datum, List<CrawlDatum> inlinked) Calculates a new CrawlDatum score during a CrawlDb update, based on the value of the original CrawlDatum and the score values contributed by inlinked pages. |
void | AbstractScoringFilter.updateDbScore(org.apache.hadoop.io.Text url, CrawlDatum old, CrawlDatum datum, List<CrawlDatum> inlinked) |
void | ScoringFilters.updateDbScore(org.apache.hadoop.io.Text url, CrawlDatum old, CrawlDatum datum, List<CrawlDatum> inlinked) Calculates an updated page score during CrawlDb.update(). |
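The interface above follows a page's score through the crawl cycle: injectedScore and initialScore seed a value, generatorSortValue ranks pages for fetchlist generation, and updateDbScore folds in inlink contributions during a CrawlDb update. The following is a dependency-free sketch of that flow; the class and static methods are illustrative stand-ins for a real ScoringFilter implementation, and the arithmetic mirrors OPIC-style defaults rather than any particular filter.

```java
import java.util.List;

// Hypothetical sketch of the ScoringFilter score lifecycle (not Nutch code).
public class ScoringLifecycleSketch {

    // injectedScore: seed pages start from a configured injection score.
    public static float injectedScore(float configuredInjectScore) {
        return configuredInjectScore;
    }

    // initialScore: newly discovered pages start at 0.0f;
    // inlink contributions raise the score later.
    public static float initialScore() {
        return 0.0f;
    }

    // generatorSortValue: combine the page score with the initial sort
    // value to rank pages for the fetchlist (OPIC multiplies the two).
    public static float generatorSortValue(float pageScore, float initSort) {
        return pageScore * initSort;
    }

    // updateDbScore: add the contributions of inlinked pages to the old score.
    public static float updateDbScore(float oldScore, List<Float> inlinkScores) {
        float sum = oldScore;
        for (float s : inlinkScores) sum += s;
        return sum;
    }
}
```

A newly discovered page therefore sorts at 0.0f until at least one inlink contribution arrives via updateDbScore.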
Modifier and Type | Method and Description |
---|---|
CrawlDatum | LinkAnalysisScoringFilter.distributeScoreToOutlinks(org.apache.hadoop.io.Text fromUrl, ParseData parseData, Collection<Map.Entry<org.apache.hadoop.io.Text,CrawlDatum>> targets, CrawlDatum adjust, int allCount) |
float | LinkAnalysisScoringFilter.generatorSortValue(org.apache.hadoop.io.Text url, CrawlDatum datum, float initSort) |
float | LinkAnalysisScoringFilter.indexerScore(org.apache.hadoop.io.Text url, NutchDocument doc, CrawlDatum dbDatum, CrawlDatum fetchDatum, Parse parse, Inlinks inlinks, float initScore) |
void | LinkAnalysisScoringFilter.initialScore(org.apache.hadoop.io.Text url, CrawlDatum datum) |
void | LinkAnalysisScoringFilter.injectedScore(org.apache.hadoop.io.Text url, CrawlDatum datum) |
void | LinkAnalysisScoringFilter.passScoreAfterParsing(org.apache.hadoop.io.Text url, Content content, Parse parse) |
void | LinkAnalysisScoringFilter.passScoreBeforeParsing(org.apache.hadoop.io.Text url, CrawlDatum datum, Content content) |
void | LinkAnalysisScoringFilter.updateDbScore(org.apache.hadoop.io.Text url, CrawlDatum old, CrawlDatum datum, List<CrawlDatum> inlinked) |
Modifier and Type | Method and Description |
---|---|
CrawlDatum | OPICScoringFilter.distributeScoreToOutlinks(org.apache.hadoop.io.Text fromUrl, ParseData parseData, Collection<Map.Entry<org.apache.hadoop.io.Text,CrawlDatum>> targets, CrawlDatum adjust, int allCount) Gets a float value from Fetcher.SCORE_KEY, divides it by the number of outlinks, and applies the result. |
float | OPICScoringFilter.generatorSortValue(org.apache.hadoop.io.Text url, CrawlDatum datum, float initSort) |
float | OPICScoringFilter.indexerScore(org.apache.hadoop.io.Text url, NutchDocument doc, CrawlDatum dbDatum, CrawlDatum fetchDatum, Parse parse, Inlinks inlinks, float initScore) Dampens the boost value by scorePower. |
void | OPICScoringFilter.initialScore(org.apache.hadoop.io.Text url, CrawlDatum datum) Sets the score to 0.0f (unknown value); inlink contributions will bring it to the correct level. |
void | OPICScoringFilter.injectedScore(org.apache.hadoop.io.Text url, CrawlDatum datum) |
void | OPICScoringFilter.updateDbScore(org.apache.hadoop.io.Text url, CrawlDatum old, CrawlDatum datum, List<CrawlDatum> inlinked) Increases the score by the sum of inlinked scores. |
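As the descriptions above indicate, OPIC treats a page's score as "cash" to hand out: distributeScoreToOutlinks divides the page's score evenly among its outlinks, and updateDbScore lets the target page accumulate every share it receives. A self-contained sketch of that arithmetic follows; the class and method names are illustrative, not the actual OPICScoringFilter code.

```java
// Hypothetical sketch of OPIC score distribution (not Nutch code).
public class OpicDistributionSketch {

    // distributeScoreToOutlinks counterpart: each outlink receives
    // the page score divided by the total outlink count.
    public static float outlinkShare(float pageScore, int allCount) {
        if (allCount == 0) return 0.0f; // no outlinks, nothing to distribute
        return pageScore / allCount;
    }

    // updateDbScore counterpart: the target page's score increases
    // by the sum of the shares contributed by its inlinks.
    public static float accumulate(float currentScore, float[] inlinkShares) {
        float score = currentScore;
        for (float s : inlinkShares) score += s;
        return score;
    }
}
```

For example, a page with score 1.0f and four outlinks contributes 0.25f to each target, which matches how inlink contributions bring a freshly initialized 0.0f score "to the correct level".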
Modifier and Type | Method and Description |
---|---|
CrawlDatum | TLDScoringFilter.distributeScoreToOutlink(org.apache.hadoop.io.Text fromUrl, org.apache.hadoop.io.Text toUrl, ParseData parseData, CrawlDatum target, CrawlDatum adjust, int allCount, int validCount) |
CrawlDatum | TLDScoringFilter.distributeScoreToOutlinks(org.apache.hadoop.io.Text fromUrl, ParseData parseData, Collection<Map.Entry<org.apache.hadoop.io.Text,CrawlDatum>> targets, CrawlDatum adjust, int allCount) |
float | TLDScoringFilter.generatorSortValue(org.apache.hadoop.io.Text url, CrawlDatum datum, float initSort) |
float | TLDScoringFilter.indexerScore(org.apache.hadoop.io.Text url, NutchDocument doc, CrawlDatum dbDatum, CrawlDatum fetchDatum, Parse parse, Inlinks inlinks, float initScore) |
void | TLDScoringFilter.initialScore(org.apache.hadoop.io.Text url, CrawlDatum datum) |
void | TLDScoringFilter.injectedScore(org.apache.hadoop.io.Text url, CrawlDatum datum) |
void | TLDScoringFilter.passScoreAfterParsing(org.apache.hadoop.io.Text url, Content content, Parse parse) |
void | TLDScoringFilter.passScoreBeforeParsing(org.apache.hadoop.io.Text url, CrawlDatum datum, Content content) |
void | TLDScoringFilter.updateDbScore(org.apache.hadoop.io.Text url, CrawlDatum old, CrawlDatum datum, List<CrawlDatum> inlinked) |
Modifier and Type | Method and Description |
---|---|
CrawlDatum | URLMetaScoringFilter.distributeScoreToOutlinks(org.apache.hadoop.io.Text fromUrl, ParseData parseData, Collection<Map.Entry<org.apache.hadoop.io.Text,CrawlDatum>> targets, CrawlDatum adjust, int allCount) Takes the meta tags listed in the "urlmeta.tags" property and looks for them inside the parseData object. |
float | URLMetaScoringFilter.generatorSortValue(org.apache.hadoop.io.Text url, CrawlDatum datum, float initSort) Boilerplate. |
float | URLMetaScoringFilter.indexerScore(org.apache.hadoop.io.Text url, NutchDocument doc, CrawlDatum dbDatum, CrawlDatum fetchDatum, Parse parse, Inlinks inlinks, float initScore) Boilerplate. |
void | URLMetaScoringFilter.initialScore(org.apache.hadoop.io.Text url, CrawlDatum datum) Boilerplate. |
void | URLMetaScoringFilter.injectedScore(org.apache.hadoop.io.Text url, CrawlDatum datum) Boilerplate. |
void | URLMetaScoringFilter.updateDbScore(org.apache.hadoop.io.Text url, CrawlDatum old, CrawlDatum datum, List<CrawlDatum> inlinked) Boilerplate. |
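The urlmeta filter's distributeScoreToOutlinks, as described above, looks up the tags named in the "urlmeta.tags" property in the parent page's parse data and carries their values over to each outlink. A minimal sketch of that propagation follows, with plain maps standing in for Nutch's metadata containers; all names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of urlmeta tag propagation (not Nutch code).
public class UrlMetaSketch {

    // Copy only the configured tag names from the parent page's
    // metadata onto a target outlink's metadata.
    public static Map<String, String> propagate(String[] urlMetaTags,
                                                Map<String, String> parentMeta) {
        Map<String, String> targetMeta = new HashMap<>();
        for (String tag : urlMetaTags) {
            String value = parentMeta.get(tag);
            if (value != null) targetMeta.put(tag, value); // absent tags are skipped
        }
        return targetMeta;
    }
}
```

Tags not listed in "urlmeta.tags" are never copied, which is why the remaining methods of this filter can stay boilerplate: the plugin's only job is carrying the configured metadata forward.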
Copyright © 2014 The Apache Software Foundation