public class ReduceWork extends BaseWork
Fields inherited from class org.apache.hadoop.hive.ql.plan.BaseWork:
vectorColumnNameMap, vectorColumnTypeMap, vectorScratchColumnTypeMap

Fields inherited from class org.apache.hadoop.hive.ql.plan.AbstractOperatorDesc:
opProps, opTraits, statistics, vectorMode
| Constructor and Description |
|---|
| ReduceWork() |
| ReduceWork(String name) |
| Modifier and Type | Method and Description |
|---|---|
| void | configureJobConf(org.apache.hadoop.mapred.JobConf job) |
| Set<Operator<?>> | getAllRootOperators() |
| TableDesc | getKeyDesc() |
| ObjectInspector | getKeyObjectInspector() |
| int | getMaxReduceTasks() |
| int | getMinReduceTasks() |
| boolean | getNeedsTagging() |
| Integer | getNumReduceTasks(). If the number of reducers is -1, the runtime will automatically determine it from the input data size. |
| Operator<?> | getReducer() |
| Map<Integer,String> | getTagToInput() |
| List<TableDesc> | getTagToValueDesc() |
| ObjectInspector | getValueObjectInspector() |
| String | getVectorModeOn() |
| boolean | isAutoReduceParallelism() |
| void | replaceRoots(Map<Operator<?>,Operator<?>> replacementMap) |
| void | setAutoReduceParallelism(boolean isAutoReduceParallelism) |
| void | setKeyDesc(TableDesc keyDesc). If the plan has a reducer (and correspondingly a reduce-sink), stores the TableDesc pointing to the keySerializeInfo of the ReduceSink. |
| void | setMaxReduceTasks(int maxReduceTasks) |
| void | setMinReduceTasks(int minReduceTasks) |
| void | setNeedsTagging(boolean needsTagging) |
| void | setNumReduceTasks(Integer numReduceTasks) |
| void | setReducer(Operator<?> reducer) |
| void | setTagToInput(Map<Integer,String> tagToInput) |
| void | setTagToValueDesc(List<TableDesc> tagToValueDesc) |
Methods inherited from class org.apache.hadoop.hive.ql.plan.BaseWork:
addDummyOp, addSortCols, getAllLeafOperators, getAllOperators, getDummyOps, getMapRedLocalWork, getName, getSortCols, getTag, getVectorColumnNameMap, getVectorColumnTypeMap, getVectorScratchColumnTypeMap, isGatheringStats, setDummyOps, setGatheringStats, setMapRedLocalWork, setName, setTag, setVectorColumnNameMap, setVectorColumnTypeMap, setVectorScratchColumnTypeMap
Methods inherited from class org.apache.hadoop.hive.ql.plan.AbstractOperatorDesc:
clone, getOpProps, getStatistics, getTraits, getVectorMode, setOpProps, setStatistics, setTraits, setVectorMode
public ReduceWork()
public ReduceWork(String name)
public void setKeyDesc(TableDesc keyDesc)
If the plan has a reducer (and correspondingly a reduce-sink), stores the TableDesc pointing to the keySerializeInfo of the ReduceSink.
Parameters: keyDesc
public TableDesc getKeyDesc()
public ObjectInspector getKeyObjectInspector()
public ObjectInspector getValueObjectInspector()
public String getVectorModeOn()
public Operator<?> getReducer()
public void setReducer(Operator<?> reducer)
public boolean getNeedsTagging()
public void setNeedsTagging(boolean needsTagging)
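When tagging is needed, a reduce stage that reads several inputs (for example, a join) receives rows stamped with a small integer tag identifying their source; getTagToInput() and getTagToValueDesc() resolve that tag back to an input name and a value schema. A minimal plain-Java sketch of this lookup (the describe helper and the String stand-ins for TableDesc are illustrative, not part of the Hive API):

```java
import java.util.List;
import java.util.Map;

public class TaggingExample {
    // Resolves a row's tag to its source input and value schema, mirroring
    // the pairing of getTagToInput() and getTagToValueDesc() on ReduceWork.
    // Strings stand in here for Hive's TableDesc value descriptors.
    static String describe(Map<Integer, String> tagToInput,
                           List<String> tagToValueDesc, int tag) {
        return tagToInput.get(tag) + " -> " + tagToValueDesc.get(tag);
    }

    public static void main(String[] args) {
        // Two inputs feeding one reduce stage, keyed by tag 0 and tag 1.
        Map<Integer, String> tagToInput = Map.of(0, "Map 1", 1, "Map 2");
        List<String> tagToValueDesc = List.of("value schema of Map 1",
                                              "value schema of Map 2");
        System.out.println(describe(tagToInput, tagToValueDesc, 1));
    }
}
```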
public void replaceRoots(Map<Operator<?>,Operator<?>> replacementMap)
Specified by: replaceRoots in class BaseWork
public Set<Operator<?>> getAllRootOperators()
Specified by: getAllRootOperators in class BaseWork
public Integer getNumReduceTasks()
If the number of reducers is -1, the runtime will automatically determine it from the input data size.
public void setNumReduceTasks(Integer numReduceTasks)
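The -1 sentinel described for getNumReduceTasks() can be sketched in plain Java as follows. The resolveReducers helper and its bytes-per-reducer heuristic are illustrative assumptions, not Hive's actual estimator:

```java
public class ReducerCountExample {
    // Mirrors the getNumReduceTasks() contract: a value of -1 asks the
    // runtime to size the reduce stage from the input data volume.
    static int resolveReducers(Integer configured, long inputBytes,
                               long bytesPerReducer) {
        if (configured != null && configured != -1) {
            return configured;  // an explicit setting wins
        }
        // auto mode: roughly one reducer per bytesPerReducer of input,
        // never fewer than one (hypothetical heuristic)
        return (int) Math.max(1L,
                (inputBytes + bytesPerReducer - 1) / bytesPerReducer);
    }

    public static void main(String[] args) {
        System.out.println(resolveReducers(4, 0L, 0L));               // explicit
        System.out.println(resolveReducers(-1, 10L << 30, 1L << 28)); // auto
    }
}
```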
public void configureJobConf(org.apache.hadoop.mapred.JobConf job)
Specified by: configureJobConf in class BaseWork
public void setAutoReduceParallelism(boolean isAutoReduceParallelism)
public boolean isAutoReduceParallelism()
public void setMinReduceTasks(int minReduceTasks)
public int getMinReduceTasks()
public int getMaxReduceTasks()
public void setMaxReduceTasks(int maxReduceTasks)
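When auto reduce parallelism is enabled, a runtime estimate is typically kept within the bounds carried by setMinReduceTasks and setMaxReduceTasks. A minimal sketch of that clamping (clampReducers is a hypothetical helper, not a Hive method):

```java
public class AutoParallelismExample {
    // Clamps an estimated reducer count to the [min, max] bounds that
    // ReduceWork carries for auto reduce parallelism.
    static int clampReducers(int estimated, int min, int max) {
        return Math.max(min, Math.min(max, estimated));
    }

    public static void main(String[] args) {
        System.out.println(clampReducers(500, 2, 100)); // capped at the max
        System.out.println(clampReducers(1, 2, 100));   // raised to the min
        System.out.println(clampReducers(50, 2, 100));  // already in bounds
    }
}
```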
Copyright © 2017 The Apache Software Foundation. All rights reserved.