Package | Description |
---|---|
org.apache.hadoop.hdfs.server.blockmanagement | |
Modifier and Type | Field and Description |
---|---|
static DatanodeStorageInfo[] | DatanodeStorageInfo.EMPTY_ARRAY |
Modifier and Type | Method and Description |
---|---|
protected DatanodeStorageInfo | BlockPlacementPolicyWithNodeGroup.chooseLocalRack(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType,Integer> storageTypes) |
protected DatanodeStorageInfo | BlockPlacementPolicyWithNodeGroup.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType,Integer> storageTypes, boolean fallbackToNodeGroupAndLocalRack) Choose the local node of localMachine as the target. |
DatanodeStorageInfo[] | BlockUnderConstructionFeature.getExpectedStorageLocations() Create an array of the expected replica locations (as assigned by chooseTargets()). |
Modifier and Type | Method and Description |
---|---|
Collection<DatanodeStorageInfo> | BlockPlacementPolicyWithNodeGroup.pickupReplicaSet(Collection<DatanodeStorageInfo> first, Collection<DatanodeStorageInfo> second, Map<String,List<DatanodeStorageInfo>> rackMap) Pick the replica set from which a replica should be deleted when the block is over-replicated. |
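The contract of pickupReplicaSet is to choose which of two candidate sets an over-replicated replica should be deleted from. A minimal self-contained sketch of a common strategy (prefer the set whose racks hold more than one replica, falling back to the other set only when it is empty); the Storage record is a hypothetical stand-in for DatanodeStorageInfo, not part of Hadoop:

```java
import java.util.Collection;
import java.util.List;

public class PickupReplicaSetSketch {
    // Stand-in for DatanodeStorageInfo; only an identifier is needed here.
    record Storage(String id) {}

    // Prefer deleting from 'first' (e.g. racks holding more than one
    // replica); fall back to 'second' only when 'first' is empty.
    static Collection<Storage> pickupReplicaSet(Collection<Storage> first,
                                                Collection<Storage> second) {
        return first.isEmpty() ? second : first;
    }

    public static void main(String[] args) {
        List<Storage> multiReplicaRacks = List.of(new Storage("s1"), new Storage("s2"));
        List<Storage> singleReplicaRacks = List.of(new Storage("s3"));
        // 'first' is non-empty, so it wins.
        System.out.println(pickupReplicaSet(multiReplicaRacks, singleReplicaRacks).size()); // prints "2"
        // 'first' is empty, so we fall back to 'second'.
        System.out.println(pickupReplicaSet(List.of(), singleReplicaRacks).size()); // prints "1"
    }
}
```

Deleting from a rack that already hosts more than one replica preserves the number of distinct racks covering the block, which is why that set is tried first.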
Modifier and Type | Method and Description |
---|---|
static void | DatanodeStorageInfo.decrementBlocksScheduled(DatanodeStorageInfo... storages) Decrement the number of blocks scheduled for each given storage. |
static void | DatanodeStorageInfo.incrementBlocksScheduled(DatanodeStorageInfo... storages) Increment the number of blocks scheduled for each given storage. |
void | BlockUnderConstructionFeature.setExpectedLocations(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets) Set the expected locations. |
static org.apache.hadoop.hdfs.protocol.DatanodeInfo[] | DatanodeStorageInfo.toDatanodeInfos(DatanodeStorageInfo[] storages) |
static String[] | DatanodeStorageInfo.toStorageIDs(DatanodeStorageInfo[] storages) |
static org.apache.hadoop.fs.StorageType[] | DatanodeStorageInfo.toStorageTypes(DatanodeStorageInfo[] storages) |
Modifier and Type | Method and Description |
---|---|
protected void | BlockPlacementPolicyWithNodeGroup.chooseFavouredNodes(String src, int numOfReplicas, List<org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor> favoredNodes, Set<org.apache.hadoop.net.Node> favoriteAndExcludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType,Integer> storageTypes) Choose all good favored nodes as targets. |
protected DatanodeStorageInfo | BlockPlacementPolicyWithNodeGroup.chooseLocalRack(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType,Integer> storageTypes) |
protected DatanodeStorageInfo | BlockPlacementPolicyWithNodeGroup.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType,Integer> storageTypes, boolean fallbackToNodeGroupAndLocalRack) Choose the local node of localMachine as the target. |
protected void | BlockPlacementPolicyWithNodeGroup.chooseRemoteRack(int numOfReplicas, org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxReplicasPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType,Integer> storageTypes) Choose numOfReplicas nodes from the racks that localMachine is NOT on. |
Collection<DatanodeStorageInfo> | BlockPlacementPolicyWithNodeGroup.pickupReplicaSet(Collection<DatanodeStorageInfo> first, Collection<DatanodeStorageInfo> second, Map<String,List<DatanodeStorageInfo>> rackMap) Pick the replica set from which a replica should be deleted when the block is over-replicated. |
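chooseLocalStorage with fallbackToNodeGroupAndLocalRack=true expresses a preference order: try the local node first, then the local node group, then the local rack. The chain can be sketched as a sequence of candidate suppliers tried in order until one yields a target; the helper names below are hypothetical illustrations, not Hadoop's actual implementation:

```java
import java.util.Optional;
import java.util.function.Supplier;

public class PlacementFallbackSketch {
    // Try each placement scope in order; return the first target found.
    @SafeVarargs
    static Optional<String> chooseWithFallback(Supplier<Optional<String>>... scopes) {
        for (Supplier<Optional<String>> scope : scopes) {
            Optional<String> target = scope.get();
            if (target.isPresent()) {
                return target;
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Local node is unavailable; the node group yields a candidate,
        // so the local-rack scope is never consulted.
        Optional<String> chosen = chooseWithFallback(
            Optional::empty,                          // local node
            () -> Optional.of("nodegroup-candidate"), // local node group
            () -> Optional.of("rack-candidate"));     // local rack
        System.out.println(chosen.get()); // prints "nodegroup-candidate"
    }
}
```

Widening the search scope only on failure keeps write traffic as close to the writer as the cluster's current state allows.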
Constructor and Description |
---|
BlockUnderConstructionFeature(org.apache.hadoop.hdfs.protocol.Block blk, HdfsServerConstants.BlockUCState state, DatanodeStorageInfo[] targets) |
Copyright © 2018 Apache Software Foundation. All Rights Reserved.