Type Parameters:
I - Vertex id
V - Vertex data
E - Edge data

public class NettyWorkerClient<I extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,E extends org.apache.hadoop.io.Writable>
extends Object
implements WorkerClient<I,V,E>, ResetSuperstepMetricsObserver

Takes user-facing APIs in WorkerClient and implements them using the available WritableRequest objects.

| Constructor and Description |
|---|
| NettyWorkerClient(org.apache.hadoop.mapreduce.Mapper.Context context, ImmutableClassesGiraphConfiguration<I,V,E> configuration, CentralizedServiceWorker<I,V,E> service, Thread.UncaughtExceptionHandler exceptionHandler) Only constructor. |
| Modifier and Type | Method and Description |
|---|---|
| void | authenticate() Authenticates, as client, with another BSP worker, as server. |
| void | closeConnections() Closes all connections. |
| FlowControl | getFlowControl() |
| CentralizedServiceWorker<I,V,E> | getService() |
| PartitionOwner | getVertexPartitionOwner(I vertexId) Lookup PartitionOwner for a vertex. |
| void | newSuperstep(SuperstepMetricsRegistry metrics) Starting a new superstep. |
| void | openConnections() Make sure that all the connections to workers and master have been established. |
| void | sendWritableRequest(int destTaskId, WritableRequest request) Send a request to a remote server (should be already connected). |
| void | setup(boolean authenticate) Setup the client. |
| void | waitAllRequests() Wait until all the outstanding requests are completed. |
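The summary table above implies a typical call order over a worker client's lifetime: setup, openConnections, sendWritableRequest (possibly many times), waitAllRequests, and finally closeConnections. The sketch below is a hypothetical stand-in, not the real NettyWorkerClient (which needs a Hadoop Mapper.Context, a configuration, and a running Netty server); it only records that call sequence so the ordering contract is explicit.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in illustrating the call order a WorkerClient
// implementation such as NettyWorkerClient expects. Method names mirror
// the summary table above, but nothing here touches Giraph or Netty.
public class WorkerClientLifecycleSketch {
    private final List<String> calls = new ArrayList<>();

    void setup(boolean authenticate) { calls.add("setup"); }
    void openConnections()           { calls.add("openConnections"); }
    void sendWritableRequest(int destTaskId, Object request) { calls.add("sendWritableRequest"); }
    void waitAllRequests()           { calls.add("waitAllRequests"); }
    void closeConnections()          { calls.add("closeConnections"); }

    // Run one round of client traffic and return the recorded call log.
    static List<String> run() {
        WorkerClientLifecycleSketch client = new WorkerClientLifecycleSketch();
        client.setup(false);                          // authenticate only if giraph.authenticate is set
        client.openConnections();                     // connect to all workers and the master
        client.sendWritableRequest(1, new Object());  // fire a request at a destination task
        client.waitAllRequests();                     // barrier: drain outstanding requests
        client.closeConnections();                    // tear down at job end
        return client.calls;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```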
public NettyWorkerClient(org.apache.hadoop.mapreduce.Mapper.Context context, ImmutableClassesGiraphConfiguration<I,V,E> configuration, CentralizedServiceWorker<I,V,E> service, Thread.UncaughtExceptionHandler exceptionHandler)

Only constructor.

Parameters:
context - Context from the mapper
configuration - Configuration
service - Used to get the partition mapping
exceptionHandler - Handler for uncaught exceptions; will terminate the job.

public void newSuperstep(SuperstepMetricsRegistry metrics)

Starting a new superstep.

Specified by:
newSuperstep in interface ResetSuperstepMetricsObserver

Parameters:
metrics - SuperstepMetricsRegistry being used.

public CentralizedServiceWorker<I,V,E> getService()
public void openConnections()

Make sure that all the connections to workers and master have been established.

Specified by:
openConnections in interface WorkerClient<I extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,E extends org.apache.hadoop.io.Writable>
public PartitionOwner getVertexPartitionOwner(I vertexId)

Lookup PartitionOwner for a vertex.

Specified by:
getVertexPartitionOwner in interface WorkerClient<I extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,E extends org.apache.hadoop.io.Writable>

Parameters:
vertexId - Id to look up.

public void sendWritableRequest(int destTaskId, WritableRequest request)

Send a request to a remote server (should be already connected).

Specified by:
sendWritableRequest in interface WorkerClient<I extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,E extends org.apache.hadoop.io.Writable>

Parameters:
destTaskId - Destination worker id
request - Request to send

public void waitAllRequests()

Wait until all the outstanding requests are completed.

Specified by:
waitAllRequests in interface WorkerClient<I extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,E extends org.apache.hadoop.io.Writable>
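getVertexPartitionOwner and sendWritableRequest are naturally used together: first resolve which task owns the destination vertex, then address the request to that task. The sketch below is hypothetical; a plain Map stands in for the partition-owner mapping (which the real client obtains through CentralizedServiceWorker), and sending is reduced to counting. Only the lookup-then-send pattern is taken from this page.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the lookup-then-send pattern behind
// getVertexPartitionOwner() + sendWritableRequest(). A plain Map stands in
// for the partition mapping that CentralizedServiceWorker provides.
public class RoutingSketch {
    private final Map<Long, Integer> ownerTaskByVertex = new HashMap<>();
    private final Map<Integer, Integer> requestsPerTask = new HashMap<>();

    RoutingSketch() {
        // Assumed partition assignment: vertices 0-9 on task 0, 10-19 on task 1.
        for (long v = 0; v < 20; v++) {
            ownerTaskByVertex.put(v, (int) (v / 10));
        }
    }

    // Stand-in for getVertexPartitionOwner(vertexId): resolve the owning task.
    int ownerTask(long vertexId) {
        return ownerTaskByVertex.get(vertexId);
    }

    // Stand-in for sendWritableRequest(destTaskId, request): here we only count.
    void send(int destTaskId) {
        requestsPerTask.merge(destTaskId, 1, Integer::sum);
    }

    // Route one message per vertex and report how many each task received.
    static Map<Integer, Integer> route() {
        RoutingSketch client = new RoutingSketch();
        for (long v = 0; v < 20; v++) {
            client.send(client.ownerTask(v)); // look up the owner, then address it
        }
        return client.requestsPerTask;
    }

    public static void main(String[] args) {
        System.out.println(route());
    }
}
```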
public void closeConnections() throws IOException

Closes all connections.

Specified by:
closeConnections in interface WorkerClient<I extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,E extends org.apache.hadoop.io.Writable>

Throws:
IOException
public void setup(boolean authenticate)

Setup the client.

Specified by:
setup in interface WorkerClient<I extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,E extends org.apache.hadoop.io.Writable>

Parameters:
authenticate - Whether to SASL authenticate with the server or not; set by the giraph.authenticate configuration option.

public void authenticate()

Authenticates, as client, with another BSP worker, as server.

Specified by:
authenticate in interface WorkerClient<I extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,E extends org.apache.hadoop.io.Writable>
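Per the setup(boolean) contract above, the authenticate flag is driven by the giraph.authenticate configuration option. The snippet below is a hypothetical sketch of that gating; a plain Map of configuration strings stands in for ImmutableClassesGiraphConfiguration, and the unset-means-false default is an assumption, not something this page states.

```java
import java.util.Map;

// Hypothetical sketch of gating setup() on the giraph.authenticate option.
// A plain Map of configuration strings stands in for the real
// ImmutableClassesGiraphConfiguration; the false default when the option
// is absent is an assumption for this sketch.
public class SetupSketch {
    static boolean shouldAuthenticate(Map<String, String> conf) {
        return Boolean.parseBoolean(conf.getOrDefault("giraph.authenticate", "false"));
    }

    public static void main(String[] args) {
        // A caller would pass the result straight to setup(boolean authenticate).
        System.out.println(shouldAuthenticate(Map.of("giraph.authenticate", "true")));
    }
}
```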
public FlowControl getFlowControl()
getFlowControl
in interface WorkerClient<I extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,E extends org.apache.hadoop.io.Writable>
Copyright © 2011-2020 The Apache Software Foundation. All Rights Reserved.