@NotThreadSafe public class LongDiffArray extends Object implements org.apache.hadoop.io.Writable
Call the trim() function to compact the in-memory representation after all updates are done. Compacting the object is expensive, so it should only be done once, after a bulk update. Compaction is also triggered by a serialization attempt or by calling iterator().
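The compaction idea can be illustrated with a minimal difference-encoding sketch. This is a plain-Java illustration of the general technique (sorted ids stored as gaps between consecutive values); the class name and methods below are hypothetical and do not reflect LongDiffArray's actual internals, which are not documented here.

```java
import java.util.Arrays;

// Minimal sketch of difference encoding: a sorted list of longs is stored
// as the first value followed by the gaps between consecutive values.
// Illustrative only -- not LongDiffArray's real layout.
public class DiffEncodingSketch {
    // Encode sorted ids as [first, gap1, gap2, ...]
    static long[] encode(long[] sortedIds) {
        long[] diffs = new long[sortedIds.length];
        long prev = 0;
        for (int i = 0; i < sortedIds.length; i++) {
            diffs[i] = sortedIds[i] - prev;
            prev = sortedIds[i];
        }
        return diffs;
    }

    // Decode the gaps back into absolute values
    static long[] decode(long[] diffs) {
        long[] ids = new long[diffs.length];
        long prev = 0;
        for (int i = 0; i < diffs.length; i++) {
            prev += diffs[i];
            ids[i] = prev;
        }
        return ids;
    }

    public static void main(String[] args) {
        long[] ids = {100, 105, 106, 200};
        long[] diffs = encode(ids); // {100, 5, 1, 94}
        System.out.println(Arrays.toString(diffs));
        System.out.println(Arrays.toString(decode(diffs)));
    }
}
```

Because clustered ids produce small gaps, the encoded form compresses well when serialized, which is one reason a compaction step before serialization pays off.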
| Constructor and Description |
|---|
| `LongDiffArray()` |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `add(long id)` Add a value |
| `void` | `initialize()` Initialize the array |
| `void` | `initialize(int capacity)` Initialize with a given capacity |
| `Iterator<org.apache.hadoop.io.LongWritable>` | `iterator()` Returns an iterator that reuses objects. |
| `void` | `readFields(DataInput in)` |
| `void` | `remove(long id)` Remove a given value |
| `void` | `setUseUnsafeSerialization(boolean useUnsafeSerialization)` Set whether to use unsafe serialization |
| `int` | `size()` The number of stored ids |
| `void` | `trim()` Takes all recent updates and stores them efficiently. |
| `void` | `write(DataOutput out)` |
public void setUseUnsafeSerialization(boolean useUnsafeSerialization)

Set whether to use unsafe serialization.

Parameters:
    useUnsafeSerialization - use unsafe serialization

public void initialize(int capacity)

Initialize with a given capacity.

Parameters:
    capacity - capacity

public void initialize()

Initialize the array.

public void add(long id)

Add a value.

Parameters:
    id - id to add

public void remove(long id)

Remove a given value.

Parameters:
    id - id to remove

public int size()

The number of stored ids.

public Iterator<org.apache.hadoop.io.LongWritable> iterator()

Returns an iterator that reuses objects.

public void write(DataOutput out) throws IOException

Specified by:
    write in interface org.apache.hadoop.io.Writable
Throws:
    IOException

public void readFields(DataInput in) throws IOException

Specified by:
    readFields in interface org.apache.hadoop.io.Writable
Throws:
    IOException

public void trim()

This function takes all recent updates and stores them efficiently.
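The `write(DataOutput)` / `readFields(DataInput)` pair follows the standard Hadoop `Writable` round-trip contract: whatever `write` emits, `readFields` must be able to reconstruct. A self-contained sketch of that pattern over a plain `long[]` stand-in (using only `java.io`; the class and method shapes here are illustrative, not LongDiffArray's actual wire format):

```java
import java.io.*;

// Sketch of a Writable-style round trip: length first, then the values.
// Uses a plain long[] stand-in, not the real LongDiffArray internals.
public class WritableRoundTripSketch {
    static void write(long[] ids, DataOutput out) throws IOException {
        out.writeInt(ids.length);
        for (long id : ids) {
            out.writeLong(id);
        }
    }

    static long[] readFields(DataInput in) throws IOException {
        long[] ids = new long[in.readInt()];
        for (int i = 0; i < ids.length; i++) {
            ids[i] = in.readLong();
        }
        return ids;
    }

    public static void main(String[] args) throws IOException {
        long[] ids = {1L, 5L, 42L};
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        write(ids, new DataOutputStream(bytes));
        long[] back = readFields(
                new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(java.util.Arrays.toString(back));
    }
}
```

Note that since serialization triggers compaction in LongDiffArray, the bytes written by `write` reflect the trimmed representation, not the pending updates.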
Copyright © 2011-2020 The Apache Software Foundation. All Rights Reserved.