Interface | Description |
---|---|
ParquetTest | A helper trait that provides convenient facilities for Parquet testing. |
Class | Description |
---|---|
AppendingParquetOutputFormat | TODO: this will be able to append to directories it created itself, but not necessarily to imported ones. |
CatalystArrayContainsNullConverter | A parquet.io.api.GroupConverter that converts single-element groups matching the characteristics of an array containing null values (see ParquetTypesConverter) into an ArrayType. |
CatalystArrayConverter | A parquet.io.api.GroupConverter that converts single-element groups matching the characteristics of an array (see ParquetTypesConverter) into an ArrayType. |
CatalystConverter | |
CatalystGroupConverter | A parquet.io.api.GroupConverter that is able to convert a Parquet record into an org.apache.spark.sql.catalyst.expressions.Row object. |
CatalystMapConverter | A parquet.io.api.GroupConverter that converts two-element groups matching the characteristics of a map (see ParquetTypesConverter) into a MapType. |
CatalystNativeArrayConverter | A parquet.io.api.GroupConverter that converts single-element groups matching the characteristics of an array (see ParquetTypesConverter) into an ArrayType. |
CatalystPrimitiveConverter | A parquet.io.api.PrimitiveConverter that converts Parquet types to Catalyst types. |
CatalystPrimitiveRowConverter | A parquet.io.api.GroupConverter that is able to convert a Parquet record into an org.apache.spark.sql.catalyst.expressions.Row object. |
CatalystPrimitiveStringConverter | A parquet.io.api.PrimitiveConverter that converts a Parquet Binary to a Catalyst String. |
CatalystStructConverter | A converter for multi-element groups of primitive or complex types whose repetition level is optional or required (i.e., struct fields). |
CatalystTimestampConverter | |
DefaultSource | Allows creation of Parquet-based tables using SQL syntax. |
FileSystemHelper | |
FilteringParquetRowInputFormat | Extends ParquetInputFormat in order to have more control over which RecordFilter is used. |
InsertIntoParquetTable | :: DeveloperApi :: Operator that acts as a sink for queries on RDDs and can be used to store the output inside a directory of Parquet files. |
MutableRowWriteSupport | |
ParquetFilters | |
ParquetRelation | Relation that consists of data stored in the Parquet columnar format. |
ParquetRelation2 | An alternative to ParquetRelation that plugs in via the data sources API. |
ParquetRelation2.PartitionValues | |
ParquetRelation2.PartitionValues$ | |
ParquetTableScan | :: DeveloperApi :: Parquet table scan operator. |
ParquetTestData | |
ParquetTypeInfo | A class representing the Parquet type information fields we care about, for passing back to Parquet. |
ParquetTypesConverter | |
Partition | |
PartitionSpec | |
RowReadSupport | A parquet.hadoop.api.ReadSupport for Row objects. |
RowRecordMaterializer | A parquet.io.api.RecordMaterializer for Rows. |
RowWriteSupport | A parquet.hadoop.api.WriteSupport for Row objects. |
TestGroupWriteSupport | |
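The Catalyst converters listed above share one pattern: a parent GroupConverter fans a record's fields out to per-field child converters, each of which writes its converted value back into a mutable row. The following is a minimal, dependency-free sketch of that pattern in plain Scala; MutableRow, FieldConverter, PrimitiveFieldConverter, and RecordConverter are illustrative stand-ins, not the real parquet.io.api or Catalyst classes.

```scala
object ConverterSketch {
  // Illustrative stand-in for a mutable row that converters write into.
  final class MutableRow(size: Int) {
    val values: Array[Any] = new Array[Any](size)
  }

  // Stand-in for a converter responsible for one field of a record.
  trait FieldConverter {
    def convert(value: Any): Unit
  }

  // Analogous to CatalystPrimitiveConverter: stores a primitive value
  // at a fixed ordinal in the target row.
  final class PrimitiveFieldConverter(row: MutableRow, ordinal: Int)
      extends FieldConverter {
    def convert(value: Any): Unit = row.values(ordinal) = value
  }

  // Analogous to CatalystGroupConverter: holds one child converter per
  // field and delegates each incoming field value to the matching child.
  final class RecordConverter(row: MutableRow) {
    private val children: Array[FieldConverter] =
      Array.tabulate(row.values.length)(i => new PrimitiveFieldConverter(row, i))

    def convertRecord(fields: Seq[Any]): Unit =
      fields.zipWithIndex.foreach { case (v, i) => children(i).convert(v) }
  }

  def main(args: Array[String]): Unit = {
    val row = new MutableRow(2)
    new RecordConverter(row).convertRecord(Seq("alice", 42))
    println(row.values.mkString(","))  // prints alice,42
  }
}
```

The real converters additionally handle nesting (arrays, maps, structs) by making the children themselves group converters, but the delegate-then-write-back shape is the same.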