# Type Mapping
| Scala | Avro | Beam | BigQuery | Bigtable<sup>7</sup> | Datastore | Parquet | Protobuf | TensorFlow |
|---|---|---|---|---|---|---|---|---|
| Unit | null | x | x | x | Null | x | x | x |
| Boolean | boolean | BOOLEAN | BOOL | Byte | Boolean | BOOLEAN | Boolean | INT64<sup>3</sup> |
| Char | int<sup>3</sup> | BYTE | INT64<sup>3</sup> | Char | Integer<sup>3</sup> | INT32<sup>3</sup> | Int<sup>3</sup> | INT64<sup>3</sup> |
| Byte | int<sup>3</sup> | BYTE | INT64<sup>3</sup> | Byte | Integer<sup>3</sup> | INT32<sup>9</sup> | Int<sup>3</sup> | INT64<sup>3</sup> |
| Short | int<sup>3</sup> | INT16 | INT64<sup>3</sup> | Short | Integer<sup>3</sup> | INT32<sup>9</sup> | Int<sup>3</sup> | INT64<sup>3</sup> |
| Int | int | INT32 | INT64<sup>3</sup> | Int | Integer<sup>3</sup> | INT32<sup>9</sup> | Int | INT64<sup>3</sup> |
| Long | long | INT64 | INT64 | Long | Integer | INT64<sup>9</sup> | Long | INT64 |
| Float | float | FLOAT | FLOAT64<sup>3</sup> | Float | Double<sup>3</sup> | FLOAT | Float | FLOAT |
| Double | double | DOUBLE | FLOAT64 | Double | Double | DOUBLE | Double | FLOAT<sup>3</sup> |
| CharSequence | string | STRING | x | x | x | x | x | x |
| String | string | STRING | STRING | String | String | BINARY | String | BYTES<sup>3</sup> |
| Array[Byte] | bytes | BYTES | BYTES | ByteString | Blob | BINARY | ByteString | BYTES |
| ByteString | x | BYTES | x | ByteString | Blob | x | ByteString | BYTES |
| ByteBuffer | bytes | BYTES | x | x | x | x | x | x |
| Enum<sup>1</sup> | enum | STRING<sup>16</sup> | STRING<sup>3</sup> | String | String<sup>3</sup> | BINARY/ENUM<sup>9</sup> | Enum | BYTES<sup>3</sup> |
| BigInt | x | x | x | BigInt | x | x | x | x |
| BigDecimal | bytes<sup>4</sup> | DECIMAL | NUMERIC<sup>6</sup> | Int scale + unscaled BigInt | x | LOGICAL[DECIMAL]<sup>9,14</sup> | x | x |
| Option[T] | union[null, T]<sup>5</sup> | Empty as null | NULLABLE | Empty as None | Absent as None | OPTIONAL | optional<sup>10</sup> | Size <= 1 |
| Iterable[T]<sup>2</sup> | array[T] | ITERABLE | REPEATED | x | Array | REPEATED<sup>13</sup> | repeated | Size >= 0 |
| Nested | record | ROW | STRUCT | Flat<sup>8</sup> | Entity | Group | Message | Flat<sup>8</sup> |
| Map[K, V] | map[V]<sup>15</sup> | MAP | x | x | x | x | map<K, V> | x |
| java.time.Instant | long<sup>11</sup> | DATETIME, INT64, ROW<sup>17</sup> | TIMESTAMP | x | Timestamp | LOGICAL[TIMESTAMP]<sup>9</sup> | x | x |
| java.time.LocalDateTime | long<sup>11</sup> | ROW, INT64<sup>17</sup> | DATETIME | x | x | LOGICAL[TIMESTAMP]<sup>9</sup> | x | x |
| java.time.OffsetTime | x | x | x | x | x | LOGICAL[TIME]<sup>9</sup> | x | x |
| java.time.LocalTime | long<sup>11</sup> | INT32, INT64<sup>17</sup> | TIME | x | x | LOGICAL[TIME]<sup>9</sup> | x | x |
| java.time.LocalDate | int<sup>11</sup> | INT64<sup>17</sup> | DATE | x | x | LOGICAL[DATE]<sup>9</sup> | x | x |
| org.joda.time.LocalDate | int<sup>11</sup> | INT64<sup>17</sup> | x | x | x | x | x | x |
| org.joda.time.DateTime | int<sup>11</sup> | DATETIME, INT64, ROW<sup>17</sup> | x | x | x | x | x | x |
| org.joda.time.LocalTime | int<sup>11</sup> | INT32, INT64<sup>17</sup> | x | x | x | x | x | x |
| java.util.UUID | string<sup>4</sup> | ROW<sup>18</sup> | x | ByteString (16 bytes) | x | FIXED[16] | x | x |
| (Long, Long, Long)<sup>12</sup> | fixed[12] | x | x | x | x | x | x | x |
1. Types wrapped in `UnsafeEnum` are encoded as strings; see enums.md for more
2. Any subtype of `Iterable[T]`
3. Unsafe conversions; `import magnolify.$MODULE.unsafe._`
4. Avro logical types (doc)
5. `UNION` of `[NULL, T]`, defaulting to `NULL` (doc)
6. Fixed precision of 38 and scale of 9 (doc)
7. All Scala types are encoded as big-endian `ByteString` for Bigtable
8. Nested fields are encoded flat, with field names joined by `.`, e.g. `level1.level2.level3`
9. More information on Parquet logical type schemas can be found here. Time types are available at multiple precisions; import `magnolify.parquet.logical.micros._`, `magnolify.parquet.logical.millis._`, or `magnolify.parquet.logical.nanos._` accordingly
10. See protobuf.md for more
11. Logical types are available at micro- or millisecond precision; import `magnolify.avro.logical.micros._` or `magnolify.avro.logical.millis._` accordingly. BigQuery-compatible conversions are available in `magnolify.avro.logical.bigquery._`
12. Special tuple used to represent Duration in the Avro spec. This has not been made implicit in Magnolify; import `AvroType.afDuration` to enable it
13. If `magnolify.parquet.ParquetArray.AvroCompat._` is imported, array fields use the nested, Avro-compatible schema format: `required group $FIELDNAME (LIST) { repeated $FIELDTYPE array ($FIELDSCHEMA); }`
14. Parquet's Decimal logical format supports multiple representations, which are not implicitly scoped by default. Import one of `magnolify.parquet.ParquetField.{decimal32, decimal64, decimalFixed, decimalBinary}`
15. The Avro map key type is fixed to string; the Scala `Map` key type must be either `String` or `CharSequence`
16. Beam logical Enumeration type
17. See beam.md for details
18. Beam logical UUID type
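The Avro column of the table can be exercised with a derived converter. The sketch below (assuming `magnolify-avro` and Apache Avro on the classpath; the `Record` case class is illustrative) round-trips a value through a `GenericRecord`, and imports `magnolify.avro.unsafe._` to enable the unsafe `Short` → `int` mapping marked with footnote 3:

```scala
import magnolify.avro._          // derives AvroType[T] at compile time
import magnolify.avro.unsafe._   // enables unsafe conversions (footnote 3), e.g. Short -> int
import org.apache.avro.generic.GenericRecord

// Illustrative record touching several rows of the table:
// Boolean, Short (unsafe), Long, String, Option[T], Iterable[T]
case class Record(
  flag: Boolean,
  small: Short,
  count: Long,
  name: String,
  note: Option[String],
  values: List[Int]
)

val at = AvroType[Record]                         // derived Scala <-> Avro converter
val original = Record(true, 1.toShort, 42L, "hello", None, List(1, 2, 3))

val gr: GenericRecord = at.to(original)           // Scala -> Avro GenericRecord
val copy: Record = at.from(gr)                    // Avro -> Scala

assert(copy == original)                          // lossless round trip
```

The same `Type[T]`-derivation pattern applies to the other columns (`BigQueryType`, `ParquetType`, `ProtobufType`, …), each scoped by the module-specific imports described in the footnotes above.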