
maxlength metadata not working with pyspark #137


Description

@dokipen

I made sure the table doesn't exist, then ran the following:

from pyspark.sql import Row

df = sqlCtx.createDataFrame(sc.parallelize([Row(value="a" * 2048)]))
# Attempt to attach maxlength metadata to the column in place:
df.schema.fields[0].metadata['maxlength'] = 4096
df.write.format("com.databricks.spark.redshift") \
        .options(url=pgconn,
                 dbtable="tmptable",
                 tempdir=TMPDIR) \
        .save(mode='append')

The maxlength metadata is ignored and the column is created as character varying(256). Any ideas?
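
For what it's worth, in PySpark `df.schema` returns a Python-side copy deserialized from the JVM schema, so mutating the returned StructType never propagates back to the DataFrame that actually gets written. A sketch of a possible workaround (not verified against this connector version) is to declare the metadata on the field when the schema is built, before the DataFrame is created; the names sqlCtx, sc, pgconn, and TMPDIR are taken from the snippet above:

from pyspark.sql import Row
from pyspark.sql.types import StructType, StructField, StringType

# Declare the maxlength metadata on the field itself, so it is part of
# the schema the DataFrame is created with rather than patched in later.
schema = StructType([
    StructField("value", StringType(), True, {"maxlength": 4096})
])
df = sqlCtx.createDataFrame(sc.parallelize([Row(value="a" * 2048)]), schema)

df.write.format("com.databricks.spark.redshift") \
        .options(url=pgconn,
                 dbtable="tmptable",
                 tempdir=TMPDIR) \
        .save(mode='append')

On Spark 2.2+, Column.alias also accepts a metadata keyword argument (e.g. via withColumn), which avoids rebuilding the schema by hand, but the explicit-schema route above matches the Spark 1.x API.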
