diff --git a/README.md b/README.md
index e9ab74f6..6f308693 100644
--- a/README.md
+++ b/README.md
@@ -52,7 +52,7 @@ $SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.12:0.5.0
 This package allows reading XML files in local or distributed filesystem as [Spark DataFrames](https://spark.apache.org/docs/1.6.0/sql-programming-guide.html). When reading files the API accepts several options:
 
 * `path`: Location of files. Similar to Spark can accept standard Hadoop globbing expressions.
-* `rowTag`: The row tag of your xml files to treat as a row. For example, in this xml `<books> <book><book> ...</books>`, the appropriate value would be `book`. Default is `ROW`. At the moment, rows containing self closing xml tags are not supported.
+* `rowTag`: The row tag of your xml files to treat as a row. For example, in this xml `<books> <book><book> ...</books>`, the appropriate value would be `book`. Default is `ROW`.
 * `samplingRatio`: Sampling ratio for inferring schema (0.0 ~ 1). Default is 1. Possible types are `StructType`, `ArrayType`, `StringType`, `LongType`, `DoubleType`, `BooleanType`, `TimestampType` and `NullType`, unless user provides a schema for this.
 * `excludeAttribute` : Whether you want to exclude attributes in elements or not. Default is false.
 * `treatEmptyValuesAsNulls` : (DEPRECATED: use `nullValue` set to `""`) Whether you want to treat whitespaces as a null value. Default is false
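
For orientation, here is a minimal sketch of how the options documented in this hunk are passed through Spark's standard `DataFrameReader` API, using the `book` row tag from the example above. The input path `books.xml` and the app name are hypothetical placeholders.

```scala
import org.apache.spark.sql.SparkSession

// In spark-shell a SparkSession is already provided as `spark`;
// this builder is only needed in a standalone sketch.
val spark = SparkSession.builder()
  .appName("spark-xml-read") // hypothetical app name
  .master("local[*]")
  .getOrCreate()

// Treat each <book> element as one DataFrame row ("rowTag" defaults to "ROW")
// and infer the schema from all records ("samplingRatio" defaults to 1).
val df = spark.read
  .format("com.databricks.spark.xml")
  .option("rowTag", "book")
  .option("samplingRatio", "1.0")
  .load("books.xml") // hypothetical path; Hadoop globs like "books/*.xml" also work

df.printSchema()
```

If no schema is supplied, the inferred columns fall back to the types listed under `samplingRatio` above; passing an explicit schema via `.schema(...)` skips inference entirely.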