Currently, Exposed supports the following data types in table definitions:
- integer - translates to DB INT
- short - translates to DB SMALLINT
- long - BIGINT
- float - FLOAT
- decimal - DECIMAL with scale and precision
- double - DOUBLE PRECISION
- bool - BOOLEAN
- char - CHAR
- varchar - VARCHAR with length
- text - TEXT
- enumeration - INT (ordinal value)
- enumerationByName - VARCHAR
- customEnumeration - see additional section
- blob - BLOB
- binary - VARBINARY with length
- uuid - BINARY(16)
- reference - a foreign key
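As an illustrative sketch, a table definition combining several of these column types might look like the following (the Users table and its columns are hypothetical):

import org.jetbrains.exposed.sql.Table

// Hypothetical table showing several of the built-in column types
object Users : Table("users") {
    val id = integer("id")                  // INT
    val age = short("age")                  // SMALLINT
    val balance = decimal("balance", 10, 2) // DECIMAL with precision 10 and scale 2
    val active = bool("active")             // BOOLEAN
    val name = varchar("name", 50)          // VARCHAR(50)
    val bio = text("bio")                   // TEXT
    val settingsId = uuid("settings_id")    // BINARY(16)
}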
The exposed-java-time extension (org.jetbrains.exposed:exposed-java-time:$exposed_version) provides additional types:
- date - DATETIME
- time - TIME
- datetime - DATETIME
- timestamp - TIMESTAMP
- duration - DURATION
Note: Some types differ for specific DB dialects.
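As a minimal sketch, assuming the exposed-java-time dependency is on the classpath, these columns can be declared as follows (the Events table and its columns are hypothetical):

import org.jetbrains.exposed.sql.Table
import org.jetbrains.exposed.sql.javatime.date
import org.jetbrains.exposed.sql.javatime.datetime
import org.jetbrains.exposed.sql.javatime.timestamp

// Hypothetical table using java.time-based column types
object Events : Table("events") {
    val startDate = date("start_date")     // maps to java.time.LocalDate
    val createdAt = datetime("created_at") // maps to java.time.LocalDateTime
    val loggedAt = timestamp("logged_at")  // maps to java.time.Instant
}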
The exposed-json extension (org.jetbrains.exposed:exposed-json:$exposed_version) provides additional types (see how to use):
- json - JSON
- jsonb - JSONB
Note: Databases store JSON values either in text or binary format, so Exposed provides two types to account for any potential differences, if they exist, for example:
- PostgreSQL: json() maps to JSON, while jsonb() maps to JSONB.
- SQLite: No native JSON type, so json() maps to TEXT, while jsonb() throws.
- MySQL: JSON type only supports binary format, so json() and jsonb() both map to JSON.
- Oracle: Exposed does not currently support the JSON binary format of Oracle 21c; only the text format json() can be used.
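As a sketch, assuming the exposed-json dependency described later in this section, both column types are declared the same way and the dialect determines the actual SQL type (the Documents table here is hypothetical):

import kotlinx.serialization.json.Json
import org.jetbrains.exposed.sql.Table
import org.jetbrains.exposed.sql.json.json
import org.jetbrains.exposed.sql.json.jsonb

// Hypothetical table declaring both variants; the resulting SQL type depends on the dialect
object Documents : Table("documents") {
    val payload = json<Map<String, String>>("payload", Json.Default)    // JSON (TEXT on SQLite)
    val metadata = jsonb<Map<String, String>>("metadata", Json.Default) // JSONB on PostgreSQL, JSON on MySQL
}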
Some databases (e.g. MySQL, PostgreSQL, H2) support explicit ENUM types. Because keeping such columns in sync with Kotlin enumerations using only JDBC metadata could be a huge challenge, Exposed doesn't provide a way to manage such columns automatically, but that doesn't mean you can't use such column types. You have two options to work with ENUM database types:
- Use an existing ENUM column from your tables
- Create the column from Exposed by providing the raw definition SQL
In both cases, you should use the customEnumeration function (available since version 0.10.3).
As a JDBC driver can provide/expect specific classes for ENUM types, you must provide from/to transformation functions for them when defining a customEnumeration.
For the following enum:
private enum class Foo { Bar, Baz }
you can use the provided code for your database:
H2
val existingEnumColumn = customEnumeration("enumColumn", { Foo.values()[it as Int] }, { it.name })
val newEnumColumn = customEnumeration("enumColumn", "ENUM('Bar', 'Baz')", { Foo.values()[it as Int] }, { it.name })
MySQL
val existingEnumColumn = customEnumeration("enumColumn", { value -> Foo.valueOf(value as String) }, { it.name })
val newEnumColumn = customEnumeration("enumColumn", "ENUM('Bar', 'Baz')", { value -> Foo.valueOf(value as String) }, { it.name })
PostgreSQL
PostgreSQL requires that ENUM is defined as a separate type, so you have to create it before creating your table. Also, the PostgreSQL JDBC driver returns PGobject instances for such values. The full working sample is provided below.
import org.postgresql.util.PGobject

class PGEnum<T : Enum<T>>(enumTypeName: String, enumValue: T?) : PGobject() {
init {
value = enumValue?.name
type = enumTypeName
}
}
object EnumTable : Table() {
val enumColumn = customEnumeration("enumColumn", "FooEnum", { value -> Foo.valueOf(value as String) }, { PGEnum("FooEnum", it) })
}
...
transaction {
exec("CREATE TYPE FooEnum AS ENUM ('Bar', 'Baz');")
SchemaUtils.create(EnumTable)
...
}
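Once defined, the column behaves like any other. A minimal usage sketch, assuming the Foo enum and EnumTable above:

transaction {
    // Insert a row using the Kotlin enum directly
    EnumTable.insert {
        it[enumColumn] = Foo.Baz
    }
    // Values are read back as the Kotlin enum
    val foos = EnumTable.selectAll().map { it[EnumTable.enumColumn] }
    println(foos) // [Baz]
}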
Add the following dependencies to your build.gradle.kts:
val exposedVersion: String by project
dependencies {
implementation("org.jetbrains.exposed:exposed-core:$exposedVersion")
implementation("org.jetbrains.exposed:exposed-json:$exposedVersion")
}
Exposed works together with kotlinx.serialization to support @Serializable
classes and JSON serialization/deserialization:
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json
@Serializable
data class Project(val name: String, val language: String, val active: Boolean)
val format = Json { prettyPrint = true }
object Team : Table("team") {
val groupId = varchar("group_id", 32)
val project = json<Project>("project", format) // equivalent to json("project", format, Project.serializer())
}
transaction {
val mainProject = Project("Main", "Java", true)
Team.insert {
it[groupId] = "A"
it[project] = mainProject
}
Team.update({ Team.groupId eq "A" }) {
it[project] = mainProject.copy(language = "Kotlin")
}
Team.selectAll().map { "Team ${it[Team.groupId]} -> ${it[Team.project]}" }.forEach { println(it) }
// Team A -> Project(name=Main, language=Kotlin, active=true)
}
Both column types also support custom serializer and deserializer arguments, using the form:
fun <T : Any> json(name: String, serialize: (T) -> String, deserialize: (String) -> T): Column<T>
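For example, here is a sketch using this overload with manual string conversion, assuming the Project class from the example above (the team_v2 table name is hypothetical):

import kotlinx.serialization.decodeFromString
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json
import org.jetbrains.exposed.sql.Table
import org.jetbrains.exposed.sql.json.json

object TeamV2 : Table("team_v2") {
    // Custom (T) -> String and (String) -> T functions instead of a Json configuration
    val project = json(
        "project",
        serialize = { project: Project -> Json.encodeToString(project) },
        deserialize = { jsonString -> Json.decodeFromString<Project>(jsonString) }
    )
}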
JSON path strings can be used to extract values (either as JSON or as a scalar value) at a specific field/key:
val projectName = Team.project.extract<String>("name")
val languageIsKotlin = Team.project.extract<String>("language").lowerCase() eq "kotlin"
Team.slice(projectName).select { languageIsKotlin }.map { it[projectName] }
Note: Databases that support a path context root $
will have this value appended to the generated SQL path expression by default, so it is not necessary to include it in the provided argument String. In the above example, if MySQL is being used, the provided path arguments should be .name
and .language
respectively.
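So, as a hedged example, the same extractions written for MySQL might look like this:

// MySQL example: the leading '.' is needed because the '$' context root is added automatically
val projectName = Team.project.extract<String>(".name")
val languageIsKotlin = Team.project.extract<String>(".language").lowerCase() eq "kotlin"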
The JSON functions exists()
and contains()
are currently supported as well:
val hasActiveStatus = Team.project.exists(".active")
val activeProjects = Team.select { hasActiveStatus }.count()
// Depending on the database, filter paths can be provided instead, as well as optional arguments
// PostgreSQL example
val mainId = "Main"
val hasMainProject = Team.project.exists(".name ? (@ == \$main)", optional = "{\"main\":\"$mainId\"}")
val mainProjects = Team.select { hasMainProject }.map { it[Team.groupId] }
val usesKotlin = Team.project.contains("{\"language\":\"Kotlin\"}")
val kotlinTeams = Team.select { usesKotlin }.count()
// Depending on the database, an optional path can be provided too
// MySQL example
val usesKotlin = Team.project.contains("\"Kotlin\"", ".language")
val kotlinTeams = Team.select { usesKotlin }.count()
JSON columns also accept JSON arrays as input values. For example, using the serializable data class Project
from the example above, the following details some ways to create such a column:
object TeamProjects : Table("team_projects") {
val memberIds = json<IntArray>("member_ids", Json.Default)
val projects = json<Array<Project>>("projects", Json.Default)
// equivalent to: @OptIn(ExperimentalSerializationApi::class) json("projects", Json.Default, ArraySerializer(Project.serializer()))
}
transaction {
TeamProjects.insert {
it[memberIds] = intArrayOf(1, 2, 3)
it[projects] = arrayOf(
Project("A", "Kotlin", true),
Project("B", "Java", true)
)
}
// generates SQL
// INSERT INTO team_projects (member_ids, projects) VALUES ([1,2,3], [{"name":"A","language":"Kotlin","active":true},{"name":"B","language":"Java","active":true}])
}