SolarUser Datum Import API
The SolarUser Datum Import API provides methods to manage importing datum in bulk to SolarNetwork. The API is designed around import tasks where the following steps are taken:
- Upload the bulk data.
- Optionally preview how SolarNetwork will parse the data into datum objects, then confirm the job.
- Check the import task status to wait for SolarNetwork to complete the import process.
All paths are relative to a `/solaruser` prefix. All dates and times are represented in the Gregorian calendar system. All requests must provide a valid user authentication token. See SolarNet API authentication for information on how to use authentication tokens.
| Verb | Endpoint | Description |
|---|---|---|
| `POST` | `/user/import/jobs` | Submit an import task request. |
| `GET` | `/user/import/jobs` | List previously submitted import tasks with their processing status. |
| `GET` | `/user/import/jobs/{jobId}` | View a previously submitted import task with its processing status. |
| `DELETE` | `/user/import/jobs/{jobId}` | Cancel a previously submitted import task. |
| `POST` | `/user/import/jobs/{jobId}` | Update a previously submitted import task configuration. |
| `GET` | `/user/import/jobs/{jobId}/preview` | Preview a staged import task request. |
| `POST` | `/user/import/jobs/{jobId}/confirm` | Confirm a staged import task request. |
| `GET` | `/user/import/services/input` | List the available import input format types. |
Many endpoints return localized messages. The locale of the response is determined by the `Accept-Language` HTTP header passed on the request. If not provided, the default locale of the SolarNetwork service will be used.

Some responses include a `localizedInfoMessages` response object. This object contains pairs of localized messages for the setting specifier key values returned in the same response. The localized message pairs are in the form `X.key` (a title) and `X.desc` (a longer description), where `X` is a setting specifier key. For example, a response might look like:
```json
{
  "id": "net.solarnetwork.central.datum.imp.standard.BasicCsvDatumImportInputFormatService",
  "locale": "en-NZ",
  "settingSpecifiers": [
    {
      "key": "headerRowCount",
      "defaultValue": "1",
      "secureTextEntry": false,
      "transient": false,
      "type": "net.solarnetwork.settings.TextFieldSettingSpecifier"
    }
  ],
  "localizedName": "CSV - Basic",
  "localizedDescription": "Import data in a basic CSV format, with JSON encoded columns for instantaneous, accumulating, status, and tag sample data.",
  "localizedInfoMessages": {
    "headerRowCount.key": "Skip Rows",
    "headerRowCount.desc": "The number of rows to skip. Can be used to skip header rows, or to skip previously loaded rows from a partially completed import job."
  }
}
```
A single `headerRowCount` setting is defined for this service. The `localizedInfoMessages` object thus defines the title for that setting via `headerRowCount.key` (Skip Rows) and a description via `headerRowCount.desc` (The number of rows to skip. …).
The following input data formats are supported; each input format has a unique service ID that is referenced in various endpoints, such as the Import Task Submit endpoint.
| Format | Description |
|---|---|
| CSV - Basic | CSV format with JSON-encoded columns for datum properties. |
| CSV - Simple | CSV format with simple property value columns. |
| Cloud Datum Stream | Import from a configured Cloud Datum Stream. |
| ID | `net.solarnetwork.central.datum.imp.standard.BasicCsvDatumImportInputFormatService` |
|---|---|
The CSV input format encodes each datum as a CSV row, with datum properties as 4 columns of JSON encoded objects for each supported property classification (instantaneous, accumulating, status, and tag). A header row is optional in this format. It supports the following settings:
| Setting | Type | Default | Description |
|---|---|---|---|
| `headerRowCount` | integer | `1` | The number of header lines to skip in the import data. |
| `nodeIdColumn` | string | `1` | The CSV column number or name that contains the node ID values, starting from 1 or A. |
| `sourceIdColumn` | string | `2` | The CSV column number or name that contains the source ID values, starting from 1 or A. |
| `dateColumnsValue` | string | `3` | A CSV columns reference for the datum date and time. When multiple columns are configured, all column values are concatenated with a space to form the timestamp to be parsed using the configured date format. |
| `dateFormat` | string | `yyyy-MM-dd HH:mm:ss` | The date format for parsing the datum date and time. |
| `instantaneousDataColumn` | string | `4` | The CSV column number or name, starting from 1 or A, that contains the instantaneous property values, as a JSON object. |
| `accumulatingDataColumn` | string | `5` | The CSV column number or name, starting from 1 or A, that contains the accumulating property values, as a JSON object. |
| `statusDataColumn` | string | `6` | The CSV column number or name, starting from 1 or A, that contains the status property values, as a JSON object. |
| `tagDataColumn` | string | `7` | The CSV column number or name, starting from 1 or A, that contains the tags, as a comma-delimited string. |
Example input looks like this:
```csv
nodeId,sourceId,created,i,a
123,/S01/TEST,2013-01-21 07:25:00,"{""irradiance"":0.000}","{""wattHours"":0.000}"
123,/S01/TEST,2013-01-21 07:30:00,"{""irradiance"":9.000}","{""wattHours"":10.100}"
123,/S01/TEST,2013-01-21 07:35:00,"{""irradiance"":12.336}","{""wattHours"":15.020}"
123,/S01/TEST,2013-01-21 07:40:00,"{""irradiance"":15.660}","{""wattHours"":19.325}"
123,/S01/TEST,2013-01-21 07:45:00,"{""irradiance"":18.996}","{""wattHours"":25.923}"
```
⚠️ Note that the JSON column values must correctly escape the quotes within the JSON values by replacing each quote character with two quote characters.
It is not a problem if a specified column does not actually exist in the imported data. The example shown above does not contain status or tag columns, even though those settings default to columns `6` and `7`, respectively. Missing columns will simply be ignored during the import process. Additionally, empty column values are allowed except for the node ID, source ID, and date columns.
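If you generate Basic CSV input programmatically, a standard CSV writer takes care of the quote doubling noted above. A minimal Python sketch (the property values are placeholders):

```python
import csv
import io
import json

# Build Basic CSV import rows; csv.writer applies the required quote
# doubling to the JSON column values automatically.
rows = [
    (123, "/S01/TEST", "2013-01-21 07:25:00", {"irradiance": 0.0}, {"wattHours": 0.0}),
    (123, "/S01/TEST", "2013-01-21 07:30:00", {"irradiance": 9.0}, {"wattHours": 10.1}),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["nodeId", "sourceId", "created", "i", "a"])
for node_id, source_id, created, inst, accum in rows:
    writer.writerow([node_id, source_id, created, json.dumps(inst), json.dumps(accum)])

print(buf.getvalue())
```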
| ID | `net.solarnetwork.central.datum.imp.standard.SimpleCsvDatumImportInputFormatService` |
|---|---|

The CSV input format encodes each datum as a CSV row, with datum properties as individual columns. A header row is required as the datum property names are defined there. It supports the following settings:
| Setting | Type | Default | Description |
|---|---|---|---|
| `headerRowCount` | integer | `1` | The number of header lines to skip in the import data. At least one header row is required, and the first header row must define the datum property names. |
| `nodeIdColumn` | string | `1` | The CSV column number or name that contains the node ID values, starting from 1 or A. |
| `sourceIdColumn` | string | `2` | The CSV column number or name that contains the source ID values, starting from 1 or A. |
| `dateColumnsValue` | string | `3` | A CSV columns reference for the datum date and time. When multiple columns are configured, all column values are concatenated with a space to form the timestamp to be parsed using the configured date format. |
| `dateFormat` | string | `yyyy-MM-dd HH:mm:ss` | The date format for parsing the datum date and time. |
| `instantaneousDataColumns` | string | | The CSV columns reference for the instantaneous property values. |
| `accumulatingDataColumns` | string | | The CSV columns reference for the accumulating property values. |
| `statusDataColumns` | string | | The CSV columns reference for the status property values. |
| `tagDataColumns` | string | | The CSV columns reference for the tag values. Tag values may be specified individually or as a comma-delimited list. |
Example input looks like this:
```csv
node,source,date,irradiance,wattHours
123,/S01/TEST,2013-01-21 07:25:00,0.000,0.000
123,/S01/TEST,2013-01-21 07:30:00,9.000,10.100
123,/S01/TEST,2013-01-21 07:35:00,12.336,15.020
123,/S01/TEST,2013-01-21 07:40:00,15.660,19.325
123,/S01/TEST,2013-01-21 07:45:00,18.996,25.923
```
It is not a problem if a specified column does not actually exist in the imported data. The example shown above does not contain status or tag columns. Missing columns will simply be ignored during the import process. Additionally, empty column values are allowed except for the node ID, source ID, and date columns.
A CSV columns reference is a comma-delimited list of column numbers, or letters, with optional ranges separated by a dash character. Numbers start at `1` and letters start at `A`, following the common spreadsheet method of adding additional letters as needed to reach columns higher than 26. Here are some examples:

| Reference | Description |
|---|---|
| `4` | The fourth column. |
| `4,5,6,10` | Four columns including 4, 5, 6, and 10. |
| `4-6,10` | Same as previous, using a range: four columns including 4, 5, 6, and 10. |
| `D` | The fourth column. |
| `D,E,F,J` | Four columns including 4, 5, 6, and 10. |
| `D-F,J` | Same as previous, using a range: four columns including 4, 5, 6, and 10. |
| `AA-AC,AG` | Four columns including 27, 28, 29, and 33. |
| `4,E-F,10` | Four columns including 4, 5, 6, and 10. |
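To illustrate the syntax (this is a sketch, not SolarNetwork's actual parser), a columns reference can be expanded into 1-based column numbers like so:

```python
def column_number(ref: str) -> int:
    """Convert a reference like '4', 'D', or 'AA' to a 1-based column number."""
    if ref.isdigit():
        return int(ref)
    num = 0
    for ch in ref.upper():
        num = num * 26 + (ord(ch) - ord("A") + 1)
    return num

def expand_columns_reference(spec: str) -> list[int]:
    """Expand a CSV columns reference like 'D-F,J' into [4, 5, 6, 10]."""
    cols = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = (column_number(p.strip()) for p in part.split("-", 1))
            cols.extend(range(lo, hi + 1))
        else:
            cols.append(column_number(part.strip()))
    return cols

print(expand_columns_reference("4,E-F,10"))  # [4, 5, 6, 10]
print(expand_columns_reference("AA-AC,AG"))  # [27, 28, 29, 33]
```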
| ID | `s10k.c2c.ds-import` |
|---|---|
The Cloud Datum Stream format imports datum from a configured Cloud Datum Stream.
⚠️ Not all Cloud Datum Stream services support importing data over arbitrary time ranges. Consult the documentation for the service you are interested in for more information.
⚠️ The `batchSize` input setting should be set to `1` for Cloud Datum Stream import jobs. See the Import transaction handling section for more details.
It supports the following settings:
| Setting | Type | Description |
|---|---|---|
| `datumStreamId` | integer | The unique ID of the Cloud Datum Stream to import from. |
| `startDate` | string | The starting date to import from, in ISO 8601 timestamp format. |
| `endDate` | string | The ending date to import to, in ISO 8601 timestamp format. |
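For illustration, a complete Cloud Datum Stream import task configuration might look like the following Python dictionary, ready to submit as the `config` part; the names, `datumStreamId`, and dates are placeholder values, and `batchSize` is `1` per the note above:

```python
# Hypothetical Cloud Datum Stream import task configuration; the stream ID
# and dates are placeholders. batchSize is 1 to disable the transaction.
config = {
    "name": "Cloud Import",
    "stage": True,
    "batchSize": 1,
    "inputConfiguration": {
        "name": "Cloud Input",
        "serviceIdentifier": "s10k.c2c.ds-import",
        "serviceProperties": {
            "datumStreamId": 42,
            "startDate": "2024-01-01T00:00:00Z",
            "endDate": "2024-02-01T00:00:00Z",
        },
    },
}
```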
Submit an import task configuration for asynchronous execution. Once submitted, the import task will either become:
- staged if the configuration requests it, meaning the import task will not execute but can be previewed and then must be confirmed to execute or be cancelled.
- queued and execute at some point in the future.
Either way, use the list or view endpoints to check on the status of submitted tasks.
| POST | /solaruser/api/v1/sec/user/import/jobs |
|---|---|
| `config` | The import configuration JSON object, as an `application/json` multipart object. |
| `data` | The import data, as an `application/octet-stream` multipart object. |
The request must be submitted as `multipart/form-data` with `config` and `data` parts, as described in the following sections.
For example, the following HTTP request will import CSV data for staging:
```http
POST /solaruser/api/v1/sec/user/import/jobs HTTP/1.1
Authorization: SNWS2 <<SNWS2 AUTH HERE>>
Date: Tue, 16 Mar 2021 21:11:00 GMT
Content-MD5: s749I4GCeoV6YcTgvgxmMA==
Content-Type: multipart/form-data; charset=utf-8; boundary=6nhCnmkiuPlQKbskL1_kPKQsso2CMH0a
Host: data.solarnetwork.net:443
Connection: close
Content-Length: 929

--6nhCnmkiuPlQKbskL1_kPKQsso2CMH0a
Content-Disposition: form-data; name="config"
Content-Type: application/json

{"name":"Test Import","stage":true,"inputConfiguration":{"name":"Test Input","timeZoneId":"America/New_York","serviceIdentifier":"net.solarnetwork.central.datum.imp.standard.BasicCsvDatumImportInputFormatService","serviceProperties":{"headerRowCount":"0","dateColumnsValue":"1","dateFormat":"MM/dd/yyyy HH:mm:ss","nodeIdColumn":"2","sourceIdColumn":"3"}}}
--6nhCnmkiuPlQKbskL1_kPKQsso2CMH0a
Content-Disposition: form-data; name="data"; filename="test-data-load-123-XSM.csv"
Content-Type: text/csv

01/01/2013 07:25:00,123,/S01/TEST,"{""irradiance"":0.000}"
01/01/2013 07:30:00,123,/S01/TEST,"{""irradiance"":9.000}"
01/01/2013 07:35:00,123,/S01/TEST,"{""irradiance"":12.336}"
01/01/2013 07:40:00,123,/S01/TEST,"{""irradiance"":15.660}"
01/01/2013 07:45:00,123,/S01/TEST,"{""irradiance"":18.996}"
--6nhCnmkiuPlQKbskL1_kPKQsso2CMH0a--
```
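The same request can be made from code. Here is a sketch using Python's `requests` library; computing the SNWS2 `Authorization` header is out of scope here, so `auth_headers()` is a hypothetical stand-in for your signing code:

```python
import json
import requests

def auth_headers():
    # Placeholder: compute a real SNWS2 Authorization header here.
    return {"Authorization": "SNWS2 <<SNWS2 AUTH HERE>>"}

config = {
    "name": "Test Import",
    "stage": True,
    "inputConfiguration": {
        "name": "Test Input",
        "timeZoneId": "America/New_York",
        "serviceIdentifier": "net.solarnetwork.central.datum.imp.standard.BasicCsvDatumImportInputFormatService",
        "serviceProperties": {
            "headerRowCount": "0",
            "dateColumnsValue": "1",
            "dateFormat": "MM/dd/yyyy HH:mm:ss",
            "nodeIdColumn": "2",
            "sourceIdColumn": "3",
        },
    },
}
csv_data = '01/01/2013 07:25:00,123,/S01/TEST,"{""irradiance"":0.000}"\n'

# Submit config and data as multipart/form-data parts with explicit types.
resp = requests.post(
    "https://data.solarnetwork.net/solaruser/api/v1/sec/user/import/jobs",
    headers=auth_headers(),
    files={
        "config": (None, json.dumps(config), "application/json"),
        "data": ("test-data.csv", csv_data, "text/csv"),
    },
)
print(resp.json())
```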
The `data` component can provide compressed content, to reduce the size of the HTTP request. The following compression formats are supported:
- bzip2
- gzip
- lz4
- xz
When utilized, the `Content-Type` part header value can be specified as `application/octet-stream`. The compression format will be detected automatically.
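For example, a sketch of gzip-compressing the data before submitting it (continuing the hypothetical `requests` example above):

```python
import gzip

# Compress the CSV payload; SolarNetwork detects the compression format.
csv_data = b'01/01/2013 07:25:00,123,/S01/TEST,"{""irradiance"":0.000}"\n'
compressed = gzip.compress(csv_data)

# Supply the compressed bytes as the data part, e.g. with requests:
# files={"data": ("test-data.csv.gz", compressed, "application/octet-stream"), ...}
```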
The `data` content can be included as-is (raw bytes). A `Content-Type` part header can be specified as `application/octet-stream` if binary content is provided (for example compressed data). You can also provide Base64-encoded binary content if you include a `Content-Transfer-Encoding: base64` part header. For example:
```http
POST /solaruser/api/v1/sec/user/import/jobs HTTP/1.1
Authorization: SNWS2 <<SNWS2 AUTH HERE>>
Date: Tue, 16 Mar 2021 21:11:00 GMT
Content-Type: multipart/form-data; charset=utf-8; boundary=PEAtNV-9A9wuDto9aWROrpCG4gUAbf
Host: data.solarnetwork.net:443
Connection: close
Content-Length: 929

--PEAtNV-9A9wuDto9aWROrpCG4gUAbf
Content-Disposition: form-data; name="config"
Content-Type: application/json

{"name":"Test Import","stage":true,"inputConfiguration":{"name":"Test Input","timeZoneId":"America/New_York","serviceIdentifier":"net.solarnetwork.central.datum.imp.standard.BasicCsvDatumImportInputFormatService","serviceProperties":{"headerRowCount":"0","dateColumnsValue":"1","dateFormat":"MM/dd/yyyy HH:mm:ss","nodeIdColumn":"2","sourceIdColumn":"3"}}}
--PEAtNV-9A9wuDto9aWROrpCG4gUAbf
Content-Disposition: form-data; name="data"; filename="test-data-load.csv.xz"
Content-Type: application/octet-stream
Content-Transfer-Encoding: base64

SGVsbG8sIHdvcmxkLg==
--PEAtNV-9A9wuDto9aWROrpCG4gUAbf--
```
The `config` HTTP request part must be of type `application/json` and contain a JSON object that specifies the import task configuration.
| Property | Type | Description |
|---|---|---|
| `name` | string | A descriptive name for the import task. |
| `stage` | boolean | `true` to stage the data so it can be previewed for errors, `false` to immediately queue the task for execution. |
| `batchSize` | integer | An optional transactional batch size. Values between 1,000 and 10,000 are good candidates. Setting to `1` disables the import transaction completely. See below for more details. |
| `groupKey` | string | An optional import group key. See the section on parallel import jobs for more information. |
| `inputConfiguration` | object | Input format configuration object that defines the format to decode the data as. |
The `stage` property allows you to upload the data set without actually trying to import the data. You can call the preview endpoint to see if the data will be parsed correctly, and then confirm or cancel the task execution.
The `batchSize` controls how many datum are imported per transaction. If any error occurs during the import process, the datum imported within the current transaction will be discarded. Or put another way, all datum up to the start of the current transaction will have been committed. The result status for the import task will provide details on how many datum were successfully imported, so you could then create a new import task with the problem corrected and the successfully imported datum omitted.
Setting `batchSize` to `1` means the import job will run without a transaction, so all datum up to the point of any error will have been committed. Not setting `batchSize` at all, or setting it to anything less than `1`, means the import job will run completely within a single transaction, so if any error occurs no datum will be committed.
Generally, configuring a `batchSize` can help an import job complete more quickly. As a rule of thumb, a value between 1,000 and 10,000 is a good starting point.
⚠️ Some input services, like Cloud Datum Stream, work much better with the import transaction disabled, so set `batchSize` to `1` for those services.
Import jobs are processed by creation order, from oldest to newest. SolarNetwork can process multiple jobs in parallel, however. If two import jobs include datum that overwrite each other and they are executed at the same time, one of them can fail from a transactional deadlock as they both try to write the same datum. To prevent this from happening, define a group key for the import jobs so that all jobs importing datum for the same node and source ID stream share a common `groupKey` value. Then those jobs will be imported sequentially, from oldest to newest, without conflicting with each other.
If no `groupKey` is provided with a given import job, SolarNetwork will assign a unique value itself.
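As an illustration, two jobs that backfill the same datum stream might share a key like this (the key value and names are hypothetical; the input configurations are elided):

```python
# Jobs importing datum for the same node and source ID streams share a
# group key so they execute sequentially; the key value is arbitrary.
group_key = "node-123-backfill"  # hypothetical example value

config_part_1 = {"name": "Backfill part 1", "groupKey": group_key,
                 "inputConfiguration": {}}  # input configuration elided
config_part_2 = {"name": "Backfill part 2", "groupKey": group_key,
                 "inputConfiguration": {}}  # input configuration elided
```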
The `inputConfiguration` task configuration property defines the format of the `data` HTTP request part. The supported import formats are available via the List input formats endpoint.
| Property | Type | Description |
|---|---|---|
| `name` | string | A descriptive name for the input format. |
| `timeZoneId` | string | A time zone to apply to imported datum that do not specify a time zone. |
| `serviceIdentifier` | string | The ID of a supported input format. |
| `serviceProperties` | object | An optional object of setting keys and associated values to apply on the selected input format. |
The supported `serviceProperties` are specific to each input format. Consult the List input formats documentation for more information.

An example input configuration object that imports datum in CSV form with no header row and local dates in the `America/New_York` time zone:
```json
{
  "name": "Test Input",
  "timeZoneId": "America/New_York",
  "serviceIdentifier": "net.solarnetwork.central.datum.imp.standard.BasicCsvDatumImportInputFormatService",
  "serviceProperties": {
    "headerRowCount": "0",
    "dateColumnsValue": "1",
    "dateFormat": "MM/dd/yyyy HH:mm:ss",
    "nodeIdColumn": "2",
    "sourceIdColumn": "3"
  }
}
```
The response will be a task object. See the view task response for more details.
```json
{
  "success": true,
  "data": {
    "jobId": "7e426ffb-5928-420d-90c6-c7c0404c6da1",
    "jobState": "Staged",
    "groupKey": "4dff899d-0fee-47e1-8f74-a0e74941ac1f"
  }
}
```
List the available import tasks previously submitted via a `POST` to this same URL.
| GET | /solaruser/api/v1/sec/user/import/jobs |
|---|---|
| `states` | An optional comma-delimited list of import task state values. If not provided, tasks for all states are returned. |
For example, to find all pending tasks you'd call this API like:

`/solaruser/api/v1/sec/user/import/jobs?states=Queued,Staged,Claimed,Executing`

To find all executing tasks you'd call this API like:

`/solaruser/api/v1/sec/user/import/jobs?states=Executing`

To find all completed tasks (both successful and failed) you'd call this API like:

`/solaruser/api/v1/sec/user/import/jobs?states=Completed`
The response contains a list of import tasks. See the view import task response for more details.
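A sketch of calling this endpoint with Python's `requests` (again assuming a hypothetical `auth_headers()` SNWS2 signing helper):

```python
import requests

def auth_headers():
    # Placeholder: compute a real SNWS2 Authorization header here.
    return {"Authorization": "SNWS2 <<SNWS2 AUTH HERE>>"}

# List all pending import tasks and print their current states.
resp = requests.get(
    "https://data.solarnetwork.net/solaruser/api/v1/sec/user/import/jobs",
    params={"states": "Queued,Staged,Claimed,Executing"},
    headers=auth_headers(),
)
for job in resp.json()["data"]:
    print(job["jobId"], job["jobState"])
```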
View details on a previously submitted import task.
| GET | /solaruser/api/v1/sec/user/import/jobs/{jobId} |
|---|---|
| `jobId` | The import task job ID, returned via the submit import task API. |
The response is an import task status object, with the following properties:
| Property | Type | Description |
|---|---|---|
| `jobId` | string | A unique identifier assigned by SolarNetwork. |
| `jobState` | string | The current import task state. |
| `groupKey` | string | The job group key. See the section on parallel import jobs for more information. |
| `success` | boolean | `true` if the task has completed successfully. |
| `cancelled` | boolean | `true` if the task was cancelled. |
| `done` | boolean | `true` if the task is complete. |
| `userId` | number | The ID of the user that owns the task. |
| `submitDate` | number | The date the task was submitted, in milliseconds since Jan 1, 1970, in the UTC time zone. |
| `startedDate` | number | The date the task started executing, in milliseconds since Jan 1, 1970, in the UTC time zone. Set to `0` if the task has not yet started. |
| `completionDate` | number | The date the task finished executing, in milliseconds since Jan 1, 1970, in the UTC time zone. Set to `0` if the task has not yet completed. |
| `loadedCount` | number | The count of datum that have been imported so far. |
| `percentComplete` | number | The percent of datum that have been imported so far, between `0` and `1`. |
| `importDate` | number | The date used as the posted date on the imported datum, in milliseconds since Jan 1, 1970, in the UTC time zone. This will be approximately the `submitDate`, and is set internally by SolarNetwork. |
| `configuration` | object | The full import task configuration. |
An example response looks like:
```json
{
  "success": true,
  "data": {
    "jobId": "7e426ffb-5928-420d-90c6-c7c0404c6da1",
    "jobState": "Staged",
    "groupKey": "4dff899d-0fee-47e1-8f74-a0e74941ac1f",
    "success": false,
    "cancelled": false,
    "done": false,
    "userId": 147,
    "submitDate": 1615929546543,
    "startedDate": 0,
    "completionDate": 0,
    "loadedCount": 0,
    "percentComplete": 0.0,
    "importDate": 1615929545362,
    "configuration": {
      "name": "Test Import",
      "stage": true,
      "inputConfiguration": {
        "name": "Test Input",
        "serviceIdentifier": "net.solarnetwork.central.datum.imp.standard.BasicCsvDatumImportInputFormatService",
        "timeZoneId": "America/New_York",
        "serviceProperties": {
          "dateFormat": "MM/dd/yyyy HH:mm:ss",
          "nodeIdColumn": "2",
          "headerRowCount": "0",
          "sourceIdColumn": "3",
          "dateColumnsValue": "1"
        }
      }
    }
  }
}
```
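Since the import process is asynchronous, a client typically polls this endpoint until `done` is `true`. A sketch, using the hypothetical `auth_headers()` helper from earlier examples:

```python
import time
import requests

def auth_headers():
    # Placeholder: compute a real SNWS2 Authorization header here.
    return {"Authorization": "SNWS2 <<SNWS2 AUTH HERE>>"}

# Poll a submitted task until it completes, using the done flag.
job_id = "7e426ffb-5928-420d-90c6-c7c0404c6da1"  # from the submit response
url = f"https://data.solarnetwork.net/solaruser/api/v1/sec/user/import/jobs/{job_id}"
while True:
    status = requests.get(url, headers=auth_headers()).json()["data"]
    print(status["jobState"], status["percentComplete"])
    if status["done"]:
        break
    time.sleep(10)
```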
Update the configuration of a previously submitted but not yet executed import task.
| POST | /solaruser/api/v1/sec/user/import/jobs/{jobId} |
|---|---|
| `jobId` | The import task job ID to update, previously returned via the submit import task API. |
This request must provide `application/json` HTTP content of a JSON object representing the full job configuration to update. The submitted configuration will replace the existing configuration.

Once updated, a staged task can be previewed again using the new settings. This allows you to correct any mistakes in the configuration without having to re-upload the import data.

See the submit import task documentation on how the configuration JSON object is structured. The JSON is basically the `config` part of that API.
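A sketch of updating a staged job with `requests` (the corrected configuration shown is a placeholder, with the input configuration elided):

```python
import requests

def auth_headers():
    # Placeholder: compute a real SNWS2 Authorization header here.
    return {"Authorization": "SNWS2 <<SNWS2 AUTH HERE>>"}

# Replace a staged job's configuration, e.g. to correct the date format,
# without re-uploading the data. The body has the same shape as the
# config part of the submit request (input configuration elided here).
job_id = "7e426ffb-5928-420d-90c6-c7c0404c6da1"
new_config = {"name": "Test Import", "stage": True, "inputConfiguration": {}}
resp = requests.post(
    f"https://data.solarnetwork.net/solaruser/api/v1/sec/user/import/jobs/{job_id}",
    json=new_config,
    headers=auth_headers(),
)
print(resp.json())
```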
Cancel a previously submitted import task.
| DELETE | /solaruser/api/v1/sec/user/import/jobs/{jobId} |
|---|---|
| `jobId` | The import task job ID, previously returned via the submit import task API. |
Preview how SolarNetwork will parse a staged import task.
| GET | /solaruser/api/v1/sec/user/import/jobs/{jobId}/preview |
|---|---|
| `jobId` | The import task job ID, returned via the submit import task API. |
If the imported data cannot be parsed, a `422` HTTP status will be returned along with an error message. For example:
```json
{
  "success": false,
  "code": "DI.00400",
  "message": "Import not allowed for node 123"
}
```
If the imported data is parsed successfully, a `200` HTTP status will be returned along with a limited set of the parsed datum. For example:
```json
{
  "success": true,
  "data": {
    "totalResults": 307,
    "startingOffset": 0,
    "returnedResultCount": 50,
    "results": [
      {
        "created": "2013-01-01 05:00:00.000Z",
        "nodeId": 123,
        "sourceId": "/S01/TEST",
        "localDate": "2013-01-01",
        "localTime": "18:00",
        "i": {
          "irradiance": "0.000"
        },
        "a": {
          "wattHours": 0
        }
      },
      ...
    ]
  }
}
```
⚠️ The `totalResults` value in the preview response is an estimate only and will not necessarily match the actual number of datum that are included in the import data.
Confirm a staged import task into the `Queued` state so it may execute at some point in the future.
| POST | /solaruser/api/v1/sec/user/import/jobs/{jobId}/confirm |
|---|---|
| `jobId` | The import task job ID, returned via the submit import task API. |
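Putting the staged workflow together, a client can preview a staged job and, if the parsed datum look correct, confirm it. A sketch with the hypothetical `auth_headers()` helper:

```python
import requests

def auth_headers():
    # Placeholder: compute a real SNWS2 Authorization header here.
    return {"Authorization": "SNWS2 <<SNWS2 AUTH HERE>>"}

base = "https://data.solarnetwork.net/solaruser/api/v1/sec/user/import/jobs"
job_id = "7e426ffb-5928-420d-90c6-c7c0404c6da1"

# Preview a staged job; a 422 response means the data could not be parsed.
preview = requests.get(f"{base}/{job_id}/preview", headers=auth_headers())
if preview.status_code == 200:
    for datum in preview.json()["data"]["results"]:
        print(datum["created"], datum["nodeId"], datum["sourceId"])
    # The preview looks good: confirm the job so it moves to Queued.
    requests.post(f"{base}/{job_id}/confirm", headers=auth_headers())
else:
    print("Preview failed:", preview.json()["message"])
```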
This method will list the input formats supported by the SolarNetwork datum import service. Input formats dictate the format the imported data must be encoded as.
| GET | /solaruser/api/v1/sec/user/import/services/input |
|---|---|
See the supported input formats section for more details.
The response contains an array of input format service definitions.
| Property | Type | Description |
|---|---|---|
| `id` | string | The service unique identifier. |
| `settingSpecifiers` | array | The configurable setting specifiers for the service. |
| `locale` | string | The locale of the localized messages in the response. |
| `localizedName` | string | A localized name for the service. |
| `localizedDescription` | string | A localized description of the service. |
| `localizedInfoMessages` | object | Localized messages for the associated setting specifiers. |
An example response looks like:
```json
{
  "success": true,
  "data": [
    {
      "id": "net.solarnetwork.central.datum.imp.standard.BasicCsvDatumImportInputFormatService",
      "locale": "en-NZ",
      "settingSpecifiers": [
        {
          "key": "headerRowCount",
          "defaultValue": "1",
          "secureTextEntry": false,
          "transient": false,
          "type": "net.solarnetwork.settings.TextFieldSettingSpecifier"
        },
        ...
      ],
      "localizedName": "CSV - Basic",
      "localizedDescription": "Import data in a basic CSV format, with JSON encoded columns for instantaneous, accumulating, status, and tag sample data.",
      "localizedInfoMessages": {
        "headerRowCount.key": "Skip Rows",
        "headerRowCount.desc": "The number of rows to skip. Can be used to skip header rows, or to skip previously loaded rows from a partially completed import job.",
        ...
      }
    }
  ]
}
```
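A sketch of fetching the available formats and printing their IDs and localized names (again with the hypothetical `auth_headers()` helper):

```python
import requests

def auth_headers():
    # Placeholder: compute a real SNWS2 Authorization header here.
    return {"Authorization": "SNWS2 <<SNWS2 AUTH HERE>>"}

resp = requests.get(
    "https://data.solarnetwork.net/solaruser/api/v1/sec/user/import/services/input",
    headers=auth_headers(),
)
for svc in resp.json()["data"]:
    print(svc["id"], "-", svc["localizedName"])
```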