CSV Component #60

Closed · skhatri opened this issue Nov 15, 2016 · 10 comments

Comments
@skhatri commented Nov 15, 2016

I can write a CSV component, if you like.

I was thinking of creating a CSVFlow as well as a CSVSourceStage: CSVFlow would take a ByteString and chunk it, while CSVSourceStage could be a wrapper around FileIO. Let me know if this would be an acceptable component.
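
A rough sketch of those two shapes, as an illustration only (the object and method names are made up, and the naive newline framing below ignores quoted line breaks, which a real CSV parser must handle):

  import java.nio.file.Path

  import akka.NotUsed
  import akka.stream.IOResult
  import akka.stream.scaladsl.{FileIO, Flow, Framing, Source}
  import akka.util.ByteString

  import scala.concurrent.Future

  object CsvProposal {
    // CSVFlow: re-chunk incoming ByteStrings into one ByteString per line.
    // Naive framing on '\n'; quoted line breaks would need a real parser.
    def csvFlow: Flow[ByteString, ByteString, NotUsed] =
      Framing.delimiter(ByteString("\n"), maximumFrameLength = 4096, allowTruncation = true)

    // CSVSourceStage: a thin wrapper combining FileIO with the flow above.
    def csvSource(path: Path): Source[ByteString, Future[IOResult]] =
      FileIO.fromPath(path).via(csvFlow)
  }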

@hseeberger (Contributor)

AFAIK Camel has it, too, so this makes sense to me.

@stephennancekivell (Contributor)

Great idea!
Might be nice to wrap opencsv or something to take care of the escaping nuances.

Or scala-csv / pure-csv / mighty csv.

@joearasin commented Nov 17, 2016

Not sure if it's optimal, but we've been using something along the lines of this to write CSVs, built around commons-csv:

  import java.io.OutputStreamWriter
  import java.nio.charset.{Charset, StandardCharsets}

  import akka.NotUsed
  import akka.stream.scaladsl.Flow
  import akka.util.{ByteString, ByteStringBuilder}
  import org.apache.commons.csv.CSVFormat

  def forFormat(csvFormat: CSVFormat, charset: Charset = StandardCharsets.UTF_8): Flow[Seq[Any], ByteString, NotUsed] =
    Flow[Seq[Any]].map { cells =>
      // Print one record through commons-csv into an in-memory buffer
      val buf: ByteStringBuilder = ByteString.newBuilder
      val writer = new OutputStreamWriter(buf.asOutputStream, charset)
      csvFormat.printRecord(writer, cells.map(_.asInstanceOf[AnyRef]): _*)
      writer.close() // flush the encoder into the builder
      buf.result()   // emit the encoded record as one ByteString
    }

On top of this, one thing worth keeping in mind with an API like this is padding the inputs, to deal with varying-length sequences that would otherwise produce truncated records.
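
A minimal padding step for that case could look like this (padRows and the column count are illustrative, not part of the snippet above):

  import akka.NotUsed
  import akka.stream.scaladsl.Flow

  // Right-pad short rows with empty cells so every record has `columns` fields.
  def padRows(columns: Int): Flow[Seq[Any], Seq[Any], NotUsed] =
    Flow[Seq[Any]].map(_.padTo(columns, ""))

Composed in front of the formatter, e.g. padRows(5).via(forFormat(CSVFormat.DEFAULT)), every emitted record then has the same width.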

@ennru (Member) commented Feb 23, 2017

@skhatri Did you put any more effort into this?

What I find in the existing CSV libraries (commons-csv, opencsv, scala-csv) is that they tend to take control of file IO, which is not what we want in an Akka Stream. I've written some parsing stages with scala-csv, but converting to CSV ends up a bit like what @joearasin shows for commons-csv.

@johanandren (Member)

On the parser side we'd want it to be push/record-based, so that we can push ByteStrings into it and potentially get a record back, just like the JSON framing in Akka Stream core: https://github.com/akka/akka/blob/master/akka-stream/src/main/scala/akka/stream/scaladsl/JsonFraming.scala
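
For comparison, a usage sketch of that JSON framing (the maximumObjectLength value here is arbitrary); a CSV parser could expose the same chunks-in/records-out shape:

  import akka.NotUsed
  import akka.stream.scaladsl.{Flow, JsonFraming}
  import akka.util.ByteString

  // Arbitrary ByteString chunks in, one ByteString per complete JSON object out.
  val jsonFrames: Flow[ByteString, ByteString, NotUsed] =
    JsonFraming.objectScanner(maximumObjectLength = 8192)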

@ennru (Member) commented Feb 23, 2017

So the expected signature could be something like

  val csvFlow: Flow[ByteString, Source[ByteString, NotUsed], NotUsed] = CsvFraming()

where each inner Source carries one element per column of the incoming CSV ByteString, and a new inner Source starts for every "line" (CSV columns may contain line breaks).

Or should it keep things simpler and create elements of immutable.Seq[ByteString]?
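
Side by side, the two candidate shapes as type signatures only (a sketch; the names are illustrative and the implementations are elided):

  import akka.NotUsed
  import akka.stream.scaladsl.{Flow, Source}
  import akka.util.ByteString

  import scala.collection.immutable

  // One inner Source per CSV line, one inner element per column:
  def nested: Flow[ByteString, Source[ByteString, NotUsed], NotUsed] = ???

  // The simpler variant: one immutable.Seq[ByteString] per CSV line:
  def flat: Flow[ByteString, immutable.Seq[ByteString], NotUsed] = ???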

@johanandren (Member)

Or should the stage both frame and parse, and emit CsvRecords or something like that?

@johanandren (Member)

Can you think of a use-case where it is interesting to consume fields rather than entire records?

@ennru (Member) commented Feb 23, 2017

That would be the extra-wide Excel data a former colleague of mine produced, in case I'd only be interested in the first 100 columns.
But the framing would need to go through the rest anyway, so a fixed-length element will work best. I would not want to introduce a CsvRecord. Another step in the stream could convert the list of elements into a map keyed by the column names (Map[String, String]), which on request could be read from the first record in the stream.
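
A minimal sketch of that map-conversion step, assuming the parser emits one List[ByteString] per line and treating the first record as the header row (names are illustrative):

  import akka.NotUsed
  import akka.stream.scaladsl.Flow
  import akka.util.ByteString

  val toMap: Flow[List[ByteString], Map[String, String], NotUsed] =
    Flow[List[ByteString]].statefulMapConcat { () =>
      // per-materialization state: the column names from the first record
      var headers: Option[List[String]] = None
      fields => {
        val strings = fields.map(_.utf8String)
        headers match {
          case None =>
            headers = Some(strings) // first record becomes the column names
            Nil                     // the header row itself emits nothing
          case Some(names) =>
            List(names.zip(strings).toMap)
        }
      }
    }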

@johanandren (Member)

Ok, sounds like you have a plan! 👍

ennru added a commit to ennru/alpakka that referenced this issue Mar 6, 2017
@jrudolph jrudolph added this to the 0.8 milestone Apr 12, 2017
s-soroosh added a commit to s-soroosh/alpakka that referenced this issue Jun 12, 2017

Add CSV data transformation module (akka#213)

* Alpakka Issue akka#66: CSV component

* Alpakka Issue akka#66: revised CSV parser

* Alpakka Issue akka#60: CSV parsing stage

* wait for line end before issuing line

As the byte string may not contain a whole line the parser needs to read until a line end is reached.

* Add Java API and JUnit test; add a bit of documentation

* Introduce CsvToMap stage; more documentation

* Parse line even without line end at upstream finish

* Add Java API for CsvToMap; more documentation

* More restricted API, incorporated comments by @johanandren

* Format sequence as CSV in ByteString

* Add Scala CSV formatting stage

* Add Java API for CSV formatting; more docs

* Separate enums for Java and Scala DSLs

* Use Flow.fromGraph to construct flow

* Rename CsvFraming to CsvParsing

* Check for Byte Order Mark and ignore it for UTF-8

* Emit Byte Order Marks in formatting; CsvFormatting is just a map

* Byte Order Mark for Java API

* Add line number to error messages; sample files exported from third party software

* Use Charset directly instead of name

* csv: autoformatted files

* simplified dependency declaration

Fixes akka#60.
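
Assuming the module's published API matches the commit titles above (CsvParsing.lineScanner and CsvToMap.toMap in the alpakka-csv artifact), end-to-end usage might look roughly like this:

  import akka.actor.ActorSystem
  import akka.stream.ActorMaterializer
  import akka.stream.alpakka.csv.scaladsl.{CsvParsing, CsvToMap}
  import akka.stream.scaladsl.Source
  import akka.util.ByteString

  object CsvExample extends App {
    implicit val system = ActorSystem("csv-example")
    implicit val materializer = ActorMaterializer()

    Source.single(ByteString("name,city\nAlice,Berlin\nBob,Paris\n"))
      .via(CsvParsing.lineScanner()) // one List[ByteString] per CSV line
      .via(CsvToMap.toMap())         // header row supplies the map keys
      .runForeach(println)           // one Map per data row, ByteString values
      .onComplete(_ => system.terminate())(system.dispatcher)
  }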
