@@ -0,0 +1,32 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.storm.hdfs.bolt;

import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.TimedRotationPolicy;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.common.rotation.MoveFileAction;

public class CSVFileBolt extends HdfsBolt {
private static String fileExtension = ".csv";

public CSVFileBolt(String sourceDir, String destDir) {
super(sourceDir, destDir, fileExtension);
}
}
Contributor:
Are there any guarantees that this bolt actually writes out comma separated values? I don't see that, which makes the class name somewhat misleading.

Similar observation about the TSV bolt further down.

Author:

I guess I could set the RecordDefaultDelimiter to ",". Yes, thanks for pointing it out. I will make the changes and put them up soon.

Contributor:

I think it would take much more work than that, plus a clearer definition of what CSVFileBolt actually does, IMO. What should the user expect from the behavior of CSVBolt.emit()? Will it take arbitrary values from a tuple and join them together with commas? Will it escape characters? And so on.

Another approach would be to create a separate bolt that does nothing but generate CSV output and then passes the results to any other bolt the user wanted. I often create pre-processing bolts like that.
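For illustration, the record-formatting core of such a pre-processing bolt might look roughly like this. This is a hedged sketch: the class and method names are hypothetical, and the quoting rules follow RFC 4180 rather than anything currently in storm-hdfs:

```java
import java.util.List;

// Hypothetical helper sketching what a CSV pre-processing bolt could
// delegate to; not part of storm-hdfs.
public class CsvFormatter {

    // Quote a field only when it contains a comma, a double quote, or a
    // newline; embedded quotes are doubled (RFC 4180 style).
    public static String escape(String field) {
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }

    // Join the values of one tuple into a single CSV record.
    public static String toRecord(List<String> values) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.size(); i++) {
            if (i > 0) {
                sb.append(',');
            }
            sb.append(escape(values.get(i)));
        }
        return sb.toString();
    }
}
```

A pre-processing bolt built around something like this would emit the already-escaped record string downstream, leaving the HDFS-writing bolt to append plain strings.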

Author:

For further clarification: thinking this over some more, my understanding is that the spout currently emits tuples as fields, and the CSV or TSV bolt joins them with the configured record-format delimiter. If that delimiter is ",", the fields are separated by commas. HdfsBolt does the actual writing when execute() is called on it; the TSV and CSV classes are just abstractions that set the intended delimiter. Can you please let me know what exactly has to be done, with an example? I thought I got your point, but right now I cannot clearly picture it. It would be great if you could get back with a reply. Thanks a lot.
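(To illustrate the joining behavior described above, here is a simplified sketch; this is not the actual DelimitedRecordFormat source, just an approximation of the idea:)

```java
// Simplified sketch of how a delimited record format turns a tuple's
// fields into one record: join with the field delimiter, then append
// the record delimiter. Not the actual storm-hdfs implementation.
public class DelimitedJoinSketch {
    public static String format(String[] fields, String fieldDelimiter, String recordDelimiter) {
        return String.join(fieldDelimiter, fields) + recordDelimiter;
    }
}
```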

Contributor:

If the CSV and TSV formats are a bit controversial, perhaps we can file a separate JIRA to implement them properly, with escaping, etc., and remove them from here.

@@ -25,10 +25,15 @@
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.TimedRotationPolicy;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;
import org.apache.storm.hdfs.common.rotation.MoveFileAction;
import org.apache.storm.hdfs.common.rotation.RotationAction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -44,6 +49,25 @@ public class HdfsBolt extends AbstractHdfsBolt{
private transient FSDataOutputStream out;
private RecordFormat format;
private long offset = 0;
private static String defaultSourceDir = "/tmp/source";
private static String defaultDestDir = "/tmp/dest";
private static String defaultFileExtension = ".txt";


Contributor:

Extra blank line.

public HdfsBolt() {
Contributor:

I'm not sure a no-argument constructor makes a lot of sense. I think you need to specify, at a minimum, the destination directory.

this(defaultSourceDir, defaultDestDir, defaultFileExtension);
}

public HdfsBolt(String defaultSourceDir, String defaultDestDir, String fileExtension) {
Contributor:

Can we call sourceDir a stagingDir instead? To me it seems more logical.

Contributor:

And not use "default" in the variable names.

this.withRotationPolicy(new TimedRotationPolicy(1.0f, TimedRotationPolicy.TimeUnit.MINUTES))
.withConfigKey("hdfs.config")
Contributor:

I don't really like this as a default; I thought null would be a better default.

.withRecordFormat(new DelimitedRecordFormat().withRecordDelimiter("|"))
.withFileNameFormat(new DefaultFileNameFormat()
.withPath(defaultSourceDir)
.withExtension(fileExtension))
.withSyncPolicy(new CountSyncPolicy(1000))
.addRotationAction(new MoveFileAction().toDestination(defaultDestDir));
Contributor:

I would like to see it so that if no stagingDir is specified, the destDir is used for the DefaultFileNameFormat path and there is no MoveFileAction. If a stagingDir is specified, then it behaves as it does here.
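(For illustration, the decision being asked for could be sketched like this; the helper is hypothetical, not the patch itself:)

```java
// Hypothetical sketch of the requested behavior: with no stagingDir the
// bolt writes straight to destDir and needs no MoveFileAction; with a
// stagingDir it writes there and moves files to destDir on rotation.
public class StagingConfigSketch {

    // Directory the DefaultFileNameFormat path would point at.
    public static String writePath(String stagingDir, String destDir) {
        return (stagingDir == null) ? destDir : stagingDir;
    }

    // Whether a MoveFileAction to destDir should be registered at all.
    public static boolean needsMoveAction(String stagingDir) {
        return stagingDir != null;
    }
}
```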

}

public HdfsBolt withFsUrl(String fsUrl){
this.fsUrl = fsUrl;
@@ -24,10 +24,15 @@
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DefaultSequenceFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.SequenceFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.common.rotation.MoveFileAction;
import org.apache.storm.hdfs.common.rotation.RotationAction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -45,8 +50,23 @@ public class SequenceFileBolt extends AbstractHdfsBolt {

private String compressionCodec = "default";
private transient CompressionCodecFactory codecFactory;
private static String sourceDir = "/tmp/source";
private static String destDir = "/tmp/dest";

public SequenceFileBolt() {
this(sourceDir,destDir);
}

public SequenceFileBolt(String sourceDir, String destDir) {
this.withFileNameFormat(new DefaultFileNameFormat()
.withPath(sourceDir)
.withExtension(".seq"))
.withSequenceFormat(new DefaultSequenceFormat("timestamp", "sentence"))
.withRotationPolicy(new FileSizeRotationPolicy(5.0f, FileSizeRotationPolicy.Units.MB))
.withSyncPolicy(new CountSyncPolicy(1000))
.withCompressionType(SequenceFile.CompressionType.RECORD)
.withCompressionCodec("deflate")
.addRotationAction(new MoveFileAction().toDestination(destDir));
}

public SequenceFileBolt withCompressionCodec(String codec){
@@ -0,0 +1,32 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.storm.hdfs.bolt;

import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.TimedRotationPolicy;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.common.rotation.MoveFileAction;

public class TSVFileBolt extends HdfsBolt {
private static String fileExtension = ".txt";

public TSVFileBolt(String sourceDir, String destDir) {
super(sourceDir, destDir, fileExtension);
}
}
@@ -58,19 +58,11 @@ public class HdfsFileTopology {
public static void main(String[] args) throws Exception {
Config config = new Config();
config.setNumWorkers(1);
String sourceDir = "/tmp/foo";
String destDir = "tmp/dest";

SentenceSpout spout = new SentenceSpout();

// sync the filesystem after every 1k tuples
SyncPolicy syncPolicy = new CountSyncPolicy(1000);

// rotate files when they reach 5MB
FileRotationPolicy rotationPolicy = new TimedRotationPolicy(1.0f, TimedRotationPolicy.TimeUnit.MINUTES);

FileNameFormat fileNameFormat = new DefaultFileNameFormat()
.withPath("/tmp/foo/")
.withExtension(".txt");

// use "|" instead of "," for field delimiter
RecordFormat format = new DelimitedRecordFormat()
.withFieldDelimiter("|");
@@ -81,14 +73,8 @@ public static void main(String[] args) throws Exception {
in.close();
config.put("hdfs.config", yamlConf);

HdfsBolt bolt = new HdfsBolt()
.withConfigKey("hdfs.config")
.withFsUrl(args[0])
.withFileNameFormat(fileNameFormat)
.withRecordFormat(format)
.withRotationPolicy(rotationPolicy)
.withSyncPolicy(syncPolicy)
.addRotationAction(new MoveFileAction().toDestination("/tmp/dest2/"));
HdfsBolt bolt = new TSVFileBolt(sourceDir,destDir)
.withFsUrl(args[0]);

TopologyBuilder builder = new TopologyBuilder();

@@ -56,29 +56,8 @@ public static void main(String[] args) throws Exception {

SentenceSpout spout = new SentenceSpout();

// sync the filesystem after every 1k tuples
SyncPolicy syncPolicy = new CountSyncPolicy(1000);

// rotate files when they reach 5MB
FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB);

FileNameFormat fileNameFormat = new DefaultFileNameFormat()
.withPath("/tmp/source/")
.withExtension(".seq");

// create sequence format instance.
DefaultSequenceFormat format = new DefaultSequenceFormat("timestamp", "sentence");

SequenceFileBolt bolt = new SequenceFileBolt()
.withFsUrl(args[0])
.withFileNameFormat(fileNameFormat)
.withSequenceFormat(format)
.withRotationPolicy(rotationPolicy)
.withSyncPolicy(syncPolicy)
.withCompressionType(SequenceFile.CompressionType.RECORD)
.withCompressionCodec("deflate")
.addRotationAction(new MoveFileAction().toDestination("/tmp/dest/"));

.withFsUrl(args[0]);