DX-8267: Expose the ability to set last set value #1

Closed
wants to merge 3 commits into from

Conversation

siddharthteotia

The changes here expose the ability in Dremio/Arrow to set the last set value in Vectors.

Currently, we have LastSetter.java and Vectors.java in Dremio/Dremio with static methods that do this for particular vector types (NullableVarBinary, ListVector and NullableVarchar).

An abstract method has been declared in BaseValueVector and overridden in all the non-abstract subclasses: BaseDataValueVector, ListVector, and FixedSizeListVector.

Since all the Nullable<>Vector and <>Vector classes extend BaseDataValueVector, setLastSet(int value) is now generically available to all of these vector types.

Once this change goes in, a complementary changeset will be pushed to Dremio/Dremio to:

(1) Remove LastSetter.java and update all usages of its methods to use the new setLastSet() API on the required vectors.

(2) Do a similar cleanup in Vectors.java and update the callers of its methods to use the setLastSet() API.
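
For illustration, a rough caller-side sketch of how Dremio/Dremio code might use the new API once the static helpers are removed. The vector types and record-count logic are assumptions for illustration, and the sketch assumes this change is applied so that setLastSet(int) is declared on BaseValueVector:

```java
import org.apache.arrow.vector.BaseValueVector;

// Hypothetical caller-side sketch: with setLastSet(int) declared on BaseValueVector,
// the same call works for variable-width, list, and other vector types, replacing
// the per-type static helpers in LastSetter.java / Vectors.java.
public class LastSetExample {
  static void markLastSet(BaseValueVector varCharVector,
                          BaseValueVector listVector,
                          int recordCount) {
    // The last populated index is recordCount - 1 for both vectors.
    varCharVector.setLastSet(recordCount - 1);
    listVector.setLastSet(recordCount - 1);
  }
}
```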


public void setLastSet(int value) {
  try {
    Field f = this.getMutator().getClass().getDeclaredField("lastSet");


We should avoid reflection here and below. Implement methods directly.
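
For reference, a minimal self-contained sketch of what a reflection-free override could look like; the class and field names are illustrative assumptions rather than the actual Dremio/Arrow code:

```java
// Illustrative only: instead of locating the mutator's "lastSet" field via
// java.lang.reflect, each concrete vector forwards setLastSet(int) to the
// counter its own bookkeeping already maintains.
abstract class BaseValueVector {
  public abstract void setLastSet(int value);
}

class VarCharVectorSketch extends BaseValueVector {
  // Stand-in for the mutator bookkeeping a real variable-width vector keeps.
  private int lastSet = -1;

  @Override
  public void setLastSet(int value) {
    this.lastSet = value; // direct write, no reflection needed
  }
}
```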


laurentgo commented Jul 18, 2017

Shouldn't this be directly pushed upstream?

@siddharthteotia (Author)

Closing this pull request. Will directly fork the apache-arrow repo and create a pull request against apache-arrow master.

yufeldman pushed a commit to yufeldman/arrow that referenced this pull request Oct 1, 2017
…XXX in plasma protocol.

Related to apache#878, add DCHECK for ReadXXX.

Author: Yeolar <yeolar@gmail.com>

Closes apache#887 from Yeolar/fixtypo_plasma_and_add_DCHECK and squashes the following commits:

4df63bc [Yeolar] clang-format for too long lines.
143d254 [Yeolar] Update, compile passed.
09ff103 [Yeolar] Fix conflicts.
b951d8d [Yeolar] Merge pull request dremio#1 from apache/master
ebae611 [Yeolar] Fix typo in plasma protocol; add DCHECK for ReadXXX in plasma protocol.
yufeldman pushed a commit to yufeldman/arrow that referenced this pull request Oct 1, 2017
…ties

As per apache#872 I am upgrading Jackson to the latest version on the current train (2.7.1 --> 2.7.9)

Author: Matt Darwin <(none)>
Author: Matt <mattdarwin@yahoo.co.uk>

Closes apache#929 from mattdarwin/ARROW-1242-upgrade-jackson and squashes the following commits:

d059517 [Matt Darwin] 1242 upgraing jackson to 2.7.9
bc3b6a0 [Matt] Merge pull request dremio#1 from apache/master
yufeldman pushed a commit to yufeldman/arrow that referenced this pull request Oct 1, 2017
NB this commit excludes Jackson and logback upgrades, since they are dealt with in 871 and 872

Author: Matt Darwin <(none)>
Author: Matt Darwin <matt.darwin@oracle.com>
Author: Matt <mattdarwin@yahoo.co.uk>

Closes apache#873 from mattdarwin/upgrade-libs and squashes the following commits:

9b51f46 [Matt Darwin] Merge branch 'master' into upgrade-libs
284a4ce [Matt Darwin] Merge branch 'master' of https://github.com/apache/arrow
79550b1 [Matt Darwin] rolling back lilith to 0.9.44 since 8 doesn't support java 7
c63eef6 [Matt Darwin] Merge branch 'master' into upgrade-libs
bc3b6a0 [Matt] Merge pull request dremio#1 from apache/master
8599ba0 [Matt Darwin] backing out guava upgrade
80d81e6 [Matt Darwin] downgrading guava to 20 for java 7 compatibility
806f348 [Matt Darwin] Merge branch 'master' into upgrade-libs
8aafb7e [Matt Darwin] correcting indentation in BaseValueVector
94c1469 [Matt Darwin] upgrading netty to 4.0.49
cff5596 [Matt Darwin] reverting to netty 4.0.41.Final
568737d [Matt Darwin] switching to Collections from Guava for empty iterator
c194e48 [Matt Darwin] upgraded hppc to 0.7.2
38be468 [Matt Darwin] upgrading libs except jackson and logback
yufeldman pushed a commit to yufeldman/arrow that referenced this pull request Oct 1, 2017
…(take 2)

sorry, this was still not fixed properly.  logback version is separately specified in 2 places.

Fixed properly this time.

Author: Matt Darwin <(none)>
Author: Matt <mattdarwin@yahoo.co.uk>

Closes apache#960 from mattdarwin/ARROW-1240-upgrade-logback and squashes the following commits:

3492f66 [Matt Darwin] upgrading logback in tools/pom.xml
206b48d [Matt Darwin] Merge branch 'master' into ARROW-1240-upgrade-logback
284a4ce [Matt Darwin] Merge branch 'master' of https://github.com/apache/arrow
bc3b6a0 [Matt] Merge pull request dremio#1 from apache/master
3e2f676 [Matt Darwin] Merge branch 'master' into ARROW-1240-upgrade-logback
caed163 [Matt Darwin] upgrading slf4j to 1.7.25
yufeldman pushed a commit to yufeldman/arrow that referenced this pull request Oct 1, 2017
…ties (take 2)

sorry, PR apache#929 failed to actually change the Jackson version, since the `jackson.version` variable defined in java/pom.xml is not used in java/vector/pom.xml

That's now fixed in this PR.

Author: Matt Darwin <(none)>
Author: Matt <mattdarwin@yahoo.co.uk>

Closes apache#957 from mattdarwin/ARROW-1242-upgrade-jackson and squashes the following commits:

ad15e5f [Matt Darwin] Merge branch 'master' into ARROW-1242-upgrade-jackson
ee29d65 [Matt Darwin] Merge branch 'master' of https://github.com/apache/arrow into ARROW-1242-upgrade-jackson
06d7745 [Matt Darwin] upgrading jackson to 2.7.9 PROPERLY this time...
284a4ce [Matt Darwin] Merge branch 'master' of https://github.com/apache/arrow
d059517 [Matt Darwin] 1242 upgraing jackson to 2.7.9
bc3b6a0 [Matt] Merge pull request dremio#1 from apache/master
DremioQA pushed a commit that referenced this pull request Apr 20, 2018
…is alive before enqueue new record when download file.

Using pyarrow to download a file will sometimes raise queue.Full exceptions.
jira: https://issues.apache.org/jira/browse/ARROW-2002

Author: kmiku7 <kakoimiku@gmail.com>

Closes apache#1485 from kmiku7/master and squashes the following commits:

8d5f905 [kmiku7] fix queue.FULL exception when writer thread write data slowly.
722182b [kmiku7] Merge pull request #1 from apache/master
DremioQA pushed a commit that referenced this pull request Apr 20, 2018
…lue data

Modified BinaryBuilder::Resize(int64_t) so that when building BinaryArrays with a known size, space is also reserved for value_data_builder_ to prevent internal reallocation.

Author: Panchen Xue <pan.panchen.xue@gmail.com>

Closes apache#1481 from xuepanchen/master and squashes the following commits:

707b67b [Panchen Xue] ARROW-1712: [C++] Fix lint errors
360e601 [Panchen Xue] Merge branch 'master' of https://github.com/xuepanchen/arrow
d4bbd15 [Panchen Xue] ARROW-1712: [C++] Modify test case for BinaryBuilder::ReserveData() and change arguments for offsets_builder_.Resize()
77f8f3c [Panchen Xue] Merge pull request #5 from apache/master
bc5db7d [Panchen Xue] ARROW-1712: [C++] Remove unneeded data member in BinaryBuilder and modify test case
5a5b70e [Panchen Xue] Merge pull request #4 from apache/master
8e4c892 [Panchen Xue] Merge pull request #3 from xuepanchen/xuepanchen-arrow-1712
d3c8202 [Panchen Xue] ARROW-1945: [C++] Fix a small typo
0b07895 [Panchen Xue] ARROW-1945: [C++] Add data_capacity_ to track capacity of value data
18f90fb [Panchen Xue] ARROW-1945: [C++] Add data_capacity_ to track capacity of value data
bbc6527 [Panchen Xue] ARROW-1945: [C++] Update test case for BinaryBuild data value space reservation
15e045c [Panchen Xue] Add test case for array-test.cc
5a5593e [Panchen Xue] Update again ReserveData(int64_t) method for BinaryBuilder
9b5e805 [Panchen Xue] Update ReserveData(int64_t) method signature for BinaryBuilder
8dd5eaa [Panchen Xue] Update builder.cc
b002e0b [Panchen Xue] Remove override keyword from ReserveData(int64_t) method for BinaryBuilder
de318f4 [Panchen Xue] Implement ReserveData(int64_t) method for BinaryBuilder
e0434e6 [Panchen Xue] Add ReserveData(int64_t) and value_data_capacity() for methods for BinaryBuilder
5ebfb32 [Panchen Xue] Add capacity() method for TypedBufferBuilder
5b73c1c [Panchen Xue] Update again BinaryBuilder::Resize(int64_t capacity) in builder.cc
d021c54 [Panchen Xue] Merge pull request #2 from xuepanchen/xuepanchen-arrow-1712
232024e [Panchen Xue] Update BinaryBuilder::Resize(int64_t capacity) in builder.cc
c2f8dc4 [Panchen Xue] Merge pull request #1 from apache/master
DremioQA pushed a commit that referenced this pull request Apr 20, 2018
This PR moves the `Table` class out of the Vector hierarchy and adds optimized dataframe operations to it. Currently implements an optimized `scan()` method, `filter(predicate)`, `count()`, and `countBy(column_name)` (only works on dictionary-encoded columns).

Some usage examples, based on the file generated by `js/test/data/tables/generate.py`:
``` js
> let table = Table.from(...);
> table.count()
1000000
> table.filter(col('lat').gteq(0)).count()
499718
> table.countBy('origin').toJSON()
{ Charlottesville: 166839,
  'New York': 166251,
  'San Francisco': 166642,
  Seattle: 166659,
  'Terre Haute': 166756,
  'Washington, DC': 166853 }
> table.filter(col('lng').gteq(0)).countBy('origin').toJSON()
{ Charlottesville: 83109,
  'New York': 83221,
  'San Francisco': 83515,
  Seattle: 83362,
  'Terre Haute': 83314,
  'Washington, DC': 83479 }
```
There are performance tests for the dataframe operations; to run them, you must first generate the test data by running `npm run create:perfdata`.

The PR also includes @trxcllnt's refactor of the JS implementation to make it more closely resemble the C++ implementation. This refactor resolves multiple JIRAs: ARROW-1903, ARROW-1898, ARROW-1502, ARROW-1952 (partially), and ARROW-1985

Author: Paul Taylor <paul.e.taylor@me.com>
Author: Brian Hulette <brian.hulette@ccri.com>
Author: Brian Hulette <hulettbh@gmail.com>

Closes apache#1482 from TheNeuralBit/table-scan-perf and squashes the following commits:

52f1e0e [Brian Hulette] <, > are not commutative, misc cleanup
04b1838 [Brian Hulette] even more table tests
16b9ccb [Brian Hulette] Merge pull request #4 from trxcllnt/js-cpp-refactor
fe300df [Paul Taylor] fix closure es5/umd toString() iterator
3d5240a [Paul Taylor] fix more externs
10c48ad [Paul Taylor] Merge branch 'table-scan-perf' of github.com:ccri/arrow into js-cpp-refactor
dbe7f81 [Brian Hulette] Add more Table unit tests
1910962 [Brian Hulette] Add optional bind callback to scan
5bdf17f [Brian Hulette] Fix perf
8cf2473 [Brian Hulette] Merge remote-tracking branch 'origin/master' into table-scan-perf
4a41b18 [Paul Taylor] add src/predicate to the list of exports we should save from uglify
5a91fab [Paul Taylor] add more view, predicate externs
f6adfb3 [Brian Hulette] Create predicate namespace
f7bb0ed [Paul Taylor] Merge branch 'table-scan-perf' of github.com:ccri/arrow into js-cpp-refactor
e148ee4 [Paul Taylor] Merge branch 'extern-woes' into js-cpp-refactor
25cdc4a [Paul Taylor] add src/predicate to the list of exports we should save from uglify
dc7c728 [Paul Taylor] add more view, predicate externs
25e6af7 [Brian Hulette] Create predicate namespace
579ab1f [Brian Hulette] Merge pull request #2 from trxcllnt/js-cpp-refactor
f3cde1a [Paul Taylor] fix lint
9769773 [Paul Taylor] fix vector perf tests
016ba78 [Brian Hulette] Merge pull request #1 from trxcllnt/js-cpp-refactor
272d293 [Paul Taylor] Merge pull request #4 from ccri/empty-table
7bc7363 [Brian Hulette] Fix exception for empty Table
8ddce0a [Paul Taylor] check bounds in getChildAt(i) to avoid NPEs
f1dead0 [Paul Taylor] compute chunked nested childData list correctly
18807c6 [Paul Taylor] rename ChunkData's fields so it's more clear they're not semantically similar to other similarly named fields
7e43b78 [Paul Taylor] add test:integration npm script
a5f200f [Paul Taylor] Merge pull request #3 from ccri/table-from-struct
c8cd286 [Brian Hulette] Add Table.fromStruct
a00415e [Brian Hulette] Fix perf
54d4f5b [Paul Taylor] lazily allocate table and recordbatch columns, support NestedView's getChildAt(i) method in ChunkedView
40b3638 [Paul Taylor] run integration tests with local data for coverage stats
fe31ee0 [Paul Taylor] slice the flat data values before returning an iterator of them
e537789 [Paul Taylor] make it easier to run all integration tests from local data
c0fd2f9 [Paul Taylor] use the dictionary of the last chunked vector list for chunked dictionary vectors
e33c068 [Paul Taylor] Merge pull request #2 from ccri/fixed-size-list
5bb63af [Brian Hulette] Don't read OFFSET vector for FixedSizeList
614b688 [Paul Taylor] add asEpochMs to date and timestamp vectors
87334a5 [Paul Taylor] Merge branch 'table-scan-perf' of github.com:ccri/arrow into js-cpp-refactor
b7f5bfb [Paul Taylor] rename numRows to length, add table.getColumn()
e81082f [Paul Taylor] export vector views, allow cloning data as another type
700a47c [Paul Taylor] export visitors
e859e13 [Paul Taylor] fix package.json bin entry
0620cfd [Brian Hulette] use Math.fround
0126dc4 [Brian Hulette] Don't recompute total length
e761eee [Brian Hulette] Rename asJSON to toJSON
6c91ed4 [Paul Taylor] Merge branch 'master' of github.com:apache/arrow into js-cpp-refactor-merge_with-table-scan-perf
d2b18d5 [Paul Taylor] Merge remote-tracking branch 'ccri/table-scan-perf' into js-cpp-refactor-merge_with-table-scan-perf
f3f3b86 [Paul Taylor] rename table.ts to recordbatch.ts in preparation for merging latest
e3f629d [Paul Taylor] fix rest of the mangling issues
fa7c17a [Paul Taylor] passing all tests except es5 umd mangler ones
e20decd [Brian Hulette] Add license headers
edcbdbe [Brian Hulette] cleanup
20717d5 [Brian Hulette] Fixed countBy(string)
7244887 [Brian Hulette] Add table unit tests...
6719147 [Brian Hulette] Add DataFrame.countBy operation
2f4a349 [Brian Hulette] Minor tweaks
2e118ab [Brian Hulette] linter
a788db3 [Brian Hulette] Cleanup
a9fff89 [Brian Hulette] Move Table out of the Vector hierarchy
1d60aa1 [Brian Hulette] Moved DataFrame ops to Table. DataFrame is now an interface
e8979ba [Brian Hulette] Refactor DataFrame to extend Vector<StructRow>
6a41d68 [Brian Hulette] clean up table benchmarks
2744c63 [Brian Hulette] Remove Chunked/Simple DataFrame distinction
aa999f8 [Brian Hulette] Add DictionaryVector optimization for equals predicate
4d9e8c0 [Brian Hulette] Add concept of predicates for filtering dataframes
796f45d [Brian Hulette] add DataFrame filter and count ops
30f0330 [Brian Hulette] Add basic DataFrame impl ...
a1edac2 [Brian Hulette] Add perf tests for table scans
d18d915 [Paul Taylor] fix struct and map rows
61dc699 [Paul Taylor] WIP -- refactor types to closer match arrow-cpp
62db338 [Paul Taylor] update dependencies and add es6+ umd targets to jest transform ignore patterns to fix ci
6ff18e9 [Paul Taylor] ship es2015 commonJS in main package to avoid confusion
74e828a [Paul Taylor] fix typings issues (ARROW-1903)
praveenbingo pushed a commit that referenced this pull request Sep 7, 2018
Bootstrap evaluation using llvm code generation

LLVM code generation is done using a mix of:
- glue IR code that loops over the vector, generates function
  calls (and)
- byte-code files generated from simple c++ functions using 
  clang (emit-llvm).
The glue-code and pre-compiled byte code are merged and 
optimized together.

Expressions are specified using a "tree builder" where each 
node is an arrow vector, or a binary/unary function.

During code generation, the expressions are "decomposed" so 
that the value array and bitmap array are evaluated separately 
to compute the expression result. This avoids the use of too 
many branch/conditional instructions (checks for "if null"), and
hence, can be vectorized efficiently.

Support added for arithmetic and logical expressions on 
numeric types.

Travis CI support added for build on ubuntu.
praveenbingo pushed a commit that referenced this pull request Mar 16, 2019
https://issues.apache.org/jira/browse/ARROW-3965

This creates an object which configures the BaseAllocator and Calendar used during the translation from a JDBC ResultSet to an Arrow vector.
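
As a rough usage sketch (not part of this commit): the names JdbcToArrowConfigBuilder, JdbcToArrowConfig and the sqlToArrow(Connection, String, JdbcToArrowConfig) overload come from the commit list below, while RootAllocator and the exact builder signature are assumptions, so the real API may differ slightly.

```java
import java.sql.Connection;
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;

import org.apache.arrow.adapter.jdbc.JdbcToArrow;
import org.apache.arrow.adapter.jdbc.JdbcToArrowConfig;
import org.apache.arrow.adapter.jdbc.JdbcToArrowConfigBuilder;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.VectorSchemaRoot;

// Hypothetical sketch: build a config carrying the allocator and calendar once,
// then hand it to the sqlToArrow overload instead of passing them separately.
public class JdbcConfigSketch {
  static VectorSchemaRoot readTable(Connection connection) throws Exception {
    Calendar utcCalendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"), Locale.ROOT);
    JdbcToArrowConfig config =
        new JdbcToArrowConfigBuilder(new RootAllocator(Long.MAX_VALUE), utcCalendar).build();
    return JdbcToArrow.sqlToArrow(connection, "SELECT * FROM my_table", config);
  }
}
```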

Author: Mike Pigott <mpigott@gmail.com>
Author: Michael Pigott <mikepigott@users.noreply.github.com>

Closes apache#3133 from mikepigott/jdbc-to-arrow-config and squashes the following commits:

be95426 <Mike Pigott> ARROW-3965: JDBC-To-Arrow Config Builder javadocs.
d6c64a7 <Mike Pigott> ARROW-3965: JdbcToArrowConfigBuilder
d7ca982 <Mike Pigott> Merge branch 'master' into jdbc-to-arrow-config
789c8c8 <Michael Pigott> Merge pull request #4 from apache/master
e5b19ee <Michael Pigott> Merge pull request #3 from apache/master
3b17c29 <Michael Pigott> Merge pull request #2 from apache/master
5b1b364 <Mike Pigott> Merge branch 'master' into jdbc-to-arrow-config
881c6c8 <Michael Pigott> Merge pull request #1 from apache/master
bb3165b <Mike Pigott> Updating the function calls to use the JdbcToArrowConfig versions.
68c91e7 <Mike Pigott> Modifying the jdbcToArrowSchema and jdbcToArrowVectors methods to receive JdbcToArrowConfig objects.
8d6cf00 <Mike Pigott> Documentation for public static VectorSchemaRoot sqlToArrow(Connection connection, String query, JdbcToArrowConfig config)
4f1260c <Mike Pigott> Adding documentation for public static VectorSchemaRoot sqlToArrow(ResultSet resultSet, JdbcToArrowConfig config)
df632e3 <Mike Pigott> Updating the SQL tests to include JdbcToArrowConfig versions.
b270044 <Mike Pigott> Updated validaton & documentation, and unit tests for the new JdbcToArrowConfig.
da77cbe <Mike Pigott> Creating a configuration class for the JDBC-to-Arrow converter.
praveenbingo pushed a commit that referenced this pull request Mar 16, 2019
https://issues.apache.org/jira/browse/ARROW-3923

Hello!  I was reading through the JDBC source code and I noticed that a java.util.Calendar was required for creating an Arrow Schema and Arrow Vectors from a JDBC ResultSet, when none is required.

This change makes the Calendar optional.

Unit Tests:
The existing SureFire plugin configuration uses a UTC calendar for the database, which is the default Calendar in the existing code.  Likewise, no changes to the unit tests are required to provide adequate coverage for the change.

Author: Michael Pigott <mikepigott@users.noreply.github.com>
Author: Mike Pigott <mpigott@gmail.com>

Closes apache#3066 from mikepigott/jdbc-timestamp-no-calendar and squashes the following commits:

4d95da0 <Mike Pigott> ARROW-3923: Supporting a null Calendar in the config, and reverting the breaking change.
cd9a230 <Mike Pigott> Merge branch 'master' into jdbc-timestamp-no-calendar
509a1cc <Michael Pigott> Merge pull request #5 from apache/master
789c8c8 <Michael Pigott> Merge pull request #4 from apache/master
e5b19ee <Michael Pigott> Merge pull request #3 from apache/master
3b17c29 <Michael Pigott> Merge pull request #2 from apache/master
881c6c8 <Michael Pigott> Merge pull request #1 from apache/master
089cff4 <Mike Pigott> Format fixes
a58a4a5 <Mike Pigott> Fixing calendar usage.
e12832a <Mike Pigott> Allowing for timestamps without a time zone.
praveenbingo pushed a commit that referenced this pull request Mar 16, 2019
https://issues.apache.org/jira/browse/ARROW-3966

This change includes apache#3133, and supports a new configuration item called "Include Metadata." If true, metadata from the JDBC ResultSetMetaData object is carried over to the Schema Field Metadata (a usage sketch follows the list below). For now, this includes:
* Catalog Name
* Table Name
* Column Name
* Column Type Name
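
A hedged sketch of how the flag might be switched on through the builder; the setIncludeMetadata name is an assumption inferred from the "includeMetadata" field mentioned in the commit list, so the actual method name may differ:

```java
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;

import org.apache.arrow.adapter.jdbc.JdbcToArrowConfig;
import org.apache.arrow.adapter.jdbc.JdbcToArrowConfigBuilder;
import org.apache.arrow.memory.RootAllocator;

// Hypothetical: enable column metadata propagation on the configuration builder.
public class JdbcMetadataConfigSketch {
  static JdbcToArrowConfig buildConfig() {
    Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"), Locale.ROOT);
    return new JdbcToArrowConfigBuilder(new RootAllocator(Long.MAX_VALUE), utc)
        .setIncludeMetadata(true) // assumed setter name for the "Include Metadata" option
        .build();
  }
}
```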

Author: Mike Pigott <mpigott@gmail.com>
Author: Michael Pigott <mikepigott@users.noreply.github.com>

Closes apache#3134 from mikepigott/jdbc-column-metadata and squashes the following commits:

02f2f34 <Mike Pigott> ARROW-3966: Picking up lost change to support null calendars.
7049c36 <Mike Pigott> Merge branch 'master' into jdbc-column-metadata
e9a9b2b <Michael Pigott> Merge pull request #6 from apache/master
65741a9 <Mike Pigott> ARROW-3966: Code review feedback
cc6cc88 <Mike Pigott> ARROW-3966: Using a 1:N loop instead of a 0:N-1 loop for fewer index offsets in code.
cfb2ba6 <Mike Pigott> ARROW-3966: Using a helper method for building a UTC calendar with root locale.
2928513 <Mike Pigott> ARROW-3966: Moving the metadata flag assignment into the builder.
69022c2 <Mike Pigott> ARROW-3966: Fixing merge.
4a6de86 <Mike Pigott> Merge branch 'master' into jdbc-column-metadata
509a1cc <Michael Pigott> Merge pull request #5 from apache/master
789c8c8 <Michael Pigott> Merge pull request #4 from apache/master
e5b19ee <Michael Pigott> Merge pull request #3 from apache/master
3b17c29 <Michael Pigott> Merge pull request #2 from apache/master
d847ebc <Mike Pigott> Fixing file location
1ceac9e <Mike Pigott> Merge branch 'master' into jdbc-column-metadata
881c6c8 <Michael Pigott> Merge pull request #1 from apache/master
03091a8 <Mike Pigott> Unit tests for including result set metadata.
72d64cc <Mike Pigott> Affirming the field metadata is empty when the configuration excludes field metadata.
7b4527c <Mike Pigott> Test for the include-metadata flag in the configuration.
7e9ce37 <Mike Pigott> Merge branch 'jdbc-to-arrow-config' into jdbc-column-metadata
bb3165b <Mike Pigott> Updating the function calls to use the JdbcToArrowConfig versions.
a6fb1be <Mike Pigott> Fixing function call
5bfd6a2 <Mike Pigott> Merge branch 'jdbc-to-arrow-config' into jdbc-column-metadata
68c91e7 <Mike Pigott> Modifying the jdbcToArrowSchema and jdbcToArrowVectors methods to receive JdbcToArrowConfig objects.
b5b0cb1 <Mike Pigott> Merge branch 'jdbc-to-arrow-config' into jdbc-column-metadata
8d6cf00 <Mike Pigott> Documentation for public static VectorSchemaRoot sqlToArrow(Connection connection, String query, JdbcToArrowConfig config)
4f1260c <Mike Pigott> Adding documentation for public static VectorSchemaRoot sqlToArrow(ResultSet resultSet, JdbcToArrowConfig config)
e34a9e7 <Mike Pigott> Fixing formatting.
fe097c8 <Mike Pigott> Merge branch 'jdbc-to-arrow-config' into jdbc-column-metadata
df632e3 <Mike Pigott> Updating the SQL tests to include JdbcToArrowConfig versions.
b270044 <Mike Pigott> Updated validaton & documentation, and unit tests for the new JdbcToArrowConfig.
da77cbe <Mike Pigott> Creating a configuration class for the JDBC-to-Arrow converter.
a78c770 <Mike Pigott> Updating Javadocs.
523387f <Mike Pigott> Updating the API to support an optional 'includeMetadata' field.
5af1b5b <Mike Pigott> Separating out the field-type creation from the field creation.
praveenbingo pushed a commit that referenced this pull request Apr 1, 2019
…mpute module

Author: Nicolas Trinquier <nstq@protonmail.ch>
Author: Nicolas Trinquier <ntrinquier@users.noreply.github.com>
Author: Neville Dipale <nevilledips@gmail.com>

Closes apache#3741 from ntrinquier/ARROW-4605 and squashes the following commits:

344379a <Nicolas Trinquier> Initialize vectors with a capacity
257d235 <Nicolas Trinquier> Add support for null values in limit and filter
f0578f6 <Nicolas Trinquier> Add tests for limit and filter with BinaryArray
728884b <Nicolas Trinquier> Merge pull request #1 from nevi-me/ARROW-4605
58d1f5c <Nicolas Trinquier> Merge branch 'ARROW-4605' into ARROW-4605
5a1047c <Nicolas Trinquier> Name variables consistently
2e9616b <Nicolas Trinquier> Add documentation for the limit function
2f44a8a <Nicolas Trinquier> Use the size of the array as limit instead of returning an error
6422e18 <Neville Dipale> cargo fmt
2a389a3 <Neville Dipale> create BinaryArray directly from byte slice to prevent converting to String > &str > &
b20ea6d <Nicolas Trinquier> Do bound checking in limit function
32a2f85 <Nicolas Trinquier> Add tests for limit and filter
0ca0412 <Nicolas Trinquier> Rewrite filter and limit using macros
d216fa0 <Nicolas Trinquier> Move filter and limit to array_ops
pravindra pushed a commit that referenced this pull request Sep 9, 2019
This updates the language in `install_arrow()` to follow the README revision that will land in https://github.com/apache/arrow/pull/4948/files#diff-563b2cb2c8c2d51b2ff6b177e2d84286R33.

The [Jira ticket](https://issues.apache.org/jira/browse/ARROW-6142) requested three things; this is `#2` in the list. On `#1`, I defer to the C++ installation docs, which are already included in the install_arrow message, rather than duplicating content here. `#3` is out of scope.

Closes apache#5027 from nealrichardson/no-ppa and squashes the following commits:

80b142e <Neal Richardson> s/arrow/Arrow/
44c9659 <Neal Richardson> Tweak language again
36cfe28 <Neal Richardson> Further linux install revisions
79bd7e0 <Neal Richardson> One more PPurge
63f75bd <Neal Richardson> Revise install_arrow instructions for Linux

Authored-by: Neal Richardson <neal.p.richardson@gmail.com>
Signed-off-by: Sutou Kouhei <kou@clear-code.com>
projjal pushed a commit that referenced this pull request Mar 8, 2020
According to the discussion in apache#4993 (comment), we often encounter this scenario: we compare values repeatedly, and the comparisons differ only in their parameters (vector to compare, start index, etc.).

According to the current API, we have to create a new RangeEqualVisitor object each time the comparison is performed. This leads to non-trivial performance overhead.

To address this problem, we make the RangeEqualVisitor reusable, and allow the client to change parameters of an existing visitor.
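
As a rough illustration of the reuse pattern described above (the visitor, Range holder, and method names here are assumptions based on the commit list, e.g. "Move out Range from the visitor params", so the exact API may differ):

```java
import org.apache.arrow.vector.IntVector;
import org.apache.arrow.vector.compare.Range;
import org.apache.arrow.vector.compare.RangeEqualsVisitor;

// Hypothetical sketch: construct the visitor once for a pair of vectors and
// reuse it for repeated comparisons, passing only the range parameters each time.
public class RangeCompareSketch {
  static boolean compareTwice(IntVector left, IntVector right) {
    RangeEqualsVisitor visitor = new RangeEqualsVisitor(left, right);
    boolean headEqual = visitor.rangeEquals(new Range(0, 0, 16));   // first 16 slots
    boolean tailEqual = visitor.rangeEquals(new Range(64, 64, 16)); // reuse, new range
    return headEqual && tailEqual;
  }
}
```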

Closes apache#5195 from liyafan82/fly_0826_reuse and squashes the following commits:

ffe0e6a <liyafan82> Merge pull request #1 from pravindra/pull-5195
073bc78 <Pindikura Ravindra> Test: Move out Range from the visitor params
7482414 <liyafan82>  Wrapper visit parameters into a pojo
53c1e0b <liyafan82> Merge branch 'master' into fly_0826_reuse
a1f7046 <liyafan82>  Make range equal visitor reusable

Lead-authored-by: liyafan82 <fan_li_ya@foxmail.com>
Co-authored-by: Pindikura Ravindra <ravindra@dremio.com>
Co-authored-by: liyafan82 <42827532+liyafan82@users.noreply.github.com>
Signed-off-by: Pindikura Ravindra <ravindra@dremio.com>
projjal pushed a commit that referenced this pull request Mar 13, 2020
According to the discussion in apache#4993 (comment), we often encounter this scenario: we compare values repeatedly, and the comparisons differ only in their parameters (vector to compare, start index, etc.).

According to the current API, we have to create a new RangeEqualVisitor object each time the comparison is performed. This leads to non-trivial performance overhead.

To address this problem, we make the RangeEqualVisitor reusable, and allow the client to change parameters of an existing visitor.

Closes apache#5195 from liyafan82/fly_0826_reuse and squashes the following commits:

ffe0e6a <liyafan82> Merge pull request #1 from pravindra/pull-5195
073bc78 <Pindikura Ravindra> Test: Move out Range from the visitor params
7482414 <liyafan82>  Wrapper visit parameters into a pojo
53c1e0b <liyafan82> Merge branch 'master' into fly_0826_reuse
a1f7046 <liyafan82>  Make range equal visitor reusable

Lead-authored-by: liyafan82 <fan_li_ya@foxmail.com>
Co-authored-by: Pindikura Ravindra <ravindra@dremio.com>
Co-authored-by: liyafan82 <42827532+liyafan82@users.noreply.github.com>
Signed-off-by: Pindikura Ravindra <ravindra@dremio.com>
projjal pushed a commit that referenced this pull request Mar 13, 2020
…comments.

The reset method allows the data structures to be re-used so they don't have to be allocated over and over again.

Closes apache#6430 from richardartoul/ra/merge-upstream and squashes the following commits:

5a08281 <Richard Artoul> Add license to test file
d76be05 <Richard Artoul> Add test for data reset
d102b1f <Richard Artoul> Add tests
d3e6e67 <Richard Artoul> cleanup comments
c8525ae <Richard Artoul> Add Reset method to int array (#5)
489ca25 <Richard Artoul> Fix array.setData() to retain before release (#4)
88cd05f <Richard Artoul> Add reset method to Data (#3)
6d1b277 <Richard Artoul> Add Reset() method to String array (#2)
dca2303 <Richard Artoul> Add Reset method to buffer and cleanup comments (#1)

Lead-authored-by: Richard Artoul <richard.artoul@datadoghq.com>
Co-authored-by: Richard Artoul <richardartoul@gmail.com>
Signed-off-by: Sebastien Binet <binet@cern.ch>
pprudhvi pushed a commit that referenced this pull request May 26, 2020
This PR enables tests for `ARROW_COMPUTE`, `ARROW_DATASET`, `ARROW_FILESYSTEM`, `ARROW_HDFS`, `ARROW_ORC`, and `ARROW_IPC` (default on). apache#7131 enabled a minimal set of tests as a starting point.

I confirmed that these tests pass locally with the current master. In the current TravisCI environment, we cannot see this result due to a lot of error messages in `arrow-utility-test`.

```
$ git log | head -1
commit ed5f534
% ctest
...
      Start  1: arrow-array-test
 1/51 Test  #1: arrow-array-test .....................   Passed    4.62 sec
      Start  2: arrow-buffer-test
 2/51 Test  #2: arrow-buffer-test ....................   Passed    0.14 sec
      Start  3: arrow-extension-type-test
 3/51 Test  #3: arrow-extension-type-test ............   Passed    0.12 sec
      Start  4: arrow-misc-test
 4/51 Test  #4: arrow-misc-test ......................   Passed    0.14 sec
      Start  5: arrow-public-api-test
 5/51 Test  #5: arrow-public-api-test ................   Passed    0.12 sec
      Start  6: arrow-scalar-test
 6/51 Test  #6: arrow-scalar-test ....................   Passed    0.13 sec
      Start  7: arrow-type-test
 7/51 Test  #7: arrow-type-test ......................   Passed    0.14 sec
      Start  8: arrow-table-test
 8/51 Test  #8: arrow-table-test .....................   Passed    0.13 sec
      Start  9: arrow-tensor-test
 9/51 Test  #9: arrow-tensor-test ....................   Passed    0.13 sec
      Start 10: arrow-sparse-tensor-test
10/51 Test #10: arrow-sparse-tensor-test .............   Passed    0.16 sec
      Start 11: arrow-stl-test
11/51 Test #11: arrow-stl-test .......................   Passed    0.12 sec
      Start 12: arrow-concatenate-test
12/51 Test #12: arrow-concatenate-test ...............   Passed    0.53 sec
      Start 13: arrow-diff-test
13/51 Test #13: arrow-diff-test ......................   Passed    1.45 sec
      Start 14: arrow-c-bridge-test
14/51 Test #14: arrow-c-bridge-test ..................   Passed    0.18 sec
      Start 15: arrow-io-buffered-test
15/51 Test #15: arrow-io-buffered-test ...............   Passed    0.20 sec
      Start 16: arrow-io-compressed-test
16/51 Test #16: arrow-io-compressed-test .............   Passed    3.48 sec
      Start 17: arrow-io-file-test
17/51 Test #17: arrow-io-file-test ...................   Passed    0.74 sec
      Start 18: arrow-io-hdfs-test
18/51 Test #18: arrow-io-hdfs-test ...................   Passed    0.12 sec
      Start 19: arrow-io-memory-test
19/51 Test #19: arrow-io-memory-test .................   Passed    2.77 sec
      Start 20: arrow-utility-test
20/51 Test #20: arrow-utility-test ...................***Failed    5.65 sec
      Start 21: arrow-threading-utility-test
21/51 Test #21: arrow-threading-utility-test .........   Passed    1.34 sec
      Start 22: arrow-compute-compute-test
22/51 Test #22: arrow-compute-compute-test ...........   Passed    0.13 sec
      Start 23: arrow-compute-boolean-test
23/51 Test #23: arrow-compute-boolean-test ...........   Passed    0.15 sec
      Start 24: arrow-compute-cast-test
24/51 Test #24: arrow-compute-cast-test ..............   Passed    0.22 sec
      Start 25: arrow-compute-hash-test
25/51 Test #25: arrow-compute-hash-test ..............   Passed    2.61 sec
      Start 26: arrow-compute-isin-test
26/51 Test #26: arrow-compute-isin-test ..............   Passed    0.81 sec
      Start 27: arrow-compute-match-test
27/51 Test #27: arrow-compute-match-test .............   Passed    0.40 sec
      Start 28: arrow-compute-sort-to-indices-test
28/51 Test #28: arrow-compute-sort-to-indices-test ...   Passed    3.33 sec
      Start 29: arrow-compute-nth-to-indices-test
29/51 Test #29: arrow-compute-nth-to-indices-test ....   Passed    1.51 sec
      Start 30: arrow-compute-util-internal-test
30/51 Test #30: arrow-compute-util-internal-test .....   Passed    0.13 sec
      Start 31: arrow-compute-add-test
31/51 Test #31: arrow-compute-add-test ...............   Passed    0.12 sec
      Start 32: arrow-compute-aggregate-test
32/51 Test #32: arrow-compute-aggregate-test .........   Passed   14.70 sec
      Start 33: arrow-compute-compare-test
33/51 Test #33: arrow-compute-compare-test ...........   Passed    7.96 sec
      Start 34: arrow-compute-take-test
34/51 Test #34: arrow-compute-take-test ..............   Passed    4.80 sec
      Start 35: arrow-compute-filter-test
35/51 Test #35: arrow-compute-filter-test ............   Passed    8.23 sec
      Start 36: arrow-dataset-dataset-test
36/51 Test #36: arrow-dataset-dataset-test ...........   Passed    0.25 sec
      Start 37: arrow-dataset-discovery-test
37/51 Test #37: arrow-dataset-discovery-test .........   Passed    0.13 sec
      Start 38: arrow-dataset-file-ipc-test
38/51 Test #38: arrow-dataset-file-ipc-test ..........   Passed    0.21 sec
      Start 39: arrow-dataset-file-test
39/51 Test #39: arrow-dataset-file-test ..............   Passed    0.12 sec
      Start 40: arrow-dataset-filter-test
40/51 Test #40: arrow-dataset-filter-test ............   Passed    0.16 sec
      Start 41: arrow-dataset-partition-test
41/51 Test #41: arrow-dataset-partition-test .........   Passed    0.13 sec
      Start 42: arrow-dataset-scanner-test
42/51 Test #42: arrow-dataset-scanner-test ...........   Passed    0.20 sec
      Start 43: arrow-filesystem-test
43/51 Test #43: arrow-filesystem-test ................   Passed    1.62 sec
      Start 44: arrow-hdfs-test
44/51 Test #44: arrow-hdfs-test ......................   Passed    0.13 sec
      Start 45: arrow-feather-test
45/51 Test #45: arrow-feather-test ...................   Passed    0.91 sec
      Start 46: arrow-ipc-read-write-test
46/51 Test #46: arrow-ipc-read-write-test ............   Passed    5.77 sec
      Start 47: arrow-ipc-json-simple-test
47/51 Test #47: arrow-ipc-json-simple-test ...........   Passed    0.16 sec
      Start 48: arrow-ipc-json-test
48/51 Test #48: arrow-ipc-json-test ..................   Passed    0.27 sec
      Start 49: arrow-json-integration-test
49/51 Test #49: arrow-json-integration-test ..........   Passed    0.13 sec
      Start 50: arrow-json-test
50/51 Test #50: arrow-json-test ......................   Passed    0.26 sec
      Start 51: arrow-orc-adapter-test
51/51 Test #51: arrow-orc-adapter-test ...............   Passed    1.92 sec

98% tests passed, 1 tests failed out of 51

Label Time Summary:
arrow-tests      =  27.38 sec (27 tests)
arrow_compute    =  45.11 sec (14 tests)
arrow_dataset    =   1.21 sec (7 tests)
arrow_ipc        =   6.20 sec (3 tests)
unittest         =  79.91 sec (51 tests)

Total Test time (real) =  79.99 sec

The following tests FAILED:
	 20 - arrow-utility-test (Failed)
Errors while running CTest
```

Closes apache#7142 from kiszk/ARROW-8754

Authored-by: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
Signed-off-by: Sutou Kouhei <kou@clear-code.com>
pprudhvi pushed a commit that referenced this pull request May 26, 2020
…lure on big-endian platforms

This PR gets element data using an endianless API in FlatBuffers instead of getting a pointer. This fixes a failure of TestPlasmaSerialization.DeleteReply in plasma-serialization-tests.

Without this PR
```
1: [==========] Running 14 tests from 1 test case.
1: [----------] Global test environment set-up.
1: [----------] 14 tests from TestPlasmaSerialization
1: [ RUN      ] TestPlasmaSerialization.CreateRequest
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-kk8t88p9/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.CreateRequest (2 ms)
1: [ RUN      ] TestPlasmaSerialization.CreateReply
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-97gspx5v/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.CreateReply (0 ms)
1: [ RUN      ] TestPlasmaSerialization.SealRequest
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-dkksx76p/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.SealRequest (1 ms)
1: [ RUN      ] TestPlasmaSerialization.SealReply
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-oqbs9vm0/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.SealReply (0 ms)
1: [ RUN      ] TestPlasmaSerialization.GetRequest
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-d7q6h5q4/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.GetRequest (1 ms)
1: [ RUN      ] TestPlasmaSerialization.GetReply
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-sxsncs72/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.GetReply (1 ms)
1: [ RUN      ] TestPlasmaSerialization.ReleaseRequest
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-njc3g3b5/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.ReleaseRequest (0 ms)
1: [ RUN      ] TestPlasmaSerialization.ReleaseReply
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-917ybxmo/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.ReleaseReply (1 ms)
1: [ RUN      ] TestPlasmaSerialization.DeleteRequest
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-1kwauefv/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.DeleteRequest (0 ms)
1: [ RUN      ] TestPlasmaSerialization.DeleteReply
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-4ftq28pq/fileXXXXXX'
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:271: Failure
1: Value of: error_vec[0] == PlasmaError::ObjectExists
1:   Actual: false
1: Expected: true
1: [  FAILED  ] TestPlasmaSerialization.DeleteReply (1 ms)
1: [ RUN      ] TestPlasmaSerialization.EvictRequest
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-vl97870w/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.EvictRequest (0 ms)
1: [ RUN      ] TestPlasmaSerialization.EvictReply
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-3am9a6rv/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.EvictReply (1 ms)
1: [ RUN      ] TestPlasmaSerialization.DataRequest
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-plye5tmm/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.DataRequest (0 ms)
1: [ RUN      ] TestPlasmaSerialization.DataReply
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma/test/serialization_tests.cc:87: file path: '/tmp/ser-test-mbu6lqsq/fileXXXXXX'
1: [       OK ] TestPlasmaSerialization.DataReply (1 ms)
1: [----------] 14 tests from TestPlasmaSerialization (9 ms total)
1:
1: [----------] Global test environment tear-down
1: [==========] 14 tests from 1 test case ran. (9 ms total)
1: [  PASSED  ] 13 tests.
1: [  FAILED  ] 1 test, listed below:
1: [  FAILED  ] TestPlasmaSerialization.DeleteReply
1:
1:  1 FAILED TEST
1: /home/ishizaki/Arrow/arrow/cpp/src/plasma
1/3 Test #1: plasma-serialization-tests .......***Failed    0.27 sec
...
3/3 Test #3: plasma-external-store-tests ......   Passed    0.46 sec
```

With this PR
```
$ ctest
Test project /home/ishizaki/Arrow/arrow/cpp/src/plasma
    Start 1: plasma-serialization-tests
1/3 Test #1: plasma-serialization-tests .......   Passed    0.26 sec
    Start 2: plasma-client-tests
2/3 Test #2: plasma-client-tests ..............   Passed   14.99 sec
    Start 3: plasma-external-store-tests
3/3 Test #3: plasma-external-store-tests ......   Passed    0.49 sec

100% tests passed, 0 tests failed out of 3

Label Time Summary:
plasma-tests    =  15.74 sec (3 tests)
unittest        =  15.74 sec (3 tests)

Total Test time (real) =  15.74 sec
```

Closes apache#7148 from kiszk/ARROW-8759

Authored-by: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
Signed-off-by: Antoine Pitrou <antoine@python.org>
projjal pushed a commit that referenced this pull request Jun 2, 2022
TODOs:
Convert cheat sheet to PDF and hide slide #1.

Closes apache#12445 from pachadotdev/patch-4

Lead-authored-by: Stephanie Hazlitt <stephhazlitt@gmail.com>
Co-authored-by: Pachá <mvargas@dcc.uchile.cl>
Co-authored-by: Mauricio Vargas <mavargas11@uc.cl>
Co-authored-by: Pachá <mavargas11@uc.cl>
Signed-off-by: Jonathan Keane <jkeane@gmail.com>