Work in progress on migrating TensorFlow.jl to Julia 1.0.0 #419

Merged 64 commits on Aug 29, 2018
Changes from 35 commits
Commits (64)
c324df3
Fixes is_windows deprecation
adamryczkowski Aug 19, 2018
c661209
Fixes deprecations in the mv command
adamryczkowski Aug 19, 2018
70d50d7
Replace info, warn & error with the macros
adamryczkowski Aug 19, 2018
ac2ad35
Replace info, warn & error with the macros
adamryczkowski Aug 19, 2018
c2e9b5e
Replace Void into Cvoid to avoid deprecation
adamryczkowski Aug 19, 2018
cb0bfec
Add Nullable dependency
adamryczkowski Aug 19, 2018
03e058b
Add Nullable dependency
adamryczkowski Aug 19, 2018
d404c71
Another deprecation fix
adamryczkowski Aug 19, 2018
6a6f45f
Another deprecation fix with ...
adamryczkowski Aug 19, 2018
7b7c132
Complex64, Coplex128 -> ComplexF64, ComplexF128
adamryczkowski Aug 19, 2018
447fef1
Complex64, Coplex128 -> ComplexF32, ComplexF64
adamryczkowski Aug 19, 2018
3022797
Complex64, Coplex128 -> ComplexF32, ComplexF64
adamryczkowski Aug 19, 2018
06b9a38
Iterator for OperationIterator converted to 1.0 syntax
adamryczkowski Aug 19, 2018
42c4be7
Iterator for OperationIterator converted to 1.0 syntax
adamryczkowski Aug 19, 2018
df56552
Fix deprecations
femtocleaner[bot] Aug 21, 2018
038cf86
fix up broadcasting
oxinabox Aug 22, 2018
416dae4
fix math
oxinabox Aug 22, 2018
b8270fc
put using Nullables in the places needed
oxinabox Aug 22, 2018
70316ef
use 0.7 travis ci
oxinabox Aug 22, 2018
42c2447
Merge pull request #1 from adamryczkowski/fbot/deps
adamryczkowski Aug 23, 2018
630c957
Another batch of deprecation fixes towards Julia 1.0.0
adamryczkowski Aug 23, 2018
9a0662d
Another batch of deprecation fixes until the error with the JLD package
adamryczkowski Aug 24, 2018
ed59ea7
New batch of updates
adamryczkowski Aug 24, 2018
0aba8cf
Merged with current upstream master
adamryczkowski Aug 24, 2018
415a1d7
fix REQUIRE
oxinabox Aug 24, 2018
3f3b137
Proposed fix to the c_deallocator[] problem
adamryczkowski Aug 24, 2018
990a395
Another batch of deprecation fixes
adamryczkowski Aug 24, 2018
03f8ffe
Merge branch 'master' of https://github.com/adamryczkowski/TensorFlow…
oxinabox Aug 24, 2018
52cd03e
Remove deps/build.log
adamryczkowski Aug 24, 2018
5885cee
fix convert constructors
oxinabox Aug 24, 2018
10101cd
fix eltype
oxinabox Aug 24, 2018
45deb6a
fix node_name(::Operation)
oxinabox Aug 24, 2018
f1b5848
remove buildlog
oxinabox Aug 24, 2018
9dc09f5
Adds gitignore rule against adding deps/build.log
adamryczkowski Aug 24, 2018
9494c1a
Stuck at 'Tensorflow error: Status: Input 'ref' passed double expecte…
adamryczkowski Aug 24, 2018
8e8725d
fix reading attributes
oxinabox Aug 25, 2018
113433a
remove some deps
oxinabox Aug 25, 2018
67a6048
Error properly
oxinabox Aug 25, 2018
64be76d
Random Julia 1.0 compatibility fixes in the examples
adamryczkowski Aug 26, 2018
53a68cb
Merge branch 'master' of github.com:adamryczkowski/TensorFlow.jl
adamryczkowski Aug 26, 2018
006a3f2
Stuck at missing Array{TensorFlow.Port,1}(::Int64) specialization
adamryczkowski Aug 26, 2018
61d1646
logistic.jl runs. Only 2 errors to solve in the unit tests are left...
adamryczkowski Aug 26, 2018
0a9dcc3
Fix until problem with the 'Hello, TensorFlow' returned empty String
adamryczkowski Aug 26, 2018
f06ae94
more correct string conversions
oxinabox Aug 27, 2018
bf4a5b5
Correct 8 byte offset for reading strings
oxinabox Aug 27, 2018
ab32d21
Changes before rebasing with master
adamryczkowski Aug 27, 2018
f797e3a
NodeDef from CodeUnits
oxinabox Aug 27, 2018
c77359b
merge with master
adamryczkowski Aug 27, 2018
e31d140
Merge branch 'master' of github.com:adamryczkowski/TensorFlow.jl
adamryczkowski Aug 27, 2018
04e6f6e
Stuck at Cannot Version: ImageMagick 6.8.9-9 Q16 x86_64 2018-07-10 h…
adamryczkowski Aug 27, 2018
b7a8506
Removes RecordIteratorState type
adamryczkowski Aug 28, 2018
fc9add4
Cumulative batch of deprecation fixes
adamryczkowski Aug 28, 2018
3bf08ba
Fix build on OS X.
malmaud Aug 28, 2018
8c5c576
Explicitly call 'convert' for Arrays.
malmaud Aug 28, 2018
1ecc98d
Fix issues in math.jl.
malmaud Aug 28, 2018
9bd1c60
More broadcasting fixes.
malmaud Aug 28, 2018
8e0c538
Fix meta.
malmaud Aug 28, 2018
835de9a
Fix squeeze to dropdims.
malmaud Aug 28, 2018
5539580
Fixes to nn.jl
malmaud Aug 28, 2018
ac7981b
Shape inference.
malmaud Aug 28, 2018
1b79036
Fix transformations.
malmaud Aug 28, 2018
06ddcf4
Various fixes.
malmaud Aug 28, 2018
b30079a
Disable allowed failures
malmaud Aug 28, 2018
111178c
Update travis
malmaud Aug 28, 2018
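
Two of the commits above ("Iterator for OperationIterator converted to 1.0 syntax") track the Julia 0.7/1.0 switch from the `start`/`next`/`done` iteration protocol to a single `iterate` function. A minimal sketch of that pattern on a toy type (not TensorFlow.jl's actual `OperationIterator`):

```julia
# Toy iterator illustrating the 1.0 protocol: `iterate` returns either
# `nothing` (finished) or an (item, state) tuple.
struct CountDown
    from::Int
end

Base.iterate(c::CountDown, state = c.from) =
    state < 1 ? nothing : (state, state - 1)

Base.length(c::CountDown) = c.from
Base.eltype(::Type{CountDown}) = Int

collect(CountDown(3))  # [3, 2, 1]
```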
3 changes: 2 additions & 1 deletion .travis.yml
@@ -6,7 +6,8 @@ os:
- osx
- linux
julia:
- 0.6
- 0.7
- 1.0
- nightly
env:
- CONDA_JL_VERSION="2" PYTHON=""
2 changes: 1 addition & 1 deletion README.md
@@ -116,7 +116,7 @@ saver = train.Saver()
# Run training
run(sess, global_variables_initializer())
checkpoint_path = mktempdir()
info("Checkpoint files saved in $checkpoint_path")
@info("Checkpoint files saved in $checkpoint_path")
for epoch in 1:100
cur_loss, _ = run(sess, [Loss, minimize_op], Dict(X=>x, Y_obs=>y))
println(@sprintf("Current loss is %.2f.", cur_loss))
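
The `info` → `@info` edit above follows the Julia 0.7/1.0 move of `info`/`warn` logging to the `@info`/`@warn`/`@error` macros. A small standalone sketch (the messages and path are illustrative only):

```julia
# Logging macros ship with Base in Julia 1.0; string interpolation is unchanged.
checkpoint_path = mktempdir()
@info("Checkpoint files saved in $checkpoint_path")
@warn("Training not started yet")   # replaces warn(...)
```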
7 changes: 5 additions & 2 deletions REQUIRE
@@ -1,14 +1,17 @@
julia 0.6
julia 0.7
ProtoBuf 0.3.0
PyCall 1.7.1
TakingBroadcastSeriously 0.1.1
Conda 0.6.0
Distributions 0.10.2
StatsFuns 0.3.0
SpecialFunctions v0.7.0
JLD2 0.0.6
FileIO 0.1.2
Juno 0.2.3
Compat 0.18
MacroTools 0.3.6
AutoHashEquals 0.1.0
Nullables 0.0.7
MNIST 0.0.2

Collaborator comment:
This file is wrong now.

Nullables 0.0.7
SpecialFunctions 0.7.0
1 change: 1 addition & 0 deletions deps/.gitignore
@@ -0,0 +1 @@
build.log
22 changes: 11 additions & 11 deletions deps/build.jl
@@ -9,8 +9,8 @@ const cur_py_version = "1.8.0"
# Error message for Windows
############################

if is_windows()
error("TensorFlow.jl does not support Windows. Please see https://github.com/malmaud/TensorFlow.jl/issues/204")
if Sys.iswindows()
@error("TensorFlow.jl does not support Windows. Please see https://github.com/malmaud/TensorFlow.jl/issues/204")
end

############################
@@ -19,15 +19,15 @@ end

use_gpu = "TF_USE_GPU" ∈ keys(ENV) && ENV["TF_USE_GPU"] == "1"

if is_apple() && use_gpu
warn("No support for TF_USE_GPU on OS X - to enable the GPU, build TensorFlow from source. Falling back to CPU")
if Sys.isapple() && use_gpu
@warn("No support for TF_USE_GPU on OS X - to enable the GPU, build TensorFlow from source. Falling back to CPU")
use_gpu=false
end

if use_gpu
info("Building TensorFlow.jl for use on the GPU")
@info("Building TensorFlow.jl for use on the GPU")
else
info("Building TensorFlow.jl for CPU use only. To enable the GPU, set the TF_USE_GPU environment variable to 1 and rebuild TensorFlow.jl")
@info("Building TensorFlow.jl for CPU use only. To enable the GPU, set the TF_USE_GPU environment variable to 1 and rebuild TensorFlow.jl")
end


@@ -45,7 +45,7 @@ else
# See if it works already
catch ee
typeof(ee) <: PyCall.PyError || rethrow(ee)
error("""
@error("""
Python TensorFlow not installed
Please either:
- Rebuild PyCall to use Conda, by running in the julia REPL:
@@ -79,7 +79,7 @@ function download_and_unpack(url)
run(`tar -xzf $tensorflow_zip_path -C downloads`)
end

@static if is_apple()
@static if Sys.isapple()
if use_gpu
url = "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-darwin-x86_64-$cur_version.tar.gz"
else
@@ -90,13 +90,13 @@ end
mv("$lib_dir/libtensorflow_framework.so", "usr/bin/libtensorflow_framework.so", remove_destination=true)
end

@static if is_linux()
@static if Sys.islinux()
if use_gpu
url = "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-$cur_version.tar.gz"
else
url = "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-$cur_version.tar.gz"
end
download_and_unpack(url)
mv("$lib_dir/libtensorflow.so", "usr/bin/libtensorflow.so", remove_destination=true)
mv("$lib_dir/libtensorflow_framework.so", "usr/bin/libtensorflow_framework.so", remove_destination=true)
mv("$lib_dir/libtensorflow.so", "usr/bin/libtensorflow.so", force=true)
mv("$lib_dir/libtensorflow_framework.so", "usr/bin/libtensorflow_framework.so", force=true)
end
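
The hunks above track several 0.6 → 1.0 renames: the platform predicates moved into `Sys` (`is_windows()` → `Sys.iswindows()`, and so on), logging moved to macros, and `mv`'s `remove_destination` keyword became `force`. A hedged, self-contained sketch of those patterns (messages and file names are illustrative, not the build script's):

```julia
# Platform predicates now live in the Sys module.
if Sys.iswindows()
    @warn("illustrative message: Windows is not supported here")
elseif Sys.islinux() || Sys.isapple()
    @info("building for kernel $(Sys.KERNEL)")
end

# `mv` overwrite keyword: remove_destination=true (0.6) became force=true (1.0).
src, dst = tempname(), tempname()
write(src, "payload")
mv(src, dst, force=true)

# Caveat: `@error(...)` only logs; unlike `error(...)` it does not throw,
# so a hard stop still needs an explicit `error(...)`.
```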
2 changes: 1 addition & 1 deletion docs/src/logistic.md
@@ -39,7 +39,7 @@ saver = train.Saver()
# Run training
run(sess, global_variables_initializer())
checkpoint_path = mktempdir()
info("Checkpoint files saved in $checkpoint_path")
@info("Checkpoint files saved in $checkpoint_path")
for epoch in 1:100
cur_loss, _ = run(sess, (Loss, minimize_op), Dict(X=>x, Y_obs=>y))
println(@sprintf("Current loss is %.2f.", cur_loss))
8 changes: 4 additions & 4 deletions docs/src/tutorial.md
@@ -64,7 +64,7 @@ end
### Evaluate the model

```julia
correct_prediction = indmax(y, 2) .== indmax(y_, 2)
correct_prediction = argmax(y, 2) .== argmax(y_, 2)
accuracy=reduce_mean(cast(correct_prediction, Float32))
testx, testy = load_test_set()

@@ -137,7 +137,7 @@ cross_entropy = reduce_mean(-reduce_sum(y_.*log(y_conv), axis=[2]))

train_step = train.minimize(train.AdamOptimizer(1e-4), cross_entropy)

correct_prediction = indmax(y_conv, 2) .== indmax(y_, 2)
correct_prediction = argmax(y_conv, 2) .== argmax(y_, 2)

accuracy = reduce_mean(cast(correct_prediction, Float32))

@@ -147,12 +147,12 @@ for i in 1:1000
batch = next_batch(loader, 50)
if i%100 == 1
train_accuracy = run(session, accuracy, Dict(x=>batch[1], y_=>batch[2], keep_prob=>1.0))
info("step $i, training accuracy $train_accuracy")
@info("step $i, training accuracy $train_accuracy")
end
run(session, train_step, Dict(x=>batch[1], y_=>batch[2], keep_prob=>.5))
end

testx, testy = load_test_set()
test_accuracy = run(session, accuracy, Dict(x=>testx, y_=>testy, keep_prob=>1.0))
info("test accuracy $test_accuracy")
@info("test accuracy $test_accuracy")
```
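
The `indmax` → `argmax` edits above follow the Base rename in Julia 0.7/1.0 (here applied to TensorFlow.jl's tensor op, which takes the reduction dimension positionally). For reference, a minimal sketch of the same accuracy computation on plain arrays with toy data:

```julia
# `argmax` is the 1.0 name for 0.6's `indmax`; row-wise via a comprehension.
y     = [0.1 0.7 0.2;           # predicted class scores, one row per sample
         0.8 0.1 0.1]
y_obs = [0.0 1.0 0.0;           # one-hot ground-truth labels
         1.0 0.0 0.0]

pred  = [argmax(y[i, :])     for i in 1:size(y, 1)]
truth = [argmax(y_obs[i, :]) for i in 1:size(y_obs, 1)]
accuracy = count(pred .== truth) / length(pred)   # 1.0 for this toy data
```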
4 changes: 2 additions & 2 deletions examples/ae.jl
@@ -77,15 +77,15 @@ history = MVHistory()
push!(history,:loss_val, epoch, val_loss)
plot(history, reuse=true)
scatter3d(center[:,1],center[:,2],center[:,3], zcolor = testy, legend=false, title="Latent space", reuse=true)
info("step $epoch, training loss $train_loss, time taken: $(printtime(t0))")
@info("step $epoch, training loss $train_loss, time taken: $(printtime(t0))")
train.save(saver, session, joinpath(checkpoint_path, "ae_mnist"), global_step=epoch)
end
run(session, train_step, Dict(x=>batch))
end
end

test_loss, center, reconstruction = run(session, [loss_MSE, l_z, l_out], Dict(x=>testx))
info("test accuracy $test_loss")
@info("test accuracy $test_loss")

# Plot som example reconstructions
offset = 0
4 changes: 2 additions & 2 deletions examples/logistic.jl
@@ -5,7 +5,7 @@ using Distributions
x = randn(100, 50)
w = randn(50, 10)
y_prob = exp.(x*w)
y_prob ./= sum(y_prob,2)
y_prob ./= sum(y_prob,dims=2)

function draw(probs)
y = zeros(size(probs))
@@ -39,7 +39,7 @@ saver = train.Saver()
# Run training
run(sess, global_variables_initializer())
checkpoint_path = mktempdir()
info("Checkpoint files saved in $checkpoint_path")
@info("Checkpoint files saved in $checkpoint_path")
for epoch in 1:100
cur_loss, _ = run(sess, [Loss, minimize_op], Dict(X=>x, Y_obs=>y))
println(@sprintf("Current loss is %.2f.", cur_loss))
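
The `sum(y_prob, 2)` → `sum(y_prob, dims=2)` change above reflects Julia 1.0 moving reduction dimensions into a `dims` keyword (`sum`, `maximum`, `prod`, and friends). A standalone sketch with made-up data:

```julia
# Normalise each row to sum to one, using the 1.0 `dims` keyword.
y_prob = exp.(randn(4, 3))
y_prob ./= sum(y_prob, dims=2)          # 0.6 spelling: sum(y_prob, 2)
@assert all(isapprox.(sum(y_prob, dims=2), 1.0))
```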
6 changes: 3 additions & 3 deletions examples/mnist_full.jl
@@ -62,7 +62,7 @@ end

train_step = train.minimize(train.AdamOptimizer(1e-4), cross_entropy)

correct_prediction = indmax(y_conv, 2) .== indmax(y_, 2)
correct_prediction = argmax(y_conv, 2) .== argmax(y_, 2)

accuracy = reduce_mean(cast(correct_prediction, Float32))

@@ -72,13 +72,13 @@ for i in 1:200
batch = next_batch(loader, 50)
if i%100 == 1
train_accuracy = run(session, accuracy, Dict(x=>batch[1], y_=>batch[2], keep_prob=>1.0))
info("step $i, training accuracy $train_accuracy")
@info("step $i, training accuracy $train_accuracy")
end
run(session, train_step, Dict(x=>batch[1], y_=>batch[2], keep_prob=>.5))
end

testx, testy = load_test_set()
test_accuracy = run(session, accuracy, Dict(x=>testx, y_=>testy, keep_prob=>1.0))
info("test accuracy $test_accuracy")
@info("test accuracy $test_accuracy")

visualize()
2 changes: 1 addition & 1 deletion examples/mnist_simple.jl
@@ -18,7 +18,7 @@ y = nn.softmax(x*W + b)
cross_entropy = reduce_mean(-reduce_sum(y_ .* log(y), axis=[2]))
train_step = train.minimize(train.GradientDescentOptimizer(.00001), cross_entropy)

correct_prediction = indmax(y, 2) .== indmax(y_, 2)
correct_prediction = argmax(y, 2) .== argmax(y_, 2)
accuracy=reduce_mean(cast(correct_prediction, Float32))

for i in 1:1000
9 changes: 6 additions & 3 deletions src/TensorFlow.jl
@@ -1,7 +1,6 @@
__precompile__(true)
module TensorFlow

warn("Loading a new version of TensorFlow.jl for the first time. This initial load can take around 5 minutes as code is precompiled; subsequent usage will only take a few seconds.")
@warn("Loading a new version of TensorFlow.jl for the first time. This initial load can take around 5 minutes as code is precompiled; subsequent usage will only take a few seconds.")

export
Graph,
@@ -129,8 +128,12 @@ tf_versioninfo

const pyproc = Ref(0)

function deallocator(data, len, arg)

end

function __init__()
c_deallocator[] = cfunction(deallocator, Void, (Ptr{Void}, Csize_t, Ptr{Void}))
c_deallocator[] = @cfunction(deallocator, Cvoid, (Ptr{Cvoid}, Csize_t, Ptr{Cvoid}))
end

function load_python_process(;force_reload=false)
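
The `__init__` hunk above swaps the removed `cfunction` for the `@cfunction` macro and `Void` for `Cvoid`. A minimal self-contained sketch of the same pattern, with toy names rather than the package's actual deallocator wiring:

```julia
# A no-op C-callable deallocator: the C library would invoke this when it is
# done with a buffer we handed over; here there is nothing to free.
function my_deallocator(data::Ptr{Cvoid}, len::Csize_t, arg::Ptr{Cvoid})
    return nothing
end

const my_deallocator_ptr = Ref{Ptr{Cvoid}}(C_NULL)

function init_deallocator()
    # @cfunction yields a C function pointer; the types use Cvoid, not Void.
    my_deallocator_ptr[] = @cfunction(my_deallocator, Cvoid,
                                      (Ptr{Cvoid}, Csize_t, Ptr{Cvoid}))
end

init_deallocator()
```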