
Nim and Futhark #11843

Closed
develooper1994 opened this issue Jul 28, 2019 · 7 comments

Comments

@develooper1994

Summary

There is a language similar to Nim: Futhark. Futhark also generates optimized parallel executable code through C/C++/Python/C#/CUDA/OpenCL backends, but Futhark aims to be a domain-specific language.

Description

These languages could cooperate with each other, in my opinion.

Alternatives

Additional Information

@mratsim mratsim added the RFC label Jul 28, 2019
@mratsim
Collaborator

mratsim commented Jul 28, 2019

Tagging RFC: "Request for Cooperation" ;).

I don't see how a cooperation would work.
I think introducing a CUDA and OpenCL backend to Nim would make more sense if GPU computation is needed.

Unfortunately, no one applied for that GSoC proposal in 2014, 2015, or 2016.

Can you tell us more about how you envision such a cooperation?

I think this discussion makes more sense on the forum, but if members of the Futhark community want to speak up, I will leave this thread open for the moment.

@awr1
Contributor

awr1 commented Jul 28, 2019

I've never heard of this language before but it sounds interesting. You could probably write a DSL in Nim to emit Futhark code, although given Nim's philosophy and goals in general I think more people are interested in writing systems in Nim that directly compile to OpenCL/CUDA C/GLSL/Vectorized CPU code, etc.

@krux02
Contributor

krux02 commented Jul 29, 2019

Well, I have heard about this language before, but I don't necessarily like it. The language tries to find its niche in "compute-intensive parts of an application". But it ignores the most important principle of programming languages: while languages are designed for different use cases and different languages are better for different tasks, the programmer who learns a language uses that language for everything, whether or not it is the best fit for the job, because it is the only language he knows, or the language he knows best. And that is why Futhark is doomed to fail, in my opinion. You don't change your programming language for one piece of a project, especially not the piece that requires performance. You want to write the high-performance code in the language you are most familiar with, even if that means shoehorning.

Futhark is not intended to replace existing general-purpose languages. The intended use case is that Futhark is only used for relatively small but compute-intensive parts of an application

But that doesn't mean that Futhark doesn't have nice ideas on how to tackle GPGPU computing. We might indeed learn something from it. But I also think that the paradigms of Nim and Futhark clash quite a lot, because Nim is not a "purely functional array language".

Futhark is not designed for graphics programming, but instead uses the compute power of the GPU to accelerate data-parallel array computations

This is another problem for Futhark, because here it has to compete with CUDA. NVidia has its own fork of Clang that allows writing compute kernels in C++, so people can use C++ for everything; they don't need to switch to another language just for the high-performance computing part.

@develooper1994
Author

develooper1994 commented Jul 29, 2019

I know it. The programming paradigms don't align with Nim, but the backend principles are similar. Nim could also produce CUDA/OpenCL in a procedural way, in my opinion, but there is no need to struggle with compiling to OpenCL/CUDA. Futhark seems to have good foresight about parallel executable code.
I don't like C++ for parallel execution. As a program grows, the C++ code becomes more complex.
The two languages share the same idea: reduce mental overhead. Good interaction would make it easy to produce native parallel code without working with FFI.
From the user's perspective, the FFI should look like Python's and Julia's FFI.

@krux02
Contributor

krux02 commented Jul 29, 2019

I have an experimental GLSL backend for Nim that is written as a macro library.
https://github.com/krux02/opengl-sandbox/blob/master/experiment/hello_triangle.nim

@mratsim
Collaborator

mratsim commented Jul 29, 2019

I had a look into Futhark for my in-depth deep learning compiler optimization research, and more generally while looking through domain-specific languages for GPU computing (mratsim/Arraymancer#347 (comment)).

As shown in the issue I raised for Futhark benchmarks (diku-dk/futhark-benchmarks#9) 2 weeks ago, there are no benchmarks against expert-tuned kernels, unlike other DSLs with GPU backends, and the main author expects Futhark to be much slower.

Furthermore, Futhark lacks a multidimensional representation of buffers/algorithms, which is key for scientific computing and image processing. This makes it very verbose (see some example neural network layers), compared to a DSL based on Einstein summation like the one I'm planning in Laser.

Besides, it doesn't meet @krux02's needs for GPU computing: shaders, textures, rendering, and game-related processing.

What it is strong at is offloading array computations to the GPU, including map/reduce/scan/filter. I.e., it's the GPU equivalent of zero-functional.

In short of the 3 areas of GPU computing I identified, Futhark is weak at two and strong at one:
Weak at

  • Scientific computing, Tensor/Matrix and image processing
  • Shaders, Texture processing, Rendering compute

Strong at

  • Parallel iterators, map/reduce, 1-dimensional "range" processing

Anyway, I don't see a concrete path forward for collaboration in the main Nim compiler repo besides a library similar to nimpy, so I'm closing the thread. That doesn't prevent the discussion from continuing, though.

@mratsim mratsim closed this as completed Jul 29, 2019
@tokyovigilante

tokyovigilante commented Nov 2, 2024

Oh actually this is an error from the generated wrapper:

struct_timespec_1157628478 {.pure, inheritable, bycopy.} = object
    tv_sec*: time_t_1157628789 ## Generated based on /usr/include/bits/alltypes.h:229:8
    anon0* {.bitsize: 0'i64.}: cint
    tv_nsec*: clong
    anon1* {.bitsize: 0'i64.}: cint

converting this wild struct with padding:

#if defined(__NEED_struct_timespec) && !defined(__DEFINED_struct_timespec)
struct timespec {
    time_t tv_sec;
    int :8*(sizeof(time_t)-sizeof(long))*(__BYTE_ORDER==4321);
    long tv_nsec;
    int :8*(sizeof(time_t)-sizeof(long))*(__BYTE_ORDER!=4321);
};
#define __DEFINED_struct_timespec
#endif

Seems a trivial fix is just to redefine it to:

  struct_timespec_1157628478 {.pure, inheritable, bycopy.} = object
    tv_sec*: int64
    tv_nsec*: int

I'm using Alpine (musl libc), by the way, which is possibly why it's a weird define. Anyway, with that fixed, the wrapper works a treat!

Nov 02 20:58:23.886 Starting Tsunami...
Nov 02 20:58:23.887 Using Pipewire 1.2.6
