
Added Intel Copyright #1

Closed
wants to merge 3 commits into from

Conversation

@ghost commented Sep 9, 2014

doc: Added a line with an Intel Copyright

@ghost (Author) commented Sep 10, 2014

Closing this due to problems in the local repo. Will try again shortly.

@ghost closed this Sep 10, 2014
plebioda referenced this pull request in plebioda/pmdk Nov 3, 2014
common: add exports for all libs in unittest.sh
krzycz referenced this pull request in krzycz/pmdk Nov 10, 2014
With this change, the user may provide extra CFLAGS/LDFLAGS using
the following syntax:

Example #1:
> make EXTRA_CFLAGS=-finstrument-functions <target>

Example #2:
> CFLAGS=-finstrument-functions make <target>
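
A build typically folds such variables in with a simple append. A minimal Makefile sketch of that idea (an assumption for illustration; the EXTRA_LDFLAGS name and the exact pattern are not taken from this repository's build system):

```
# Hypothetical fragment: append user-supplied extra flags to the build flags.
# EXTRA_CFLAGS comes from the make command line (Example #1); CFLAGS may
# also be inherited from the environment (Example #2).
CFLAGS += $(EXTRA_CFLAGS)
LDFLAGS += $(EXTRA_LDFLAGS)
```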
lplewa referenced this pull request in lplewa/pmdk Oct 14, 2016
As we have 6 jobs and 4 concurrent threads in Travis, our job schedule
is presented in the figure below:

#1 ======
#2 ======
#3 ========================
#4 ========================
#5       ========================
#6       ======

the new order will look like this:

#3 ========================
#4 ========================
#5 ========================
#1 ======
#2       ======
#6             ======

which gives us a '======' improvement (~13 minutes at the time of this commit)
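
To make the arithmetic concrete, here is a small C sketch (not part of the commit; job durations are illustrative units read off the figures above) that replays both orderings on 4 greedy workers:

```
#include <stdio.h>

/* Greedy simulation: each queued job goes to the worker that frees up first. */
static int makespan(const int *jobs, int n)
{
	int busy[4] = { 0 };	/* finish time of each of the 4 workers */

	for (int i = 0; i < n; i++) {
		int w = 0;
		for (int j = 1; j < 4; j++)
			if (busy[j] < busy[w])
				w = j;	/* worker w frees up the soonest */
		busy[w] += jobs[i];
	}

	int max = 0;
	for (int j = 0; j < 4; j++)
		if (busy[j] > max)
			max = busy[j];
	return max;
}

int main(void)
{
	/* durations in arbitrary units, read off the figures above */
	int old_order[] = { 6, 6, 24, 24, 24, 6 };	/* jobs #1..#6 */
	int new_order[] = { 24, 24, 24, 6, 6, 6 };	/* long jobs first */

	printf("old makespan: %d\n", makespan(old_order, 6));	/* prints 30 */
	printf("new makespan: %d\n", makespan(new_order, 6));	/* prints 24 */
	return 0;
}
```

Running the long jobs first is the classic longest-processing-time-first heuristic; the drop from 30 to 24 units matches the '======' saving shown above.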
krzycz added a commit that referenced this pull request Dec 30, 2016
GBuella added a commit to GBuella/nvml that referenced this pull request Feb 2, 2017
Use an escaped version of character #1 in the
script, used as a placeholder for comment sections
in the parsed C source file input.
krzycz pushed a commit that referenced this pull request Jul 21, 2017
Scripts must start with #!/usr/bin/env <shell> for portability.
Add set -e to top-level scripts.
Add use warnings to perl scripts.
gaweinbergi referenced this pull request in gaweinbergi/pmdk Jul 22, 2017
(This commit removes all previous changes other than the above.)
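
For illustration, a minimal script following these conventions might look like this (a sketch; the build and test commands are hypothetical, not taken from the repository):

```
#!/usr/bin/env bash
# set -e makes the script abort at the first failing command instead of
# carrying on with a broken state.
set -e

make            # hypothetical build step: a failure here ends the run
./run_tests     # hypothetical test step, only reached if make succeeded
```

In Perl, a leading `use warnings;` similarly surfaces problems that would otherwise pass silently.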
krzycz added a commit that referenced this pull request Oct 18, 2017
Tests #2/#3 are specifically for the helgrind/drd runs.
Test #1 should never be executed under valgrind, even if it is
force-enabled via command-line options.

Ref: pmem/issues#664
krzycz added a commit that referenced this pull request Oct 18, 2017
test: disable Valgrind in vmem_multiple_pools #1
GBuella added a commit to GBuella/nvml that referenced this pull request Nov 3, 2017
Ref: pmem/issues#639
Ref: pmem/issues#665

Apparently valgrind just can't handle the way jemalloc
uses mutexes across forks, as demonstrated by this
dummy program:

$ cat dummy.c

#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

pthread_mutex_t dummy = PTHREAD_MUTEX_INITIALIZER;

/* runs in the parent just before fork(): take the lock */
static void prefork(void)
{
	pthread_mutex_lock(&dummy);
}

/* runs in the child after fork(): reinitialize the still-locked mutex */
static void postfork_child(void)
{
	pthread_mutexattr_t attr;

	if (pthread_mutexattr_init(&attr) != 0)
		abort();
	if (pthread_mutex_init(&dummy, &attr) != 0) {
		pthread_mutexattr_destroy(&attr);
		abort();
	}
	pthread_mutexattr_destroy(&attr);
}

/* runs in the parent after fork(): release the lock taken in prefork() */
static void postfork_parent(void)
{
	pthread_mutex_unlock(&dummy);
}

int main(void)
{
	pthread_atfork(prefork,
	    postfork_parent, postfork_child);
	fork();
	pthread_mutex_lock(&dummy);
	pthread_mutex_unlock(&dummy);
	return 0;
}

$ cc -pthread dummy.c -o dummy
$ valgrind --tool=helgrind ./dummy 2>&1 | grep holds
==26890== Thread #1: Exiting thread still holds 1 lock
$ valgrind --tool=drd ./dummy 2>&1 | grep -i mutex
==26898== Mutex reinitialization: mutex 0x30a040, recursion count 1, owner 1.
==26898==    at 0x4C385F0: pthread_mutex_init
==26898== mutex 0x30a040 was first observed at:
==26898==    at 0x4C390D3: pthread_mutex_lock
==26898== Recursive locking not allowed: mutex 0x30a040, recursion count 1, owner 1.
==26898==    at 0x4C390D3: pthread_mutex_lock
==26898== mutex 0x30a040 was first observed at:
==26898==    at 0x4C390D3: pthread_mutex_lock
igchor pushed a commit to igchor/pmdk that referenced this pull request Mar 5, 2018
igchor pushed a commit to igchor/pmdk that referenced this pull request Mar 5, 2018
PietrasMaciej pushed a commit to PietrasMaciej/pmdk that referenced this pull request Dec 14, 2018
Update personal fork with original master
marcinslusarz pushed a commit that referenced this pull request Sep 23, 2019
test (py): generalize obj_basic_integration
This pull request was closed.