WIP: Boolean Network Randomization #196
base: master
Conversation
Needs a test suite to figure out where I inevitably implemented it incorrectly
Merging Myles' completed (but not unit-tested) C-Sens code
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #196       +/-   ##
===========================================
- Coverage   99.14%   82.04%    -17.10%
===========================================
  Files          16       21         +5
  Lines        1397     1821       +424
===========================================
+ Hits         1385     1494       +109
- Misses         12      327       +315
```
Continue to review full report at Codecov.
This looks cool! I skimmed the code briefly, but I haven't tried it yet. Is the idea that we will have explicit generation schemes for some popular combinations of constraints (making them more efficient), but the user can also ask for arbitrary combinations of constraints, in which case we just sample until we find networks that meet the constraints? Is there some fundamental difference between "dynamic" and "topological" randomizations?
Thanks. It took a little bit of toying to find something I thought might work. The jury is still out as to whether it was a success. We have the Capstone students working on unit testing it, but it's ready to be evaluated now if you want to toy with it. I'm sure there are some rough edges that we could smooth.
Yep. That's pretty much it. Specific randomization schemes can be implemented by deriving from the base classes. If there's some constraint you'd like to satisfy but can't come up with a clean algorithm for, you can just use rejection testing: write a function that takes a network and returns `True` if the constraint is satisfied (and `False` otherwise), then add it as a constraint.
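For instance, a rejection-testing constraint could be as simple as a predicate function. A hypothetical sketch: the predicate `has_no_self_loops` is illustrative and not part of neet; only `network_graph()`, the `constraints` keyword, and the randomizer classes appear elsewhere in this thread.

```python
import networkx as nx
from neet.boolean.examples import myeloid
from neet.boolean.random import dynamics, topology

def has_no_self_loops(net):
    # Reject any variant whose wiring diagram contains a self-loop.
    return nx.number_of_selfloops(net.network_graph()) == 0

gen = dynamics.MeanBias(myeloid, trand=topology.MeanDegree,
                        constraints=[has_no_self_loops])
```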
There is. The older code mixed up randomizing the topology with randomizing the dynamics: if you wanted to add something new, you'd have to do a lot of work creating functions that randomize both. Now those two concerns are (almost) separate. You can create a new topological randomizer and then use a pre-existing randomizer for the dynamics. At least that's the idea. Of course, if you randomize the topology you have to randomize the dynamics, so the primary interface is actually a dynamical randomizer which takes a topological randomizer at construction. Also, topological randomizers produce networkx graphs, not neet networks.

No Constraints

```python
In [1]: from itertools import islice
In [2]: from neet.boolean.examples import myeloid
In [3]: from neet.boolean.random import *
In [4]: gen = dynamics.MeanBias(myeloid, trand=topology.MeanDegree)
In [5]: gen.random()
Out[5]: <neet.boolean.logicnetwork.LogicNetwork at 0x7cab81379080>
In [6]: list(islice(gen, 5))
Out[6]:
[<neet.boolean.logicnetwork.LogicNetwork at 0x7cab812c1b70>,
<neet.boolean.logicnetwork.LogicNetwork at 0x7cab812c1940>,
<neet.boolean.logicnetwork.LogicNetwork at 0x7cab812c1780>,
<neet.boolean.logicnetwork.LogicNetwork at 0x7cab812d40b8>,
<neet.boolean.logicnetwork.LogicNetwork at 0x7cab812c1630>]
In [7]: gen = dynamics.MeanBias(myeloid) # trand=topology.FixedTopology
In [8]: gen.random()
Out[8]: <neet.boolean.logicnetwork.LogicNetwork at 0x7cab8127de48>
In [9]: gen = dynamics.LocalBias(myeloid, trand=topology.MeanDegree)
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-9-4f2f21128acd> in <module>
----> 1 gen = dynamics.LocalBias(myeloid, trand=topology.MeanDegree)
~/neet/neet/boolean/random/dynamics.py in __init__(self, network, trand, **kwargs)
202 elif trand is not None:
203 if isclass(trand) and not issubclass(trand, (FixedTopology, InDegree)):
--> 204 raise NotImplementedError(trand)
205 elif not isclass(trand) and not isinstance(trand, (FixedTopology, InDegree)):
206 raise NotImplementedError(type(trand))
NotImplementedError: <class 'neet.boolean.random.topology.MeanDegree'>
```

With Constraints

Constraints can be added during initialization or after the fact, and can be applied to the dynamics or to the topology:

```python
In [15]: gen = dynamics.MeanBias(myeloid, trand=topology.MeanDegree)
In [16]: sum(map(lambda net: nx.is_weakly_connected(net.network_graph()),
...: islice(gen, 1000)))
Out[16]: 976
In [17]: gen.trand.add_constraint(nx.is_weakly_connected)
In [18]: sum(map(lambda net: nx.is_weakly_connected(net.network_graph()),
...: islice(gen, 1000)))
Out[18]: 1000
```

Constraints can be functions (as above), or they can be objects inheriting from one of the constraint classes. The latter is preferable because it makes adding constraints a bit easier: topological constraints can be delegated to the topology randomizer rather than enforced at the network level, which means fewer timeout errors (inspired by some code that @bcdaniels wrote).

```python
In [19]: gen = dynamics.MeanBias(myeloid, trand=topology.MeanDegree,
...: constraints=[constraints.IsConnected()])
In [20]: sum(map(lambda net: nx.is_weakly_connected(net.network_graph()),
    ...:          islice(gen, 1000)))
```

There are certain kinds of constraints that are really difficult to get right; that is, they like to time out. Enforcing canalyzing nodes is such a constraint. I wanted it to be a proper constraint, but that just doesn't work. Instead, there is a `FixCanalizingMixin`:

```python
In [22]: class CanalizingUniformBias(dynamics.FixCanalizingMixin, dynamics.UniformBias):
...: pass
...:
In [23]: gen = CanalizingUniformBias(myeloid, trand=topology.MeanDegree)
```

Canalization could be done with something I'm calling …
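As an aside, a new topological randomizer under this architecture might look something like the following. This is a hypothetical sketch: the base-class name `topology.TopologyRandomizer`, the `random()` override, and the `self.network` attribute are all guesses at the contract rather than the actual API, so check `neet/boolean/random/topology.py` for the real one.

```python
import networkx as nx
from neet.boolean.random import topology

class ErdosRenyi(topology.TopologyRandomizer):  # base-class name is a guess
    """Sample a digraph with the same node and edge counts (sketch)."""
    def random(self):
        g = self.network.network_graph()  # assumes this attribute exists
        n, m = g.number_of_nodes(), g.number_of_edges()
        # A real implementation would also apply any registered
        # topological constraints; this sketch skips that step.
        return nx.gnm_random_graph(n, m, directed=True)
```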
P.S. The examples above make me think we should override the `__str__` method for network classes. It'd be nice if, instead of just the class name and address in memory, it printed some basic information about the network, e.g. number of nodes, number of edges, metadata, etc.
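Something along these lines, perhaps — a minimal sketch written as a free function, where only `network_graph()` is taken from this thread and everything else is illustrative:

```python
def describe(net):
    # What a Network.__str__ override might report: the class name plus
    # basic size information, instead of a bare memory address.
    g = net.network_graph()
    return '{}(nodes={}, edges={})'.format(
        type(net).__name__, g.number_of_nodes(), g.number_of_edges())
```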
I agree with your P.S., that would be nice.

One question: there's no guarantee that the randomized networks will be different from one another, is there?
Seems like good architecture to me
@hbsmith, no, we don't guarantee that the generated variants are unique. You could do that, if you wanted to, using a constraint. Maybe something like...

```python
class Uniqueness(DynamicConstraint):
    def __init__(self):
        self.observed = []

    def satisfies(self, net):
        if super().satisfies(net):
            for seen in self.observed:
                if seen == net:  # object.__eq__ would have to be overridden
                    return False
            self.observed.append(net)
            return True
        return False
```

If you only cared about ensuring that the topologies were not isomorphic:

```python
class Uniqueness(TopologicalConstraint):
    def __init__(self):
        self.observed = []

    def satisfies(self, graph):
        if super().satisfies(graph):
            for seen in self.observed:
                if nx.is_isomorphic(seen, graph):
                    return False
            self.observed.append(graph)
            return True
        return False
```

In either case, you'd use it like this:

```python
gen = UniformBias(myeloid, constraints=[Uniqueness()])
```
@hbsmith Actually, you might want something more general... something like

```python
class Uniqueness(DynamicConstraint):
    def __init__(self, compare=None):
        self.observed = []
        self.compare = compare

    def __compare(self, a, b):
        if self.compare is not None:
            return self.compare(a, b)
        else:
            return a == b  # object.__eq__ would have to be overridden

    def satisfies(self, net):
        if super().satisfies(net):
            for seen in self.observed:
                if self.__compare(seen, net):
                    return False
            self.observed.append(net)
            return True
        return False
```

This way the user could override what it means for two networks to be equal. Maybe you want to ensure that no two generated networks have the same mean bias:

```python
# Pretending Network.mean_bias exists... it should, but it doesn't.
# compare should return True when two networks count as duplicates,
# so "same mean bias" is ==, not !=.
Uniqueness(lambda a, b: a.mean_bias == b.mean_bias)
```
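A usage sketch tying this back to the earlier session. This is hypothetical, since `mean_bias` doesn't exist yet; `dynamics.UniformBias`, `topology.MeanDegree`, `myeloid`, and the `constraints` keyword all appear earlier in the thread.

```python
from itertools import islice
from neet.boolean.examples import myeloid
from neet.boolean.random import dynamics, topology

# Draw five variants, no two of which share a (hypothetical) mean bias.
unique = Uniqueness(lambda a, b: a.mean_bias == b.mean_bias)
gen = dynamics.UniformBias(myeloid, trand=topology.MeanDegree,
                           constraints=[unique])
nets = list(islice(gen, 5))
```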
Still need to add genericTopological test cases and genericDynamical cases, along with a few other edge cases
Will update the rest when my computer is set up later tonight
Random net tests
Description
We desperately need features for randomizing networks, Boolean networks in particular. This pull request introduces a hierarchy of network randomizers.
These changes will break support for Python 2.7.
Closes #139
Type of change
Breaking change (fix or feature that would cause existing functionality to not work as expected)