v0.0.10 #75

Merged: 56 commits, merged Jan 18, 2024
Changes shown from 1 commit

Commits (56)
205d7cc
init joint solve
cameronmartino Apr 6, 2022
d4c9b99
init joint functions
cameronmartino Jun 20, 2022
1a56d87
update cov calculations
cameronmartino Aug 24, 2022
706f371
flake8 optspace
cameronmartino Oct 27, 2022
f975568
add transformation functions
cameronmartino Oct 27, 2022
6c6f222
joint-opt
cameronmartino Oct 27, 2022
d1b9f8f
add standalone script
cameronmartino Oct 27, 2022
42968bd
add scripts standalone and q2
cameronmartino Nov 7, 2022
32cc024
add scripts testing
cameronmartino Nov 7, 2022
385e745
add testing data scripts
cameronmartino Nov 7, 2022
4885ff2
update flake8 in optspace
cameronmartino Nov 7, 2022
84216d8
update transform docs & param name
cameronmartino Nov 29, 2022
c426be2
add new data projection scripts
cameronmartino Nov 29, 2022
b18105a
add tests for transformer
cameronmartino Nov 29, 2022
28ec87e
fix index name, breaks conversion in q2
cameronmartino Nov 29, 2022
be58dc5
fix cv output to work with qiime2
cameronmartino Dec 1, 2022
8d89232
add utility functions to help with downstream analysis
cameronmartino Dec 1, 2022
a6d50e5
add feature table corr type and function
cameronmartino Dec 1, 2022
ca59e93
fix type issues
cameronmartino Dec 1, 2022
32562e9
add index name to corr for transformer
cameronmartino Dec 1, 2022
4a4a541
first tutorial
cameronmartino Dec 1, 2022
740d91b
add tutorial data/images
cameronmartino Dec 1, 2022
04e52d5
update tutorial one
cameronmartino Dec 1, 2022
ed85e0d
fix new scripts
cameronmartino Dec 2, 2022
afa97e3
add tutorials for joint rpca
cameronmartino Dec 2, 2022
12ec3e3
catch up to master
cameronmartino May 25, 2023
7dfbff7
adding tempted & new sparse tensor class functionality
cameronmartino May 26, 2023
713d415
update tempted wrappers for commands
cameronmartino May 28, 2023
bf813f8
add command and q2
cameronmartino May 29, 2023
4d0c827
add transformations for tempted
cameronmartino May 30, 2023
906bea8
fix input types to composition
cameronmartino Jun 1, 2023
2db0fd0
Merge pull request #73 from biocore/tempted-dev
cameronmartino Nov 25, 2023
c05b8c8
add ability to input pre-transformed tables into joint-rpca
cameronmartino Nov 27, 2023
bde6656
deprecate auto rpca and rank estimation functionality, see issue #70
cameronmartino Nov 27, 2023
8a49969
bug fix for issue #71
cameronmartino Nov 27, 2023
a50053b
add qc on distances see issue #70
cameronmartino Nov 28, 2023
a950c73
add updated functions for testing and visualizing issue #70
cameronmartino Nov 30, 2023
78d5665
update docs for issue #70
cameronmartino Nov 30, 2023
7104e1e
add commands for rpca with cv, see issue #70
cameronmartino Nov 30, 2023
7e4516e
update tutorials and add tempted tutorial
cameronmartino Nov 30, 2023
e976e47
update tutorials and add tempted tutorial
cameronmartino Nov 30, 2023
5b3dc81
changes logged and version up
cameronmartino Nov 30, 2023
d22a004
update readme
cameronmartino Dec 1, 2023
9247a54
update qc and cv intro and code
cameronmartino Dec 9, 2023
4bde03c
update README
cameronmartino Dec 9, 2023
1871ecd
update readme
cameronmartino Dec 11, 2023
c78d069
add updates tests examples of QC
cameronmartino Dec 12, 2023
fe282a2
update confusing flag in standalone command
cameronmartino Dec 13, 2023
00e24e7
make read me header easier to see
cameronmartino Dec 13, 2023
ba404a2
Merge branch 'master' into jointrpca
cameronmartino Dec 13, 2023
176973a
clean up text
cameronmartino Dec 18, 2023
1494b6d
Merge branch 'jointrpca' of https://github.com/biocore/gemelli into j…
cameronmartino Dec 18, 2023
7ad0ee3
fix permanova in tutorial
cameronmartino Jan 8, 2024
397f257
add breaking change
cameronmartino Jan 8, 2024
b42777e
remove unnecessary np.random seed setting
cameronmartino Jan 15, 2024
1fde9cf
add docstring
cameronmartino Jan 15, 2024
adding tempted & new sparse tensor class functionality
cameronmartino committed May 26, 2023

Verified: created on GitHub.com and signed with GitHub’s verified signature (the key has since expired).
commit 7dfbff78c1d533d7734bf7e60ae7d92f99a42b8a
422 changes: 422 additions & 0 deletions gemelli/preprocessing.py

Large diffs are not rendered by default.
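
Since the preprocessing.py diff is collapsed, here is a minimal usage sketch of the new build_sparse class, inferred from test_sparse_tensor.py further down in this commit; the parameter and attribute names are those exercised by the tests, and the toy data is illustrative only.

import numpy as np
import pandas as pd
from biom import Table
from gemelli.preprocessing import build_sparse

# small feature-by-sample table plus per-sample metadata with subject and state columns
data = np.array([[1, 2, 3, 4, 5],
                 [6, 7, 8, 9, 10],
                 [11, 12, 13, 14, 15]])
table = Table(data,
              ['feat1', 'feat2', 'feat3'],
              ['sample1', 'sample2', 'sample3', 'sample4', 'sample5'])
metadata = pd.DataFrame({'individual_id': ['ind1', 'ind2', 'ind2', 'ind1', 'ind1'],
                         'state': [1, 2, 1, 2, 1]},
                        index=['sample1', 'sample2', 'sample3',
                               'sample4', 'sample5'])

# build per-individual tables; replicated states within a subject can be
# resolved with replicate_handling='error', 'random', or 'sum'
bs = build_sparse()
bs.construct(table, metadata, 'individual_id', 'state',
             replicate_handling='sum')

# attributes populated by construct(), as checked in the tests below:
#   bs.table_dereplicated, bs.mf_dereplicated, bs.individual_id_tables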

575 changes: 575 additions & 0 deletions gemelli/tempted.py

Large diffs are not rendered by default.
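
The tempted.py diff is likewise collapsed; below is a rough end-to-end sketch of how the new TEMPTED decomposition is driven, following the calls made in test_tempted.py later in this commit. The attribute names (individual_id_tables_centralized, individual_id_state_orders, feature_order) and the result indexing come from that test; the file paths are placeholders and the metadata is assumed to already carry a numeric time column.

from biom import load_table
from pandas import read_csv
from gemelli.preprocessing import build_sparse
from gemelli.tempted import tempted

# hypothetical inputs: a BIOM table plus metadata with a subject id and a
# numeric time column
table = load_table('table.biom')
metadata = read_csv('metadata.tsv', sep='\t', index_col=0)

# build the per-subject tensor, then run TEMPTED on the centralized tables
sparse_tensor = build_sparse()
sparse_tensor.construct(table, metadata, 'host_subject_id', 'time_points')
tempted_res = tempted(sparse_tensor.individual_id_tables_centralized,
                      sparse_tensor.individual_id_state_orders,
                      sparse_tensor.feature_order)

# per the indexing used in test_tempted.py: tempted_res[0] is the individual
# (subject) loadings, tempted_res[1] the feature loadings, tempted_res[2] the
# state loadings, and tempted_res[4] the variance explained per component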

4 changes: 4 additions & 0 deletions gemelli/tests/data/tempted-coef.csv
@@ -0,0 +1,4 @@
"","x"
"1",231.286771529821
"2",450.589253941152
"3",177.433115399811
1,041 changes: 1,041 additions & 0 deletions gemelli/tests/data/tempted-features-loadings.csv

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions gemelli/tests/data/tempted-individual-loadings.csv
@@ -0,0 +1,3 @@
"","Component 1","Component 2","Component 3"
"subject_one",0.81709351188791,-0.0800381159368038,0.923714100406608
"subject_two",-0.576505154209988,-0.996791803737012,0.383082576881292
4 changes: 4 additions & 0 deletions gemelli/tests/data/tempted-rsq.csv
@@ -0,0 +1,4 @@
"","x"
"1",0.256956016271167
"2",0.198358423636939
"3",0.219506688521573
102 changes: 102 additions & 0 deletions gemelli/tests/data/tempted-state-loadings.csv
@@ -0,0 +1,102 @@
"","Component 1","Component 2","Component 3"
"1",0.0544952116900873,0.252240119332513,0.0309221689046152
"2",0.0542817886148289,0.24663307631775,0.0323000770661606
"3",0.0540721984184947,0.241022480751773,0.0336731312195183
"4",0.0538702732368224,0.235404779764396,0.0350364767526136
"5",0.0536798443682127,0.229776420134214,0.0363852584127268
"6",0.0535047422597069,0.224133848288862,0.0377146203353745
"7",0.053348796554062,0.218473510308124,0.0390197059970491
"8",0.0532158360316583,0.212791851920692,0.0402956582362232
"9",0.0531096886600781,0.207085318505208,0.0415376192638524
"10",0.0530341815820867,0.20135035508987,0.0427407306292424
"11",0.0529931410820792,0.19558340635386,0.0439001332725609
"12",0.0529903926366604,0.189780916626304,0.0450109674775767
"13",0.0530297608650665,0.183939329885888,0.0460683728847881
"14",0.053115069583251,0.178055089760339,0.0470674885176782
"15",0.0532501417492977,0.172124639530438,0.0480034527302039
"16",0.0534387995064893,0.166144422123802,0.0488714032566814
"17",0.0536848641607716,0.160110880119423,0.0496664771986585
"18",0.0539921561717391,0.154020455745978,0.0503838110144125
"19",0.0543644951896939,0.147869590882739,0.0510185405031961
"20",0.0548057000150814,0.141654727058924,0.051565800868252
"21",0.0553195886295395,0.135372305451886,0.0520207265960357
"22",0.0559099781673531,0.129018766892291,0.052378451645258
"23",0.0565806849445006,0.122590551858808,0.0526341092368374
"24",0.0573355244286058,0.116084100479794,0.052782832019313
"25",0.0581783112689858,0.109495852534202,0.0528197519454409
"26",0.0591128592806258,0.102822247451964,0.0527400003955978
"27",0.0601399183097988,0.0960637760619571,0.0525436951907294
"28",0.0612479848159019,0.0892371358514095,0.0522509022140787
"29",0.0624224912991357,0.082363075706233,0.0518866738314674
"30",0.06364886942637,0.0754623441642282,0.0514760617339393
"31",0.0649125500241328,0.0685556894130125,0.0510441170637889
"32",0.0661989631016474,0.0616638592898912,0.0506158902360207
"33",0.0674935378267934,0.0548076012812096,0.0502164310827574
"34",0.0687817025231023,0.0480076625248136,0.049870788771846
"35",0.0700488847068169,0.0412847898073301,0.0496040118331142
"36",0.0712805110438218,0.0346597295663688,0.0494411481793745
"37",0.0724620073756858,0.0281532278892269,0.0494072450486618
"38",0.0735787987026341,0.0217860305144431,0.0495273490777498
"39",0.0746163092025791,0.0155788828270064,0.0498265062391365
"40",0.075559962211088,0.00955252986599619,0.0503297618673003
"41",0.0763951802514312,0.00372771631836664,0.0510621606718283
"42",0.0771073849855037,-0.00187481347937012,0.0520487467295391
"43",0.0776819972588971,-0.00723431553842423,0.0533145634424732
"44",0.0781044370858757,-0.0123300462229084,0.0548846535982822
"45",0.0783601236473732,-0.01714126224401,0.0567840593623514
"46",0.0784344752859845,-0.0216472206662075,0.0590378222410422
"47",0.0783129095269999,-0.0258271788993707,0.0616709830948193
"48",0.0779808430413454,-0.0296603947084734,0.064708582164507
"49",0.077423691674629,-0.033126126205305,0.0681756590240288
"50",0.0766268704561555,-0.0362036318509317,0.0720972526512978
"51",0.0755757935598641,-0.0388721704608759,0.0764984013389471
"52",0.0742643744414227,-0.0411182837695973,0.0813883237896093
"53",0.0727205260371725,-0.0429576441618796,0.0867129620131686
"54",0.0709806605762016,-0.0444132069444225,0.0924024384058786
"55",0.0690811894262223,-0.0455079277770869,0.0983868747154709
"56",0.0670585231446535,-0.0462647626669382,0.104596392046406
"57",0.0649490714345496,-0.0467066679714838,0.110961110894008
"58",0.0627892431936794,-0.0468566003991908,0.117411151086696
"59",0.0606154464654476,-0.0467375170083208,0.123876631806998
"60",0.058464088462933,-0.0463723752065413,0.130287671623049
"61",0.0563715755728951,-0.0457841327518322,0.136574388454464
"62",0.0543743133587786,-0.0449957477519679,0.142666899577585
"63",0.0525087065286628,-0.0440301786650349,0.148495321646491
"64",0.050811158980333,-0.0429103842986555,0.153989770645732
"65",0.04931807376322,-0.041659323810764,0.159080361950721
"66",0.0480658531014367,-0.0402999567087009,0.163697210272595
"67",0.0470908983877689,-0.0388552428515435,0.167770429718604
"68",0.0464296101816714,-0.0373481424469981,0.171230133713345
"69",0.0461183882032593,-0.035801616050752,0.174006435082777
"70",0.0461936313543408,-0.034238624573597,0.176029446001714
"71",0.0466917376893713,-0.0326821292711978,0.177229277980691
"72",0.0476491044465028,-0.0311550917526399,0.177536041923734
"73",0.0491021280105246,-0.0296804739751193,0.17687984808372
"74",0.0510872039569339,-0.0282812382460151,0.175190806075506
"75",0.0536407270048606,-0.026980347224055,0.17239902487068
"76",0.0567990910671472,-0.0258007639155592,0.168434612810684
"77",0.0605883506007173,-0.0247606618881596,0.163247611186337
"78",0.0649932048349642,-0.0238590558903289,0.15686779499213
"79",0.0699880135493719,-0.0230901712323692,0.149344872179805
"80",0.0755471357161356,-0.0224482335708804,0.140728550065709
"81",0.0816449294530861,-0.0219274689154946,0.131068535312417
"82",0.088255752061751,-0.0215221036240838,0.120414533973366
"83",0.0953539599892936,-0.0212263644047035,0.108816251416711
"84",0.102913908877592,-0.0210344783167571,0.0963233924251013
"85",0.110909953522171,-0.0209406727676298,0.0829856611090334
"86",0.119316447877215,-0.0209391755167023,0.0688527609383582
"87",0.128107745085611,-0.0210242146706895,0.0539743947632864
"88",0.13725819743488,-0.0211900186889495,0.0384002647697533
"89",0.146742156400247,-0.0214308163802466,0.0221800725529353
"90",0.156533972618599,-0.0217408369021032,0.00536351900434933
"91",0.166607995879469,-0.0221143097632607,-0.0119996955726209
"92",0.176938575162096,-0.0225454648210893,-0.0298598715388673
"93",0.187500058597364,-0.0230285322846964,-0.0481673098827989
"94",0.198266793498854,-0.023557742711818,-0.0668723122413464
"95",0.209213126333795,-0.0241273270112796,-0.0859251808658302
"96",0.220313402739088,-0.0247315164410536,-0.105276218695476
"97",0.231541967525318,-0.0253645426089069,-0.124875729257644
"98",0.242873164660722,-0.0260206374734365,-0.144674016733465
"99",0.254281337298238,-0.0266940333427752,-0.164621385952595
"100",0.265740827742447,-0.027378962875109,-0.184668142393209
"101",0.277225977474617,-0.0280696590782886,-0.204764592150499
234 changes: 234 additions & 0 deletions gemelli/tests/test_sparse_tensor.py
@@ -0,0 +1,234 @@
import unittest
import pandas as pd
import numpy as np
from biom import Table
from pandas.testing import assert_frame_equal
from gemelli.preprocessing import (build_sparse,
svd_centralize)
from numpy.testing import assert_allclose


class TestBuildSparse(unittest.TestCase):

def setUp(self):

# Create a sample table and metadata for testing
data = np.array([[1, 2, 3, 4, 5],
[6, 7, 8, 9, 10],
[11, 12, 13, 14, 15]])
table = Table(data,
['feat1', 'feat2', 'feat3'],
['sample1', 'sample2',
'sample3', 'sample4',
'sample5'])
metadata = pd.DataFrame({'sample': ['sample1',
'sample2',
'sample3',
'sample4',
'sample5'],
'individual_id': ['ind1',
'ind2',
'ind2',
'ind1',
'ind1'],
'state': [1, 2, 1, 2, 1]})
metadata = metadata.set_index('sample')
self.table = table
self.metadata = metadata

def test_construct_invalid_individual_id(self):
"""
Test the construct method with invalid ID
"""
bs = build_sparse()
with self.assertRaises(ValueError):
bs.construct(self.table.copy(), self.metadata.copy(),
'invalid_id', 'state')

def test_construct_invalid_state_column(self):
"""
Test the construct method with invalid states
"""
bs = build_sparse()
with self.assertRaises(ValueError):
bs.construct(self.table.copy(), self.metadata.copy(),
'individual_id', 'invalid_state')

def test_construct_invalid_table(self):
"""
Test the construct method with invalid table
"""
bs = build_sparse()
with self.assertRaises(ValueError):
bs.construct('invalid_table', self.metadata.copy(),
'individual_id', 'state')

def test_construct_replicate_handling_error(self):
"""
Test the construct method with replicate_handling='error'
"""
bs = build_sparse()
with self.assertRaises(ValueError):
bs.construct(self.table.copy(), self.metadata.copy(),
'individual_id', 'state',
replicate_handling='error')

def test_construct_with_dataframe_table(self):
"""
Test the construct method works with dataframes
"""
bs = build_sparse()
table_df = pd.DataFrame(self.table.matrix_data.toarray(),
self.table.ids('observation'),
self.table.ids('sample'))
bs.construct(table_df,
self.metadata.copy(),
'individual_id', 'state')
# Check if the constructed attributes are set correctly
self.assertTrue(isinstance(bs.table, pd.DataFrame))

def test_construct_replicate_handling_random(self):
"""
Test the construct method with replicate_handling='random'
"""

        # expected result
table_dereplicated_exp = pd.DataFrame(np.array([[1., 4., 2., 3.],
[6., 9., 7., 8.],
[11., 14., 12., 13.]]),
['feat1', 'feat2', 'feat3'],
['sample1', 'sample4',
'sample2', 'sample3'],)
mf_dereplicated_exp = pd.DataFrame(np.array([['ind1', 1],
['ind1', 2],
['ind2', 2],
['ind2', 1]]),
['sample1', 'sample4',
'sample2', 'sample3'],
['individual_id', 'state'])
mf_dereplicated_exp['state'] = mf_dereplicated_exp['state'].astype(int)
t1 = np.array([[1., 4.], [6., 9.], [11., 14.]])
t2 = np.array([[3., 2.], [8., 7.], [13., 12.]])
individual_id_tables_exp = {'ind1': pd.DataFrame(t1,
['feat1',
'feat2',
'feat3'],
['sample1',
'sample4']),
'ind2': pd.DataFrame(t2,
['feat1',
'feat2',
'feat3'],
['sample3',
'sample2'])}
individual_id_tables_exp['ind1'].index.name = None
individual_id_tables_exp['ind2'].index.name = None

# run and test
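        # identity transform and pseudo_count=0 keep raw counts, so the
        # hand-built expected tables above match exactly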
bs = build_sparse()
bs.construct(self.table.copy(),
self.metadata.copy(),
'individual_id', 'state',
replicate_handling='random',
transformation=lambda x: x,
pseudo_count=0)
# test dataframes are the same
assert_frame_equal(table_dereplicated_exp, bs.table_dereplicated)
bs.mf_dereplicated.index.name = None
assert_frame_equal(mf_dereplicated_exp, bs.mf_dereplicated)
bs.individual_id_tables['ind1'].columns.name = None
bs.individual_id_tables['ind2'].columns.name = None
bs.individual_id_tables['ind1'].index.name = None
bs.individual_id_tables['ind2'].index.name = None
assert_frame_equal(individual_id_tables_exp['ind1'],
bs.individual_id_tables['ind1'])
assert_frame_equal(individual_id_tables_exp['ind2'],
bs.individual_id_tables['ind2'])

def test_construct_replicate_handling_sum(self):
"""
Test the construct method with replicate_handling='sum'
"""

        # expected result
table_dereplicated_exp = pd.DataFrame(np.array([[6., 4., 2., 3.],
[16., 9., 7., 8.],
[26., 14., 12., 13.]]),
['feat1', 'feat2', 'feat3'],
['sample1', 'sample4',
'sample2', 'sample3'],)
mf_dereplicated_exp = pd.DataFrame(np.array([['ind1', 1],
['ind1', 2],
['ind2', 2],
['ind2', 1]]),
['sample1', 'sample4',
'sample2', 'sample3'],
['individual_id', 'state'])

mf_dereplicated_exp['state'] = mf_dereplicated_exp['state'].astype(int)
t1 = np.array([[6., 4.], [16., 9.], [26., 14.]])
t2 = np.array([[3., 2.], [8., 7.], [13., 12.]])
individual_id_tables_exp = {'ind1': pd.DataFrame(t1,
['feat1',
'feat2',
'feat3'],
['sample1',
'sample4']),
'ind2': pd.DataFrame(t2,
['feat1',
'feat2',
'feat3'],
['sample3',
'sample2'])}

# run and test
bs = build_sparse()
bs.construct(self.table.copy(),
self.metadata.copy(),
'individual_id', 'state',
replicate_handling='sum',
transformation=lambda x: x,
pseudo_count=0)
# test dataframes are the same
assert_frame_equal(table_dereplicated_exp, bs.table_dereplicated)
bs.mf_dereplicated.index.name = None
assert_frame_equal(mf_dereplicated_exp, bs.mf_dereplicated)
bs.individual_id_tables['ind1'].columns.name = None
bs.individual_id_tables['ind2'].columns.name = None
bs.individual_id_tables['ind1'].index.name = None
bs.individual_id_tables['ind2'].index.name = None
assert_frame_equal(individual_id_tables_exp['ind1'],
bs.individual_id_tables['ind1'])
assert_frame_equal(individual_id_tables_exp['ind2'],
bs.individual_id_tables['ind2'])

def test_svd_centralize(self):
"""
test svd_centralize
"""

# Create a list of dataframes
df1 = pd.DataFrame({'x': [1, 2, 3, 4, 5], 'y': [2, 4, 6, 8, 10]})
df2 = pd.DataFrame({'x': [6, 7, 8, 9, 10], 'y': [12, 14, 16, 18, 20]})
individual_id_tables = {'subject_one': df1, 'subject_two': df2}

# Test the svd_centralize function
results, _, _, _ = svd_centralize(individual_id_tables)

# Check the results
exp_one = np.array([[-2.29656623, -1.29656623],
[-2.01683978, -0.01683978],
[-1.73711332, 1.26288668],
[-1.45738687, 2.54261313],
[-1.17766041, 3.82233959]])
exp_two = np.array([[-2.28516848, 3.71483152],
[-3.09541201, 3.90458799],
[-3.90565554, 4.09434446],
[-4.71589906, 4.28410094],
[-5.52614259, 4.47385741]])
assert_allclose(results['subject_one'].values, exp_one, atol=1e-3)
assert_allclose(results['subject_two'].values, exp_two, atol=1e-3)


if __name__ == "__main__":
unittest.main()
218 changes: 218 additions & 0 deletions gemelli/tests/test_tempted.py
@@ -0,0 +1,218 @@
import unittest
import os
import inspect
import pandas as pd
import numpy as np
from skbio import OrdinationResults
from pandas import read_csv
from biom import load_table
from skbio.util import get_data_path
from gemelli.testing import assert_ordinationresults_equal
from gemelli.tempted import (freg_rkhs,
bernoulli_kernel,
tempted_transform,
tempted)
from gemelli.preprocessing import build_sparse
from numpy.testing import assert_allclose


class TestTempted(unittest.TestCase):

def setUp(self):
pass

def test_freg_rkhs(self):
"""
test freg_rkhs
"""

Ly = [np.array([-7.511, -13.455, -10.307, -25.813, 26.429]),
np.array([2.131, 1.225, -3.488, 10.299])]
a_hat = np.array([0.9, 0.436])
ind_vec = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1])
Kmat = np.array([[1.258, 1.124, 0.995, 0.874,
0.758, 1.124, 0.995, 0.874, 0.758],
[1.124, 1.064, 1., 0.936,
0.874, 1.064, 1., 0.936, 0.874],
[0.995, 1., 1.003, 1.,
0.995, 1., 1.003, 1., 0.995],
[0.874, 0.936, 1., 1.064,
1.124, 0.936, 1., 1.064, 1.124],
[0.758, 0.874, 0.995, 1.124,
1.258, 0.874, 0.995, 1.124, 1.258],
[1.124, 1.064, 1., 0.936,
0.874, 1.064, 1., 0.936, 0.874],
[0.995, 1., 1.003, 1.,
0.995, 1., 1.003, 1., 0.995],
[0.874, 0.936, 1., 1.064,
1.124, 0.936, 1., 1.064, 1.124],
[0.758, 0.874, 0.995, 1.124,
1.258, 0.874, 0.995, 1.124, 1.258]])
Kmat_output = np.array([[1.258, 1.124, 0.995, 0.874,
0.758, 1.124, 0.995, 0.874, 0.758],
[1.198, 1.098, 0.998, 0.902,
0.809, 1.098, 0.998, 0.902, 0.809],
[1.139, 1.071, 1., 0.929,
0.861, 1.071, 1., 0.929, 0.861],
[1.08, 1.043, 1.002, 0.958,
0.914, 1.043, 1.002, 0.958, 0.914],
[1.023, 1.015, 1.003, 0.986,
0.968, 1.015, 1.003, 0.986, 0.968],
[0.968, 0.986, 1.003, 1.015,
1.023, 0.986, 1.003, 1.015, 1.023],
[0.914, 0.958, 1.002, 1.043,
1.08, 0.958, 1.002, 1.043, 1.08],
[0.861, 0.929, 1., 1.071,
1.139, 0.929, 1., 1.071, 1.139],
[0.809, 0.902, 0.998, 1.098,
1.198, 0.902, 0.998, 1.098, 1.198],
[0.758, 0.874, 0.995, 1.124,
1.258, 0.874, 0.995, 1.124, 1.258]])
exp_phi = np.array([-9.275, -16.187, -2.117,
-13.928, -10.251, -25.671,
-25.964, -17.503, -0.885, 36.716])
res_phi = freg_rkhs(Ly, a_hat, ind_vec, Kmat, Kmat_output)
assert_allclose(exp_phi, res_phi, atol=1e-3)

def test_bernoulli_kernel(self):
"""
test bernoulli_kernel
"""

Kmat_exp = np.array([[1.25833, 0.99531, 0.75833],
[0.99531, 1.00312, 0.99531],
[0.75833, 0.99531, 1.25833]])
Kmat_res = bernoulli_kernel(np.linspace(0, 1, num=3),
np.linspace(0, 1, num=3))
assert_allclose(Kmat_exp, Kmat_res, atol=1e-3)

def test_tempted(self):
"""
Tests tempted and also checks that it matches R version.
(R v.0.1.0 - 6/1/23)
"""

callers_filename = inspect.getouterframes(inspect.currentframe())[1][1]
path = os.path.dirname(os.path.abspath(callers_filename))
print(path)

# grab test data
in_table = get_data_path('test-small.biom', '../q2/tests/data')
in_meta = get_data_path('test-small.tsv', '../q2/tests/data')
# get R version expected results
tempted_rsq_exp = get_data_path('tempted-rsq.csv')
tempted_rsq_exp = pd.read_csv(tempted_rsq_exp,
index_col=0)
tempted_fl_exp = get_data_path('tempted-features-loadings.csv')
tempted_fl_exp = pd.read_csv(tempted_fl_exp,
index_col=0)
tempted_sl_exp = get_data_path('tempted-state-loadings.csv')
tempted_sl_exp = pd.read_csv(tempted_sl_exp,
index_col=0)
tempted_il_exp = get_data_path('tempted-individual-loadings.csv')
tempted_il_exp = pd.read_csv(tempted_il_exp,
index_col=0)
tempted_fl_exp.columns = ['component_1', 'component_2', 'component_3']
tempted_sl_exp.columns = ['component_1', 'component_2', 'component_3']
tempted_il_exp.columns = ['component_1', 'component_2', 'component_3']
tempted_il_exp.index = ['subject_6', 'subject_9']
exp_subject = OrdinationResults('exp', 'exp',
tempted_rsq_exp.values.flatten(),
samples=tempted_il_exp,
features=tempted_fl_exp)
exp_state = OrdinationResults('exp', 'exp',
tempted_rsq_exp.values.flatten(),
samples=tempted_sl_exp,
features=tempted_fl_exp)
# run tempted in gemelli
table = load_table(in_table)
sample_metadata = read_csv(in_meta, sep='\t', index_col=0)
sample_metadata['time_points'] = [int(x.split('_')[-1])
for x in sample_metadata['context']]
# tensor building
sparse_tensor = build_sparse()
sparse_tensor.construct(table,
sample_metadata,
'host_subject_id',
'time_points')
# run TEMPTED
tempted_res = tempted(sparse_tensor.individual_id_tables_centralized,
sparse_tensor.individual_id_state_orders,
sparse_tensor.feature_order)
# build res to test
res_subject = OrdinationResults('exp', 'exp',
tempted_res[4],
samples=tempted_res[0],
features=tempted_res[1])
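        # shift the state index to the 1-based labels of the R reference output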
tempted_res[2].index = tempted_res[2].index.astype(int) + 1
res_state = OrdinationResults('exp', 'exp',
tempted_res[4],
samples=tempted_res[2],
features=tempted_res[1])
# run testing
assert_ordinationresults_equal(res_subject, exp_subject)
assert_ordinationresults_equal(res_state, exp_state)

def test_tempted_projection(self):
"""
Test tempted projection data.
"""
individual_id_tables_test = {'ID1': pd.DataFrame([[1, 2, 3],
[4, 5, 6]],
columns=['Sample1',
'Sample2',
'Sample3'],
index=['Feature1',
'Feature2']),
'ID2': pd.DataFrame([[7, 8, 9],
[10, 11, 12]],
columns=['Sample1',
'Sample2',
'Sample3'],
index=['Feature1',
'Feature2'])}
individual_id_state_orders_test = {'ID1': np.array([1, 2, 3]),
'ID2': np.array([1, 2, 3])}
feature_loading_train = pd.DataFrame([[0.1, 0.2],
[0.3, 0.4]],
columns=['Component1',
'Component2'],
index=['Feature1',
'Feature2'])
state_loading_train = pd.DataFrame([[0.5, 0.6],
[0.7, 0.8],
[0.9, 1.0]],
columns=['Component1',
'Component2'],
index=[1, 2, 3])
eigen_coeff_train = np.array([100, 100])
time_train = pd.DataFrame([[1], [2], [3]],
index=['Time1',
'Time2',
'Time3'])
v_centralized_train = np.array([[0.1],
[0.2]])

# Expected output
expected_output = pd.DataFrame([[0.022926,
0.025735],
[0.053735,
0.062980]],
columns=['Component1',
'Component2'],
index=['ID1',
'ID2'])

# Run the function
output = tempted_transform(individual_id_tables_test,
individual_id_state_orders_test,
feature_loading_train,
state_loading_train,
eigen_coeff_train,
time_train,
v_centralized_train)
        self.assertTrue(output.round(3).equals(expected_output.round(3)))


if __name__ == "__main__":
unittest.main()