delete from SpikeSortingOutput exceeds max attempts #886
delete_downstream_merge finds a table not in the graph. I'm not 100% sure what's going on here. That suggests that the graph DJ is using doesn't have access to all the same nodes that Spyglass does. It would help me debug if I knew...
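If it helps reproduce, here's a minimal sketch for checking which nodes DataJoint's graph currently knows about. Dependencies subclasses a networkx DiGraph, so nodes() is available; the printout format is illustrative:

```python
import datajoint as dj

conn = dj.conn()
conn.dependencies.load()  # rebuild the graph from the database's FK metadata

# Every table (plus alias nodes for renamed foreign keys) the graph can see;
# a table absent from this list is "not in the graph" as far as DJ is concerned.
for node in conn.dependencies.nodes():
    print(node)
```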
A known issue with ...
Please try:

from spyglass.spikesorting.spikesorting_merge import SpikeSortingOutput
import spyglass.spikesorting.v0.spikesorting_recording as sgss

(sgss.SortGroup & {
    'nwb_file_name': 'J1620210529_.nwb',
    'sort_group_id': 100
}).cautious_delete()
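Importing SpikeSortingOutput matters even though the snippet never references it directly: bringing the merge table into the namespace registers it, so it is present in the graph when the delete cascades.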
Thanks Chris. That works to delete some, but yields this error:

Error stack
---------------------------------------------------------------------------
DataJointError Traceback (most recent call last)
Cell In[8], line 4
1 from spyglass.spikesorting.spikesorting_merge import SpikeSortingOutput
2 import spyglass.spikesorting.v0.spikesorting_recording as sgss
----> 4 (sgss.SortGroup & {
5 'nwb_file_name': 'J1620210529_.nwb',
6 'sort_group_id': 100
7 }).cautious_delete()
File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:479, in SpyglassMixin.cautious_delete(self, force_permission, *args, **kwargs)
476 self._log_use(start)
477 return
--> 479 super().delete(*args, **kwargs) # Additional confirm here
481 self._log_use(start=start, merge_deletes=merge_deletes)
File ~/src/datajoint-python/datajoint/table.py:586, in Table.delete(self, transaction, safemode, force_parts)
584 # Cascading delete
585 try:
--> 586 delete_count = cascade(self)
587 except:
588 if transaction:
File ~/src/datajoint-python/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
554 else:
555 child &= table.proj()
--> 556 cascade(child)
557 else:
558 deleted.add(table.full_table_name)
File ~/src/datajoint-python/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
554 else:
555 child &= table.proj()
--> 556 cascade(child)
557 else:
558 deleted.add(table.full_table_name)
[... skipping similar frames: Table.delete.<locals>.cascade at line 556 (1 times)]
File ~/src/datajoint-python/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
554 else:
555 child &= table.proj()
--> 556 cascade(child)
557 else:
558 deleted.add(table.full_table_name)
File ~/src/datajoint-python/datajoint/table.py:566, in Table.delete.<locals>.cascade(table)
564 break
565 else:
--> 566 raise DataJointError("Exceeded maximum number of delete attempts.")
567 return delete_count
DataJointError: Exceeded maximum number of delete attempts.
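For context on where that message comes from: DataJoint's cascading delete retries the quick delete a bounded number of times, recursing into whichever child raises an IntegrityError in between. A rough paraphrase of the loop in datajoint/table.py, not the verbatim source; locate_blocking_child is a hypothetical stand-in for the FK-error-message parsing, and the attempt limit shown is illustrative:

```python
from datajoint.errors import DataJointError, IntegrityError


def locate_blocking_child(table, error):
    """Hypothetical stand-in for the regexp parsing of the FK error
    message that the real code uses to find the blocking child table."""
    raise NotImplementedError


def cascade(table, max_attempts=50):  # illustrative limit; the real one is in table.py
    for _ in range(max_attempts):
        try:
            table.delete_quick(get_count=True)
            break  # succeeded: no foreign key violations remained
        except IntegrityError as error:
            child = locate_blocking_child(table, error)
            cascade(child)  # clear the blocking child's rows, then retry
    else:
        raise DataJointError("Exceeded maximum number of delete attempts.")
```

So "exceeds max attempts" means the same table kept hitting fresh FK violations on every retry, usually because some blocking child never got cleared.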
Similar issue here when trying to delete one entry from the Nwbfile table so I could re-insert the correct data from the same day:
Hi @xlsun79 - The missing node error can be solved by importing the table and attempting to rerun. If you see a 'max attempt' error even after importing, please post your error stack in the following format:
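For anyone following along, a minimal sketch of "import the table, then rerun"; the import paths are the ones already used in this thread:

```python
# Importing the merge table registers it in the graph Spyglass builds,
# which resolves the "table not in the graph" error before a cascading delete.
from spyglass.spikesorting.spikesorting_merge import SpikeSortingOutput
from spyglass.common import Nwbfile

nwb_file_name = "Lewis20240222_.nwb"
(Nwbfile() & {"nwb_file_name": nwb_file_name}).cautious_delete()
# Any other merge tables in your pipeline can be imported the same way;
# exact module paths may vary by Spyglass version.
```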
Thanks @CBroz1 ! I imported all the merge tables and reran, which solved the "table not in the graph" error. I didn't run into a max attempt error, but the following happened:

nwb_file_name = "Lewis20240222_.nwb"
(Nwbfile() & {'nwb_file_name': nwb_file_name}).cautious_delete()

Error stack
---------------------------------------------------------------------------
IntegrityError Traceback (most recent call last)
Cell In [17], line 1
----> 1 (Nwbfile() & {'nwb_file_name':nwb_copy_file_name}).cautious_delete()
File ~/code/spyglass/src/spyglass/utils/dj_mixin.py:479, in SpyglassMixin.cautious_delete(self, force_permission, *args, **kwargs)
476 self._log_use(start)
477 return
--> 479 super().delete(*args, **kwargs) # Additional confirm here
481 self._log_use(start=start, merge_deletes=merge_deletes)
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/table.py:561, in Table.delete(self, transaction, safemode, force_parts)
559 # Cascading delete
560 try:
--> 561 delete_count = cascade(self)
562 except:
563 if transaction:
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/table.py:479, in Table.delete.<locals>.cascade(table)
477 for _ in range(max_attempts):
478 try:
--> 479 delete_count = table.delete_quick(get_count=True)
480 except IntegrityError as error:
481 match = foreign_key_error_regexp.match(error.args[0]).groupdict()
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/table.py:453, in Table.delete_quick(self, get_count)
448 """
449 Deletes the table without cascading and without user prompt.
450 If this table has populated dependent tables, this will fail.
451 """
452 query = "DELETE FROM " + self.full_table_name + self.where_clause()
--> 453 self.connection.query(query)
454 count = (
455 self.connection.query("SELECT ROW_COUNT()").fetchone()[0]
456 if get_count
457 else None
458 )
459 self._log(query[:255])
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/connection.py:340, in Connection.query(self, query, args, as_dict, suppress_warnings, reconnect)
338 cursor = self._conn.cursor(cursor=cursor_class)
339 try:
--> 340 self._execute_query(cursor, query, args, suppress_warnings)
341 except errors.LostConnectionError:
342 if not reconnect:
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/connection.py:296, in Connection._execute_query(cursor, query, args, suppress_warnings)
294 cursor.execute(query, args)
295 except client.err.Error as err:
--> 296 raise translate_query_error(err, query)
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/connection.py:294, in Connection._execute_query(cursor, query, args, suppress_warnings)
291 if suppress_warnings:
292 # suppress all warnings arising from underlying SQL library
293 warnings.simplefilter("ignore")
--> 294 cursor.execute(query, args)
295 except client.err.Error as err:
296 raise translate_query_error(err, query)
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/cursors.py:148, in Cursor.execute(self, query, args)
144 pass
146 query = self.mogrify(query, args)
--> 148 result = self._query(query)
149 self._executed = query
150 return result
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/cursors.py:310, in Cursor._query(self, q)
308 self._last_executed = q
309 self._clear_result()
--> 310 conn.query(q)
311 self._do_get_result()
312 return self.rowcount
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/connections.py:548, in Connection.query(self, sql, unbuffered)
546 sql = sql.encode(self.encoding, "surrogateescape")
547 self._execute_command(COMMAND.COM_QUERY, sql)
--> 548 self._affected_rows = self._read_query_result(unbuffered=unbuffered)
549 return self._affected_rows
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/connections.py:775, in Connection._read_query_result(self, unbuffered)
773 else:
774 result = MySQLResult(self)
--> 775 result.read()
776 self._result = result
777 if result.server_status is not None:
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/connections.py:1156, in MySQLResult.read(self)
1154 def read(self):
1155 try:
-> 1156 first_packet = self.connection._read_packet()
1158 if first_packet.is_ok_packet():
1159 self._read_ok_packet(first_packet)
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/connections.py:725, in Connection._read_packet(self, packet_type)
723 if self._result is not None and self._result.unbuffered_active is True:
724 self._result.unbuffered_active = False
--> 725 packet.raise_for_error()
726 return packet
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/protocol.py:221, in MysqlPacket.raise_for_error(self)
219 if DEBUG:
220 print("errno =", errno)
--> 221 err.raise_mysql_exception(self._data)
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/err.py:143, in raise_mysql_exception(data)
141 if errorclass is None:
142 errorclass = InternalError if errno < 1000 else OperationalError
--> 143 raise errorclass(errno, errval)
IntegrityError: (1217, 'Cannot delete or update a parent row: a foreign key constraint fails')
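Error 1217 means some table still holds rows referencing the entry being deleted, and the cascade never reached it. One way to look for candidates, using DataJoint's Table.children() (the example table name in the comment is illustrative):

```python
from spyglass.common import Nwbfile

# Immediate children of Nwbfile; any of these (or their own descendants)
# still holding referencing rows will block a non-cascading delete.
for child in Nwbfile().children():
    print(child)  # full table names, e.g. `common_session`.`session`
```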
I have some updates from trying to debug the last error. I figured that the foreign key error may be due to the requirement to delete child tables before deleting the parent table. So I ended up trying to delete sgc.Session() but then got a different error:

Error stack
[23:55:14][INFO] Spyglass: Queueing delete for session(s):
*nwb_file_name *lab_member_na
+------------+ +------------+
Lewis20240222_ Xulu Sun
(Total: 1)
[23:55:16][INFO] Spyglass: Building merge cache for _session.
Found 4 downstream merge tables
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In [32], line 1
----> 1 (sgc.Session() & {'nwb_file_name':nwb_copy_file_name}).cautious_delete()
File ~/code/spyglass/src/spyglass/utils/dj_mixin.py:452, in SpyglassMixin.cautious_delete(self, force_permission, *args, **kwargs)
449 if not force_permission:
450 self._check_delete_permission()
--> 452 merge_deletes = self.delete_downstream_merge(
453 dry_run=True,
454 disable_warning=True,
455 return_parts=False,
456 )
458 safemode = (
459 dj.config.get("safemode", True)
460 if kwargs.get("safemode") is None
461 else kwargs["safemode"]
462 )
464 if merge_deletes:
File ~/code/spyglass/src/spyglass/utils/dj_mixin.py:248, in SpyglassMixin.delete_downstream_merge(self, restriction, dry_run, reload_cache, disable_warning, return_parts, **kwargs)
245 restriction = restriction or self.restriction or True
247 merge_join_dict = {}
--> 248 for name, chain in self._merge_chains.items():
249 join = chain.join(restriction)
250 if join:
File ~/anaconda3/envs/spyglass/lib/python3.9/functools.py:993, in cached_property.__get__(self, instance, owner)
991 val = cache.get(self.attrname, _NOT_FOUND)
992 if val is _NOT_FOUND:
--> 993 val = self.func(instance)
994 try:
995 cache[self.attrname] = val
File ~/code/spyglass/src/spyglass/utils/dj_mixin.py:173, in SpyglassMixin._merge_chains(self)
171 merge_chains = {}
172 for name, merge_table in self._merge_tables.items():
--> 173 chains = TableChains(self, merge_table, connection=self.connection)
174 if len(chains):
175 merge_chains[name] = chains
File ~/code/spyglass/src/spyglass/utils/dj_chains.py:76, in TableChains.__init__(self, parent, child, connection)
74 self.part_names = [part.full_table_name for part in parts]
75 self.chains = [TableChain(parent, part) for part in parts]
---> 76 self.has_link = any([chain.has_link for chain in self.chains])
File ~/code/spyglass/src/spyglass/utils/dj_chains.py:76, in <listcomp>(.0)
74 self.part_names = [part.full_table_name for part in parts]
75 self.chains = [TableChain(parent, part) for part in parts]
---> 76 self.has_link = any([chain.has_link for chain in self.chains])
File ~/code/spyglass/src/spyglass/utils/dj_chains.py:231, in TableChain.has_link(self)
225 """Return True if parent is linked to child.
226
227 If not searched, search for path. If searched and no link is found,
228 return False. If searched and link is found, return True.
229 """
230 if not self._searched:
--> 231 _ = self.path
232 return self.link_type is not None
File ~/anaconda3/envs/spyglass/lib/python3.9/functools.py:993, in cached_property.__get__(self, instance, owner)
991 val = cache.get(self.attrname, _NOT_FOUND)
992 if val is _NOT_FOUND:
--> 993 val = self.func(instance)
994 try:
995 cache[self.attrname] = val
File ~/code/spyglass/src/spyglass/utils/dj_chains.py:300, in TableChain.path(self)
297 return None
299 link = None
--> 300 if link := self.find_path(directed=True):
301 self.link_type = "directed"
302 elif link := self.find_path(directed=False):
File ~/code/spyglass/src/spyglass/utils/dj_chains.py:285, in TableChain.find_path(self, directed)
283 if not prev_table:
284 raise ValueError("Alias node found without prev table.")
--> 285 attr_map = self.graph[table][prev_table]["attr_map"]
286 ret[prev_table]["attr_map"] = attr_map
287 else:
File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/networkx/classes/coreviews.py:54, in AtlasView.__getitem__(self, key)
53 def __getitem__(self, key):
---> 54 return self._atlas[key]
KeyError: '`position_v1_dlc_centroid`.`__d_l_c_centroid`'
Hi @CBroz1 I was wondering if there's any solution to the error above, which I got when trying to delete my entry from the sgc.Session() table so that I could then delete it from Nwbfile() and reinsert the correct data. Otherwise I won't be able to analyze data from that day. Thank you!
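One possible workaround to try while this is unresolved, sketched below: the KeyError names a DLC position table, so importing the position pipeline (so those tables register in the graph) and clearing the blocking entries first might let the Session delete proceed. The import path and class name are assumptions inferred from the table name in the KeyError, not confirmed API:

```python
# Assumed path, inferred from `position_v1_dlc_centroid`.`__d_l_c_centroid`
from spyglass.position.v1 import DLCCentroid
import spyglass.common as sgc

nwb_copy_file_name = "Lewis20240222_.nwb"

# Delete the blocking position entries first, then retry the Session delete.
(DLCCentroid & {"nwb_file_name": nwb_copy_file_name}).delete()
(sgc.Session & {"nwb_file_name": nwb_copy_file_name}).cautious_delete()
```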
In a possibly related case, a user reported that cascade failed on this recording table because it yielded an invalid restriction...

DELETE FROM `spikesorting_v1_recording`.`__spike_sorting_recording` WHERE ( (`nwb_file_name`="bobrick20231204_.nwb"))

Error stack
session_entry = sgc.Session & {'nwb_file_name': nwb_copy_file_name}
session_entry.super_delete()
---------------------------------------------------------------------------
UnknownAttributeError Traceback (most recent call last)
Cell In[49], line 4
2 nwb_copy_file_name = get_nwb_copy_filename(nwb_file_name)
3 session_entry = sgc.Session & {'nwb_file_name': nwb_copy_file_name}
----> 4 session_entry.super_delete()
File ~/Documents/gabby/spyglass/src/spyglass/utils/dj_mixin.py:537, in SpyglassMixin.super_delete(self, *args, **kwargs)
535 logger.warning("!! Using super_delete. Bypassing cautious_delete !!")
536 self._log_use(start=time(), super_delete=True)
--> 537 super().delete(*args, **kwargs)
File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:586, in Table.delete(self, transaction, safemode, force_parts)
584 # Cascading delete
585 try:
--> 586 delete_count = cascade(self)
587 except:
588 if transaction:
File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
554 else:
555 child &= table.proj()
--> 556 cascade(child)
557 else:
558 deleted.add(table.full_table_name)
File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
554 else:
555 child &= table.proj()
--> 556 cascade(child)
557 else:
558 deleted.add(table.full_table_name)
File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
554 else:
555 child &= table.proj()
--> 556 cascade(child)
557 else:
558 deleted.add(table.full_table_name)
File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:504, in Table.delete.<locals>.cascade(table)
502 for _ in range(max_attempts):
503 try:
--> 504 delete_count = table.delete_quick(get_count=True)
505 except IntegrityError as error:
506 match = foreign_key_error_regexp.match(error.args[0]).groupdict()
File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:463, in Table.delete_quick(self, get_count)
458 """
459 Deletes the table without cascading and without user prompt.
460 If this table has populated dependent tables, this will fail.
461 """
462 query = "DELETE FROM " + self.full_table_name + self.where_clause()
--> 463 self.connection.query(query)
464 count = (
465 self.connection.query("SELECT ROW_COUNT()").fetchone()[0]
466 if get_count
467 else None
468 )
469 self._log(query[:255])
File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/connection.py:340, in Connection.query(self, query, args, as_dict, suppress_warnings, reconnect)
338 cursor = self._conn.cursor(cursor=cursor_class)
339 try:
--> 340 self._execute_query(cursor, query, args, suppress_warnings)
341 except errors.LostConnectionError:
342 if not reconnect:
File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/connection.py:296, in Connection._execute_query(cursor, query, args, suppress_warnings)
294 cursor.execute(query, args)
295 except client.err.Error as err:
--> 296 raise translate_query_error(err, query)
UnknownAttributeError: Unknown column 'nwb_file_name' in 'where clause'
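A plausible reading of that failure: somewhere in the cascade, a child renames the parent's key via proj(), so a restriction phrased with the parent's attribute name yields invalid SQL downstream. A minimal self-contained sketch of that situation, using hypothetical schema and table names:

```python
import datajoint as dj

schema = dj.schema("scratch_renamed_fk")  # hypothetical scratch schema


@schema
class Parent(dj.Manual):
    definition = """
    nwb_file_name: varchar(64)
    """


@schema
class Child(dj.Manual):
    definition = """
    -> Parent.proj(source_file='nwb_file_name')  # parent key renamed downstream
    """


# Restricting Child by the parent's attribute name instead of the renamed one
# produces SQL like the DELETE above and fails the same way:
#   Unknown column 'nwb_file_name' in 'where clause'
```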
Referenced commits: Create class for group parts to help propagate deletes; Part delete edits (#946).
Submitted as datajoint #1159
Merged into datajoint here: datajoint/datajoint-python#1160, but not released. Should we close, @CBroz1?
Yes, I think we can close. We already depend on the unreleased version for password management. I'm hoping they'll be able to make a release after #1158.
I get the below error when trying to delete from SortGroup.
Code:
Error stack
@CBroz1