We should update the Allura, GitHub, Bugzilla, Redmine, JIRA, and (maybe) Gerrit backends to get data from APIs rather than via Beautiful Soup. Per the mailing list discussion:
Most bicho backends, such as jira.py, currently run search queries to get an XML file listing many bug URLs at once and then use Beautiful Soup to extract information from the rendered HTML pages. This is not optimal; screen scraping is less robust than calling a web API.
Whenever possible, we want to use available APIs. However, some older bug trackers do not have APIs, and even today some APIs don't give us all of the information we want. So we need to be able to fall back to HTML scraping/parsing in cases where the bug tracker's API doesn't exist (maybe an old version of Bugzilla/JIRA/etc.) or doesn't provide the data we need.
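As a rough illustration of the API-first / scrape-fallback idea for a JIRA backend, a minimal sketch is below. This is not bicho's actual code; the function name, return fields, and HTML element ids are assumptions for the example.

```python
# Sketch: try the JIRA REST API first, fall back to scraping the issue page.
# fetch_issue, the returned dict layout, and the element ids are hypothetical.
import requests
from bs4 import BeautifulSoup


def fetch_issue(base_url, issue_key):
    """Fetch one issue, preferring the REST API over screen scraping."""
    api_url = "%s/rest/api/2/issue/%s" % (base_url, issue_key)
    resp = requests.get(api_url)
    if resp.status_code == 200:
        data = resp.json()
        return {
            "key": data["key"],
            "summary": data["fields"]["summary"],
            "status": data["fields"]["status"]["name"],
            "source": "api",
        }

    # Older or locked-down trackers may not expose the REST API,
    # so fall back to parsing the rendered issue page with Beautiful Soup.
    html = requests.get("%s/browse/%s" % (base_url, issue_key)).text
    soup = BeautifulSoup(html, "html.parser")
    summary = soup.find(id="summary-val")
    status = soup.find(id="status-val")
    return {
        "key": issue_key,
        "summary": summary.get_text(strip=True) if summary else None,
        "status": status.get_text(strip=True) if status else None,
        "source": "scrape",
    }
```

The same pattern would apply to the other backends: one code path per data source, with the scraping path kept only as a fallback for trackers whose API is missing or incomplete.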