
Cellular: Fix queue scheduling for bare metal #11859

Merged: 1 commit into ARMmbed:master, Nov 16, 2019

Conversation

kivaisan (Contributor)

Description (required)

The cellular event queue was not dispatched at all in bare metal builds. This commit fixes the problem by chaining the cellular queue to the shared event queue.


Pull request type (required)

[x] Patch update (Bug fix / Target update / Docs update / Test update / Refactor)
[ ] Feature update (New feature / Functionality change / New API)
[ ] Major update (Breaking change, e.g. return code change / API behaviour change)

Test results (required)

[x] No tests required for this change (e.g. docs-only update)
[ ] Covered by existing mbed-os tests (Greentea or Unittest)
[ ] Tests / results supplied as part of this PR

Reviewers (optional)

@AriParkkila @AnttiKauppila


Release Notes (required for feature/major PRs)

Summary of changes

For non-RTOS (bare metal) builds, the cellular event queue is now scheduled by the shared event queue.
@ciarmcom (Member)

@kivaisan, thank you for your changes.
@AriParkkila @AnttiKauppila @ARMmbed/mbed-os-wan @ARMmbed/mbed-os-maintainers please review.

@0xc0170 (Contributor) commented Nov 13, 2019

CI started

@mbed-ci commented Nov 13, 2019

Test run: FAILED

Summary: 1 of 11 test jobs failed
Build number: 1
Build artifacts

Failed test jobs:

  • jenkins-ci/mbed-os-ci_greentea-test

@kivaisan (Contributor, Author)

The Greentea test failure does not seem to be related to this PR.

@kjbracey (Contributor)

This makes me nervous that you're going to cause large scheduling delays on the shared event queue (which has a guideline of <~100ms event duration) while waiting for an AT command response. Wasn't that the reason for having a separate event queue thread in the RTOS build in the first place?

Still, large scheduling delays are better than nothing working at all. And I suspect you may have occasionally ended up blocking the shared event queue waiting for an AT mutex anyway even in the RTOS setup.

@bulislaw (Member)

We have some changes queued to backport (uhm, it sounds easier than it will be) to introduce standard/high priority levels for the bare metal queue past Mbed 6. That should allow us to make sure higher-priority work won't be blocked; would it mitigate this issue?

@kjbracey (Contributor)

> would it mitigate this issue?

Not if the events do "send AT command, block waiting for response", as I think they do.

Within-queue priority levels don't really help that much (in general), because the responsiveness is determined by the longest-running event on a queue.

You can't pre-emptively stop a slow low-priority event from running when someone queues a high-priority one.

Priority levels can be useful, but only if both high- and low-priority events have similar execution times. E.g. in Nanostack we did the work to have long-running crypto (on slow 16-bit platforms) break itself up into fraction-of-a-second units, and schedule that repeatedly at low priority.

@0xc0170 (Contributor) commented Nov 15, 2019

CI started

@0xc0170 (Contributor) commented Nov 15, 2019

deadcee

Nice SHA there, noticed while checking the SHA for CI 😄

@mbed-ci commented Nov 15, 2019

Test run: SUCCESS

Summary: 11 of 11 test jobs passed
Build number: 2
Build artifacts

@0xc0170 0xc0170 merged commit cb54f50 into ARMmbed:master Nov 16, 2019
7 participants