
STM32: add setting for LSE drive load level #13797

Merged
merged 7 commits into ARMmbed:master from JojoS62:PR_fix_LSE-drive-load-setting
Nov 18, 2020

Conversation

JojoS62
Contributor

@JojoS62 JojoS62 commented Oct 21, 2020

Summary of changes

This PR is related to #13781.
It adds a setting for the LSE drive load level.

This setting is target-hardware dependent:
it depends on the crystal that is used for the LSE clock.
For low power requirements, crystals with low load capacitance can be used,
and the drive setting is RCC_LSEDRIVE_LOW.
For higher stability, crystals with higher load capacitance can be used,
and the drive setting is RCC_LSEDRIVE_HIGH.

When the setting does not match the hardware, the LSE oscillator may not start or may have large jitter.
Symptoms are Mbed exceptions with an LSE error or RTC wakeup error, or a sleep_for() call that never returns.
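
For background, a minimal sketch of how such a drive level is typically applied through the STM32 HAL on families that support it; the helper name set_lse_drive is hypothetical, while __HAL_RCC_LSEDRIVE_CONFIG, HAL_PWR_EnableBkUpAccess and the RCC_LSEDRIVE_* constants are standard STM32 HAL names (the exact call site in Mbed OS may differ):

#include "stm32l4xx_hal.h"  // family HAL header; L4 chosen only for illustration

// Hypothetical helper: select the LSE drive strength before the oscillator is started.
static void set_lse_drive(uint32_t drive)
{
    // The LSE configuration bits live in the backup domain, which is
    // write-protected after reset, so unlock it first.
    HAL_PWR_EnableBkUpAccess();

    // RCC_LSEDRIVE_LOW suits crystals with small load capacitance (low power);
    // RCC_LSEDRIVE_HIGH suits larger load capacitance (stability).
    __HAL_RCC_LSEDRIVE_CONFIG(drive);
}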

This PR also fixes a missing #include "platform/mbed_error.h" for the error() function, which was already used conditionally.

Impact of changes

For known targets, the setting can be added to targets.json. For custom targets, a default is used, but the user should provide a value that matches the hardware (depending on the crystal parameters).
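
For illustration only, an override in targets.json might look like the following; the parameter name lse_drive_load_level and the board chosen are assumptions, so check the merged targets.json for the exact key and allowed values:

"NUCLEO_L476RG": {
    "overrides": {
        "lse_drive_load_level": "RCC_LSEDRIVE_LOW"
    }
}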

Migration actions required

The defaults may change, and the settings for existing targets in targets.json need to be defined.

Documentation

A hint about this setting is needed in the documentation, especially for custom targets.


Pull request type

[x] Patch update (Bug fix / Target update / Docs update / Test update / Refactor)
[ ] Feature update (New feature / Functionality change / New API)
[ ] Major update (Breaking change, e.g. return code change / API behaviour change)

Test results

[ ] No tests required for this change (e.g. docs-only update)
[ ] Covered by existing mbed-os tests (Greentea or Unittest)
[ ] Tests / results supplied as part of this PR

Manual tests with H743 and F411 targets were successful.

Reviewers


@ciarmcom ciarmcom added the release-type: patch (Identifies a PR as containing just a patch) label Oct 21, 2020
@ciarmcom
Member

@JojoS62, thank you for your changes.
@ARMmbed/mbed-os-maintainers please review.

@ciarmcom ciarmcom requested a review from a team October 21, 2020 17:30
Collaborator

@jeromecoutant jeromecoutant left a comment


Would it be good to add some examples to the targets.json file for some known NUCLEO boards?

Two review threads on targets/TARGET_STM/mbed_overrides.c (outdated, resolved)
@jeromecoutant
Collaborator

@ARMmbed/team-st-mcd

@jeromecoutant
Collaborator

Would it be good to add some examples to the targets.json file for some known NUCLEO boards?

Next steps after:

@JojoS62
Contributor Author

JojoS62 commented Oct 22, 2020

I will check for defaults in targets.json and add some lines in the documentation later.

@jeromecoutant
Collaborator

See the Travis info; it seems there is a compilation issue with NUCLEO_F103RB.

@JojoS62
Contributor Author

JojoS62 commented Oct 22, 2020

I see: that happens for targets where this setting is not applicable but LSE is present.

use unique #if defined()
set all defaults to initial low as after a reset
most STM32 eval boards use low power crystals and work with this setting
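
A minimal sketch of the guard pattern those commits describe, as it might appear inside the target init code; LSE_CONFIG_AVAILABLE and MBED_CONF_TARGET_LSE_DRIVE_LOAD_LEVEL are taken from the diff excerpt quoted later in this thread, while the HAL call and the surrounding shape are assumptions:

#if defined(LSE_CONFIG_AVAILABLE)
// Only families with configurable LSE drive strength compile this in;
// targets such as NUCLEO_F103RB (no LSEDRIVE bits) skip it entirely.
#if defined(MBED_CONF_TARGET_LSE_DRIVE_LOAD_LEVEL)
    __HAL_RCC_LSEDRIVE_CONFIG(MBED_CONF_TARGET_LSE_DRIVE_LOAD_LEVEL);
#else
    // Default to low drive: the reset state, and what most ST eval boards use.
    __HAL_RCC_LSEDRIVE_CONFIG(RCC_LSEDRIVE_LOW);
#endif
#endif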
@JojoS62 JojoS62 marked this pull request as ready for review October 23, 2020 17:40
@jeromecoutant
Collaborator

Let's start CI

@JojoS62
Contributor Author

JojoS62 commented Nov 11, 2020

what about the CI?

@0xc0170
Contributor

0xc0170 commented Nov 12, 2020

CI started

@mbed-ci

mbed-ci commented Nov 13, 2020

Jenkins CI Test : ❌ FAILED

Build Number: 1 | 🔒 Jenkins CI Job | 🌐 Logs & Artifacts

Detailed summary:

Job Status
jenkins-ci/mbed-os-ci_unittests ✔️
jenkins-ci/mbed-os-ci_build-greentea-ARM ✔️
jenkins-ci/mbed-os-ci_build-greentea-GCC_ARM ✔️
jenkins-ci/mbed-os-ci_build-example-ARM ✔️
jenkins-ci/mbed-os-ci_build-cloud-example-GCC_ARM ✔️
jenkins-ci/mbed-os-ci_build-cloud-example-ARM ✔️
jenkins-ci/mbed-os-ci_cmake-example-GCC_ARM ✔️
jenkins-ci/mbed-os-ci_cmake-example-ARM ✔️
jenkins-ci/mbed-os-ci_build-example-GCC_ARM ✔️
jenkins-ci/mbed-os-ci_dynamic-memory-usage ✔️
jenkins-ci/mbed-os-ci_cloud-client-pytest ✔️
jenkins-ci/mbed-os-ci_cmake-example-test
jenkins-ci/mbed-os-ci_greentea-test

@JojoS62
Contributor Author

JojoS62 commented Nov 13, 2020

Not sure what the errors mean; is a rebase necessary?

@0xc0170
Contributor

0xc0170 commented Nov 17, 2020

CI restarted

It was an internal CI issue; it should be fixed now.

@mbed-ci

mbed-ci commented Nov 18, 2020

Jenkins CI Test : ✔️ SUCCESS

Build Number: 2 | 🔒 Jenkins CI Job | 🌐 Logs & Artifacts

Detailed summary:

Job Status
jenkins-ci/mbed-os-ci_unittests ✔️
jenkins-ci/mbed-os-ci_cmake-example-GCC_ARM ✔️
jenkins-ci/mbed-os-ci_build-greentea-GCC_ARM ✔️
jenkins-ci/mbed-os-ci_build-greentea-ARM ✔️
jenkins-ci/mbed-os-ci_build-cloud-example-ARM ✔️
jenkins-ci/mbed-os-ci_cmake-example-ARM ✔️
jenkins-ci/mbed-os-ci_build-example-ARM ✔️
jenkins-ci/mbed-os-ci_build-example-GCC_ARM ✔️
jenkins-ci/mbed-os-ci_build-cloud-example-GCC_ARM ✔️
jenkins-ci/mbed-os-ci_dynamic-memory-usage ✔️
jenkins-ci/mbed-os-ci_cmake-example-test ✔️
jenkins-ci/mbed-os-ci_cloud-client-pytest ✔️
jenkins-ci/mbed-os-ci_greentea-test ✔️

@0xc0170 0xc0170 merged commit e1b8dd6 into ARMmbed:master Nov 18, 2020
@mergify mergify bot removed the ready for merge label Nov 18, 2020
@JojoS62 JojoS62 deleted the PR_fix_LSE-drive-load-setting branch November 19, 2020 17:31
@mbedmain mbedmain removed the release-type: patch (Identifies a PR as containing just a patch) and Release-pending labels Nov 23, 2020
// set defaults for LSE drive load level
#if defined(LSE_CONFIG_AVAILABLE)

# if defined(MBED_CONF_TARGET_LSE_DRIVE_LOAD_LEVEL)

@zimjjan zimjjan Jun 21, 2021


I'm not entirely sure whether something changed since [1] was documented, but I think the build system will use the "macro_name" from targets.json without adding the "MBED_CONF_" prefix, which will lead to the setting never being applied. I noticed this while backporting this to Mbed OS 5, so I might be wrong.

[1] https://os.mbed.com/docs/mbed-os/v6.11/program-setup/advanced-configuration.html


Edited: added the comment here, as adding it to all the targets.json lines would have been a pain.

Collaborator


Hi,
I am sorry, I didn't understand the question well.
I don't think there is anything different between Mbed OS 5 and 6 regarding the JSON configuration.
And there is no need to add this line for each target, as it can be defined at the family level.



Sorry, I didn't make my point in an understandable way:
The implementation requires MBED_CONF_TARGET_LSE_DRIVE_LOAD_LEVEL to be defined in order to set the LSE drive from the config. However, in targets.json the actual parameter is set to "macro_name": "TARGET_LSE_DRIVE_LOAD_LEVEL", which will result in the macro being generated without the MBED_CONF_ prefix rather than as MBED_CONF_TARGET_LSE_DRIVE_LOAD_LEVEL. (See [1] in the comment above.)

This results in the value from targets.json (or anywhere else the setting is overridden from) never being used in the first place, as far as I understand.
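
To make the mismatch concrete, here is a hypothetical config entry of the shape being discussed; only the macro_name string is quoted from the comment above, the other fields are assumptions:

"config": {
    "lse_drive_load_level": {
        "help": "LSE drive strength, one of the RCC_LSEDRIVE_* values",
        "value": "RCC_LSEDRIVE_LOW",
        "macro_name": "TARGET_LSE_DRIVE_LOAD_LEVEL"
    }
}

With an explicit macro_name, the build system emits exactly TARGET_LSE_DRIVE_LOAD_LEVEL=RCC_LSEDRIVE_LOW rather than the auto-generated MBED_CONF_TARGET_LSE_DRIVE_LOAD_LEVEL form, so a guard on the MBED_CONF_ name never fires.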



I'm sorry, this was resolved in #13939.
Never mind, my fault.
