Profile and improve front-end database requests #11

Open
zucler opened this issue Oct 6, 2016 · 0 comments

zucler commented Oct 6, 2016

The code below is not optimised; it can cause long delays and return overly broad results.

    parkings = Parking.objects.filter(lat__gte=min_lat, lat__lte=max_lat,
                                      long__gte=min_long, long__lte=max_long)
    rtype = RateType.objects.filter(parkingID__in=parkings)
    rprice = RatePrice.objects.filter(rateID__in=rtype)

    # Force evaluation so the whole querysets are cached
    # FIXME: We need a workaround that avoids this extra work
    bool(parkings)
    bool(rtype)
    bool(rprice)

    parkings = list(geosearch['parkings'])     # Here we have a list of dicts
    rtype = geosearch['ratetype']
    rprice = geosearch['rateprice']

    for p_item in parkings:
        # Each .filter() below builds a new queryset, so it hits the
        # database again per row (N+1 queries) despite the caching above
        sub_ratetype = rtype.filter(parkingID=p_item.parkingID)
        for rt_item in sub_ratetype:
            rt_item.rateprice = list(rprice.filter(rateID=rt_item.rateID))
        p_item.ratetype = sub_ratetype
In general, we need to limit the maximum number of rows returned (e.g. so we don't return carparks for the entire world) by either:

a) Limiting the difference between min and max lat/long
b) Limiting max count of rows returned

zucler added a commit that referenced this issue Nov 24, 2016