Flow control rule: uniform queueing control behavior statistics not accurate enough | Throttling (rate limiter controlBehavior) not accurate enough #399

Closed
fingthinking opened this issue Jan 9, 2019 · 7 comments · Fixed by #2951
Labels: area/flow-control (Issues or PRs related to flow control), kind/enhancement (Category issues or prs related to enhancement)
Milestone: v2.0.0

Comments

@fingthinking commented Jan 9, 2019

RateLimiterController
In RateLimiterController, costTime is computed in milliseconds, so you get cases like long cost = Math.round(1.0 * 1 / 160 * 1000) = Math.round(1.0 * 1 / 170 * 1000) = 6 ms; that is, setting the threshold to 160 or to 170 produces exactly the same effect, both limiting QPS to about 167.
Error estimate: diff(1000 / round(1000/n), n)
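For illustration, a small standalone snippet (mirroring the millisecond formula above; a hypothetical illustration, not Sentinel code) shows how every threshold from roughly 154 to 181 collapses onto the same 6 ms interval and therefore the same effective rate:

    public class MsRoundingError {
        public static void main(String[] args) {
            for (double threshold : new double[] {150, 160, 167, 170, 180}) {
                long costMs = Math.round(1.0 * 1 / threshold * 1000); // per-request interval in ms
                double effectiveQps = 1000.0 / costMs;                // rate actually enforced
                System.out.printf("threshold=%.0f costMs=%d effectiveQps=%.1f%n",
                        threshold, costMs, effectiveQps);
            }
        }
    }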
Below is my modification to the original PaceController, switching to nanosecond timing, like this:

    // Fields assumed on the enclosing class: AtomicLong latestPassedTime,
    // long maxQueueingTimeNano, double count.
    public PaceController(int timeOut, double count) {
        // timeOut is given in ms; store the max queueing time in nanoseconds
        this.maxQueueingTimeNano = 1_000_000L * timeOut;
        this.count = count;
    }

    @Override
    public CheckResult canPass(Node node, int acquireCount) {
        long currentTime = System.nanoTime();
        // Cost of this request, in nanoseconds
        long costTime = Math.round(1e9 * acquireCount / count);

        // Expected earliest pass time
        long lastTime = latestPassedTime.get();
        long expectedTime = costTime + lastTime;
        long waitTime = expectedTime - currentTime;
        CheckResult rs = new CheckResult();
        if (waitTime <= 0) {
            // A race is possible here; we simply tolerate it.
            latestPassedTime.set(currentTime);
            return CheckResult.SUCCESS;
        } else if (waitTime >= maxQueueingTimeNano) {
            rs.setPass(false);
            rs.setMsg("Wait time exceeds max queueing time (ms): " + (maxQueueingTimeNano / 1_000_000)
                    + ", expected pass time: " + expectedTime + ", current time: " + currentTime);
            return rs;
        }
        while (!latestPassedTime.compareAndSet(lastTime, expectedTime)) {
            lastTime = latestPassedTime.get();
            expectedTime = costTime + lastTime;
            currentTime = System.nanoTime();
            waitTime = expectedTime - currentTime;
            if (waitTime >= maxQueueingTimeNano) {
                rs.setPass(false);
                rs.setMsg("Wait time exceeds max queueing time (ms): " + (maxQueueingTimeNano / 1_000_000)
                        + ", expected pass time: " + expectedTime + ", current time: " + currentTime);
                return rs;
            }
        }

        LockSupport.parkNanos(waitTime);
        return CheckResult.SUCCESS;
    }

Test code:

    @Test
    public void testPaceController_timeout() throws InterruptedException {
        final PaceController paceController = new PaceController(1000, 160d);
        final Node node = mock(Node.class);

        final AtomicInteger passCount = new AtomicInteger();
        final AtomicInteger blockCount = new AtomicInteger();
        final AtomicInteger done = new AtomicInteger();

        long lastSec = System.currentTimeMillis() / 1000;
        for (int i = 0; i < 1000; i++) {
            boolean pass = paceController.canPass(node, 1).isPass();
            if (pass) {
                passCount.incrementAndGet();
            } else {
                blockCount.incrementAndGet();
            }
            done.incrementAndGet();
            long now = System.currentTimeMillis() / 1000;
            if (lastSec != now) {
                System.out.println("pass:" + passCount.get() + ", tm:" + lastSec);
                System.out.println("block:" + blockCount.get() + ", tm:" + lastSec);
                System.out.println("done:" + done.get() + ", tm:" + lastSec);
                passCount.set(0);
                blockCount.set(0);
                done.set(0);
            }
            lastSec = now;
        }
        System.out.println("pass:" + passCount.get() + ", tm:" + lastSec);
        System.out.println("block:" + blockCount.get() + ", tm:" + lastSec);
        System.out.println("done:" + done.get() + ", tm:" + lastSec);
    }

    @Test
    public void testPaceController_multiThread() throws InterruptedException {
        final PaceController paceController = new PaceController(1000, 160d);
        final Node node = mock(Node.class);

        final AtomicInteger passCount = new AtomicInteger();
        final AtomicInteger blockCount = new AtomicInteger();
        final AtomicInteger done = new AtomicInteger();
        final AtomicLong lastTm = new AtomicLong(System.currentTimeMillis() / 1000);

        int count = 1000;
        final CountDownLatch countDown = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            Thread thread = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int j = 0; j < 10; j++) {
                        boolean pass = paceController.canPass(node, 1).isPass();
                        if (pass) {
                            passCount.incrementAndGet();
                        } else {
                            blockCount.incrementAndGet();
                        }
                        done.incrementAndGet();
                        long now = System.currentTimeMillis() / 1000;
                        if (lastTm.get() != now) {
                            System.out.println("pass:" + passCount.get() + ", tm:" + lastTm.get());
                            System.out.println("block:" + blockCount.get() + ", tm:" + lastTm.get());
                            System.out.println("done:" + done.get() + ", tm:" + lastTm.get());
                            passCount.set(0);
                            blockCount.set(0);
                            done.set(0);
                        }
                        lastTm.set(now);
                    }
                    countDown.countDown();
                }
            }, "Thread " + i);
            thread.start();
        }
        countDown.await();
        System.out.println("pass:" + passCount.get() + ", tm:" + lastTm.get());
        System.out.println("block:" + blockCount.get() + ", tm:" + lastTm.get());
        System.out.println("done:" + done.get() + ", tm:" + lastTm.get());
    }

That's it for now~

@sczyh30 sczyh30 added the kind/enhancement label Jan 9, 2019
@sczyh30 (Member) commented Jan 9, 2019

Thanks for the feedback. There is a similar issue here: #259

The main problem now is that the implementation of TimeUnit.NANOSECONDS.sleep(xxx) is a trap: it does not actually sleep for nanoseconds. Instead it converts the value to ms and passes it to Thread.sleep(ms, nanos), and the latter is only accurate to the ms level (see the relevant JDK source), so the effect is still a ms-level sleep. The other option, LockSupport.parkNanos(xxx), behaves somewhat better but is also imprecise (roughly accurate to about 0.3 ms, determined by the system's scheduling time slice). Spinning can be fairly precise but costs a lot of CPU. So, weighing the trade-offs, it is hard to be extremely precise.
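For reference, a minimal standalone sketch (a hypothetical illustration, not Sentinel code) that measures how far LockSupport.parkNanos overshoots the requested wait on a given machine:

    import java.util.concurrent.locks.LockSupport;

    public class ParkNanosPrecision {
        public static void main(String[] args) {
            for (long requested : new long[] {10_000L, 100_000L, 1_000_000L, 10_000_000L}) {
                long start = System.nanoTime();
                LockSupport.parkNanos(requested);
                long actual = System.nanoTime() - start;
                // Overshoot is typically hundreds of microseconds,
                // governed by the OS scheduling time slice.
                System.out.printf("requested=%dns actual=%dns overshoot=%dns%n",
                        requested, actual, actual - requested);
            }
        }
    }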

Also, after switching to nanos, timestamps would have to come from System.nanoTime(), and the concurrent performance of that method is not necessarily good. To address the poor concurrent performance of System.currentTimeMillis(), Sentinel implemented TimeUtil to obtain ms-level timestamps.
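The idea behind TimeUtil is roughly the following (a simplified sketch with hypothetical names, not the actual Sentinel implementation): a daemon thread refreshes a volatile millisecond timestamp, so hot paths read a field instead of calling System.currentTimeMillis() under contention:

    public final class CachedMillisClock {
        private static volatile long currentMillis = System.currentTimeMillis();

        static {
            Thread ticker = new Thread(() -> {
                while (true) {
                    currentMillis = System.currentTimeMillis();
                    try {
                        Thread.sleep(1); // refresh roughly every millisecond
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }, "cached-clock-ticker");
            ticker.setDaemon(true);
            ticker.start();
        }

        public static long currentTimeMillis() {
            return currentMillis;
        }
    }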



@fingthinking (Author)

@sczyh30 I've put together a new solution; feel free to try it.

@fingthinking (Author)

@sczyh30 The performance of System.nanoTime should be a negligible concern here. In a scenario that already allows queueing, weighing this performance cost against the statistical inaccuracy, I personally lean toward making the rate limiting more precise rather than optimizing the concurrent performance.

fingthinking pushed a commit to fingthinking/Sentinel that referenced this issue Jan 9, 2019
@jasonjoo2010 (Collaborator)

Let me share my view on this as well, addressing only the precision "problem" of this traffic-shaping algorithm:

  1. Continuous vs. discrete needs to be balanced.
    Continuity is a concept that exists everywhere in nature, but in most man-made scenarios it is unattainable, so in many situations we simplify a continuous problem into a discrete one before we can go further. Here, nano vs. milli is merely a difference of units: they diverge in the delay step, but in the computation step they can be unified.

  2. Delay.
    Not only do the JVM-provided APIs have precision problems for very short delays, but on a non-real-time architecture such as x86, accuracy at very small scales is simply unachievable. Computing accurately is one thing; waking up accurately is another, and in most cases the latter carries millisecond-level error (a smaller error is only possible when both system load and interrupt pressure are very low). This holds even if interrupts are handled inside the kernel.

Given this, and given the purpose of the pace controller, namely spreading burst traffic evenly over a unit of time (usually 1 second): once simplified to a discrete problem, then as far as the pacing result goes, scheduling requests at a coarsened granularity of 1 ms, 5 ms, or 10 ms already achieves the goal, while as far as the rate-limiting result goes, computing in nanoseconds or even picoseconds improves accuracy, and the two do not conflict.

@fingthinking (Author)

@jasonjoo2010 In theory I agree with your coarser-granularity view, but in actual use the difference is large: with the threshold set to 300 QPS, the original computation model comes out at 334 (round(1000/300) = 3 ms per request, and 1000 ms / 3 ms yields up to 334 passes per second), and with multiple threads the measured QPS is even higher. A backend service cannot accept that; the error is amplified far too much.

@jasonjoo2010 (Collaborator)

> @jasonjoo2010 In theory I agree with your coarser-granularity view, but in actual use the difference is large: with the threshold set to 300 QPS, the original computation model comes out at 334, and with multiple threads the measured QPS is even higher. A backend service cannot accept that; the error is amplified far too much.

Liu,

Perhaps you haven't fully understood what I meant. In short: compute finely, sleep coarsely, guard the boundary.

Compute finely: there is nothing wrong with calculating in nanoseconds or even picoseconds;
Sleep coarsely: delay/sleep in multiples of 5 ms or 10 ms;
Guard the boundary: ensure the count within each second stays within the expected limit.
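As a sketch of this idea (hypothetical names, an illustration of the proposal rather than Sentinel code): intervals are computed in nanoseconds, sleeps are rounded up to a coarse 5 ms quantum, and a per-second counter guards the boundary:

    import java.util.concurrent.locks.LockSupport;

    public class CoarseSleepPacer {
        private static final long SLEEP_QUANTUM_NANOS = 5_000_000L; // sleep in 5 ms steps

        private final long intervalNanos; // exact per-request gap ("compute finely")
        private final long maxPerSecond;  // hard per-second cap ("guard the boundary")
        private long nextFreeNanos = System.nanoTime();
        private long windowStartSec = System.nanoTime() / 1_000_000_000L;
        private long passedInWindow = 0;

        public CoarseSleepPacer(double qps) {
            this.intervalNanos = Math.round(1e9 / qps);
            this.maxPerSecond = (long) qps;
        }

        public boolean tryAcquire() {
            long waitNanos;
            synchronized (this) {
                // Guard the boundary: reject once this second's budget is spent.
                long nowSec = System.nanoTime() / 1_000_000_000L;
                if (nowSec != windowStartSec) {
                    windowStartSec = nowSec;
                    passedInWindow = 0;
                }
                if (passedInWindow >= maxPerSecond) {
                    return false;
                }
                passedInWindow++;
                // Compute finely: reserve an exact nanosecond slot for this request.
                long now = System.nanoTime();
                long slot = Math.max(nextFreeNanos, now);
                nextFreeNanos = slot + intervalNanos;
                waitNanos = slot - now;
            }
            if (waitNanos > 0) {
                // Sleep coarsely: round the wait up to the 5 ms quantum; the wake-up
                // accuracy of one request matters less than the per-second total.
                long quanta = (waitNanos + SLEEP_QUANTUM_NANOS - 1) / SLEEP_QUANTUM_NANOS;
                LockSupport.parkNanos(quanta * SLEEP_QUANTUM_NANOS);
            }
            return true;
        }
    }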

@fingthinking (Author)

> Liu,
>
> Perhaps you haven't fully understood what I meant. In short: compute finely, sleep coarsely, guard the boundary.
>
> Compute finely: there is nothing wrong with calculating in nanoseconds or even picoseconds;
> Sleep coarsely: delay/sleep in multiples of 5 ms or 10 ms;
> Guard the boundary: ensure the count within each second stays within the expected limit.

Now that I've seen this explanation, I understand much better. Thanks for the guidance~

@sczyh30 sczyh30 added the area/flow-control label Jul 16, 2019
@sczyh30 sczyh30 changed the title from "Synchronous-wait statistics not accurate enough" to "Flow control rule: uniform queueing control behavior statistics not accurate enough | Throttling (rate limiter controlBehavior) not accurate enough" Nov 14, 2022
@sczyh30 sczyh30 added this to the v2.0.0 milestone Nov 14, 2022