
Add sequential thinking processor and switch to OpenAI model #1808

Merged: MrgSub merged 1 commit into staging from ZEROAdd_sequential_thinking_processor_and_switch_to_OpenAI_model on Jul 23, 2025.

Conversation

MrgSub (Collaborator) commented on Jul 23, 2025

Description

This PR introduces a new sequential thinking processor for dynamic problem-solving and makes model configuration changes. It adds a SequentialThinkingProcessor class that enables step-by-step reasoning with support for thought revision and branching paths. The PR also configures the agent to use OpenAI models by default instead of Anthropic.
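
The described class can be sketched in TypeScript. The names and fields below follow the PR's stated parameters (thought number, total thoughts, revision, branching), but the body is an illustrative approximation, not the merged implementation:

```typescript
// Illustrative sketch only: field names mirror the PR's described parameters,
// but the logic is an assumption, not the actual merged code.
interface ThoughtData {
  thought: string;
  thoughtNumber: number;
  totalThoughts: number;
  nextThoughtNeeded: boolean;
  isRevision?: boolean;
  revisesThought?: number;
  branchFromThought?: number;
  branchId?: string;
}

class SequentialThinkingProcessor {
  private thoughtHistory: ThoughtData[] = [];
  private branches: Record<string, ThoughtData[]> = {};

  processThought(input: ThoughtData): ThoughtData {
    // Dynamic adjustment: if the caller has run past the estimate, grow it.
    if (input.thoughtNumber > input.totalThoughts) {
      input.totalThoughts = input.thoughtNumber;
    }
    this.thoughtHistory.push(input);
    // Branching: alternative reasoning paths are tracked per branch ID.
    if (input.branchFromThought !== undefined && input.branchId) {
      (this.branches[input.branchId] ??= []).push(input);
    }
    return input;
  }
}
```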

Type of Change

  • ✨ New feature (non-breaking change which adds functionality)
  • ⚡ Performance improvement

Areas Affected

  • User Interface/Experience
  • API Endpoints

Testing Done

  • Manual testing performed

Checklist

  • I have performed a self-review of my code
  • My changes generate no new warnings

Additional Notes

The PR removes a console.log statement from the mail notification provider and adds a new environment variable USE_OPENAI set to "true" by default. The sequential thinking processor provides a structured way to handle complex reasoning tasks with features like thought revision, branching, and dynamic adjustment of the thinking process.


By submitting this pull request, I confirm that my contribution is made under the terms of the project's license.

jazzberry-ai bot commented on Jul 23, 2025

Bug Report

| Name | Severity | Example test case | Description |
| --- | --- | --- | --- |
| Missing Input Validation | Medium | Call processThought with isRevision=true but without revisesThought. | validateThoughtData should enforce stricter validation rules for related fields. |
| Inconsistent State of Total Thoughts | Low | Adjust totalThoughts mid-process and observe agent planning. | totalThoughts adjustments can lead to issues if relied upon for planning. |
| Uncontrolled Branch Creation | Medium | Create many branches in SequentialThinkingProcessor. | No limit on branches can lead to memory exhaustion. |
| Unbounded Thought History | Medium | Run the agent for a long time. | thoughtHistory grows indefinitely, causing memory issues. |
| Incomplete ThinkingMCP Initialization | High | Attempt to use the sequential thinking tool. | The tool is never registered, so it cannot be used. |
| Redundant OpenAI Import | Low | N/A | The openai import appears twice. |
| Potential Model Mismatch | Medium | Use OPENAI_MODEL to specify an Anthropic model when USE_OPENAI is true, or vice versa. | The same env variable is used for both OpenAI and Anthropic models. |
| Potentially Broken MCP Connection | High | Try to register the thinking-mcp. | MCP connection code is commented out and may not be working correctly. |
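
The missing-input-validation finding can be made concrete. A guard along these lines (hypothetical, not the PR's actual validateThoughtData) would reject isRevision without a revisesThought target, and a branch point without a branch ID:

```typescript
// Hypothetical cross-field validation; the real validateThoughtData may differ.
interface SequentialThinkingParams {
  isRevision?: boolean;
  revisesThought?: number;
  branchFromThought?: number;
  branchId?: string;
}

function validateRelatedFields(p: SequentialThinkingParams): string | null {
  if (p.isRevision && p.revisesThought === undefined) {
    return 'isRevision requires revisesThought';
  }
  if (p.branchFromThought !== undefined && p.branchId === undefined) {
    return 'branchFromThought requires branchId';
  }
  return null; // null means the related fields are consistent
}
```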


coderabbitai bot (Contributor) commented on Jul 23, 2025

Walkthrough

This update introduces a new sequential thinking processor module with associated classes, adds support for a new MCP endpoint and dynamic AI model selection in the agent route, and adjusts environment configuration for model usage. Additionally, a console log is removed from the mail component, and minor code cleanups are performed.

Changes

| File(s) | Change Summary |
| --- | --- |
| apps/mail/components/party.tsx | Removed a debug console log related to mail query invalidation. |
| apps/server/src/lib/sequential-thinking.ts | Added new module with SequentialThinkingProcessor and ThinkingMCP classes for stepwise thought management, branching, and structured processing. |
| apps/server/src/routes/agent/index.ts | Added registerThinkingMCP method, modified MCP connection URLs to include IDs, enabled dynamic AI model selection based on environment, and adjusted commented code. |
| apps/server/wrangler.jsonc | Added USE_OPENAI environment variable under the local configuration. |
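
The dynamic model selection mentioned above presumably hinges on the USE_OPENAI flag. A minimal sketch of that switch, with placeholder model IDs (the PR's actual defaults are not shown here):

```typescript
// Sketch of env-driven model selection; both model IDs are placeholders.
function selectModelId(env: { USE_OPENAI?: string }): string {
  return env.USE_OPENAI === 'true'
    ? 'gpt-4o' // assumed OpenAI default
    : 'claude-3-5-sonnet-latest'; // assumed Anthropic fallback
}
```

Note that because the flag is compared against the string "true", an unset or misspelled value silently falls back to the Anthropic path, which is exactly the cross-environment inconsistency the reviewers flag for staging and production.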

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant ZeroAgent
    participant SequentialThinkingProcessor
    participant AI_Model

    User->>ZeroAgent: Initiate connection (onStart)
    ZeroAgent->>SequentialThinkingProcessor: Register Thinking MCP
    Note over ZeroAgent: Conditional model selection
    ZeroAgent->>AI_Model: streamText (OpenAI or Anthropic)
    AI_Model-->>ZeroAgent: Model response
    ZeroAgent-->>User: Streamed response
```

Estimated code review effort

3 (~40 minutes)

Possibly related PRs

  • Staging #942: Modifies the same mail component file, adjusting debounced refetch and log removal logic, closely related to the removal of the console log in this PR.

Suggested reviewers

  • ahmetskilinc

Poem

In the warren of code where new thoughts bloom,
A sequential mind now finds its room.
With models that switch by a simple flag,
And logs swept away with a hop and a drag.
🐇 The rabbit reviews, with whiskers a-twitch—
"Onward to clarity, without a glitch!"



MrgSub marked this pull request as ready for review on July 23, 2025 at 18:28.

MrgSub (Collaborator, Author) commented on Jul 23, 2025

This stack of pull requests is managed by Graphite. Learn more about stacking.

cursor bot left a comment

Bug: Unintended Debug Code and Unused Tool Registration

The commit introduces two issues: a debug console.log('Here!'); statement and a large (93-line) commented-out block for the sequentialthinking tool registration within the ThinkingMCP class. Both appear to be accidentally committed development code and should either be removed or properly implemented.

apps/server/src/lib/sequential-thinking.ts#L206-L301

```typescript
console.log('Here!');
// this.server.registerTool(
// 'sequentialthinking',
// {
// description: `A detailed tool for dynamic and reflective problem-solving through thoughts.
// This tool helps analyze problems through a flexible thinking process that can adapt and evolve.
// Each thought can build on, question, or revise previous insights as understanding deepens.
// When to use this tool:
// - Breaking down complex problems into steps
// - Planning and design with room for revision
// - Analysis that might need course correction
// - Problems where the full scope might not be clear initially
// - Problems that require a multi-step solution
// - Tasks that need to maintain context over multiple steps
// - Situations where irrelevant information needs to be filtered out
// Key features:
// - You can adjust total_thoughts up or down as you progress
// - You can question or revise previous thoughts
// - You can add more thoughts even after reaching what seemed like the end
// - You can express uncertainty and explore alternative approaches
// - Not every thought needs to build linearly - you can branch or backtrack
// - Generates a solution hypothesis
// - Verifies the hypothesis based on the Chain of Thought steps
// - Repeats the process until satisfied
// - Provides a correct answer
// Parameters explained:
// - thought: Your current thinking step, which can include:
// * Regular analytical steps
// * Revisions of previous thoughts
// * Questions about previous decisions
// * Realizations about needing more analysis
// * Changes in approach
// * Hypothesis generation
// * Hypothesis verification
// - next_thought_needed: True if you need more thinking, even if at what seemed like the end
// - thought_number: Current number in sequence (can go beyond initial total if needed)
// - total_thoughts: Current estimate of thoughts needed (can be adjusted up/down)
// - is_revision: A boolean indicating if this thought revises previous thinking
// - revises_thought: If is_revision is true, which thought number is being reconsidered
// - branch_from_thought: If branching, which thought number is the branching point
// - branch_id: Identifier for the current branch (if any)
// - needs_more_thoughts: If reaching end but realizing more thoughts needed
// You should:
// 1. Start with an initial estimate of needed thoughts, but be ready to adjust
// 2. Feel free to question or revise previous thoughts
// 3. Don't hesitate to add more thoughts if needed, even at the "end"
// 4. Express uncertainty when present
// 5. Mark thoughts that revise previous thinking or branch into new paths
// 6. Ignore information that is irrelevant to the current step
// 7. Generate a solution hypothesis when appropriate
// 8. Verify the hypothesis based on the Chain of Thought steps
// 9. Repeat the process until satisfied with the solution
// 10. Provide a single, ideally correct answer as the final output
// 11. Only set next_thought_needed to false when truly done and a satisfactory answer is reached`,
// inputSchema: {
// thought: z.string().describe('Your current thinking step'),
// nextThoughtNeeded: z.boolean().describe('Whether another thought step is needed'),
// thoughtNumber: z.number().int().min(1).describe('Current thought number'),
// totalThoughts: z.number().int().min(1).describe('Estimated total thoughts needed'),
// isRevision: z.boolean().optional().describe('Whether this revises previous thinking'),
// revisesThought: z
// .number()
// .int()
// .min(1)
// .optional()
// .describe('Which thought is being reconsidered'),
// branchFromThought: z
// .number()
// .int()
// .min(1)
// .optional()
// .describe('Branching point thought number'),
// branchId: z.string().optional().describe('Branch identifier'),
// needsMoreThoughts: z.boolean().optional().describe('If more thoughts are needed'),
// },
// },
// (params) => {
// return this.thinkingServer.processThought({
// thought: params.thought,
// nextThoughtNeeded: params.nextThoughtNeeded,
// thoughtNumber: params.thoughtNumber,
// totalThoughts: params.totalThoughts,
// isRevision: params.isRevision,
// revisesThought: params.revisesThought,
// branchFromThought: params.branchFromThought,
// branchId: params.branchId,
// needsMoreThoughts: params.needsMoreThoughts,
// });
// },
// );
```


MrgSub (Collaborator, Author) commented on Jul 23, 2025

Merge activity

  • Jul 23, 6:30 PM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Jul 23, 6:30 PM UTC: @MrgSub merged this pull request with Graphite.

MrgSub merged commit a8e5c82 into staging on Jul 23, 2025.
7 of 9 checks passed
MrgSub deleted the ZEROAdd_sequential_thinking_processor_and_switch_to_OpenAI_model branch on July 23, 2025 at 18:30.
cubic-dev-ai bot (Contributor) left a comment

cubic analysis

1 issue found across 4 files.

```jsonc
"THREAD_SYNC_LOOP": "false",
"DISABLE_WORKFLOWS": "false",
"AUTORAG_ID": "",
"USE_OPENAI": "true",
```

USE_OPENAI is only defined for the local environment, so staging/production will silently fall back to the Anthropic model, leading to inconsistent behavior across environments.

Prompt for AI agents

```
Address the following comment on apps/server/wrangler.jsonc at line 119:

<comment>USE_OPENAI is only defined for the local environment, so staging/production will silently fall back to the Anthropic model, leading to inconsistent behavior across environments.</comment>

<file context>
@@ -116,6 +116,7 @@
         "THREAD_SYNC_LOOP": "false",
         "DISABLE_WORKFLOWS": "false",
         "AUTORAG_ID": "",
+        "USE_OPENAI": "true",
       },
       "kv_namespaces": [
</file context>
```

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

🧹 Nitpick comments (3)
apps/server/src/lib/sequential-thinking.ts (3)

118-176: Consider error handling improvements in processThought.

While the error handling catches exceptions, consider logging errors for debugging purposes before returning the error response.

```diff
     } catch (error) {
+     console.error('Error processing thought:', error);
       return {
         content: [
           {
             type: 'text' as const,
             text: JSON.stringify(
               {
                 error: error instanceof Error ? error.message : String(error),
                 status: 'failed',
               },
               null,
               2,
             ),
           },
         ],
         isError: true,
       };
     }
```

207-207: Remove debug console.log statement.

The debug console.log('Here!') should be removed from production code.

```diff
-    console.log('Here!');
```

209-301: Address commented-out sequential thinking tool.

The large commented-out section contains a detailed implementation plan for the sequentialthinking tool. Consider either implementing this tool or removing the commented code to reduce maintenance burden.

Would you like me to help implement the commented-out sequentialthinking tool or create a separate issue to track this development task?

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d9d9a58 and e6bf8a4.

📒 Files selected for processing (4)
  • apps/mail/components/party.tsx (0 hunks)
  • apps/server/src/lib/sequential-thinking.ts (1 hunks)
  • apps/server/src/routes/agent/index.ts (5 hunks)
  • apps/server/wrangler.jsonc (1 hunks)
💤 Files with no reviewable changes (1)
  • apps/mail/components/party.tsx
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (AGENT.md)

**/*.{js,jsx,ts,tsx}: Use 2-space indentation
Use single quotes
Limit lines to 100 characters in width
Semicolons are required

Files:

  • apps/server/src/lib/sequential-thinking.ts
  • apps/server/src/routes/agent/index.ts
**/*.{js,jsx,ts,tsx,css}

📄 CodeRabbit Inference Engine (AGENT.md)

Use Prettier with sort-imports and Tailwind plugins

Files:

  • apps/server/src/lib/sequential-thinking.ts
  • apps/server/src/routes/agent/index.ts
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (AGENT.md)

Enable TypeScript strict mode

Files:

  • apps/server/src/lib/sequential-thinking.ts
  • apps/server/src/routes/agent/index.ts
🪛 GitHub Actions: autofix.ci
apps/server/src/lib/sequential-thinking.ts

[warning] 20-20: ESLint (no-unused-vars): Identifier 'z' is imported but never used. Consider removing this import.

apps/server/src/routes/agent/index.ts

[error] 53-54: Identifier openai has already been declared. The identifier is imported twice in this file, which is not allowed.

🪛 Biome (2.1.2)
apps/server/src/routes/agent/index.ts

[error] 54-54: Shouldn't redeclare 'openai'. Consider to delete it or rename it.

'openai' is defined here:

(lint/suspicious/noRedeclare)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: cubic · AI code reviewer
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (9)
apps/server/src/lib/sequential-thinking.ts (4)

22-44: Well-structured interfaces for thought processing.

The ThoughtData and SequentialThinkingParams interfaces are well-designed with clear optional fields for branching and revision capabilities. The structure supports the sequential thinking workflow effectively.


55-80: Robust input validation implementation.

The validation logic properly checks required fields and types, providing clear error messages. The validation ensures data integrity before processing thoughts.


82-116: Creative formatting with visual presentation.

The thought formatting creates a visually appealing bordered display with contextual headers. The use of emojis and dynamic border sizing enhances readability.
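
For illustration, a bordered display of that kind can be produced with a few lines; this is a hypothetical rendition, not the module's actual formatThought:

```typescript
// Hypothetical bordered thought renderer, sized to the longest line.
function renderThoughtBox(header: string, thought: string): string {
  const width = Math.max(header.length, thought.length) + 2;
  const border = '─'.repeat(width);
  return [
    `┌${border}┐`,
    `│ ${header.padEnd(width - 2)} │`,
    `├${border}┤`,
    `│ ${thought.padEnd(width - 2)} │`,
    `└${border}┘`,
  ].join('\n');
}
```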


200-206: Simple initialization with placeholder tool.

The current implementation registers only a basic "Test" tool. This appears to be a temporary implementation while the more comprehensive sequentialthinking tool remains commented out.

apps/server/src/routes/agent/index.ts (5)

1069-1069: Updated MCP connection with explicit ID parameter.

The connection URL now includes mcpId=zero-mcp parameter and sets the OAuth client provider ID explicitly. This change aligns with the new MCP routing structure.


1080-1090: New ThinkingMCP integration method.

The registerThinkingMCP method follows the same pattern as registerZeroMCP but connects to the thinking-mcp endpoint. The implementation is consistent and well-structured.


1125-1128: Dynamic model selection implementation.

The conditional model selection based on USE_OPENAI environment variable provides flexibility between OpenAI and Anthropic models. The fallback model names are appropriate.


1093-1093: Confirm MCP registration strategy
The registerThinkingMCP() invocation in onStart() is currently commented out and you have no other calls to it. Please verify that this omission is intentional. If it’s only disabled for testing or staging, consider adding a TODO (with context and a link to any tracking ticket) explaining when and how it will be re-enabled.

Locations to review:

  • apps/server/src/routes/agent/index.ts, inside onStart() (around line 1093):
    // this.registerThinkingMCP();

1108-1108: Verify MCP tools integration
It looks like the call to this.mcp.unstable_getAITools() and its spread into rawTools remain commented out in apps/server/src/routes/agent/index.ts (around line 1108). Please confirm whether this was intentional—if you intend to wire in MCP’s AI tools as part of the thinking-processor, you should:

  • Uncomment the instantiation:

    ```typescript
    const mcpTools = this.mcp.unstable_getAITools();
    ```

  • Re-include them when building rawTools:

    ```diff
          const rawTools = {
            ...(await authTools(connectionId)),
    +       ...mcpTools,
          };
    ```

If MCP integration is deprecated or postponed, consider removing these commented lines altogether to keep the code clean.

```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import type { env } from 'cloudflare:workers';
import { McpAgent } from 'agents/mcp';
import { z } from 'zod';
```

⚠️ Potential issue

Remove unused import.

The z import from 'zod' is not used in the active code and should be removed to avoid linting errors.

```diff
-import { z } from 'zod';
```
🧰 Tools
🪛 GitHub Actions: autofix.ci

[warning] 20-20: ESLint (no-unused-vars): Identifier 'z' is imported but never used. Consider removing this import.

🤖 Prompt for AI Agents
In apps/server/src/lib/sequential-thinking.ts at line 20, the import of 'z' from
'zod' is unused and causes linting errors. Remove this import statement entirely
to clean up the code and resolve the lint issue.

```typescript
import { env } from 'cloudflare:workers';
import type { Connection } from 'agents';
import { openai } from '@ai-sdk/openai';
import { openai } from '@ai-sdk/openai';
```

⚠️ Potential issue

Remove duplicate import.

The openai import is declared twice (lines 53 and 54). Remove the duplicate import to fix the compilation error.

```diff
-import { openai } from '@ai-sdk/openai';
```
🧰 Tools
🪛 Biome (2.1.2)

[error] 54-54: Shouldn't redeclare 'openai'. Consider to delete it or rename it.

'openai' is defined here:

(lint/suspicious/noRedeclare)

🪛 GitHub Actions: autofix.ci

[error] 53-54: Identifier openai has already been declared. The identifier is imported twice in this file, which is not allowed.

🤖 Prompt for AI Agents
In apps/server/src/routes/agent/index.ts at line 54, there is a duplicate import
of `openai` from '@ai-sdk/openai' which causes a compilation error. Remove the
import statement on line 54 to eliminate the duplicate and fix the error.

```jsonc
"THREAD_SYNC_LOOP": "false",
"DISABLE_WORKFLOWS": "false",
"AUTORAG_ID": "",
"USE_OPENAI": "true",
```
🛠️ Refactor suggestion

Add USE_OPENAI to staging and production environments for consistency.

The USE_OPENAI environment variable is only configured for the local environment. Consider adding this variable to the staging and production environments to ensure consistent model selection behavior across all deployment environments.

```diff
 # In staging environment vars section (around line 254)
       "vars": {
         "NODE_ENV": "development",
         "COOKIE_DOMAIN": "0.email",
         "VITE_PUBLIC_BACKEND_URL": "https://sapi.0.email",
         "VITE_PUBLIC_APP_URL": "https://staging.0.email",
         "DISABLE_CALLS": "",
         "DROP_AGENT_TABLES": "false",
         "THREAD_SYNC_MAX_COUNT": "20",
         "THREAD_SYNC_LOOP": "true",
         "DISABLE_WORKFLOWS": "true",
+       "USE_OPENAI": "true",
       },

 # In production environment vars section (around line 395)
       "vars": {
         "NODE_ENV": "production",
         "COOKIE_DOMAIN": "0.email",
         "VITE_PUBLIC_BACKEND_URL": "https://api.0.email",
         "VITE_PUBLIC_APP_URL": "https://0.email",
         "DISABLE_CALLS": "true",
         "DROP_AGENT_TABLES": "false",
         "THREAD_SYNC_MAX_COUNT": "10",
         "THREAD_SYNC_LOOP": "true",
         "DISABLE_WORKFLOWS": "true",
+       "USE_OPENAI": "true",
       },
```
🤖 Prompt for AI Agents
In apps/server/wrangler.jsonc at line 119, the USE_OPENAI environment variable
is set only for the local environment. To ensure consistent model selection
behavior, add the "USE_OPENAI": "true" setting to the staging and production
environment configurations as well.
