
What is StoryChief MCP

Written by Gregory Claeyssens
Updated over a week ago

StoryChief MCP lets AI agents securely interact with StoryChief data. MCP (Model Context Protocol) is a standard that allows AI tools to connect with third-party APIs in a secure, consistent way. In our case, it connects AI tools with StoryChief.

StoryChief MCP is a remote MCP server. It allows users on an Individual plan or higher to connect their StoryChief account to supported AI tools.

See our StoryChief MCP use cases and prompt ideas for real examples and prompting tips to help you get the most from your MCP connection.

In this article:

  1. MCP Connection URL

  2. Usage & Limits

1. MCP Connection URL

https://mcp.storychief.io/mcp

Most popular AI tools support connecting to the remote MCP server directly. If you run into issues, check out our step-by-step setup guides for your tool.

a. Keys & Authorization

When you connect StoryChief MCP to an AI agent:

  • You’ll see an authorization consent screen.

  • Once approved, the connection will appear in your account with a dedicated API key (tagged with MCP scope).

You can also:

  • Manually generate an MCP key in your account and use it in your AI agent’s configuration.
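The options above usually come down to a small client-side configuration entry. The exact shape varies by AI tool, so treat the field names below (`mcpServers`, `url`, `headers`) as illustrative assumptions rather than a definitive format, and check your tool's documentation for the fields it actually expects:

```json
{
  "mcpServers": {
    "storychief": {
      "url": "https://mcp.storychief.io/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_MCP_KEY"
      }
    }
  }
}
```

If your tool supports the interactive authorization flow, you can typically omit the `headers` entry and approve the consent screen instead; the manually generated MCP key is only needed when configuring the connection by hand.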


2. Usage & Limits

Our MCP server is currently offered in beta. During this phase, usage limits are intentionally lightweight to allow flexibility, testing, and iteration as we learn from real-world use.

a. Rate Limits

To ensure platform stability and fair access for all users, the following limit is currently enforced:

  • 10 requests per minute per client

This limit applies across all MCP endpoints and may be adjusted as we continue to optimize performance and reliability.
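To stay under the 10-requests-per-minute limit, a custom MCP client can throttle itself before each call. The sketch below is a minimal client-side sliding-window limiter written for illustration; it is not part of any StoryChief SDK, and the `limit` and `window` defaults simply mirror the figure above:

```python
import time
from collections import deque

class RateLimiter:
    """Blocks until a request can be sent without exceeding
    `limit` requests per `window` seconds (client-side throttle)."""

    def __init__(self, limit=10, window=60.0):
        self.limit = limit
        self.window = window
        self.sent = deque()  # timestamps of recent requests

    def acquire(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) >= self.limit:
            # Sleep until the oldest request leaves the window.
            time.sleep(self.window - (now - self.sent[0]))
        self.sent.append(time.monotonic())

limiter = RateLimiter(limit=10, window=60.0)
# Call limiter.acquire() before each MCP request to stay within the limit.
```

Because the limit is enforced per client, a throttle like this only needs to live in the process that issues the MCP requests.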

b. Fair Use

Beyond the rate limit above, there are no fixed quotas or hard usage caps at this time. However, we expect clients to use the service responsibly and in a way that does not negatively impact other users or system stability.

We reserve the right to temporarily restrict or throttle usage that appears abusive, excessively automated, or harmful to the service.

c. Future Changes

As we move out of beta, we may introduce additional limits such as:

  • Request or usage quotas

  • Tiered limits based on plans or use cases

  • More granular rate limiting per capability or endpoint

Any such changes will be communicated in advance when possible and reflected in this documentation.
