Blog

DoiT launches its own MCP server for DoiT Cloud Intelligence™

Model Context Protocol, or MCP. What is it, and why should you care?

It’s 2025, and the ecosystem around Large Language Models (LLMs) shows no sign of losing steam. While the initial focus was on bigger models trained on more data, throughout 2024 interest shifted towards building “compound” AI systems, in which an LLM is integrated as one part of a larger, composable, and modular AI system.

There are a few reasons for this shift:

1. Composability: We want to go beyond text generation and automate what are currently manual workflows. For example, you don’t want to copy your manager's reply into ChatGPT; you want the LLM to be integrated into your email client. Or, as a database admin, you don’t want to type up the whole schema and relationships between all the tables in your relational database; you want your LLM to be able to pull that information as needed.

2. Grounding: An LLM (as described here by Andrej Karpathy) is more or less a probabilistic, lossy compression of the whole internet (and the other bodies of text in its training data), meaning it works with a static snapshot of the world at training time (usually referred to as the knowledge cut-off). Moreover, since it is still mainly predicting “the next best token,” it can make up things that are not grounded in reality, which can be amusing but is rarely desirable (these fabrications are usually referred to as hallucinations). One way to avoid this is to give the model retrieval access to newer information it can reference in its answers.

3. Agentic behaviour: In less deterministic scenarios, we might even want to relinquish some control over _what_ to do and _how_ to do it. We want the model to be able to choose the tools it needs, retrieve the necessary information, make its own decisions, and adapt along the way.

The common thread across these shifts is communication with external systems, which, without a common standard, would require a custom implementation for each new system, leading to a fragmented landscape. By introducing the Model Context Protocol at the end of 2024, Anthropic aimed to turn this N × M integration problem into an N + M problem by creating an open standard for two-way connections between data sources (N tools) and AI-powered applications (M clients). It’s comparable to how HTTP standardised communication between web browsers and servers.

For those worried about their LangChain or LlamaIndex codebases: don’t worry, MCP isn’t here to replace them. As this talk outlines, MCP lets agentic frameworks focus on what they are good at: running the agentic loop and responding to the data brought in by tools.

Now being adopted by Amazon, OpenAI, and Google, MCP is shaping up to become the HTTP of agent-native apps, so it’s worth taking a closer look!

What does it take to build your own MCP server?

On modelcontextprotocol.io, you can find a comprehensive developer guide to making your own server. We don’t want to repeat all the steps here, but it seemed worth mentioning the core concepts and types of capabilities a server can offer:

  1. Resources: Any kind of data an MCP server might want to offer your client, including textual resources (think database records, source code, etc.) and binary ones (think images, audio, video, etc.). The client can discover resources, read them, and subscribe to updates on them. They are designed to be application-controlled.
  2. Prompts: The server can define reusable prompt templates and workflows, standardising everyday LLM interactions. By baking in some of your domain knowledge, prompts can guide the user to the correct usage of the functionality your MCP server offers. As such, they are designed to be user-controlled.
  3. Tools: Through tools, the server offers the client executable functionality, allowing it to interact with external systems and take actions in the real world (think API integrations, data processing, and so on). Since we want to hand control to the LLM to decide what to do, tools are designed to be model-controlled. A minimal sketch of all three concepts follows below.
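To make these concepts concrete, here is a minimal sketch of a server built with the official TypeScript SDK (@modelcontextprotocol/sdk). The cloud-costs resource, summarise-incident prompt, and get_anomalies tool are hypothetical stand-ins for illustration, not the actual DoiT implementation:

```typescript
// Minimal MCP server sketch using the official TypeScript SDK.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "finops-demo", version: "1.0.0" });

// Resource: application-controlled data the client can discover and read.
server.resource("cloud-costs", "costs://last-week", async (uri) => ({
  contents: [{ uri: uri.href, text: JSON.stringify({ totalUsd: 1234.56 }) }],
}));

// Prompt: a reusable, user-controlled template that bakes in domain knowledge.
server.prompt(
  "summarise-incident",
  { incidentId: z.string() },
  ({ incidentId }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Summarise incident ${incidentId} in a few bullet points.`,
        },
      },
    ],
  })
);

// Tool: model-controlled executable functionality the LLM can choose to call.
server.tool(
  "get_anomalies",
  { days: z.number().describe("Look-back window in days") },
  async ({ days }) => ({
    content: [{ type: "text", text: `No anomalies in the last ${days} days.` }],
  })
);

// Expose the server over stdio so a local client (e.g. Claude Desktop) can connect.
await server.connect(new StdioServerTransport());
```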

Releasing the DoiT MCP server out in the open

At DoiT, we wanted to join this thriving community of servers by releasing our own MCP server and meeting our customers where they already are.

So here it is: the DoiT MCP server, available as an open-source repository on GitHub.

All you need to do is get your DoiT API key and start playing around!
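As an illustration, hooking the server up to a client such as Claude Desktop typically comes down to a few lines in its claude_desktop_config.json. Treat the package name and environment variable below as placeholders; the repository README documents the exact values:

```json
{
  "mcpServers": {
    "doit": {
      "command": "npx",
      "args": ["-y", "@doitintl/doit-mcp-server@latest"],
      "env": {
        "DOIT_API_KEY": "<your-doit-api-key>"
      }
    }
  }
}
```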

Setting up our server gives you direct, conversational access to your multicloud FinOps data stored in DoiT Cloud Intelligence™. The server currently supports interacting with incidents, anomalies, reports, queries, and dimensions from the DoiT API. Keep an eye on the repository as we roll out more updates!

How do I make the most of it?

Imagine it’s your typical Monday morning, and you want a quick pulse check on how your cloud infrastructure is doing after the weekend. Instead of sifting through dashboards and combing through alert emails, you ask your favourite assistant:

Are there any incidents in my Cloud Infrastructure I should be aware of? Please present them in a markdown table and summarise any long text to a few bullet points.

Or you want to see where your money has been going over the last week, and if there are any trends you should be aware of:

Can you give me an overview of my cloud costs across all my cloud usage for the last week? I want the granularity to be daily, I am only interested in the 5 biggest services. Do you notice a trend?

If you want to see the actual data behind Claude’s analysis, just ask for it:

Can you show me an overview of the data in a table?

With MCP, your assistant (e.g., Claude) knows exactly which DoiT endpoints to call; it constructs the request and executes it. Once the data comes back, it presents the results to you or, if you asked, analyses them.
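Under the hood, each of these requests becomes a JSON-RPC tools/call message that the client sends to the MCP server on the model’s behalf. Here is a rough sketch of what that looks like, using a hypothetical anomaly-listing tool (the actual tool names are defined by the DoiT server):

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "get_anomalies",
    "arguments": { "days": 7 }
  }
}
```

The server runs the tool and returns a result message whose content the assistant then weaves into its answer.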

Join us in making your FinOps journey easier: leave your feedback on the repository or talk to us via doit.com/services!
