Blog

DoiT launches Remote MCP Server: Get cloud cost insights with full business context through AI
Cloud cost analysis often lacks business context: when costs spike, you’re missing the deployments, conversations, and decisions that caused the spike. DoiT’s remote MCP server enables AI assistants to query your cost data alongside business context from other tools, delivering the complete story behind every spending change.

Introducing the Anthropic cost & usage integration
Monitor Claude AI spending with DoiT’s Anthropic integration. View AI costs alongside cloud spend, allocate them to teams, and detect cost anomalies across Claude models.

Accessing S3 Buckets Across AWS Regions Using VPC Peering
A practical guide to understanding why cross-region S3 access requires more than VPC Interface Endpoints with private DNS.

Resize images on-the-fly with GCP Cloud Functions and Google Cloud CDN
In this article, I will explain why you should resize images on your websites, and how you can leverage GCP Cloud Functions and Google Cloud CDN to resize them on the fly.

Creating Conversational AI Agents with Azure AI Foundry
Microsoft’s Azure AI Foundry simplifies AI development by providing a unified platform for building, deploying, and managing intelligent applications.

Optimizing ML Costs with Azure Machine Learning
Scaling Machine Learning (ML) initiatives can get expensive. This post outlines common financial challenges in ML and provides actionable strategies for optimizing costs with Azure Machine Learning.

GKE Gateway API and Service Extensions: Your New Toolkit for Tackling Complex Traffic Challenges in GCP
Kubernetes has transformed container orchestration, and Google Kubernetes Engine (GKE) provides a powerful, managed platform for deploying and scaling containerized workloads.

Microsoft Fabric: Unified Analytics Platform for the AI Era
Let me tell you a story of an imaginary company, Tell Me More Telco. The company has more than 100…

Introducing the Databricks cost & usage integration for DoiT Cloud Intelligence™
New Databricks integration for DoiT Cloud Intelligence. Manage Databricks costs alongside your cloud infrastructure and other SaaS spend. Track spending, detect anomalies, and optimize your complete tech portfolio.

LLMs in production: optimising from multi-second to sub-second latency and getting 50x cost reductions for free
When you’re dealing with a critical cloud infrastructure issue, every second counts. You need help fast, and you need…