👾 Big Tech Conferences Over: Everything You Need to Know

All agent highlights from Microsoft's Build 2025 & Google's I/O. Generative AI is making waves. How to protect your agents from misuse with guardrails.

Welcome to Edition #8 of Agents Made Simple

Microsoft and Google both had their big developer-centric conferences, where they introduced interesting new agent-based applications. While Microsoft is more focused on enterprise use cases, Google targets consumers and small businesses.

Generative AI leveled up with Veo 3, Google’s video model that can generate ultra-realistic videos of people, complete with background sound and speech.

At the same time, the AI labs Mistral and Anthropic released new flagship models.

This week’s topics:

  • Microsoft Build 2025 agentic highlights

  • Google I/O agentic highlights

  • Mistral Devstral and Document AI

  • OpenAI’s Responses API

  • Cursor 0.5 with new agentic features

  • Claude 4 Opus and Sonnet are now available

  • Why you need guardrails for your AI agents

  • Plus AI investments, trending AI tools, community highlights, and more

AI Agent News Roundup

💥 Breakthroughs

Microsoft Build 25 Agent Highlights

Microsoft Build 2025 conference

Source: Microsoft

Microsoft announced:

  • An advanced version of GitHub Copilot with a full coding agent.

  • Copilot can now learn your company’s brand voice.

  • Azure AI Foundry, the platform for building apps and agents, gained new features.

  • NLWeb lets you interact with any website using natural language.

Google I/O Agent Highlights

Google I/O conference 2025

Source: Google

Google announced:

  • AI Mode in Search is now powered by Gemini 2.5.

  • A virtual try-on feature, an agentic shopping experience, and real-time search with multimodal voice queries.

  • Search and Gemini got an agent mode that lets you run tasks in the background.

  • Coding agent Jules is now in public beta.

Mistral Devstral & Document AI

Mistral logo and colors with cat

Source: Mistral

Mistral released Devstral, an open-source coding model. It’s outperforming larger rivals on coding tasks. The model is small enough to run on a laptop or a single GPU.

Mistral also released Document AI. It extracts text from documents and images with 99% accuracy via a simple API call. It can process thousands of pages a minute.
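To make the "simple API call" concrete, here is a sketch of what a Document AI request could look like. The endpoint path and model name (`mistral-ocr-latest`) are assumptions based on Mistral's published OCR API; the document URL is a placeholder. Check the current API reference before relying on the exact shape.

```python
import json

# Sketch of a Document AI request against Mistral's OCR endpoint.
# Endpoint path and model name are assumptions from Mistral's API docs;
# the document URL is a placeholder.
API_URL = "https://api.mistral.ai/v1/ocr"

payload = {
    "model": "mistral-ocr-latest",
    "document": {
        "type": "document_url",
        "document_url": "https://example.com/invoice.pdf",  # placeholder
    },
}

# The actual call (requires an API key and the `requests` package):
# import requests
# resp = requests.post(
#     API_URL,
#     headers={"Authorization": f"Bearer {MISTRAL_API_KEY}"},
#     json=payload,
# )
# pages = resp.json()["pages"]  # extracted text, one entry per page

print(json.dumps(payload, indent=2))
```

The response comes back page by page, so batch pipelines can fan out documents and collect the extracted text downstream.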

OpenAI’s Agentic App-Building API

OpenAI responses API new features

Source: OpenAI

OpenAI enhanced the Responses API with support for remote MCP servers, image generation, code interpreter, background mode, and more.

The Responses API is the core API for building agentic applications.

OpenAI also joined the MCP steering committee and expects the ecosystem to grow quickly in the coming months.
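As a sketch of how a remote MCP server plugs into the Responses API: the request simply lists the server as a tool. The server label and URL below are illustrative placeholders, and the tool shape follows OpenAI's remote-MCP documentation.

```python
import json

# Sketch of a Responses API request that attaches a remote MCP server
# as a tool. Server label and URL are illustrative placeholders.
request = {
    "model": "gpt-4.1",
    "input": "Summarize the open issues in the repo.",
    "tools": [
        {
            "type": "mcp",
            "server_label": "deepwiki",                    # any name you choose
            "server_url": "https://mcp.deepwiki.com/mcp",  # remote MCP endpoint
            "require_approval": "never",                   # or "always" for manual review
        }
    ],
}

# With the official SDK (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.responses.create(**request)
# print(response.output_text)

print(json.dumps(request, indent=2))
```

The model then discovers the server's tools at runtime and calls them as needed, with `require_approval` controlling whether each call needs a human sign-off.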

New Agentic Features for Cursor

Cursor 0.5 new features

Source: Cursor

AI coding IDE Cursor released version 0.5. This update introduces background agents. You can now run many agents in parallel and have them tackle bigger tasks. Also:

  • Include your entire codebase in context and edit long files faster.

  • Work with multiple codebases in one project.

  • Export chats to Markdown.

  • Improved inline edit and a Tab feature that works across multiple files.

Claude 4 (Opus & Sonnet) Released

Claude 4 model family

Source: Anthropic

Anthropic launched its next generation of LLMs – Claude Opus 4 and Claude Sonnet 4.

Opus 4 is their most intelligent model with advanced reasoning. Perfect for complex agent applications, advanced coding, or human-quality content creation.

Sonnet 4 performs well for high-volume tasks and most AI use cases.

📈 Investments

🇺🇸 Microsoft also unveiled Discovery at its Build conference, a platform that aims to accelerate scientific research. Discovery uses agents to generate ideas, simulate results, and learn from the outcomes.

🇺🇸 Google upgraded Gemini 2.5 Pro and Flash, and a 2.5 Deep Think reasoning mode is being tested. Gemma 3n, a powerful open model optimized for mobile devices, launched in preview. Gemini Live with camera and screen sharing rolled out.

🇺🇸 OpenAI announced its acquisition of io, an AI device start-up co-founded by Apple designer Jony Ive, for $6.5B to bring a new AI-powered device to market. Some details were unveiled about this screen-free product, and Altman predicts it will be a top seller. OpenAI’s data center in Texas secured $11.6B in new funding for its Stargate infrastructure. They also launched Stargate UAE to build data centers in Abu Dhabi.

🇺🇸 Amazon is testing a new generative AI-powered audio feature that synthesizes product summaries and reviews to make shopping easier.

🇺🇸 Oracle is about to invest $40B in Nvidia chips for an OpenAI data center.

🇨🇦 Shopify released Summer 25 Edition with AI-driven features, including shopping AI agents, an automated store builder, and upgrades to its Sidekick AI assistant.

🇪🇺 Capgemini and SAP partner with Mistral to deploy AI for sensitive sectors like financial services, the public sector, aerospace, and defense.

🇨🇳 Tencent introduced Hunyuan Game, the first industrial-grade AI engine for game production and game asset creation.

🇨🇳 ByteDance released BAGEL, an open-source multimodal foundation model with advanced image generation and strong prompt adherence.

🇨🇳 Xiaomi started mass production of its in-house mobile 3-nanometer chip, XRING O1, another step for China towards self-sufficiency.

Why You Need Guardrails for Your AI Agents

Futuristic control room with humanoid robots in front of holographic shields symbolizing guardrails

Source: MBA via Imagen 4

AI agents are powerful, but without safeguards, they can process malicious inputs, generate inappropriate outputs, or waste expensive compute resources.

Guardrails solve this. Let’s take the OpenAI Agents SDK as an example. It lets us:

  • Validate user inputs before processing

  • Check agent outputs before delivery

  • Run parallel safety checks with fast/cheap models

Below, let's implement them!

Consider a customer support agent using an expensive GPT-4 model. We don't want users asking it to do their math homework – that's a waste of resources and off-topic.

Input Guardrails run in parallel with your main agent and can immediately halt execution if they detect violations.

Here's how we define one:

from pydantic import BaseModel
from agents import (
    Agent, GuardrailFunctionOutput, InputGuardrailTripwireTriggered,
    RunContextWrapper, Runner, input_guardrail
)

class HomeworkDetectionOutput(BaseModel):
    is_homework: bool
    reasoning: str

# Fast guardrail agent using a cheaper model
guardrail_agent = Agent(
    name="Homework detector",
    instructions="Detect if the user is asking for homework help.",
    output_type=HomeworkDetectionOutput,
    model="gpt-4o-mini",  # small, cheap model for the safety check
)

@input_guardrail
async def homework_guardrail(
    ctx: RunContextWrapper[None], 
    agent: Agent, 
    input: str
) -> GuardrailFunctionOutput:
    result = await Runner.run(guardrail_agent, input, context=ctx.context)
    
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.is_homework,
    )

Next, we attach this guardrail to our main customer support agent:

# Main agent with expensive model
support_agent = Agent(
    name="Customer support agent",
    instructions="Help customers with product questions and issues.",
    input_guardrails=[homework_guardrail],  # 🔒 Guardrail attached here
)

import asyncio

async def main():
    try:
        # This should trigger the guardrail
        await Runner.run(
            support_agent,
            "Can you solve this equation: 2x + 3 = 11?"
        )
        print("Request processed successfully")

    except InputGuardrailTripwireTriggered:
        print("🚨 Guardrail blocked inappropriate request!")

if __name__ == "__main__":
    asyncio.run(main())

This produces a blocked request – the expensive model never runs!

The guardrail detected homework content and immediately raised an InputGuardrailTripwireTriggered exception, saving compute costs.

But what about checking outputs?

Output Guardrails work similarly but validate the agent's final response:

from agents import (
    Agent, GuardrailFunctionOutput, RunContextWrapper, Runner, output_guardrail
)

class MessageOutput(BaseModel):
    response: str

class SafetyCheckOutput(BaseModel):
    contains_sensitive_data: bool
    reasoning: str

# Guardrail agent that screens responses for sensitive information
safety_agent = Agent(
    name="Safety checker",
    instructions="Check whether the text contains sensitive information.",
    output_type=SafetyCheckOutput,
)

@output_guardrail
async def safety_guardrail(
    ctx: RunContextWrapper,
    agent: Agent,
    output: MessageOutput
) -> GuardrailFunctionOutput:
    # Check if the output contains sensitive information
    safety_result = await Runner.run(
        safety_agent,
        output.response,
        context=ctx.context
    )

    return GuardrailFunctionOutput(
        output_info=safety_result.final_output,
        tripwire_triggered=safety_result.final_output.contains_sensitive_data,
    )

# Attach to agent
agent = Agent(
    name="Support agent",
    instructions="Help customers with their questions.",
    output_guardrails=[safety_guardrail],  # 🔒 Output validation
    output_type=MessageOutput,
)
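When an output guardrail trips, the SDK raises an OutputGuardrailTripwireTriggered exception that you catch the same way as the input case. To show that control flow without needing the SDK installed, here is a minimal plain-Python stand-in; the class and function names are illustrative, not the SDK's own.

```python
import re

class OutputTripwireTriggered(Exception):
    """Stand-in for the SDK's OutputGuardrailTripwireTriggered exception."""

def safety_check(text: str) -> bool:
    # Toy stand-in for the safety agent: flag anything that looks like an SSN.
    return bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", text))

def deliver(response: str) -> str:
    # Run the output guardrail before the response reaches the user.
    if safety_check(response):
        raise OutputTripwireTriggered
    return response

try:
    deliver("Your SSN is 123-45-6789")
except OutputTripwireTriggered:
    print("🚨 Output guardrail blocked the response!")
```

The pattern is the same either way: the response is validated after the agent finishes but before it is delivered, and a tripwire turns an unsafe reply into a catchable exception.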

And that's how we implement Guardrails in the OpenAI Agents SDK!

Key benefits:

⚡ Parallel execution - guardrails don't slow down your main agent
💰 Cost savings - block expensive model calls early
🛡️ Layered defense - combine multiple guardrails for robust protection
🔄 Exception handling - clean error management with try/except blocks

Guardrails are essential for production AI agents, alongside proper authentication, access controls, and monitoring. They're your first line of defense against misuse and unexpected behavior.

Tool Spotlight

👾 AI Apps from Google (Labs)

Source: Google

🧵 Stitch: Lets you vibe-design UIs for applications. Enter text prompts, and it generates polished designs you can export to Figma.

📒 NotebookLM: Android app released. It lets you generate AI podcasts, study guides, briefing documents, and more via mobile.

🎆 Imagen 4: Google’s advanced image model. Free access via the Gemini App.

🌊 Flow with Veo3: Google’s viral AI video creator. Generates sound effects and speech together with realistic video. Requires the Google AI Ultra plan for $125/month.

Community Highlights

More Resources

Blog: In-depth articles on AI workflows and practical strategies for growth
AI Tool Collection: Discover and compare validated AI solutions
Consultancy: I help you discover AI potential or train your team

See you next time!

Tobias from MadeByAgents