Microsoft Brings OpenAI’s GPT-OSS-20B Model to Windows 11 via AI Foundry

Microsoft integrates OpenAI’s new lightweight GPT model into Windows 11, expanding AI accessibility.

Emmanuella Madu

Microsoft has announced that OpenAI’s latest open-weight model, gpt-oss-20b, is now available to Windows 11 users via the Windows AI Foundry, a platform designed to give developers and consumers access to AI tools, APIs, and popular open-source models directly on their devices.

In a blog post, Microsoft described the model as “tool-savvy and lightweight,” optimized specifically for agentic tasks like code execution and tool use. It runs smoothly on a variety of Windows hardware, even in low-bandwidth environments, making it ideal for building autonomous AI assistants and embedding AI into real-world workflows.

Launched on Tuesday, gpt-oss-20b is designed to run efficiently on consumer-grade PCs and laptops equipped with at least 16GB of VRAM, a capacity found on higher-end Nvidia and AMD Radeon GPUs. The model was trained using high-compute reinforcement learning, enabling it to perform advanced tasks like web searches or Python code execution as part of a chain-of-thought process.

However, gpt-oss-20b is text-only, lacking the multimodal capabilities of other OpenAI models that support image and audio generation. It is also not without flaws: OpenAI’s internal benchmarking found that the model hallucinated on 53% of questions in PersonQA, a test of factual accuracy about people.

Microsoft says support for macOS and additional devices is on the horizon, though no specific timelines or platforms were disclosed. For cloud users, both gpt-oss-20b and the earlier gpt-oss-120b model are also being made available via Azure AI Foundry, Microsoft’s hosted AI service.

In addition to Azure, Amazon Web Services (AWS) has also rolled out access to the gpt-oss models, giving developers and enterprises multiple pathways to explore these open-weight AI tools.

With this release, Microsoft reinforces its commitment to democratizing AI, enabling more users to explore powerful open models locally, without needing massive infrastructure.
