Lou Creemers
Build a C# Console Chatbot with Semantic Kernel & Azure OpenAI

Hey lovely readers,

This guide shows you how to connect to Azure OpenAI with Microsoft Semantic Kernel, keep your API key safe with User Secrets, and stream answers from GPT‑4o‑Mini inside a .NET console app.

1. Why use Semantic Kernel?

Semantic Kernel (SK) is a lightweight library that helps your code talk to language models. It works with Azure OpenAI, Azure AI Foundry models like Mixtral or Phi‑3, and even local models. The key benefits are:

  • Easy to swap models: you register each model as a service and switch when needed.
  • Built in chat history: a ChatHistory object remembers what was said and lets you choose how much context to send.
  • Plugins: you can expose your own C# methods so the model can call them.
  • Model agnostic: one code base can run on GPT‑4o today and Llama 3 tomorrow.

In this post we keep things small: no plugins, no tools. We just stream answers from GPT‑4o‑Mini to the console, so you can see how to set up a quick and easy project with Semantic Kernel.

2. What you need

| Tool | Version I used | Note |
| --- | --- | --- |
| .NET SDK | 8.0 or 9.0 | Everything above 7 works; you can also try the .NET 10 preview |
| Azure subscription | – | Needed to create an Azure OpenAI resource and an Azure AI Foundry project |
| GPT‑4o‑Mini deployment | Global Standard | Real-time chat requires this type, not Batch Standard |

2.1. Get started with a console app

# Make a new console app
dotnet new console -n SKConsole
cd SKConsole

# Add NuGet packages
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Connectors.AzureOpenAI
dotnet add package Microsoft.Extensions.Configuration
dotnet add package Microsoft.Extensions.Configuration.UserSecrets

2.2 Save your API key with User Secrets

dotnet user-secrets init                      # adds <UserSecretsId> to the .csproj
dotnet user-secrets set OPENAI_API_KEY "<your‑key>"

dotnet run can now read the key at runtime, and the secret stays out of Git.
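If you want to double-check that the secret was stored before wiring up the app, the same CLI can list everything saved for the current project (this assumes you run it from the project directory that contains the `<UserSecretsId>`):

```shell
# Show all secrets stored for this project
dotnet user-secrets list
```

You should see `OPENAI_API_KEY` followed by the value you set; if the list is empty, re-run the `init` and `set` commands from the project folder.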

3. The whole program

// Program.cs  (top level statements)

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.Extensions.Configuration;

// 1) Load User Secrets and other config
var configuration = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .Build();

// 2) Build the kernel and add our GPT‑4o‑Mini deployment
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o-mini",                                 // must match the deployment name shown in the Azure AI Foundry portal
        endpoint:       "https://myfoundryresource.openai.azure.com/", // endpoint in the Azure AI Foundry resource overview
        apiKey:         configuration["OPENAI_API_KEY"]                // apikey in the Azure AI Foundry resource
                        ?? throw new InvalidOperationException("OPENAI_API_KEY is not set"))
    .Build();

var chat = kernel.GetRequiredService<IChatCompletionService>();

const string systemPrompt = "You are a concise assistant."; // alter this to change the personality of your chatbot
Console.WriteLine("Ask me anything (press Enter on empty line to quit)");

while (true)
{
    string? user = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(user)) break; // if Enter is pressed without input, exit the program

    // Build a short chat history: system + last user message
    var turn = new ChatHistory();
    turn.AddSystemMessage(systemPrompt);
    turn.AddUserMessage(user);

    await foreach (var chunk in chat.GetStreamingChatMessageContentsAsync(turn))
        Console.Write(chunk.Content);

    Console.WriteLine();
}

What the important lines do

| Line | Purpose |
| --- | --- |
| `AddUserSecrets<Program>()` | Loads secrets.json into `IConfiguration` so you can read the key. |
| `AddAzureOpenAIChatCompletion` | Registers the chat model as a service. |
| `deploymentName` | Must match the name you see in the Azure AI Foundry portal. |
| `endpoint` | Use the openai.azure.com host with this builder. |
| Chat loop | Sends the system prompt and the latest user question; the model replies in streaming mode. |
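Streaming is nice for a console, but sometimes you want the whole answer as one string (for logging, for example). As a rough sketch, the same `chat` service from the program above also supports a non-streaming call; the question text here is just an illustration:

```csharp
using Microsoft.SemanticKernel.ChatCompletion;

// Non-streaming variant: wait for the full answer, then print it once.
var history = new ChatHistory();
history.AddSystemMessage("You are a concise assistant.");
history.AddUserMessage("What is Semantic Kernel?"); // example question

// GetChatMessageContentAsync returns the complete reply in one object.
var reply = await chat.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);
```

The trade-off is latency: the user sees nothing until the model has finished, which is why the main program streams instead.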

4. Frequent errors and quick fixes

| Error | Why it happens | How to fix |
| --- | --- | --- |
| 404 Resource not found | Wrong deployment name or wrong endpoint | Use the openai.azure.com URL and the exact deployment name. |
| 400 OperationNotSupported | You used a Batch deployment with the chat API | Deploy the model as Global Standard. |
| API key is null | Key not loaded before use | Make sure `AddUserSecrets` comes before you read the key. |

5. Next steps

  1. Switch models: Use AddAzureAIInferenceChatCompletion and the Foundry endpoint to chat with Phi‑3, Llama 3, and more.
  2. Add memory: Store the full chat and use reducers to keep token costs low.
  3. Try plugins and function calls: Mark C# methods with [KernelFunction] so the model can run them.
  4. Build agents: Combine many skills to reach goals automatically.
  5. Use local models: Connect SK to Ollama or LMStudio with a simple HTTP wrapper.
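As a small taste of step 2, the main program above forgets everything after each turn because it builds a fresh `ChatHistory` per question. A minimal sketch of multi-turn memory, reusing the `chat` service from the full program, keeps one history for the whole session and appends the assistant's replies to it:

```csharp
using System.Text;
using Microsoft.SemanticKernel.ChatCompletion;

// One ChatHistory for the whole session instead of a fresh one per turn.
var history = new ChatHistory();
history.AddSystemMessage("You are a concise assistant.");

while (true)
{
    string? user = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(user)) break;

    history.AddUserMessage(user);

    // Stream the reply and capture it so later turns can refer back to it.
    var reply = new StringBuilder();
    await foreach (var chunk in chat.GetStreamingChatMessageContentsAsync(history))
    {
        Console.Write(chunk.Content);
        reply.Append(chunk.Content);
    }
    Console.WriteLine();

    history.AddAssistantMessage(reply.ToString());
}
```

Note that the history grows with every turn, which is exactly why the reducers mentioned above matter for keeping token costs low.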

That's a wrap!

If you liked this guide, please leave a comment or reach out on social media. I plan to write more posts about memory, plugins, agents, local models, and other cool parts of Semantic Kernel. Stay tuned and happy coding! 🍀
