Ardalis (Steve Smith) speaking at Techorama 2024

Software Architect • Microsoft MVP • Clean Code Advocate


Helping other software professionals to keep improving!

Recent Blog Posts

.NET Conf Most Popular Sessions Tool


DotNetConf (.NET Conf) is a long-running virtual conference hosted each November alongside the release of a new version of .NET. Its sessions are published to YouTube each year. Which sessions have been the most popular each year? I wrote a tool to pull the stats year by year (since 2021) to find out.
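The heart of such a tool is simple once per-video view counts are in hand (e.g. fetched via the YouTube Data API). Here is a minimal sketch of the ranking step, with illustrative names and numbers, not the actual tool's code:

```python
def top_sessions(videos, n=5):
    """Return the n most-viewed sessions, highest view count first.

    videos: list of (title, view_count) tuples.
    """
    return sorted(videos, key=lambda v: v[1], reverse=True)[:n]


# Illustrative data only -- real counts would come from the YouTube API.
sessions_2023 = [
    ("Session A", 1200),
    ("Session B", 3400),
    ("Session C", 900),
]
print(top_sessions(sessions_2023, n=2))
# prints [('Session B', 3400), ('Session A', 1200)]
```

Run per playlist (one per conference year) and you get the top sessions for each year since 2021.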

Also, why do I keep giving (updated) versions of the same Clean Architecture session each year (most recently last November, as covered by Jeremy Sinclair)? Some have found it a bit… repetitive (though I do mix it up from year to year):

Read More →
AI Benefits - But at What Cost?


In 2026 we can all agree that AI and agentic development are exciting topics which many expect to yield great productivity gains. But as the investor-subsidized pricing of these services gives way to realistic, profitable business models, where will the real costs land?

As many businesses downsize staff or pause hiring to see how these new models and tools actually perform in the real world, the trillion-dollar question is:

Read More →
Use Asciinema and PowerSession on Windows


Introduction

I recently became aware of Asciinema when I saw that the Aspire docs use it to document installing their CLI. As someone who creates a lot of content (or used to, anyway) and docs, I think this could be a really useful tool! But it doesn’t “just work” on Windows, so I figured I’d document how to get it working there for future me.

Asciinema Itself

There are basically two parts to Asciinema:

Read More →
LLMs Need Mark as Answer


Introduction

Today’s LLMs have already ingested essentially all of the publicly available information they can to build their models. To improve further, they’re going to need additional sources of information. One obvious source is the countless interactions they have with their users, and while privacy concerns are certainly relevant here, for the purposes of this article I want to focus on another issue: quality signals. How will these LLMs know whether a given exchange led to a solution (however the user would define one)? Without this knowledge, there’s no way for LLMs to give more weight to answers that ultimately proved fruitful over answers that were useless and led the user to give up and end the session.

Read More →