NullifyNetwork

The blog and home page of Simon Soanes

Archive - Historical Articles


Once you've installed Podman, run the following command in the Windows Terminal to get the Docker CLI installed:-

winget.exe install --id Docker.DockerCLI

With this added, Visual Studio will stop complaining about the lack of Docker Desktop and start running things, and the container tools will also work.  You can also then change the settings that start/stop Docker Desktop in Visual Studio's Tools > Options menu.

For those in a corporate environment, this should save a lot of money on Docker Desktop seats!

Permalink 

After more issues with MS SQL AlwaysOn due to a fibre provider problem bringing one node down, I decided to move my blog to run off a Postgres cluster with read/write targeting.  Thankfully I'd been planning to do this for a long time and had already set up the database, schema and tables, and replicated the data.

I found a few interesting things in doing this and got to use the new keyed services feature in .NET 8, since I wanted the option of alternating between MS SQL and Postgres, and between read and read/write contexts.  That meant making the following changes:-

  1. I created two derived classes from my EF Core model and moved all the Fluent operations into them; initially I just duplicated them.
  2. In the Postgres implementation, I added SnakeCaseNamingConvention, a Default Schema and the citext extension (for case insensitive page names).
  3. Moved all interactions with both EF Core contexts to an interface and extracted all used methods and DbSet<T> declarations to the interface.
  4. In the Startup, I put a condition on which Database:Platform setting was supplied and then set up a scoped database context.  Postgres is very interesting here in that you explicitly set your TargetSessionAttributes on setting up your connection.  Historically it was in the connection string like with MS SQL, but this limits you to only one mode of operation per connection string - now it's dynamic and driven through a DataSourceBuilder.
  5. I also then put a KeyedScoped context with the read-write property set.
  6. Then tagged the constructor parameters that needed the new ReadWrite instance of the context.
  7. Ran things and found it broke in surprisingly few places, mostly where DateTime.Now was used.  I converted these to DateTime.UtcNow, as it should have been all along - and it was happy and worked.

Let's go into more depth on these steps...

Context Changes

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    //Requires the addition of the EFCore.NamingConventions NuGet package
    optionsBuilder.UseSnakeCaseNamingConvention();
    base.OnConfiguring(optionsBuilder);
}

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    //Since I'm using one database with multiple sub-schemas now.
    //MSSQL defaults to dbo and schemas are less strict; Postgres really likes having one specified.
    modelBuilder.HasDefaultSchema("blog");
    //Provides case insensitivity
    modelBuilder.HasPostgresExtension("citext");

    //... the rest of your modelBuilder Fluent entity setup

    modelBuilder.Entity<PageContent>(entity =>
    {
        //Enable case insensitive searching on this field; the column in the database has been changed to this type too.
        entity.Property(e => e.PageId).HasColumnType("citext");
        // ... more setup
    });
}

Base Context Changes

The base context needed a new constructor so that it could accept both the MSSQL and the Postgres DbContextOptions<T> types, plus the new interface - then it was just a list of DbSet<T> properties for the various table representations:-

public partial class NullifyDBEntities : DbContext, INullifyBlog
{
  public NullifyDBEntities() { }
  public NullifyDBEntities(DbContextOptions<NullifyDBEntities> options) : base(options) { }
  protected NullifyDBEntities(DbContextOptions options) : base(options) { }

It was crucial to make the new constructor protected so that the EF Core components don't break - they expect only to be given the generic options type that matches the current class.  Since this base context is never constructed directly any more, I'm tempted to make the other constructors protected too, but I'm wary of the rest of EF being a little unhappy with that.
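
For reference, the derived Postgres context then just funnels its strongly typed options through that protected constructor - a minimal sketch, using the NullifyDbPGContext name that appears in the startup code below:

public class NullifyDbPGContext : NullifyDBEntities
{
    //DbContextOptions<NullifyDbPGContext> isn't a DbContextOptions<NullifyDBEntities>,
    //so it flows through the protected non-generic constructor on the base class.
    public NullifyDbPGContext(DbContextOptions<NullifyDbPGContext> options) : base(options) { }

    //The OnConfiguring and OnModelCreating overrides shown above live here.
}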

The New Interface

While most of the interface is specific to my data types for the blog, there were also a number of operations I wanted to expose on the interface to make them easier to call.  Because of that, I exposed things like SaveChanges, Add, Attach, Update, Remove and SaveChangesAsync - plus inherited from IDisposable and IAsyncDisposable.  Nothing worth providing a code example for, but for completeness:-

public interface INullifyBlog : IInfrastructure<IServiceProvider>, IResettableService, IDisposable, IAsyncDisposable
{
  DbSet<PageContent> PageContent { get; set; }
  // ... lots of other DbSet<T> properties

  int SaveChanges();
  // ... and then a few more utility methods.
}
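
Those remaining members can simply mirror DbContext's own signatures, which means both contexts satisfy the interface without writing any extra code - roughly along these lines (a sketch, not my exact interface):

//Mirroring the DbContext signatures means DbContext's existing implementations satisfy the interface.
EntityEntry<TEntity> Add<TEntity>(TEntity entity) where TEntity : class;
EntityEntry<TEntity> Attach<TEntity>(TEntity entity) where TEntity : class;
EntityEntry<TEntity> Update<TEntity>(TEntity entity) where TEntity : class;
EntityEntry<TEntity> Remove<TEntity>(TEntity entity) where TEntity : class;
Task<int> SaveChangesAsync(CancellationToken cancellationToken = default);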

Controller Changes

The controller changes were extremely minimal; I didn't need to change any business logic - just the type the constructor received (and the property the base class stored), plus adding the new FromKeyedServices attribute where needed.

    [Authorize(Roles = "Editor")]
    public class ManageArticlesController : BaseController
    {

        //The new attribute that provides keyed services.
        public ManageArticlesController([FromKeyedServices("ReadWrite")] INullifyBlog context) : base(context)
        {
        }
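
The base controller just stores the context in the Database property that all the actions already use - roughly like this (a sketch of the shape, not the full class):

public abstract class BaseController : Controller
{
    //The property the existing actions query, e.g. Database.NewsArticle below.
    protected INullifyBlog Database { get; }

    protected BaseController(INullifyBlog context)
    {
        Database = context;
    }
}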

This was surprising - not having to change reams of historic code was great.

One thing I did find when I ran it the first time was runtime errors: I had to use UTC for all my dates and times.  Thankfully there were only a few places where I'd used local times to allow future-dated posting:-

[ResponseCache(Duration = 60, VaryByHeader = "id", VaryByQueryKeys = new string[] { "User" })]
public ActionResult Index(int id)
{
  var article = (from a in Database.NewsArticle
                where a.Id == id
               && a.PostedAt < DateTime.UtcNow
                select new NewsArticleViewModel { Article = a, CommentCount = a.Comment.Count, Author = a.U.Username, CanEdit = false }).SingleOrDefault();

  if (article == default(NewsArticleViewModel))
  {
    return StatusCode(404, "Unable to find an article with that ID");
  }
  else
  {
    return View(article);
  }
}

The people writing the Postgres client have quite rightly made the choice to raise an error at runtime if you use a date affected by a time zone - the number of times not using UTC in a database has bitten people is really high.
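
In practice the fix is just making sure every DateTime that ends up in a timestamp with time zone column has its Kind set to Utc - illustratively (article here is just a stand-in for wherever a post date gets set):

//Before: Npgsql maps DateTime properties to "timestamp with time zone" by default
//and refuses to write values whose Kind is Local or Unspecified.
//article.PostedAt = DateTime.Now;

//After: Kind is Utc, which Npgsql writes without complaint.
article.PostedAt = DateTime.UtcNow;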

Startup Changes

And in the startup class I added support for the new Postgres-based data context, while retaining the original MSSQL context if needed.

if (Configuration.GetValue<string>("Database:Platform") == "Postgres")
{
    services.AddDbContext<NullifyDbPGContext>(
        options => options.UseNpgsql(Configuration.GetValue<string>("Database:ConnectionString"))
    );
    services.AddScoped<INullifyBlog, NullifyDbPGContext>();

    //Enables read-write operations on only those servers that support them, and automatically selects
    //such a server when connecting; otherwise it will use any working server in the cluster.
    //The new keyed service registration here makes things really neat.
    services.AddKeyedScoped<INullifyBlog>("ReadWrite", new Func<IServiceProvider, object, NullifyDbPGContext>((provider, key) =>
    {
        //Uses the Postgres specific data source builder to connect to the correct servers on demand.
        var dsBuilder = new NpgsqlDataSourceBuilder(Configuration.GetValue<string>("Database:ConnectionString"));
        var ds = dsBuilder.BuildMultiHost().CreateConnection(TargetSessionAttributes.ReadWrite);
        var builder = new DbContextOptionsBuilder<NullifyDbPGContext>()
                            .UseNpgsql(ds);
        var context = new NullifyDbPGContext(builder.Options);
        return context;
    }));
}
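
The MSSQL branch is the mirror image, minus the multi-host targeting since AlwaysOn routing lives in the connection string - a sketch only, with NullifyDbMSSQLContext standing in for whatever the MSSQL derived context is actually called:

else
{
    //NullifyDbMSSQLContext is a placeholder name for the MSSQL derived context class.
    services.AddDbContext<NullifyDbMSSQLContext>(
        options => options.UseSqlServer(Configuration.GetValue<string>("Database:ConnectionString"))
    );
    services.AddScoped<INullifyBlog, NullifyDbMSSQLContext>();

    //There's no TargetSessionAttributes equivalent here, so the keyed "ReadWrite" registration
    //just hands back the same scoped context that AddDbContext registered.
    services.AddKeyedScoped<INullifyBlog>("ReadWrite",
        (provider, key) => provider.GetRequiredService<NullifyDbMSSQLContext>());
}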

Docker Environment Changes

And finally, the Docker container can now have Database__ConnectionString and Database__Platform supplied so that these settings can be set per environment.  Importantly, I moved from using : to __ in the environment variable names so that it's consistent across all platforms - .NET's configuration system treats __ as the : hierarchy separator:-

services:
  www:
    image: ServiceProvider.tld/ImageName:Tag
    ports:
      - "80:8080"
    environment:
      Database__ConnectionString: YourConnectionStringHere
      Database__Platform: Postgres
    # Your replication settings and network properties here

And a reminder that the container's internal port number has changed from port 80 in prior .NET versions' Docker support to port 8080.  This is accompanied by the containers no longer running the dotnet process as root, so they're more secure but also harder to troubleshoot.  There's still an option to revert this behaviour, but I don't recommend you do it.

Conclusion

Overall, this was a surprisingly light amount of work, all done in an hour, with more time spent modernising the deployment pipeline and writing this post - especially given I'd also completely redone the schema to use lowercase snake_case during the database migration.  Good fun!

And this codebase has now been upgraded/modernised significantly every year since August 1999, making it 25 years old - which is apt, as my 41st birthday was a few days ago.

Permalink 

Upgrading to .NET 8 revealed a gotcha in the new Docker containers: they changed the port number of the webserver inside the container from port 80 to port 8080, so you need to update any compose configurations or port mappings you have.

The link discussing this is here.

Permalink