
Adaptive Hierarchical Knowledge Management - Part I


Introduction

I have a particular passion for a few things related to software development:

  • Data - in the abstract sense, what is it, what are its attributes, why does it exist?
  • Data Relationships - discrete data doesn't have much meaning without the context of the relationships to other data.
  • Architecture - mainly because this is a lost art, and like a piece of art, a well-designed product can be a thing of beauty.
  • User interfaces - creating a truly usable user interface is another lost art.
  • Abstraction - The more concretely the program implements something, the less flexible it is. The more flexible it is, the more complex it becomes to configure. Where's the balance?

What I'm about to embark on is writing an actual product that performs the task of what is called "Knowledge Management." As stated on Bloomfire (a provider of a knowledge management application; I have no affiliation):

"Knowledge management (KM) is more multidimensional, with the ability to search for, capture, update, and maintain relevant information all in a single platform. Anyone can curate or update information in real-time, sparking conversation and collaboration among peers and departments. Think of KM as a subset of content management that specializes in knowledge distribution."

I see knowledge management as an umbrella to many different things:

  • Document management
  • Customer relationship management (CRM)
  • Accounting
  • Project management
  • Government (wide field here, from things like police reports to municipality management to state and federal programs)
  • Engineering
  • Records Management (police, fire, emergency, etc.)
  • Health Services

And silly but useful things like:

  • Contact management
  • Recipes
  • Photo albums
  • Music albums
  • Grocery lists

And community / collaboration things like:

  • Ride sharing board
  • Needs & Gifts
  • Coordinate scheduling a meeting
  • Research

and the list goes on and on.

And each of these ends up being:

  • Custom application development that is either broad enough in scope to become a niche product, or remains an in-house tool.
  • A generalized tool for a particular niche (think Atlassian / JIRA, SalesForce, OnBase, DocuShare, and so forth).

The goal here is not to create a "low-code" or "no-code" application builder, but rather to create a product on which other products can be built. The long-winded description of what I'm writing about in these articles is: "An Adaptive Hierarchical Knowledge Management Meta-Model."

Requirements

  • Visual Studio 2019 or similar
  • .NET Core 3.0
  • TypeScript version 4.2 or higher (from NuGet)

If you try out the code, you'll have to create the tables (see the Schema section below) and edit appsettings.json with your own database connection string.
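
For reference, the code reads a connection string named DefaultConnection (see the TableService below). A minimal appsettings.json might look like this -- the server and database names here are placeholders, not from the article:

JSON
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=localhost;Database=AHKM;Trusted_Connection=True"
  }
}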

Background

Besides feeling personally qualified to write about this subject given the industries I've worked in and my years of software development (good, bad, and ugly), I've also written about these concepts before.

Architecture

Here's the gloriosky picture of what the final product will look like:

Image 1

While a picture can convey a thousand words, it can also lead to a thousand questions. So just enjoy the artwork and I'll explain the pieces as I go along. Beware that this overarching diagram may change as the implementation proceeds in subsequent articles.

First Steps

The first step is probably the most controversial one. Instead of having discrete tables and their associated columns to persist the data, we're essentially going to implement a "database within a database." It's certainly possible that non-relational (NoSQL) databases would work well with this approach, but for now I'm going to implement everything within SQL Server. The idea is this - given:

Image 2

We have a generic way of describing any entity and its fields -- yes, exactly like we describe a table and its columns in SQL.
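
To make this concrete before diving in, here is a sketch of how the entity "Name" is represented as metadata rows rather than as a physical Name table (the values match the walkthrough later in this article):

C#
// Conceptually:
//   Entity:      { ID = 4, Name = "Name" }
//   EntityField: { ID = 1, Name = "FirstName", EntityID = 4 }
//                { ID = 2, Name = "LastName",  EntityID = 4 }
// Each record then becomes one EntityInstance row plus one
// EntityFieldValue row per field value.

But in this implementation, there are immediate concerns.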

Concerns

  1. Performance
    1. Every instance of an entity-field-value will be stored in an "EntityFieldValue" table.
    2. Data type - we need a generic type for the value, and we thus lose the actual type of the data value, which affects search performance.
  2. Rows vs. Columns - the data for each record is now represented in rows rather than columns.
  3. Harder to write basic queries, because pivots are necessary and type conversion is required for searches.

As one poster commented on Stack Overflow:

"Database designs, where variant type of data is stored into columns will almost always come back and be a major issue later on. The quality of data almost always suffers. I have seen this lead to non-atomic data (comma separated values), badly formed date values in string fields, multi-use of a single column and a host of other problems. I would suggest that you look into what it is that they want to store and make the appropriate database design with properly normalized tables."

Solutions

These solutions are not necessarily ideal but go a long way to solving the problems described above.

  1. Proper indexing of the "EntityFieldValue" table.
  2. Use the SQL_Variant type for the value column - quite an amazing feature actually.
  3. Wrap queries such that pivots are automatically performed (not sure how this will work with EF)

Early Implementation Decisions

Entity Framework

Pretty much nothing in this approach is well suited for Entity Framework (EF) or any Object Relational Mapper, as we're not really dealing with concrete tables and their respective models, with the exception of the tables that manage the metadata. And even if we did use an ORM like EF, what's the point? All of the interactions with the database can be handled directly -- either returning the table record or records without having to go through the model's serialization/deserialization process, or by directly working with the JSON body of POST and PATCH calls. Which brings me to the second point - JSON serializers.

System.Text.Json

While there are claims that this is a more performant serializer / deserializer, deserializing JSON into an arbitrary dictionary of key-values results in each value being a JsonElement. This class requires you to either work with the raw text or use one of the Get[SomeType] methods, like GetInt32(), GetDecimal(), GetDateTime(), etc., which defeats the purpose of a general purpose REST API.
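
A minimal sketch of that friction (the payload here is hypothetical):

C#
using System.Text.Json;

var data = JsonSerializer.Deserialize<Dictionary<string, object>>(
    "{\"FirstName\": \"Marc\", \"Age\": 42}");

// Each value is a JsonElement, not a CLR type...
var age = (JsonElement)data["Age"];

// ...so the caller must already know the underlying type to extract it.
int ageValue = age.GetInt32();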

Newtonsoft

So the implementation decision is to use Newtonsoft.Json, as the JSON body can be deserialized directly into a Dictionary<string, object> representation, for example:

C#
public object Insert(string entity, Dictionary<string, object> data)
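
For illustration, a minimal sketch of the deserialization itself (hypothetical payload; Json.NET maps JSON strings and numbers to plain CLR string and long values):

C#
using Newtonsoft.Json;

var data = JsonConvert.DeserializeObject<Dictionary<string, object>>(
    "{\"FirstName\": \"Marc\", \"Age\": 42}");

// data["FirstName"] is a string and data["Age"] is a long -- plain CLR
// values that can be passed straight through to Dapper as parameters.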

Dapper

Dapper provides a useful interface between the database and REST API without requiring backing entities. Furthermore, we can pass in Dictionary<string, object> directly as parameters to Dapper. The disadvantage of Dapper is that we have to generate the SQL for the CRUD operations, as Dapper is not a full-fledged ORM but more of a wrapper around the core ADO.NET methods, but with enough useful smarts such that I don't want to write my own ADO.NET wrapper. And while it can also work with C# models, much of how Dapper is used in this application is model-less.
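
A minimal sketch of model-less Dapper usage (the connection string and data are placeholders); note that the dictionary is passed directly as the parameter object, which is exactly how the TableService below uses it:

C#
using Dapper;
using System.Data.SqlClient;

var connectionString = "..."; // from appsettings.json in the real code
var parms = new Dictionary<string, object> { ["Name"] = "Name" };

using (var conn = new SqlConnection(connectionString))
{
    // No backing model: each row comes back as a DapperRow, which
    // implements IDictionary<string, object>.
    var rows = conn.Query("select * from Entity where Name = @Name", parms)
                   .Cast<IDictionary<string, object>>()
                   .ToList();
}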

Table Guidelines

Every table will have:

  1. A primary key column ID
  2. A bit-type Deleted column to indicate that the record has been deleted. We never physically delete a record.

We rely on this to generate the SQL in a consistent manner.

Initial Schema

The initial schema is really bare bones -- I'm not putting any "smarts" into these tables yet.

Image 3

Here are the initial schemas:

Entity

SQL
CREATE TABLE [dbo].[Entity](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](255) NOT NULL,
[Deleted] [bit] NOT NULL,
CONSTRAINT [PK_Entity] PRIMARY KEY CLUSTERED ...

EntityField

SQL
CREATE TABLE [dbo].[EntityField](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](255) NOT NULL,
[EntityID] [int] NOT NULL,
[Deleted] [bit] NOT NULL,
CONSTRAINT [PK_EntityField] PRIMARY KEY CLUSTERED ...

ALTER TABLE [dbo].[EntityField] WITH CHECK ADD CONSTRAINT [FK_Entity_EntityField]
      FOREIGN KEY([EntityID])
REFERENCES [dbo].[Entity] ([ID])

EntityInstance

SQL
CREATE TABLE [dbo].[EntityInstance](
[ID] [int] IDENTITY(1,1) NOT NULL,
[EntityID] [int] NOT NULL,
[Deleted] [bit] NOT NULL,
CONSTRAINT [PK_EntityInstance] PRIMARY KEY CLUSTERED ...

ALTER TABLE [dbo].[EntityInstance]
WITH CHECK ADD CONSTRAINT [FK_EntityInstance_Entity] FOREIGN KEY([EntityID])
REFERENCES [dbo].[Entity] ([ID])

EntityFieldValue

SQL
CREATE TABLE [dbo].[EntityFieldValue](
[ID] [int] IDENTITY(1,1) NOT NULL,
[EntityInstanceID] [int] NOT NULL,
[EntityID] [int] NOT NULL,
[EntityFieldID] [int] NOT NULL,
[Value] [sql_variant] NULL,
[Deleted] [bit] NOT NULL,
CONSTRAINT [PK_EntityFieldValue] PRIMARY KEY CLUSTERED ...

ALTER TABLE [dbo].[EntityFieldValue] WITH CHECK ADD CONSTRAINT
[FK_EntityFieldValue_EntityInstance] FOREIGN KEY([EntityInstanceID])
REFERENCES [dbo].[EntityInstance] ([ID])
ALTER TABLE [dbo].[EntityFieldValue] WITH CHECK ADD CONSTRAINT
[FK_EntityFieldValue_Entity] FOREIGN KEY([EntityID])
REFERENCES [dbo].[Entity] ([ID])
ALTER TABLE [dbo].[EntityFieldValue] WITH CHECK ADD CONSTRAINT
[FK_EntityFieldValue_EntityField] FOREIGN KEY([EntityFieldID])
REFERENCES [dbo].[EntityField] ([ID])

Implementation

SysAdmin Controller

The first thing we need is a system administrator controller that can work with the Entity and EntityField tables so that we can begin describing the metadata of our entities.

C#
[ApiController]
[Route("[controller]")]
public class SysAdminController : ControllerBase
{
  private IConfiguration cfg;
  private TableService ts;

  public SysAdminController(IConfiguration configuration, TableService ts)
  {
    cfg = configuration;
    this.ts = ts;
  }

  [HttpGet("{entity}")]
  public object GetEntities(string entity)
  {
    var data = ts.GetAll(cfg, entity);

    return data;
  }

  [HttpGet("{entity}/{entityid}")]
  public object GetEntity(string entity, int entityid)
  {
    var data = ts.GetSingle(cfg, entity, entityid);

    return data;
  }

  [HttpPost("{entity}")]
  public object Insert(string entity, Dictionary<string, object> data)
  {
    var newRecord = ts.Insert(cfg, entity, data);

    return newRecord;
  }

  [HttpPatch("{entity}/{entityid}")]
  public object Update(string entity, int entityid, Dictionary<string, object> data)
  {
    var record = ts.Update(cfg, entity, entityid, data);

    return record;
  }

  [HttpDelete("{entity}/{entityid}")]
  public object Delete(string entity, int entityid)
  {
    ts.Delete(cfg, entity, entityid);

    return NoContent();
  }
}

This implements the CRUD operations on our "system" tables, mapped to GET, POST, PATCH and DELETE actions. At the moment, these endpoints will operate on any table that follows the guidelines stated above.

Roles

Later on, I'll add role validation to verify that the user is actually a system administrator and that the tables being manipulated are actually "system" tables in the meta-model.

The TableService

This service provides the SQL generation for the generic CRUD operations. This is a key component of the application! It is intended to provide general purpose services for manipulating tables without any backing model.

Notice that the collection of DapperRow objects returned by Dapper's Query method is cast to List<IDictionary<string, object>>, an interface DapperRow implements. Because DapperRow is a private type within Dapper, this cast is necessary so that we can, if necessary, manipulate the collection or a single record.

Public Methods

For readability, these types are aliased:

C#
using Record = System.Collections.Generic.IDictionary<string, object>;
using Records = System.Collections.Generic.List
                <System.Collections.Generic.IDictionary<string, object>>;
using Parameters = System.Collections.Generic.Dictionary<string, object>;

The public methods implement the CRUD operations:

C#
/// <summary>
/// Returns the DapperRow collection as a collection of IDictionary string-object pairs.
/// </summary>
public Records GetAll(IConfiguration cfg, string tableName, Conditions where = null)
{
  var ret = Query(cfg, tableName, null, QueryFnc, where).Cast<Record>().ToList();

  return ret;
}

/// <summary>
/// Returns the DapperRow as IDictionary string-object pairs.
/// </summary>
public Record GetSingle(IConfiguration cfg, string tableName, int recordId)
{
  var ret = Query(cfg, tableName, recordId, QueryFnc).Cast<Record>().SingleOrDefault();

  return ret;
}

/// <summary>
/// Returns the DapperRow as IDictionary string-object pairs.
/// </summary>
public Record Insert(IConfiguration cfg, string tableName, Parameters parms)
{
  // Returns the full record.
  var ret = Insert(cfg, tableName, parms, QueryFnc).SingleOrDefault();

  return ret;
}

/// <summary>
/// Returns the DapperRow as IDictionary string-object pairs.
/// </summary>
public Record Update(IConfiguration cfg, string tableName, int id, Parameters parms)
{
  // Returns the full record.
  var ret = Update(cfg, tableName, id, parms, QueryFnc).SingleOrDefault();

  return ret;
}

public void Delete(IConfiguration cfg, string tableName, int id)
{
  var parms = new Parameters() { { "Deleted", true } };

  Update(cfg, tableName, id, parms);
}

Except for the Delete method, each of these returns:

  • For a query, the collection of records or a single record when the record ID is provided.
  • For an insert, the newly inserted record.
  • For an update, the updated record.

Behind the Scenes

The query operation is a method passed in as a parameter in the Read, Update, and Create operations and performs the actual call into Dapper:

C#
private Records QueryFnc(SqlConnection conn, (string sql, Parameters parms) qinfo)
{
  var records = conn.Query(qinfo.sql, qinfo.parms).Cast<Record>().ToList();

  return records;
}

The heart of TableService is really the SQL generation. We first have an intermediate layer that determines the SQL depending on whether the operation is a query, insert, or update. Because the delete operation is actually an update (we set the Deleted flag to true), there is no implementation for physically deleting a record. Each of these methods creates the SqlConnection and executes the function passed in as a parameter.

C#
private Records Query(IConfiguration cfg, string tableName, int? id, 
Func<SqlConnection, (string sql, Parameters parms), Records> query, Conditions where = null)
{
  var cs = cfg.GetConnectionString("DefaultConnection");

  using (var conn = new SqlConnection(cs))
  {
    var qinfo = SqlSelectBuilder(tableName, id, where);
    var ret = query(conn, qinfo).Cast<Record>().ToList();

    return ret;
  }
}

private Records Insert(IConfiguration cfg, string tableName, 
Parameters parms, Func<SqlConnection, (string sql, Parameters parms), Records> query)
{
  var cs = cfg.GetConnectionString("DefaultConnection");

  using (var conn = new SqlConnection(cs))
  {
    var qinfo = SqlInsertBuilder(tableName, parms);
    var ret = query(conn, qinfo).Cast<Record>().ToList();

    return ret;
  }
}

private Records Update(IConfiguration cfg, string tableName, int id, 
Parameters parms, Func<SqlConnection, (string sql, Parameters parms), Records> query)
{
  var cs = cfg.GetConnectionString("DefaultConnection");

  using (var conn = new SqlConnection(cs))
  {
    var qinfo = SqlUpdateBuilder(tableName, id, parms);
    var ret = query(conn, qinfo).Cast<Record>().ToList();

    return ret;
  }
}

Lastly, we have the methods that actually generate the SQL and, as necessary, manipulate the parameters we pass in to Dapper.

Note that the current implementation is simply a select *, which is not necessarily what we want. I will address the issue of requesting specific fields later (future article!) when we look at more general purpose queries. Notice that there are two versions of the SQL "select" builder - one for straight queries, and one to select the record after it has been inserted.

C#
private (string sql, Parameters parms) SqlSelectBuilder(string table, int? id)
{
  StringBuilder sb = new StringBuilder();
  sb.Append($"select * from {table} where Deleted = 0");
  var parms = new Parameters();

  if (id != null)
  {
    sb.Append(" and id = @id");
    parms.Add("id", id.Value);
  }

  return (sb.ToString(), parms);
}

private (string sql, Parameters parms) SqlInsertSelectBuilder(string table)
{
  StringBuilder sb = new StringBuilder();
  sb.Append($"select * from {table} where Deleted = 0");
  sb.Append(" and id in (SELECT CAST(SCOPE_IDENTITY() AS INT))");

  var parms = new Parameters();

  return (sb.ToString(), parms);
}

private (string sql, Dictionary<string, object> parms) SqlInsertBuilder
        (string table, Dictionary<string, object> parms)
{
  parms["Deleted"] = false;
  var cols = String.Join(", ", parms.Keys.Select(k => k));
  var vals = String.Join(", ", parms.Keys.Select(k => $"@{k}"));
  StringBuilder sb = new StringBuilder();
  sb.Append($"insert into {table} ({cols}) values ({vals});");
  var query = SqlInsertSelectBuilder(table).sql;
  sb.Append(query);

  return (sb.ToString(), parms);
}

private (string sql, Dictionary<string, object> parms) SqlUpdateBuilder
        (string table, int id, Dictionary<string, object> parms)
{
  var setters = String.Join(", ", parms.Keys.Select(k=>$"{k}=@{k}"));
  StringBuilder sb = new StringBuilder();
  sb.Append($"update {table} set {setters} where id = @id;");
  var query = SqlSelectBuilder(table, id).sql;
  sb.Append(query);

  // Add at end, as we're not setting the ID.
  parms["id"] = id;

  return (sb.ToString(), parms);
}

Notice how the insert SQL appends the SCOPE_IDENTITY()-based select (via SqlInsertSelectBuilder), so the SQL is properly formed to read the record back in after it has been inserted. The primary (no pun intended) reason for this is to return the primary key ID of the newly created record.
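
To make that concrete, this is the SQL the builders above produce for an insert into the Entity table (derived by tracing the code; shown here as a comment):

C#
// SqlInsertBuilder("Entity", new Parameters() { { "Name", "Name" } }) yields:
//
//   insert into Entity (Name, Deleted) values (@Name, @Deleted);
//   select * from Entity where Deleted = 0
//    and id in (SELECT CAST(SCOPE_IDENTITY() AS INT))
//
// so the newly inserted record, including its identity ID, comes back
// from the same round trip.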

Usage Examples

We can use Postman or similar tool to create a simple entity "Name":

curl --location --request POST 'http://localhost:8493/sysadmin/entity' \
--header 'Content-Type: application/json' \
--data-raw '{"Name": "Name"}'

which returns:

JSON
{
  "Name": "Name",
  "Id": 5,
  "Deleted": false
}

and then create its two fields, "FirstName" and "LastName":

curl --location --request POST 'http://localhost:8493/sysadmin/entityfield' \
--header 'Content-Type: application/json' \
--data-raw '{"Name": "FirstName", "EntityID": 4}'

curl --location --request POST 'http://localhost:8493/sysadmin/entityfield' \
--header 'Content-Type: application/json' \
--data-raw '{"Name": "LastName", "EntityID": 4}'

which returns, respectively:

JSON
{
  "Name": "FirstName",
  "EntityId": 4,
  "Id": 1,
  "Deleted": false
}
JSON
{
  "Name": "LastName",
  "EntityId": 4,
  "Id": 2,
  "Deleted": false
}

We can then see these entities with simple GET requests:

sysadmin/entity

which returns:

JSON
[
  {
    "ID":4,
    "Name":"Name",
    "Deleted":false
  }
]

and:

sysadmin/entityfield

which returns:

JSON
[
  {
    "ID":1,
    "Name":"FirstName",
    "EntityID":4,
    "Deleted":false
  },
  {
    "ID":2,
    "Name":"LastName",
    "EntityID":4,
    "Deleted":false
  }
]

or, for example, by specifying the record ID:

sysadmin/entityfield/2

we see:

JSON
{
  "ID":2,
  "Name":"LastName",
  "EntityID":4,
  "Deleted":false
}

The fact that the Deleted flag is returned is sort of annoying, but I'm not going to deal with it now. Because these are soft-deletes, it can actually be useful to review deleted records, so for the moment, I'm going to leave it in.

Model Controller

The idea behind a simple model controller is:

  1. Only query operations are supported.
  2. The APIs in this controller are accessible to any authorized user.
  3. The query for the entity is collated with the corresponding entity fields so you get a hierarchy of the entities and their fields.

Endpoint Implementation

In the implementation, note a couple things:

  1. I have added a Conditions feature.
  2. The GetAll method now has the concept of a where clause, albeit very bare bones.
  3. There is a new method called Collate which is used to add hierarchy to the parent table.
  4. Later on, we'll extend this "conditions" feature to the GetSingle method as well.
C#
[HttpGet("entity")]
public object GetEntities()
{
  var entities = ts.GetAll(cfg, "Entity");
  var entityFields = ts.GetAll(cfg, "EntityField");
  ts.Collate(entities, entityFields, "ID", "EntityID", "EntityFields");

  return entities;
}

[HttpGet("entity/{entityid}")]
public object GetEntity(int entityid)
{
  var entities = ts.GetAll(cfg, "Entity", where: new Conditions() 
                 { new Condition("ID", Condition.Op.Equals, entityid) });
  var entityFields = ts.GetAll(cfg, "EntityField", 
  where: new Conditions() { new Condition("EntityID", Condition.Op.Equals, entityid) });
  ts.Collate(entities, entityFields, "ID", "EntityID", "EntityFields");
  var entity = entities.SingleOrDefault();

  return entity;
}

The Condition Class

We now have the ability to start using conditional (where) clauses via the following helper class. The reason I don't just explicitly code the where clause is that I think it's bad form. While I might implement an operator called "explicit", the idea here is to be able to programmatically create the conditions and then have the Conditions collection determine how to form the SQL.

C#
public class Condition
{
  public enum Op
  {
    Equals,
    NotEqual,
    IsNull,
    IsNotNull,
  }

  public string Field { get; set; }
  public Op Operation { get; set; }
  public object Value { get; set; }

  public Condition(string field, Op operation)
  {
    Field = field;
    Operation = operation;
  }

  public Condition(string field, Op operation, object val)
  {
    Field = field;
    Operation = operation;
    Value = val;
  }
}

The Conditions Class

This class is derived from List<Condition> and implements the SQL generation, with an attempt to make this template driven. I've implemented a few operations as examples.

C#
public class Conditions : List<Condition>
{
  private static Dictionary<Condition.Op, string> opTemplate = 
                            new Dictionary<Condition.Op, string>()
  {
    {Condition.Op.Equals, "[Field] = [FieldParam]" },
    {Condition.Op.NotEqual, "[Field] != [FieldParam]" },
    {Condition.Op.IsNull, "[Field] is null" },
    {Condition.Op.IsNotNull, "[Field] is not null" },
  };

  public void AddConditions(StringBuilder sb, Dictionary<string, object> parms)
  {
    if (this.Count > 0)
    {
      List<string> conditions = new List<string>();

      this.ForEach(c =>
      {
        var parmName = $"p{c.Field}";
        var template = opTemplate[c.Operation];

        template = template
          .Replace("[Field]", c.Field)
          .Replace("[FieldParam]", $"@{parmName}");

        // Add the parameter regardless of whether we use it or not.
        parms[parmName] = c.Value;

        conditions.Add(template);
      });

      var anders = string.Join(" and ", conditions);
      sb.Append($" and {anders}");
    }
  }
}
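
A short usage sketch (with hypothetical values) of what AddConditions produces:

C#
var where = new Conditions()
{
  new Condition("EntityID", Condition.Op.Equals, 4),
  new Condition("Name", Condition.Op.IsNotNull)
};

var sb = new StringBuilder("select * from EntityField where Deleted = 0");
var parms = new Dictionary<string, object>();
where.AddConditions(sb, parms);

// sb is now:
//   select * from EntityField where Deleted = 0
//    and EntityID = @pEntityID and Name is not null
// and parms contains pEntityID = 4 (plus pName = null, added but unused).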

Lastly, in the SqlSelectBuilder, we add a call to add the condition SQL to the where clause:

C#
private (string sql, Parameters parms) SqlSelectBuilder
                     (string table, int? id, Conditions where = null)
{
  StringBuilder sb = new StringBuilder();
  sb.Append($"select * from {table} where Deleted = 0");
  var parms = new Dictionary<string, object>();

  if (id != null)
  {
    sb.Append(" and id = @id");
    parms.Add("id", id.Value);
  }

  // THIS LINE IS ADDED.
  where?.AddConditions(sb, parms); 

  return (sb.ToString(), parms);
}

Usage Example

If we now perform a GET on model/entity, we see the following JSON response:

JSON
[
  {
    "ID":4,
    "Name":"Name",
    "Deleted":false,
    "EntityFields":[
      {
        "ID":1,
        "Name":"FirstName",
        "EntityID":4,
        "Deleted":false
      },
      {
        "ID":2,
        "Name":"LastName",
        "EntityID":4,
        "Deleted":false
      }
    ]
  }
]

And similarly, model/entity/4 will return just the entity and entity fields for the specific entity id.

Dynamic Pivots

We also need the ability to perform CRUD operations on entity instances. While the EntityFieldValue table is a collection of entity-entityfield-value records, we don't want the user exposed to this implementation detail. Instead, the CRUD operations should work with JSON the way we're used to seeing it, for example:

JSON
{
  "FirstName": "Marc",
  "LastName": "Clifton"
}

This requires some pivotal (pun intended) magic in the APIs.

First let's look at what we're trying to accomplish. We have a single entity defined so far:

Image 4

and two entity fields defined for that entity:

Image 5

So let's create entity instances manually. First, we need to create three instances of the "Name" entity:

SQL
insert into EntityInstance (EntityID, Deleted) values (4, 0)
insert into EntityInstance (EntityID, Deleted) values (4, 0)
insert into EntityInstance (EntityID, Deleted) values (4, 0)

Since these are the first records being created, I know the resulting primary key IDs are going to be 1, 2, and 3.

Next, we'll create some instance values for these entity instances:

SQL
insert into EntityFieldValue (EntityInstanceID, EntityID, 
EntityFieldID, value, Deleted) values (1, 4, 1, 'Marc', 0)
insert into EntityFieldValue (EntityInstanceID, EntityID, 
EntityFieldID, value, Deleted) values (1, 4, 2, 'Clifton', 0)

insert into EntityFieldValue (EntityInstanceID, EntityID, 
EntityFieldID, value, Deleted) values (2, 4, 1, 'Gregory', 0)
insert into EntityFieldValue (EntityInstanceID, EntityID, 
EntityFieldID, value, Deleted) values (2, 4, 2, 'House', 0)

insert into EntityFieldValue (EntityInstanceID, EntityID, 
EntityFieldID, value, Deleted) values (3, 4, 1, 'Marcus', 0)
insert into EntityFieldValue (EntityInstanceID, EntityID, 
EntityFieldID, value, Deleted) values (3, 4, 2, 'Welby', 0)

The resulting EntityFieldValue records look like this:

Image 6

which is not at all what we want to return to the client. Instead, we want something that looks like this:

Image 7

Which is accomplished with this SQL:

SQL
select *
from
(
  select ei.ID, ef.Name, [value] 
  from EntityFieldValue efv
  join EntityField ef on ef.ID = efv.EntityFieldID
  join EntityInstance ei on ei.ID = efv.EntityInstanceID
) as t
pivot (max([value]) for [Name] in ([FirstName], [LastName])) as p

Note that the Deleted field has disappeared because it is not in the definition of the entity fields for the entity "Name".

The problem is, that SQL hard-codes the entity field names, and the whole point of this architecture is that the entity fields can be created and removed on the fly. With the help of SQLShack and Aveek Das' post on Dynamic Pivot Tables in SQL Server, we create a stored procedure to execute the SQL, parameterizing the column to pivot, and the list of fields to pivot. We can't do this directly in the above statement -- the only way to parameterize a pivot statement is through a stored procedure that executes the SQL dynamically, like this:

SQL
CREATE PROCEDURE dbo.EntityPivotTable
@ColumnToPivot NVARCHAR(255),
@ListToPivot NVARCHAR(MAX)
AS
BEGIN

  DECLARE @SqlStatement NVARCHAR(MAX)
  SET @SqlStatement = N'
  select *
  from
  (
    select ei.ID, ef.Name, [value] 
    from EntityFieldValue efv
    join EntityField ef on ef.ID = efv.EntityFieldID and ef.Deleted = 0
    join EntityInstance ei on ei.ID = efv.EntityInstanceID and ei.Deleted = 0
    where efv.Deleted = 0
  ) as t
  pivot (max([value]) for [' + @ColumnToPivot + '] in ('+@ListToPivot+')) as p';

EXEC(@SqlStatement)

END

And then we can call the SP with a parameterized list of entity field names, which we know based on the definition in the EntityField table for the particular entity. Done manually, it looks like this:

SQL
EXEC dbo.EntityPivotTable 'Name' ,'[FirstName], [LastName]'

The odd aggregation max([value]) is required by the pivot syntax and, since there is only one value for each record in the given entity field value instance corresponding to the entity field name, effectively, max([value]) == value.

Surprisingly, this even works with null values:

SQL
insert into EntityInstance (EntityID, Deleted) values (4, 0)
insert into EntityInstance (EntityID, Deleted) values (4, 0)

insert into EntityFieldValue (EntityInstanceID, EntityID, EntityFieldID, value, Deleted)
values (4, 4, 1, null, 0)
insert into EntityFieldValue (EntityInstanceID, EntityID, EntityFieldID, value, Deleted)
values (4, 4, 2, 'Q', 0)

insert into EntityFieldValue (EntityInstanceID, EntityID, EntityFieldID, value, Deleted)
values (5, 4, 1, null, 0)
insert into EntityFieldValue (EntityInstanceID, EntityID, EntityFieldID, value, Deleted)
values (5, 4, 2, 'R', 0)

and we see:

Image 8

So now we know what we're doing, at least with regard to queries and how to pivot the records. At the end of the day, I will actually be using SP_EXECUTESQL instead of a stored procedure, as I also need the ability to pass in parameters for the where clause (and other future behaviors), so what will be executed will look like this:

SQL
DECLARE @ColumnToPivot NVARCHAR(255) = 'Name';
DECLARE @ListToPivot NVARCHAR(max) = '[FirstName], [LastName]';
Declare @SqlPivot NVARCHAR(MAX) = N'
  select *
  from
  (
    select ei.ID, ef.Name, [value] 
    from EntityFieldValue efv
    join EntityField ef on ef.ID = efv.EntityFieldID and ef.Deleted = 0
    join EntityInstance ei on ei.ID = efv.EntityInstanceID and ei.Deleted = 0
    where ei.ID = @id
  ) as t
pivot (max([value]) for [' + @ColumnToPivot + '] in ('+@ListToPivot+')) as p';

exec SP_EXECUTESQL @SqlPivot, N'@id int', @id = 1

which will be generated through templates in the SQL builder.

The Entity Controller

The entity controller is quite basic. At this point, I wish I could just create a map between the REST action, endpoint, and the code to call. There probably is a way, but MapRoute doesn't seem to support calling methods, not to mention with dependency injection on the method. So we have this:

C#
[ApiController]
[Route("[controller]")]
public class EntityController : ControllerBase
{
  private IConfiguration cfg;
  private TableService ts;
  private EntityService es;

  public EntityController(IConfiguration configuration, TableService ts, EntityService es)
  {
    cfg = configuration;
    this.ts = ts;
    this.es = es;
  }

  [HttpGet("{entity}")]
  public object GetEntities(string entity)
  {
    // TODO: This needs to support pagination.
    var data = es.GetAll(cfg, ts, entity);

    return data;
  }

  [HttpGet("{entity}/{entityid}")]
  public object GetEntity(string entity, int entityid)
  {
    var data = es.GetSingle(cfg, ts, entity, entityid);

    return data;
  }

  [HttpPost("{entity}")]
  public object Insert(string entity, Dictionary<string, object> data)
  {
    var newRecord = es.Insert(cfg, ts, entity, data);

    return newRecord;
  }

  [HttpPatch("{entity}/{entityid}")]
  public object Update(string entity, int entityid, Dictionary<string, object> data)
  {
    var record = es.Update(cfg, ts, entity, entityid, data);

    return record;
  }

  [HttpDelete("{entity}/{entityid}")]
  public object Delete(string entity, int entityid)
  {
    es.Delete(cfg, ts, entity, entityid);

    return NoContent();
  }
}

The Entity Service

The entity service is where the interesting stuff happens.

The Basic Pivot Template

C#
private const string template = @"
DECLARE @ColumnToPivot NVARCHAR(255) = 'Name';
DECLARE @ListToPivot NVARCHAR(max) = '[cols]';
DECLARE @SqlPivot NVARCHAR(MAX) = N'
select *
from
(
  select ei.ID, ef.Name, [value] 
  from EntityFieldValue efv
  join EntityField ef on ef.ID = efv.EntityFieldID and ef.Deleted = 0
  join EntityInstance ei on ei.ID = efv.EntityInstanceID and ei.Deleted = 0
  where efv.Deleted = 0 and [where]
) as t
pivot (max([value]) for [' + @ColumnToPivot + '] in ('+@ListToPivot+')) as p';

exec SP_EXECUTESQL @SqlPivot, [parms]
";

GetAll

Notice that the qualifier is on the entity ID.

C#
public Records GetAll(IConfiguration cfg, TableService ts, string entity)
{
  var entityFieldInfo = GetEntityFieldNames(cfg, ts, entity);
  var cols = String.Join(",", entityFieldInfo.fieldNames.Select(n => $"[{n}]"));

  var sql = template
    .Replace("[cols]", cols)
    .Replace("[where]", "ei.EntityID = @id")
    .Replace("[parms]", "N'@id int', @id = @pid");

  var cs = cfg.GetConnectionString("DefaultConnection");

  using (var conn = new SqlConnection(cs))
  {
    var records = conn.Query(sql, new { pid = entityFieldInfo.entityId }).Cast<Record>().ToList();

    return records;
  }
}

GetSingle

Notice that the qualifier is on the entity instance ID. We don't need the entity ID in the qualifier because the instance ID is specific to the entity, which should match the entity name we pass in.

C#
public Record GetSingle(IConfiguration cfg, TableService ts, string entity, int instanceid)
{
  var entityFieldInfo = GetEntityFieldNames(cfg, ts, entity);
  var cols = String.Join(",", entityFieldInfo.fieldNames.Select(n => $"[{n}]"));

  var sql = template
    .Replace("[cols]", cols)
    .Replace("[where]", "ei.ID = @id")
    .Replace("[parms]", "N'@id int', @id = @pid");

  var cs = cfg.GetConnectionString("DefaultConnection");

  using (var conn = new SqlConnection(cs))
  {
    var record = conn.Query(sql, new { pid = instanceid }).Cast<Record>().SingleOrDefault();

    return record;
  }
}

GetEntityFieldNames

As the comment states, this could be cached so we're not constantly hitting the DB for this information. The problem then becomes: when is the cache refreshed? (A sketch of one possible answer follows the listing below.)

C#
private (List<string> fieldNames, int entityId) 
GetEntityFieldNames(IConfiguration cfg, TableService ts, string entity)
{
  // TODO: This could be cached so we're not hitting the DB for the entity fields.
  var entityRec = ts.GetSingle(cfg, "Entity", new Conditions() 
                  { new Condition("Name", Condition.Op.Equals, entity) });
  var entityId = entityRec["ID"].ToInt();
  var entityFields = ts.GetAll(cfg, "EntityField", where: new Conditions() 
                     { new Condition("EntityID", Condition.Op.Equals, entityId) });
  var entityFieldNames = entityFields.Select(fields => fields["Name"].ToString()).ToList();

  return (entityFieldNames, entityId);
}

Notice how this leverages the TableService methods. And yes, I'll replace the hard-coded strings with constants at some point soon.
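
As for the caching question above, here's a minimal sketch. The assumption (mine, not the article's) is a single-process cache that the SysAdmin endpoints invalidate whenever Entity or EntityField rows change:

C#
// using System.Collections.Concurrent; (at the top of the file)

private static readonly ConcurrentDictionary<string, (List<string> fieldNames, int entityId)>
  fieldCache = new ConcurrentDictionary<string, (List<string>, int)>();

private (List<string> fieldNames, int entityId) GetEntityFieldNamesCached(
  IConfiguration cfg, TableService ts, string entity) =>
  fieldCache.GetOrAdd(entity, e => GetEntityFieldNames(cfg, ts, e));

// Hypothetical hook, called after any Entity/EntityField mutation.
public static void InvalidateFieldCache(string entity) =>
  fieldCache.TryRemove(entity, out _);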

Usage Examples

With a GET entity/Name, we see the following JSON returned:

[
  {"ID":1, "FirstName":"Marc", "LastName":"Clifton"},
  {"ID":2, "FirstName":"Gregory", "LastName":"House"},
  {"ID":3, "FirstName":"Marcus", "LastName":"Welby"},
  {"ID":4, "FirstName":null, "LastName":"Q"},
  {"ID":5, "FirstName":null, "LastName":"R"}
]

which is exactly what we want returned to the client.

With a GET entity/Name/2, we see just the Name record with ID 2 returned:

{"ID":2,"FirstName":"Gregory","LastName":"House"}

Insert

Insert is actually quite straightforward. We create an EntityInstance record, then create the individual EntityFieldValue records for each key-value pair in the parameters passed in via the JSON body.

C#
public Record Insert(IConfiguration cfg, TableService ts, string entity, Parameters data)
{
  var entityFields = GetEntityFields(cfg, ts, entity);
  var entityFieldInfo = GetEntityFieldNames(cfg, ts, entity);
  var ei = ts.Insert(cfg, "EntityInstance", 
           new Parameters() { { "EntityId", entityFieldInfo.entityId } });
  int entityInstanceId = ei["ID"].ToInt();

  bool parmsMatchEntityDefinition = data.All(kvp => entityFields.records.Any
                                    (r => r["Name"].ToString() == kvp.Key));
  Assertion.IsTrue(parmsMatchEntityDefinition,
    $"One or more fields to insert is not part of the entity {entity} description. " +
    $"Fields are {String.Join(", ", GetBadFields(data, entityFields.records))}");

  // TODO: We need a batch operation.
  data.ForEach(kvp =>
  {
    ts.Insert(cfg, "EntityFieldValue", new Parameters()
    {
      { "EntityInstanceID", entityInstanceId },
      { "EntityID", entityFieldInfo.entityId },
      { "EntityFieldID", entityFields.records.Single
      (r=>r["Name"].ToString() == kvp.Key)["ID"] },
      { "Value", kvp.Value }
    });
  });

  var ret = GetSingle(cfg, ts, entity, entityInstanceId);

  return ret;
}

Note that we verify that the fields being passed are described in the EntityField descriptor for the specified entity.

Usage Example

Given this cURL:

curl --location --request POST 'http://localhost:8493/entity/Name' \
--header 'Content-Type: application/json' \
--data-raw '{"FirstName": "A", "LastName": "B"}'

We get back the created Name entity:

JSON
{"ID": 7, "FirstName": "A", "LastName": "B"}

If we specify in the JSON body a field that does not exist in the EntityField descriptor for Name, like this: {"FirstName": "A", "LastName": "B", "Foo": "C"}, we get back an exception:

System.Exception: One or more fields to insert is not part of the entity Name description. 
Fields are Foo

Update

Again, update is straightforward, except that here we need to obtain the EntityFieldValue ID for each field-value being updated, and rather than creating a new EntityInstance, we verify that the instance exists. We also assert, for each entity-field, that it exists. We could probably do this simply by checking the EntityInstance table, but this is more robust in case a single entity-field was deleted. Also notice the comment block: we have an issue with entity-field names that are the same as the columns in the backing EntityFieldValue table. That is something to fix, particularly for the columns ID and Value, as these are potentially reasonable entity-field names (see the sketch after the listing below).

C#
public Record Update(IConfiguration cfg, TableService ts, string entity, 
                     int entityInstanceId, Parameters data)
{
  var entityFields = GetEntityFields(cfg, ts, entity);
  var entityFieldInfo = GetEntityFieldNames(cfg, ts, entity);
  // We don't want to update the ID, so remove it in case the client passed it in.
  // This is problematic -- if the entity has a field called "ID", 
  // this will remove the legitimate entity-field "ID".
  // Which reveals the more general problem that the columns in the table 
  // that we are updating cannot be used as entity-field names!
  // So we have "reserved" entity-field names that cannot be used 
  // for the entity-field definition, which we should check for when entity-fields are added.
  data.Remove("ID");

  bool parmsMatchEntityDefinition = data.All(kvp => entityFields.records.Any
                                    (r => r["Name"].ToString() == kvp.Key));
  Assertion.IsTrue(parmsMatchEntityDefinition,
    $"One or more fields to update is not part of the entity {entity} description. " +
    $"Fields are {String.Join(", ", GetBadFields(data, entityFields.records))}");

  // TODO: We need a batch operation.
  data.ForEach(kvp =>
  {
    int efId = entityFields.records.Single
               (r => r["Name"].ToString() == kvp.Key)["ID"].ToInt();

    var efv = ts.GetSingle(cfg, "EntityFieldValue", new Conditions()
    {
      new Condition("EntityInstanceID", Condition.Op.Equals, entityInstanceId),
      new Condition("EntityFieldID", Condition.Op.Equals, efId)
    });

    Assertion.NotNull(efv, $"There is no entity {entity} with ID {entityInstanceId}");

    int efvId = efv["ID"].ToInt();
    ts.Update(cfg, "EntityFieldValue", efvId, new Parameters() { { "Value", kvp.Value } });
  });

  var ret = GetSingle(cfg, ts, entity, entityInstanceId);

  return ret;
}
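
Regarding the reserved-name problem flagged above, here is one possible guard (my sketch, not part of the article's code) that the entity-field creation path could call:

C#
private static readonly HashSet<string> reservedFieldNames =
  new HashSet<string>(StringComparer.OrdinalIgnoreCase)
  { "ID", "Value", "EntityID", "EntityInstanceID", "EntityFieldID", "Deleted" };

private void ValidateEntityFieldName(string name)
{
  // Reject names that collide with the backing EntityFieldValue columns.
  Assertion.IsTrue(!reservedFieldNames.Contains(name),
    $"{name} is a reserved entity-field name.");
}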

Usage Example

Using the ID we got back from the Insert API call above, given this cURL:

curl --location --request PATCH 'http://localhost:8493/entity/Name/7' \
--header 'Content-Type: application/json' \
--data-raw '{"FirstName": "AAA"}'

we see the JSON of the updated Name record returned:

{"ID": 7, "FirstName": "AAA", "LastName": "B"}

Delete

Delete is interesting, as it is sufficient to soft-delete the entry in the EntityInstance table, which makes me wonder if the Deleted flag in the EntityFieldValue table is necessary. This call is so simple it makes me wonder if it even needs a method in the EntityService class, but for completeness, and future features such as audit records, it will have its use.

C#
public void Delete(IConfiguration cfg, TableService ts, string entity, int entityInstanceId)
{
  ts.Delete(cfg, "EntityInstance", entityInstanceId);
}

Usage Example

With a GET on entity/Name, we see, among other records, the record with ID 7 that we created earlier:

JSON
... other records ...
{"ID":5,"FirstName":null,"LastName":"R"},
{"ID":7,"FirstName":"A4","LastName":"C4"}]

When we delete the entity instance:

curl --location --request DELETE 'http://localhost:8493/entity/Name/7'

We no longer see that record:

... other records ...
{"ID":5,"FirstName":null,"LastName":"R"}
... ID 7 is gone ...

and looking directly in the EntityInstance table, we see that the instance is soft-deleted:

Image 9

Tooling

Having a funky architecture isn't much use if we don't have tools to work with it. Much of each article will be involved with the tooling and setting up the metadata, which the front end can then use to do something practical. For purposes of these articles, I'm going to rely on js-grid (yes, it requires jQuery), as an open source grid editor. Be warned - while this is a perfectly functional grid, it's pretty cheesy looking. However, it's free, lightweight, and works fine for demonstration purposes.

Serving Pages

Because I created the solution as an ASP.NET Core Web API rather than an ASP.NET Core Web App, for the moment I'm going to serve the built-in pages from the respective controllers, as I don't really have the patience to read through the Static files in ASP.NET Core documentation and then see if it works. There's also what looks to be a simpler-to-understand write-up on Stack Overflow. (A sketch of the static-files approach appears at the end of this section.) So at the moment, we have:

In the EntityController:

C#
[HttpGet()]
public object GetEntityPage()
{
  string contentRootPath = env.ContentRootPath;
  string filePath = Path.Combine(contentRootPath, "Website\\entity.html");

  return PhysicalFile(filePath, "text/html");
}

and in the SysAdminController:

C#
[HttpGet()]
public object GetSysAdminPage()
{
  string contentRootPath = env.ContentRootPath;
  string filePath = Path.Combine(contentRootPath, "Website\\sysadmin.html");

  return PhysicalFile(filePath, "text/html");
}

and a new WebsiteController:

C#
[ApiController]
[Route("Website")]
[Route("Scripts")]
public class WebsiteController : ControllerBase
{
  private static Dictionary<string, string> mimeMap = new Dictionary<string, string>()
  {
    {".js", "text/javascript" },
    {".ts", "text/javascript" },
    {".css", "text/css" },
    {".map", "text/map" },
  };

  private IWebHostEnvironment env;

  public WebsiteController(IWebHostEnvironment env)
  {
    this.env = env;
  }

  [HttpGet("{fn}")]
  public object GetFile(string fn)
  {
    string contentRootPath = env.ContentRootPath;
    string filePath = Path.Combine(contentRootPath, $"Website\\{fn}");
    string ext = Path.GetExtension(fn);
    string mime = mimeMap[ext];

    return PhysicalFile(filePath, mime);
  }

  [HttpGet("lib/{fn}")]
  public object GetLibFile(string fn)
  {
    string contentRootPath = env.ContentRootPath;
    string filePath = Path.Combine(contentRootPath, $"Website\\lib\\{fn}");
    string ext = Path.GetExtension(fn);
    string mime = mimeMap[ext];

    return PhysicalFile(filePath, mime);
  }
}

If you're cringing at this point, that's fine. The idea here is to provide some built-in functionality for editing the entity model and the actual entity records.
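
For comparison, the "proper" approach deferred above is the standard static-files middleware. A minimal sketch of what Startup.Configure would gain, assuming the pages move under wwwroot:

C#
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
  // Serves files from wwwroot with correct MIME types, which would
  // replace the hand-rolled WebsiteController above.
  app.UseDefaultFiles();
  app.UseStaticFiles();

  app.UseRouting();
  app.UseEndpoints(endpoints => endpoints.MapControllers());
}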

The Front-End

The front-end is a TypeScript application. The only code I'm going to present here is the AhkmService class which interfaces with the API discussed above:

TypeScript
export class AhkmService extends Rest {

  // Entity Model:

  public async getEntities(): Promise<Entity[]> {
    return this.get(`${window.location.origin}/sysadmin/entity`);
  }

  public async getEntityFields(): Promise<EntityField[]> {
    return this.get(`${window.location.origin}/sysadmin/entityfield`);
  }

  public async addEntity<T>(entity: string, data: object): Promise<T> {
    return this.post(`${window.location.origin}/sysadmin/${entity}`, data);
  }

  public async updateEntity<T>(entity: string, id: number, data: object): Promise<T> {
    return this.patch(`${window.location.origin}/sysadmin/${entity}/${id}`, data);
  }

  public async deleteEntity(entity: string, id: number): Promise<void> {
    return this.delete(`${window.location.origin}/sysadmin/${entity}/${id}`);
  }

  // Records:

  public async getRecords(entity: string): Promise<EntityRecord[]> {
    return this.get(`${window.location.origin}/entity/${entity}`);
  }

  public async addRecord(entity: string, data: object): Promise<EntityRecord> {
    return this.post(`${window.location.origin}/entity/${entity}`, data);
  }

  public async updateRecord(entity: string, id: number, data: object): Promise<EntityRecord> {
    return this.patch(`${window.location.origin}/entity/${entity}/${id}`, data);
  }

  public async deleteRecord(entity: string, id: number): Promise<void> {
    return this.delete(`${window.location.origin}/entity/${entity}/${id}`);
  }
}

Editing the Entity Model

Path: /sysadmin

In this screenshot:

Image 10

We can see the Name entity with its two fields, FirstName and LastName, and another entity that I created, Contact:

Image 11

Editing Records

Path: /entity

Notice we have two grids - one for the entity, and on the right, the records associated with that entity:

Image 12

If we click on the entity called "Name", we see the records we created earlier:

Image 13

We can create, update, and delete these records as if they were backed by an actual table.

Similarly, for the Contact entity, the records grid shows the fields I defined above and any data:

Image 14

So that's it -- we have a functional prototype of the core concept of the Adaptive Hierarchical Knowledge Management Meta-Model, particularly the "Adaptive" part.

Conclusion

This starts to bring to fruition, as a web API, the work that I did in 2011 (ten years ago!) on Relationship Oriented Programming.

What was Accomplished?

  • The API presented here allows us to create object models without requiring physical tables of each "entity" in the database.
  • C# class models on the back end are eliminated.
  • The UI is adaptive to the fields specified in the model definition tables.
  • This is accomplished with four small meta-model tables.
  • The API lets us perform CRUD operations on the meta-model tables.
  • The API lets us perform CRUD operations on the model instances in the way the client expects to work with record collections.

Some Powerful Artifacts of this Approach

  • Because all the field values for each entity are stored in a single table, it becomes trivial to search across entities for a particular value.
    • Let's say, in a records management application, I want to find all the records across police, fire, court, and property entities.
    • Assuming the field(s) for the person's name are defined consistently for each of these entities, we can now return the entities where that person's name occurs.
    • This is also typically an artifact of NoSQL databases.
  • It becomes trivial to customize entities and entity fields, without touching the schema, the back end, or the front-end.
    • No more migrations.
    • No more code releases.
    • The front-end, if designed to be adaptive to the fields in the entity, doesn't require updating either.
    • No more tables and models that have hundreds of fields that are mostly unused but exist for specific customer requirements.

Business logic of course benefits from entity models. In cases where business logic is required, a future article will demonstrate how business logic, along with classes that map to the desired models, can be implemented as plugin services that don't affect the core application code, and how extension methods could also be used on the Dictionary<string, object> that represents the record.
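
As a taste of the extension-method idea, here's a hypothetical helper (not part of this article's code):

C#
public static class RecordExtensions
{
  // Typed access to a field in the Dictionary<string, object> record.
  public static T Field<T>(this IDictionary<string, object> record, string name) =>
    (T)Convert.ChangeType(record[name], typeof(T));
}

// Usage: string firstName = record.Field<string>("FirstName");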

What I'll cover in Part II

Part II will implement the hierarchical aspect of the project's name: Adaptive Hierarchical Knowledge Management Meta-Model.

Image 15

New Features:

  • The next piece to add is the ParentChildMap instances and the AllowableParentChildRelationship model, which will allow us to do a lot more interesting things by creating a hierarchy of entity relationships. I'll demonstrate how we can create a recipe book with the following entities:
    • The recipe creator (a Name entity)
    • The recipe ingredients (a new RecipeIngredient entity)
    • The recipe directions (a new RecipeStep entity)
  • And we will see how we can work these relationships in different ways:
    • What recipes do we have?
    • What recipes do we have from a particular person?
    • What recipes do we have that use a particular ingredient?
  • Ordering entity fields so we can rearrange the default order of the record grid's columns.

Refactoring:

  • Hard-coded actual table and table column names.
  • Review naming convention as I'm using the term "entity" to refer to both the model and the collection names the user defines in the model.
  • Serving the built-in web pages properly rather than the kludge I have currently implemented.

Constraints:

  • EntityField should be unique for EntityID, Deleted.
  • Entity should be unique for Name, Deleted.
  • EntityInstance should be unique for InstanceID, Deleted.
  • EntityFieldValue should be unique for InstanceID, EntityID, EntityFieldID, Deleted.

Indexing:

  • Indexing needs to be implemented for performance.

That should be enough for Part II!

History

  • 1st April, 2021: Initial version

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


