As part of our build process, I needed to version the assembly with the latest build number. This is the first step in the build pipeline. Initially, I investigated doing this using the dotnet command as below.
dotnet build MyProject.csproj --configuration Release /p:Version=%1

The %1 parameter is the latest build number and is passed into the script via a build step. This command builds the project using the specified arguments and creates the build artifacts in the bin folder. The built assembly in the bin folder will correctly have the version number set as per the command. So if %1 has been set to 1.0.0.0, then right-clicking on the assembly (or EXE) in the bin folder will show a version number of 1.0.0.0. All of this works exactly as it should.
The problem I was having, however, is that as part of our release pipeline I deploy the web application to our Azure hosting. To deploy to Azure using the Azure deploy task you need to create a .zip file. I create the .zip file using an MSBUILD command, not from the output of the previous dotnet build command. The MSBUILD command uses the current project files and creates the .zip file from them. Therefore the .csproj file needs to contain the correct version number before I run the MSBUILD command that creates the .zip file.
I therefore needed to find a way to update the version number in the .csproj file before I ran the MSBUILD command. That way the version number of the assembly that gets deployed to our Azure hosting will have the correct version number.
Here is part of a .csproj file showing the version number.
<Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<Version>1.0.0.0</Version>
</PropertyGroup>
</Project>

I began to investigate how to update the version number within the .csproj file directly. I thought I could implement a simple PowerShell (PS) script to do this. By updating the version number directly in the .csproj file (and this being the first step in the build pipeline), every subsequent step that referenced the version number would be referencing the correct version number.
After some Googling and trial-and-error I came up with the following PS script.
cls
Write-Host "Versioning started"
"Sources directory " + $Env:BUILD_SOURCESDIRECTORY
"Build number " + $Env:BUILD_BUILDNUMBER
$csprojfilename = $Env:BUILD_SOURCESDIRECTORY+"\MyProject.csproj"
"Project file to update " + $csprojfilename
[xml]$csprojcontents = Get-Content -Path $csprojfilename;
"Current version number is " + $csprojcontents.Project.PropertyGroup.Version
$oldversionNumber = $csprojcontents.Project.PropertyGroup.Version
$csprojcontents.Project.PropertyGroup.Version = $Env:BUILD_BUILDNUMBER
$csprojcontents.Save($csprojfilename)
"Version number has been updated from " + $oldversionNumber + " to " + $Env:BUILD_BUILDNUMBER
Write-Host "Finished"

The environment variables $Env:BUILD_SOURCESDIRECTORY and $Env:BUILD_BUILDNUMBER are both provided by the Microsoft build environment. You don't need to do or set anything to have access to these; they are provided straight out of the box. In fact, there are many more such variables available that you may find useful in your other build scripts and processes.
The PS script fetches the latest build number and the folder path of where the source files are located on the build server. It then reads the contents of the .csproj file as an XML document, sets the version number to the latest build number (as provided by the build environment), and saves the updated .csproj file. That's it. That's all you need to do to update the version number in your .csproj file.
I initially thought that this would involve some horrible string search and replace to update the .csproj file. Thankfully, PowerShell contains native support for manipulating XML files, and it was in fact much easier than I thought. So if you need to update your .NET project's .csproj version number, feel free to use this script.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
For anyone who uses Azure Powershell scripts, it will come as no surprise that as of November last year, Microsoft deprecated all their existing Powershell functions relating to Azure. These have been replaced with new Azure Resource Manager (ARM) equivalents, which supersede the previous suite of Azure Powershell functions. Microsoft began deprecating these as far back as 2018, but due to pressure from the development community, delayed this for a further year to give developers more time to find workarounds and / or solutions.
I was blissfully unaware of this fact, as I do not make any direct use of their Azure Powershell functions anywhere in our development ecosystem. I only became aware of the fact that these Azure Powershell functions were deprecated when our Team Foundation Services (TFS) deployments to Azure started failing. Unknown to me, the TFS Azure Web App Deployment task uses an Azure Powershell script to push deployments to Azure. This Powershell script implemented several of the now deprecated Azure Powershell functions. Hence it began failing as the functions it was attempting to invoke no longer existed.
This left me in the rather awkward predicament of not being able to deploy any of our ASP.NET Web API services to Azure. All of our web and mobile apps consume these services, so until I found a solution we would be unable to release any new features and / or maintenance fixes for any of the apps that consume them.
I began scrambling to look for a solution. I came across several articles that outlined the replacement functions and how to implement them. This was all well and good, but I didn't particularly want to have to rewrite the entire Microsoft Azure Powershell script that was responsible for deployments using the new ARM functions. I tried looking for a new version of the original Azure Powershell script, but could only find the existing version on Github. There didn't seem to be an updated version of this script anywhere.
I could either rewrite the original Azure Powershell script myself (which for obvious reasons I wasn't particularly keen on doing), or try to find another solution. After some further head scratching, I had an epiphany. Why didn't I write an FTP script that would deploy to Azure? From the Azure portal for a web app, you are able to download the publishing profile. The publishing profile (among other things) contains the FTP details required to connect to your Azure web app.
So I looked at writing a simple FTP script. I needed an FTP client that could be invoked from the command line (and therefore be scripted) as I didn't want this to be a manual task. I wanted this to work exactly as it already did i.e. to be a TFS task (and also because after each deployment to a particular endpoint I run a suite of unit tests to ensure that the code all works correctly from the Azure hosting).
I had previous experience of using WinSCP so knew that it would do the job. I had only ever previously used WinSCP for uploading single files, never for uploading the hundreds of files our ASP.NET Web API services contained. After some further research I discovered that WinSCP contains a command called synchronize. This command synchronizes a local directory with a remote one, including all sub-directories. This was exactly what I needed.
I eventually wrote the following scripts that deploy our services to Azure. They are invoked from our TFS build so there is no change to our deployment process whatsoever.
Here is the batch script that is invoked from the TFS deployment task.
@echo off
cls
"C:\Program Files (x86)\WinSCP\WinSCP.com" /script=ftpDeployServices.ftp

Here is the FTP script containing the WinSCP FTP commands (ftpDeployServices.ftp).
option batch abort
option confirm off
option transfer binary
open ftp:
cd /site/wwwroot
synchronize remote -delete <mylocalfolder> /site/wwwroot
close
exit

The synchronize command does all the heavy lifting of checking the timestamps and uploading only those files that have changed. The -delete parameter removes any files from the remote site that don't exist locally. If no files have been changed since the last deployment, then nothing is uploaded. This results in much faster deployments, as only the incrementally changed files are ever uploaded.
Hand rolling my own FTP script means that I am no longer at the mercy of any future changes to Microsoft related Powershell scripts or infrastructure, which is one less headache to worry about. In fact, the deprecation of the previous Azure Powershell scripts was a blessing in disguise, as my workaround is faster and reduces our dependency on Microsoft infrastructure. A good result given the severity of the impact that this change had on our deployment process.
When I first began writing the ASP.NET WebAPI services for our vehicle telemetry tracking, it was a small project with only a few controllers. The log data that was generated was small in volume and I would query the logs with relative ease. Fast forward to today (over 3 years later), and the ASP.NET WebAPI project now contains services that are consumed by our mobile app, web apps, licence and MOT checking apps etc. Basically, the project has grown substantially since I first created it.
At the time I first wrote the ASP.NET WebAPI services, I investigated many of the logging frameworks targeted at the .NET Framework. I looked at Log4Net, NLog and Elmah. They all looked feature rich and customisable, but in the end I rolled my own logging framework. My requirements at the time were quite simple, as were the volumes of log data being generated, so a simple logging framework of my own met them.
The log data that is generated today can get very large, and querying it can be time consuming. A single build can load 10k rows of log data. We have console apps, mobile apps and web apps all running on a daily basis that also contribute to the volume of log data that is generated. It is not uncommon to have 100s of thousands of log entries created each day. Trying to diagnose a problem with so much data to query is not always a simple task. This is where structured logging comes into play.
What is structured logging?
Log data typically consists of log messages: unstructured strings of text. Diagnosing an error entails searching through these text strings looking for the information we need.
Typical log messages might look something like this.
INFORMATION - 2019-12-24 - This is a log message
ERROR - 2019-12-24 - Something went wrong
INFORMATION - 2019-12-24 - This is another log message

Such unstructured data makes it difficult to query for useful information. As a developer it would be really helpful to be able to query and filter your log data, for example by a specific date, customer, log entry type etc. The aim of structured logging is to solve this problem: to bring structure to unstructured data to allow it to be usefully queried and filtered to find the information you need in a timely manner. For log data to be queried and filtered requires the data to be mapped into a format that allows this, e.g. XML and JSON would be ideal candidates.
What was clear was that my initial logging framework was no longer fit for purpose. The volumes of log data being created each day were getting increasingly large, and I was no longer able to query the data in a sensible and timely manner. That's when I came across the notion of structured logging. After much reading around and looking at other frameworks that support it, I decided that I would update my own logging framework rather than adopt one of the .NET logging frameworks (again). Although the frameworks mentioned earlier all now provide support for structured logging, Serilog seems to be the preferred choice due to it having supported structured logging from the very beginning (whereas the others added support after the fact). The reason for updating my own logging framework rather than using a 3rd party one was that I had already invested a lot of time and effort into my own framework, and it would probably be more straightforward to update what was already there than to start again with a new logging framework.
How have I implemented structured logging?
I have added a new column to my log table. Previously my log table consisted of a log message column that would contain text strings (as in the examples given previously). All the key information would be contained within the log message as in the following example.
INFORMATION - 2019-12-24 - User Joe has logged onto the system

The user name 'Joe' is what I would be looking for in the log message. As can be seen, it is part of the log message itself. With structured logging all such information is now stored in JSON (other formats are available). My log table is composed of the following columns, with the Properties column storing the structured logging data in JSON format.
LogType    : INFORMATION
Created    : 2019-12-24 10:45
Message    : User has logged onto the system
Properties : { "username": "Joe" }

N.B. one added benefit of rolling my own logging framework is that I can implement as many log types (the LogType column in the above example) as I need (I currently have ERROR, INFORMATION, EXCEPTION, DEBUG). I have a .config setting that allows me to switch the DEBUG log data on or off. Currently I have this set to ON for my unit test projects and OFF for production.
The biggest challenge was not so much updating my logging framework to implement structured logging, but to update all the areas of the code where I am sending output to the log (which is everywhere). I am still currently in the process of doing this.
The signature for my log message method looks like this.
protected void LogMessage(string message, LogLevelTypes loglevel, Dictionary<string, string> properties = null)
{
}

The LogLevelTypes type is an enum containing the values ERROR, EXCEPTION, INFORMATION and DEBUG. The structured log data is passed to the method as a Dictionary of key-value pairs. The Dictionary is then serialised before being sent to the log table.
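As an illustration, here is a minimal sketch of how the body of such a method might serialise the properties, assuming Newtonsoft.Json is available; SaveLogEntry is a hypothetical stand-in for the actual log table write, not the real method name.

```csharp
using System;
using System.Collections.Generic;
using Newtonsoft.Json;

protected void LogMessage(string message, LogLevelTypes loglevel,
    Dictionary<string, string> properties = null)
{
    // Serialise the structured properties to JSON; a null dictionary becomes
    // an empty object so the Properties column always contains valid JSON.
    string json = properties == null
        ? "{}"
        : JsonConvert.SerializeObject(properties);

    // SaveLogEntry is a hypothetical helper that writes the LogType, Created,
    // Message and Properties columns to the log table.
    SaveLogEntry(loglevel, DateTime.UtcNow, message, json);
}
```

Serialising at the point of logging keeps the call sites simple: callers just build a dictionary of key-value pairs and never need to know the storage format.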
With your log data now stored in a structured format, you are able to utilise one of the many tools available for viewing structured log data, e.g. Prefix from Stackify.
I haven't yet got round to investigating any of the tools available for viewing structured data, but when I do I will be sure to write an article about it on here. Until then, have a Merry Christmas and a Happy New Year.
I recently came across an issue with several of our ASP.NET WebAPI services which were consuming a third-party set of APIs. These third-party APIs were configured to disable any requests from clients that were using TLS 1.0/1.1. Unfortunately, this included our own APIs. All requests to the third-party API were returning empty responses. After some discussion with one of the developers of the third-party APIs, he suggested the issue may be related to TLS 1.2 not being supported as he had seen the issue before.
Firstly, what is TLS? Here's a definition from a Microsoft article that describes TLS best practices with regards to the .NET Framework.
Quote: The Transport Layer Security (TLS) protocol is an industry standard designed to help protect the privacy of information communicated over the Internet. TLS 1.2 is a standard that provides security improvements over previous versions. TLS 1.2 will eventually be replaced by the newest released standard TLS 1.3 which is faster and has improved security - Transport Layer Security (TLS) best practices with the .NET Framework | Microsoft Docs
I was able to run the third-party APIs from our local test environment, but not when I ran them from our staging / production environments which were hosted on Azure. I had to make several changes, including code changes to the ASP.NET WebAPI services and changes to our Azure hosting environments.
As many current servers are moving towards TLS 1.2/1.3 and removing support for TLS 1.0 /1.1, connectivity issues between newer servers and older (legacy) .NET applications are becoming more common. Installing a newer version of the .NET Framework onto your development environment is not the answer. The solution is down to the version of the .NET Framework used for compiling your project. This is what actually matters when it comes to selecting the supported TLS version during the TLS handshake.
In this article I will describe the changes I have made to our Azure hosting (where our ASP.NET WebAPIs are hosted) and the code changes which enabled TLS 1.2 support.
Upgrading our Azure hosting to support TLS 1.2
More accurately the changes I have made to our Azure hosting have removed support for earlier versions of TLS i.e. TLS 1.0/1.1. Although this change was not strictly necessary to fix the problem I was experiencing, it was appropriate in terms of tightening up the security of our ASP.NET WebAPIs and to ensure that our own APIs can only be accessed by clients that support TLS 1.2. This is quite simply achieved by opening the Azure portal and navigating to the App Service hosting. From there the TLS/SSL Settings blade can be selected.
I have set this to TLS 1.2 for both our staging and production environments. This sets the minimum TLS version. Therefore our hosting environments will no longer accept requests from earlier versions of TLS.
Code changes to support TLS 1.2
The version of the .NET Framework your project targets dictates the possible solutions available to you. If your project compiles against .NET Framework >= 4.7 then you are already good to go. Applications developed in .NET Framework 4.7 or greater automatically default to whatever the operating system they run on considers safe (which currently is TLS 1.2 and will later include TLS 1.3).
If your application has been developed in a version of the .NET Framework prior to 4.7 then you have two options.
- Recompile your application using .NET Framework 4.7 or greater
- If recompiling your application is not something you can do then you can update your .config file by adding the following.
<configuration>
<runtime>
<AppContextSwitchOverrides value="Switch.System.Net.DontEnableSystemDefaultTlsVersions=false"/>
</runtime>
</configuration>

Also make sure you have the following set in your .config file.
<system.web>
<compilation targetFramework="x.y.z" />
<httpRuntime targetFramework="x.y.z" /> <!-- this is the important one! -->
</system.web>

It is obviously preferable if the x.y.z values are the same, i.e. that the application is compiled against and runs against the same .NET Framework version. So in the code sample, x.y.z could be 4.6.1 or some other version of the .NET Framework prior to 4.7.
In the cases where recompilation is not an option and you need to update your .config file instead (as described above), this should be viewed as a temporary workaround. The preferred (and best practice) solution is to recompile your application as soon as possible.
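For completeness, a pre-4.7 application can also opt in to TLS 1.2 in code via ServicePointManager. Like the .config switch, this should be viewed as a stop-gap rather than the preferred fix, since hard-coding protocol versions pins the application to today's standards.

```csharp
using System.Net;

// Add TLS 1.2 to the set of allowed protocols without removing any that
// are already enabled (hence |= rather than a plain assignment).
ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
```

This is typically done once at application start-up (e.g. in Global.asax or Main), before any outgoing HTTPS requests are made.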
Microsoft have put together a useful document describing the best practices relating to TLS 1.2. The advice in this document should be carefully read and understood in order to fully secure your application.
So if your application doesn't already support TLS 1.2 then it's a good idea to put aside some time to make sure it does. Ensuring your application is up-to-date in terms of security can only be a good thing.
The latest version of our app is nearly ready to drop into the app stores. It's been a challenging project with more than a few hurdles along the way. The initial project remit was to add the ability for our drivers (the users of the app) to track their journeys to allow them to make mileage expenses claims. By logging their journeys, drivers would have evidence of their claimed mileages. At the outset this seemed like a really useful feature that should be straightforward. Tracking a user's movements using an app is certainly nothing new, so we didn't foresee the issues we would later encounter.
The current app is developed using Xamarin Forms. All functionality and business logic is supplied to the app by way of ASP.NET WebAPI RESTful services, all of which are hosted on Azure. All of the services are processed by an Azure Service Bus to ensure we can scale the app and to add resiliency. The majority of the code within the current app is shared code (Xamarin Forms allows the code that is the same across the different platforms to be contained in one project, whilst the Android and iOS specific code is contained in separate projects respectively).
We began the project with the aim of keeping as much of the journey logging code in the shared project to keep the platform specific code to an absolute minimum. This ambition was quickly forgotten when we got down to the details of the project. We soon realised that it wasn't possible to fully realise the journey logging functionality without writing a lot of platform specific code as so much of it was tied to the specific hardware on the devices. Although the cross-platform geolocator service we used ran on both platforms (courtesy of James Montemagno), we wanted the ability to run the tracking service as a background process.
Current devices have very strict constraints as to how they will execute long running processes (and quite rightly too). We needed to run the journey logging in the background, as modern platforms just don't allow a long running process to keep executing once the app is no longer in the foreground. These constraints differ across platforms: Android and iOS handle long running processes differently, and their constraints and solutions are unsurprisingly different too.
We next wanted to enable local push notifications to keep the driver informed that the tracking service was still recording in the background. Especially if the user brought another app to the foreground, made a phone call, or in some way forced our app to the background. Local push notifications are handled completely differently on the different platforms, leading to further platform specific code. Implementing the journey logging service as a background process, and implementing local push notifications all entailed having to write vast swathes of platform specific code.
All of these platform specific deviations brought up brand new problems, and exposed the many discrepancies between Android and iOS. Although Xamarin Forms does a magnificent job of hiding as many of these deviations as possible, there were many times on this project when we were fully exposed to the inner workings of the platforms and needed a deep understanding of the native APIs. iOS in particular threw up many problems: it was incredibly difficult to submit large journeys from a background service. On Android, it was relatively straightforward getting large uploads to run in the background; it was far more technically challenging on iOS due to its vastly more restrictive environment and permissions.
The services that support and provide all the functionality to the app are all ASP.NET WebAPI RESTful services. The services needed to support the journey logging functionality were initially thought to be straightforward. All we would need were services that would allow journeys to be created, updated and deleted from the device. During initial testing with the app, we came across several issues when trying to submit journeys from the device to our cloud hosted Azure SQL DB. First we ran into problems when submitting journeys from our development environment. After much head scratching and investigation, I eventually pinned this down to a rule on our firewall that was set to truncate any outgoing traffic exceeding 1MB in size. After resolving this, we ran into a similar problem when we attempted to submit journeys from our staging (Azure) environment: we were getting SocketException errors. After further diagnosis we found that we could send smaller packets of data successfully; the error only appeared when attempting to send large journeys in one go. So I had to write a chunking algorithm to decompose larger journeys into multiple smaller journeys. This required making substantial changes to the underlying WebAPI service, as well as changes to the app code itself.
Another new feature of the app is the ability to create the main menu dynamically. Against each company we store a list of the menu options that will be available to them when they open the app. This gives us the ability to turn app features on and off for a company dynamically. Each time the app is launched we check the menu options dynamically at run-time. This allows us to update a driver's list of menu options without them even having to log-off or restart the app. It's all done while the app is running.
We're now in the final stages of testing the app and are hoping to have it in the stores very soon. It has been a real trial-by-fire. We encountered many problems along the way, and with much grit and determination, have managed to overcome all of them. The project has been challenging to say the least, but ultimately successful thanks to the sheer determination of the development team.
As part of the development of our new app feature, we are adding the ability to allow users to track their journeys. They can Start / Stop the journey tracking and allow the app to record their distance, time taken etc. This is primarily to be used to allow users to support mileage claims.
A journey takes the form of a Model containing properties for storing the user, start date, end date, mileage etc. The journey also contains a list of waypoints. These are the longitude / latitude points that are generated by the user's position. A waypoint is taken every 5 seconds and added to the list of waypoints for the journey. From these waypoints we can then generate a map of their journey and display this to them.
During initial testing everything was fine, as we tested the feature on smaller journeys containing a few hundred waypoints. As we began stress testing the feature, we noticed we were getting timeouts as we started to exceed 800 or so waypoints. By 1000 waypoints we were getting regular timeouts. We discovered the reason was the volume of waypoints we were posting back to our service. As the number of waypoints grew, the time taken to POST the data over our RESTful service grew too, and this was causing our timeout problem.
I investigated several options, but the cleanest and most simple was to chunk the waypoints into smaller discrete lists which we would POST. So instead of POSTing all of the waypoints in one large payload, we would instead send multiple smaller payloads.
So how do you chunk your list into a list of smaller lists? There are many ways of achieving this, and I'm sure those of you reading this article will be able to suggest your own versions of the algorithm I have used here. Firstly, instead of using a hard-coded version that only works with journey waypoint lists, I have implemented an extension method that can work with any type of list. This allows me to chunk any type of list data going forwards (I have already got a few ideas in mind of how I will reuse this extension method).
public static IEnumerable<IEnumerable<T>> GetChunk<T>(this IEnumerable<T> source, int chunksize)
{
    // Guard against a null source or a non-positive chunk size by returning
    // an empty sequence rather than failing part-way through iteration.
    if (source == null || chunksize <= 0) yield break;
    var pos = 0;
    while (source.Skip(pos).Any())
    {
        yield return source.Skip(pos).Take(chunksize);
        pos += chunksize;
    }
}

So what is returned is a list of lists of type T. The extension method is applied to the source list (which is to be chunked into smaller lists), and its parameter is the number of items to appear in each chunked list. The implementation uses the LINQ methods Skip() and Take() to iterate over the list: Skip() ignores the first n items, and Take() then takes the next n elements, so used in conjunction they easily iterate over our list. The use of yield return helps to iterate over the list in an efficient manner by producing each chunk on demand without having to process the entire list (lazy evaluation). Note, however, that each Skip() call re-enumerates the source from the start, so this approach is best suited to in-memory lists rather than very large or streaming sequences.
In our specific case for chunking our journey waypoints, we have set the chunking value to 500. Although the problem didn't appear until at least 800 items, I wanted to keep the value to a safe, low limit just to err on the side of caution.
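Putting this to use on the app side, the upload loop looks broadly like the sketch below. PostWaypointsAsync is a hypothetical stand-in for our actual WebAPI call; the point is that each chunk goes over the wire as its own payload.

```csharp
const int ChunkSize = 500;

// POST the journey's waypoints in chunks of at most 500, keeping each
// request safely below the size that triggered the SocketException errors.
foreach (var chunk in journey.Waypoints.GetChunk(ChunkSize))
{
    await PostWaypointsAsync(chunk); // hypothetical service call
}
```

Because GetChunk is lazily evaluated, each chunk is materialised only as its request is sent, rather than building every payload up front.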
Here is the code from one of the unit tests I've written that exercises the chunking extension method.
[TestMethod]
public void GetChunk1000Tests()
{
const int waypointcount = 1000;
var journey = ListExtensionsTests.GetTaskTrackedJourneyForUnitTest(waypointcount);
Assert.IsNotNull(journey);
Assert.IsNotNull(journey.Waypoints);
Assert.IsNotNull(journey.Waypoints.Waypoints);
Assert.IsTrue(journey.Waypoints.Waypoints.Any());
Console.WriteLine($"ChunkCount: {ListExtensionsTests.ChunkCount}");
Console.WriteLine($"Number of waypoints: {journey.Waypoints.Waypoints.Count}");
Assert.IsTrue(journey.Waypoints.Waypoints.Count == waypointcount);
var result = journey.Waypoints.Waypoints.GetChunk(500);
Assert.IsNotNull(result);
var enumerable = result.ToList();
Console.WriteLine($"Number of chunks: {enumerable.Count()}");
int incrementalwaypointcount = 0;
foreach (var item in enumerable)
{
Console.WriteLine($"Number of chunks in list {item.Count()}");
incrementalwaypointcount += item.Count();
}
Assert.AreEqual(waypointcount, incrementalwaypointcount);
}

So in summary, if you are dealing with large lists of items and need to break them down into smaller, more manageable lists, then chunking them is a simple and very effective solution. This works great in our mobile app, where we are sending large lists of data to a backend service from the resource constrained environment of a smart phone, where memory and processing power are in short supply. Processing numerous smaller lists is more efficient (and less error prone) than trying to process one large list, and uses fewer resources (memory, CPU) to do so.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
In logic, a predicate is an expression that evaluates to either true or false. If you have written any LINQ or SQL, you have probably written these kinds of expressions already. A SQL query that contains a WHERE clause, for example, uses a predicate, as does any LINQ expression that filters the contents of a list.
Whether you realise it or not, you have probably already used predicates in your code. Whenever you have a need to filter the items in a dataset and / or list, then it is common to use predicates to do this. The notion of a predicate is widely used and understood, even if you weren't necessarily aware of them.
Within the .NET Framework the notion of a predicate is formally identified by
Predicate<T> This is a functional construct providing a convenient way of testing the truth or falsity of a given expression relating to an instance of type T. If you're familiar with delegates then Predicate<T> is equivalent to
Func<T, bool> For example suppose we have a Car class that represents T. Each instance of T (Car) contains the properties Colour (red, green, black etc) and EngineSize (1000, 1200, 1600 cc etc).
public class Car
{
    public string Colour { get; set; }
    public int EngineSize { get; set; }
} Let's assume that we have a SQL query that returns a list of all the cars registered for a particular year.
var data = new DataService();
List<Car> cars = data.GetAllRegisteredCars(new DateTime(2019, 01, 01)); The above query will return all cars registered during the year of 2019.
Suppose we want to filter that list of cars to just those that meet certain criteria e.g. those cars with an engine size of 1600cc or are blue in colour. To filter the data we would use predicates as follows.
var matches1 = cars.FindAll(p => p.EngineSize == 1600);
var matches2 = cars.FindAll(p => p.Colour == "Blue"); We could hardcode the predicates and leave them in the code as in the above examples. However, a benefit of using Predicate<T> in your code is that it gives you the ability to separate the data from the expressions used to filter it. Instead of hardcoding filters in your code, you can define these elsewhere and bring them into your code when needed.
Let's assume we have a completely separate class that defines our predicates called PredicateFilters.cs
public static class PredicateFilters
{
    public static Predicate<Car> FindBlueCars = p => p.Colour == "Blue";
    public static Predicate<Car> Find1600Cars = p => p.EngineSize == 1600;
} In our data code we would now write the following code to filter the cars.
var matches1 = cars.FindAll(PredicateFilters.Find1600Cars);
var matches2 = cars.FindAll(PredicateFilters.FindBlueCars); We can see even from this simple example that separating our queries from our code is straightforward. We no longer need to pollute our code with hardcoded filters. We also have the ability to reuse those filters elsewhere. For example, we may have more than one function that needs to know which cars are blue. We write the filter once and use it everywhere we need it. If in the future it turns out that it's red cars we need instead of blue, we can change the filter in one place without having to change any of our data code.
Our filters may return a single item or may return a list of items. Alternatively, we may also want to know the number of items returned by our filter. We would probably want to do this for different types of data e.g. cars, drivers, orders etc. This is where we need to get a bit smarter with how we design our filters to allow them to work with different types of data.
Let's start by implementing an interface that defines the filters we want to execute on our data.
public interface IPredicateValue<T>
{
    T GetValue(List<T> list, Predicate<T> filter);
    List<T> GetValues(List<T> list, Predicate<T> filter);
    int GetCount(List<T> list, Predicate<T> filter);
} Here we have defined an interface that takes a type of T. The functions will provide the following functionality.
- T GetValue(List<T> list, Predicate<T> filter) - returns a single instance of T for the filter
- List<T> GetValues(List<T> list, Predicate<T> filter) - returns a list of T for the filter
- int GetCount(List<T> list, Predicate<T> filter) - returns the count of items of T that match the filter
For each type of data that we want to filter, we should implement this interface. This will provide a consistent set of methods that we can use to filter our data.
public class CarPredicate : IPredicateValue<Car>
{
    public Car GetValue(List<Car> list, Predicate<Car> filter)
    {
        if (list == null || !list.Any() || filter == null) return null;
        return list.Find(filter);
    }

    public List<Car> GetValues(List<Car> list, Predicate<Car> filter)
    {
        if (list == null || !list.Any() || filter == null) return null;
        return list.FindAll(filter);
    }

    public int GetCount(List<Car> list, Predicate<Car> filter)
    {
        if (list == null || !list.Any() || filter == null) return 0;
        return list.FindAll(filter).Count;
    }
} We can now filter our data as follows.
var data = new DataService();
List<Car> cars = data.GetAllRegisteredCars(new DateTime(2019, 01, 01));
var predicatevalue = new CarPredicate();
var blueCars = predicatevalue.GetValues(cars, PredicateFilters.FindBlueCars); Keeping your code and your predicates separate gives you far more flexibility, as well as giving you a single point of change should one of the expressions used to query your data need to change. You can implement filters for any / all types of data with the added benefit that it allows you to filter your data in a consistent manner.
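As an aside, nothing in IPredicateValue<T> is specific to Car, so if you prefer you can write the implementation once for all reference types. Here is a sketch (the class name PredicateValue<T> is my own, and the interface is repeated so the sample is self-contained):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IPredicateValue<T>
{
    T GetValue(List<T> list, Predicate<T> filter);
    List<T> GetValues(List<T> list, Predicate<T> filter);
    int GetCount(List<T> list, Predicate<T> filter);
}

// One generic implementation that serves every data type (Car, Driver, Order etc),
// so per-type classes such as CarPredicate become optional.
public class PredicateValue<T> : IPredicateValue<T> where T : class
{
    public T GetValue(List<T> list, Predicate<T> filter)
    {
        if (list == null || !list.Any() || filter == null) return null;
        return list.Find(filter);
    }

    public List<T> GetValues(List<T> list, Predicate<T> filter)
    {
        if (list == null || !list.Any() || filter == null) return null;
        return list.FindAll(filter);
    }

    public int GetCount(List<T> list, Predicate<T> filter)
    {
        if (list == null || !list.Any() || filter == null) return 0;
        return list.FindAll(filter).Count;
    }
}
```

With this in place, new PredicateValue<Car>().GetValues(cars, PredicateFilters.FindBlueCars) works exactly as the per-type version does.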
If you want to get serious about how you filter data, then give predicates a try.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
|
I actually really enjoy writing stuff like this. I wrote an article about writing flexible RESTful services that was very similar to GraphQL (having since checked out GraphQL I can see the similarity). GraphQL was developed by the multi-billion dollar Facebook empire; my version was developed by little ol' me.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
Following on from a previous article I wrote as an introduction to writing asynchronous code with .NET, I want to describe a common problem I see when developers move beyond the basics: writing blocking asynchronous code. I've seen this problem on Stack Overflow and with developers I have worked with directly (both junior and senior).
Rather than try to explain the problem, I'll give some example code that should hopefully highlight the problem. Here's an ASP.NET Web API RESTful service being invoked from a client application.
The back-end service code is taken from one of our RESTful services that returns vehicle telemetry to a client application. For the purposes of clarity, I have omitted all logging, error checking and authentication code.
public async Task<string> Get(string subscriber, string trackertype)
{
    var response = await this.GetData(subscriber, trackertype);
    return response;
} And here is a client that invokes the RESTful service. In this example the client is a unit test.
[TestMethod]
public async Task GetVehicleTests()
{
    TrackerController controller = new TrackerController();
    string subscriber = "testsubscriber";
    string vehicle = "testvehicle";
    var response = controller.Get(subscriber, vehicle);
    Assert.IsNotNull(response);
    Assert.IsNotNull(response.Result.ToString());
} The above unit test code will deadlock. Remember that after you await a Task, the method resumes in the context that was captured at the await.
1. The unit test calls the Get() RESTful service (within the ASP.NET Web API context).
2. The Get() method in turn calls the GetData() method.
3. The GetData() method returns an incomplete Task, indicating that the work inside GetData() has not yet completed (still within the same context).
4. The Get() method awaits the Task returned by the GetData() method (the context is saved and can be re-instated later).
5. The unit test synchronously blocks on the Task returned by the Get() method which in turn blocks the context thread.
6. Eventually the Get() method will complete. This in turn completes the Task that was returned by the GetData() method.
7. The continuation for Get() is now ready to run, and it waits for the context to be available to allow it to execute in the context.
8. Deadlock. The unit test is blocking the context thread, waiting for the Get() method to complete, and GetData() is waiting for the context to be available so it can complete.
How can this situation be prevented? Simple. Don't block on Tasks.
1. Use async all the way down
2. Make (careful) use of ConfigureAwait(false)
For the first suggestion, awaitable code should always be executed asynchronously. So given the example code here, the unit test was not correctly awaiting the result from the RESTful service. The unit test code should be modified as follows.
[TestMethod]
public async Task GetVehicleTests()
{
    TrackerController controller = new TrackerController();
    string subscriber = "testsubscriber";
    string vehicle = "testvehicle";
    var response = await controller.Get(subscriber, vehicle);
    Assert.IsNotNull(response);
    Assert.IsNotNull(response.ToString());
} Like a handshake, whenever you have an await at one end of a service call, you should have async at the other.
The use of ConfigureAwait(false) is slightly more complicated. When an incomplete Task is awaited, the current context is captured so that the method can be resumed on it when the task eventually completes, i.e. after the await keyword. The context is null if the code is invoked from a thread that is NOT the UI thread; otherwise it is the UI-specific context for the particular platform you are using (e.g. ASP.NET, WinForms etc). It is this constant switching between the UI thread context and worker thread contexts that can cause performance issues. These issues may lead to a less responsive application, especially as the amount of async code grows (due to the increased volume of context switching). Yet responsiveness is exactly what we are trying to achieve by using asynchronous code in the first place.
There are a few rules to bear in mind when using ConfigureAwait(false)
- The UI should always be updated on the UI thread i.e. you should not use ConfigureAwait(false) when the code immediately after the await updates the UI
- Each async method has its own context which means that the calling methods are not affected by ConfigureAwait()
- Even with ConfigureAwait(false), execution can continue on the original thread if the awaited task completes immediately or is already completed.
A good rule of thumb would be to separate out the context-dependent code from the context-free code. The goal is to reduce the amount of context-dependent code (which can typically include event handlers).
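As a sketch of that separation (TelemetryClient and the event handler below are illustrative names I've invented, not from the service code in this article): the library method is context-free, so every await in it uses ConfigureAwait(false); the commented-out event handler is context-dependent because it updates the UI, so it must not.

```csharp
using System;
using System.Threading.Tasks;

public static class TelemetryClient
{
    // Context-free "library" code: it never touches the UI, so every await
    // opts out of capturing the context via ConfigureAwait(false).
    public static async Task<string> LoadAsync(string key)
    {
        await Task.Delay(10).ConfigureAwait(false); // Task.Delay simulates I/O latency
        return $"data for {key}";
    }
}

// Context-dependent code, e.g. a WinForms event handler, would look like this:
//
//   private async void FetchButton_Click(object sender, EventArgs e)
//   {
//       var data = await TelemetryClient.LoadAsync("vehicle1");
//       resultLabel.Text = data; // UI update - must NOT use ConfigureAwait(false)
//   }
```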
We can modify the Get() RESTful service as follows.
public async Task<string> Get(string subscriber, string trackertype)
{
    var response = await this.GetData(subscriber, trackertype).ConfigureAwait(false);
    return response;
} Deadlocks such as this arise from not fully understanding asynchronous code, leaving the developer with code that is partly synchronous and partly asynchronous.
By following the suggestions in this article, you should see performance gains in your own code, as well as better understanding how asynchronous code works under the hood.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
Following on from a couple of my previous articles, I would like to both reinforce and consolidate the ideas I laid out in them. In an article from October 2017 I described a pattern I use for designing and implementing RESTful APIs, specifically RESTful GET APIs. In an article from July 2018 I described the principle of reducing the client surface area, and how this leads to cleaner, simpler and less complex code, particularly when implementing RESTful APIs.
Where I currently work, we have a library of RESTful ASP.NET Web APIs that our web and mobile applications consume. These cover many different types of query as they are used in many different ways by the particular applications. For example the mobile app (which is aimed at fleet drivers) fetches data for the currently signed-in user, their latest mileage updates, their account manager, journeys they have made etc. The web application fetches data relating to users, roles, permissions, documents etc.
These are all GET methods that perform a variety of different queries against different data types. When designing the client API surface required for all these APIs I wanted to make them all consistent, irrespective of what data was being returned, or what query filters were being specified.
To clarify the problem a little further, I wanted to use the same client API for all data types e.g. mileage, user, company, journey etc. Further to this, I wanted the way in which the data was queried to be consistent. Example queries are listed below.
- Fetch me the mileage data for this user
- Fetch me the mileage data for this date range
- Fetch me the journey data for this date
- Fetch me the journey data for this user
- Fetch me the permissions for this user
- Fetch me the documents for this user
These are all queries that work on different data (mileage data, journey data, permissions data, documents data) and interrogate the data in different ways (by user, by date). Crucially, I wanted all of these queries to map onto a single GET API for consistency, and to reduce the complexity of the client (by reducing the client facing API to one API instead of multiple APIs). Reducing the client facing API is the principle of reducing the surface area of the client.
I finally came up with the following API design.
- I have a single controller with a GET method that accepts two parameters.
- The first parameter is a string that designates the type of query e.g. "getmileagebyuser", "getjourneybydate" etc
- The second parameter is a serialised query object that contains the values needed to query (or filter) the data e.g. the user ID, the date or whatever filters are required to satisfy the request.
- All queries must return their data as a serialised string (which the client can de-serialise back into an object).
For the purposes of clarity, the code examples used here have omitted error checking, logging, authentication etc to keep the code as simple as possible. In my own library of RESTful APIs I have separated out the requests made by the mobile app from those made by the web app. I therefore have two controllers, each with a single GET method that does all the heavy lifting of fulfilling the many different query requests. I have created a different controller for each type of client so as to prevent the controllers from bloating. You can separate out the requests any way you want. If you don't have many queries in your application, then you could simply place all of these query requests in a single GET method in a single controller. That is a design decision only the developer can make.
The controllers are called MobileTasksController and WebTasksController. For the purposes of this article I will focus on the latter controller only, although they both employ the same design pattern that I am about to describe.
First let's define our basic controller structure.
public class WebTasksController : BaseController
{
    public WebTasksController()
    {
    }

    public string WebGetData(string queryname, string queryterms)
    {
    }
} You will need to decorate the WebGetData() method for CORS to allow the clients to make requests from your GET method.
[HttpGet]
[EnableCors(origins: "*", headers: "*", methods: "*")]
public string WebGetData(string queryname, string queryterms)
{
} Enable CORS with the appropriate settings for your own particular application.
As we can see, the WebGetData() method has two parameters.
- queryname is a string that designates the type of query e.g. "getmileagebyuser", "getjourneybydate" etc
- queryterms is a serialised query object that contains the values needed to query (filter) the data e.g. the user ID, the date or whatever filters are required to satisfy the query request
Here's the class that I use for passing in the query filters.
[DataContract]
public class WebQueryTasks
{
    [DataMember]
    public Dictionary<string, object> QuerySearchTerms { get; set; }

    public WebQueryTasks()
    {
        this.QuerySearchTerms = new Dictionary<string, object>();
    }
} At its core it comprises a dictionary of named objects. Using a dictionary of objects allows us to pass in filters for any type of data e.g. dates, ints, strings etc. We can also pass in as many filters as we need, e.g. fetch all the journeys for a specific user for a specific date. In this example, we pass in two filters.
- The user ID
- The date
Once the query is serialised we have the following string which is then passed as the second parameter to the RESTful GET method.
{"QuerySearchTerms":{"email":"test@mycompany.co.uk"}} By implementing our queries in this way makes for very flexible code that allows us to query our data in any way we want.
var user = GetUser(emailaddress);
WebQueryTasks query = new WebQueryTasks();
query.QuerySearchTerms.Add("userid", user.Id);
query.QuerySearchTerms.Add("journeydate", DateTime.Now);
string queryterms = ManagerHelper.SerializerManager().SerializeObject(query); The WebGetData() method then needs to deserialise this object and extract the filters from within. Once we have extracted the filters we can then use them to fetch the data as required by the request.
WebQueryTasks query = ManagerHelper.SerializerManager().DeserializeObject<WebQueryTasks>(queryterms);
if (query == null || !query.QuerySearchTerms.Any())
{
    throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to deserialise search terms.")));
}
The core of the WebGetData() method is a switch statement that takes the queryname as its input. Then, depending on the type of query, the method will extract the necessary filters from the WebQueryTasks parameter.
The names of the queries are stored as constants but could equally be implemented as an enum if preferred. We don't want to hard-code the names of our queries into the method, so any approach that separates these is fine.
In the example below there are two queries. One returns company data for a specified user. The second returns company data for a specified company ID. In each case the code follows the same pattern.
- select the appropriate case statement in the switch
- extract the filters from the query
- invoke the appropriate backend service to fetch the data using the extracted filters (after first checking that the filter(s) are not empty)
- serialise the data and return it to the client
object temp;
string webResults;
switch (queryname.ToLower())
{
    case WebTasksTypeConstants.GetCompanyByName:
        webResults = this._userService.GetQuerySearchTerm("name", query);
        temp = this._companiesService.Find(webResults);
        break;
    case WebTasksTypeConstants.GetCompanyById:
        webResults = this._userService.GetQuerySearchTerm("companyid", query);
        int companyId = Convert.ToInt32(webResults);
        temp = this._companiesService.Find(companyId);
        break;
    default:
        throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError($"Unknown query type {queryname}.")));
} We then need to serialise the results and return these to the client.
var result = ManagerHelper.SerializerManager().SerializeObject(temp);
return result; In the production version of this controller, I have implemented many more queries in the switch statement, but for clarity I have only implemented two for the purposes of this article.
Here is the full code listing.
[HttpGet]
[EnableCors(origins: "*", headers: "*", methods: "*")]
public string WebGetData(string queryname, string queryterms)
{
    WebQueryTasks query = ManagerHelper.SerializerManager().DeserializeObject<WebQueryTasks>(queryterms);
    if (query == null || !query.QuerySearchTerms.Any())
    {
        throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to deserialise search terms.")));
    }

    object temp;
    string webResults;
    switch (queryname.ToLower())
    {
        case WebTasksTypeConstants.GetCompanyByName:
            webResults = this._userService.GetQuerySearchTerm("name", query);
            temp = this._companiesService.Find(webResults);
            break;
        case WebTasksTypeConstants.GetCompanyById:
            webResults = this._userService.GetQuerySearchTerm("companyid", query);
            int companyId = Convert.ToInt32(webResults);
            temp = this._companiesService.Find(companyId);
            break;
        default:
            throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError($"Unknown query type {queryname}.")));
    }

    var result = ManagerHelper.SerializerManager().SerializeObject(temp);
    return result;
} Just to repeat, for the purposes of this article, the method above has had all error checking, logging, authentication etc removed for the sake of clarity.
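One helper used above but not shown is GetQuerySearchTerm(). Its real implementation lives on the user service; based on how it is called, a minimal sketch might look like this (written here as a static helper over the query's dictionary, which is an assumption on my part):

```csharp
using System.Collections.Generic;

public static class QueryHelper
{
    // Looks up a named filter in the deserialised query's search terms and
    // returns its value as a string, or null if the filter wasn't supplied.
    public static string GetQuerySearchTerm(string name, Dictionary<string, object> searchTerms)
    {
        object value;
        if (searchTerms == null || !searchTerms.TryGetValue(name, out value)) return null;
        return value?.ToString();
    }
}
```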
I have implemented this pattern in all my GET APIs to great success. It is very flexible and allows me to query the data in multiple ways as necessary. It also allows the client code to be simpler, by reducing the client surface area (the client only needs to interact with a single endpoint / controller), and enforces consistency by ensuring that all queries are similar to one another (they must all pass in two parameters - the first designating the query type, the second containing the query filters).
This pattern of API design achieves all the following benefits
- Simpler server side code by producing substantially less code due to the generic nature of the pattern
- Simpler client side code by only having a single endpoint to interact with
- High degree of flexibility by allowing the APIs to filter the data any way the application requires
- Consistency by ensuring that all requests to the RESTful API are the same
I have been using this pattern in my own RESTful APIs for several years, including several production mobile apps (that are available in the stores) and line-of-business web apps. With the pattern in place, I can quickly and easily add new RESTful APIs. This makes adding new services to the apps more timely, and makes the process of adding value to the apps much quicker and simpler.
Feel free to take this idea and modify it as necessary in your own RESTful APIs.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I've been writing asynchronous code with the .NET Framework for several years now, and find that the .NET Framework does a good job of hiding the underlying conceptual details. The concepts are pretty straightforward if you understand how asynchronicity works. As I've found over the years though, these concepts are not always well understood or applied by less experienced developers. By less experienced, I don't always mean junior developers. I've come across senior developers who have struggled with asynchronous code too.
I've helped several developers fix issues in their code that were due to misunderstandings of asynchronicity, and I've found issues during code reviews that highlighted basic misunderstandings of how to implement asynchronous code using the .NET Framework.
In this article I want to go through the basics of writing asynchronous code using the .NET Framework. I'll use C# to illustrate all examples, but conceptually the code will work the same when transposed to VB.NET or any other .NET language. I'll use examples from our ASP.NET Web API services code base, which makes extensive use of asynchronicity to give performant and responsive code. Our mobile apps all rely on these services for delivering functionality to the end user's device, so it is incumbent on our apps to be highly responsive and performant; I have therefore made all these services asynchronous to meet those requirements. I may follow this article up in the future with more advanced scenarios, but for now, I will stick to the basics.
What is Asynchronous Programming?
Let's start with some basic understanding of asynchronous programming. Most code gets executed in a sequential manner i.e.
- execute line 1
- execute line 2
- execute line 3
We have 3 lines of code that each execute some command, and they each run one after the other i.e. "execute line 1" is executed first. When this has finished execution then "execute line 2" gets executed. When this has finished executing then "execute line 3" is executed. These commands are run sequentially, one after another. This can also be referred to as synchronous code. The next line of code can only be executed when the previous line of code has completed.
var myList = new List<string>();
myList.Add("item1");
myList.Add("item2");
myList.Add("item3");
myList.Remove("item1"); A trivial example could be the code above. The first line creates a string list called myList. When this has completed the next 3 lines then add items to the string list (item1, item2 and item3). Finally, we remove item1 from the list. These lines of code are executed one after the other in a sequential (synchronous) manner.
When code is executed sequentially like this, one command after the other, we say that it has been executed synchronously.
We need to write our code differently when we interact with any kind of I/O device such as a file, a network or database. The same applies when we execute any CPU bound operations such as rendering high-intensity graphics during a game. We cannot make any guarantees about how quickly the device or operation may respond to our request, so we need to factor in waiting time when making requests to I/O devices or CPU intensive requests.
An analogy would be making a telephone call to book an appointment to have your car serviced. Immediately after making your booking you need to write down the date and time of the booking. You may get straight to the front of the telephone queue if you're lucky. Alternatively, you may find you are further down the queue and have to wait to get through to the garage. Either way, you cannot write down the date and time of the booking until you have gotten through to the garage.
In this scenario you don't know exactly when you can write down the date and time of the booking, as you may have to wait to get through to the garage.
And this is exactly how asynchronous code works.
When your code accesses I/O devices such as accessing a file, network or database (or makes a request to a CPU intensive operation) you cannot guarantee when your request will be serviced. For example, if you are accessing a database, there may be latency on the network, it may be hosted on legacy hardware, the record you are accessing may be locked and so on. Any one of these will affect the timeliness (or otherwise) of the response to your request.
If your network or database is busy and under extreme load, any request sent over it will be slower than requests made during less busy times. So it should be obvious that executing a command that relies on an I/O device immediately after submitting a request to that I/O device is likely to fail, as you may not have received any response from the I/O device.
Example
- connect to database
- fetch records from database
- close database connection
If you were to execute the above code synchronously, you could easily run into the situation where you are trying to fetch the database records before you have fully connected to the database. This would fail resulting in an exception being thrown. What you instead need to do is attempt to connect to the database, and ONLY when that has succeeded should you attempt to fetch the records from the database. Once you have fetched the records from the database, then you can close the database connection.
This is exactly how asynchronous code works. We can rewrite the above pseudo-code asynchronously.
- connect to the database
- wait for connection to database to be established
- once connected to the database fetch records from database
- close database connection
The two sets of pseudo-code look very similar, with the key difference being that the latter waits for the connection to the database to be established BEFORE making any attempts to fetch records from the database.
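To make that pseudo-code concrete in C#, here's a small simulation (FakeDatabase is a stand-in class I've invented; Task.Delay simulates the waiting, so no real database is involved):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class FakeDatabase
{
    private bool connected;

    // Step 1: connect, and wait for the connection to be established.
    public async Task ConnectAsync()
    {
        await Task.Delay(10); // simulated network latency
        connected = true;
    }

    // Step 2: only once connected can we fetch the records.
    public async Task<List<string>> FetchRecordsAsync()
    {
        if (!connected) throw new InvalidOperationException("Not connected to the database.");
        await Task.Delay(10); // simulated query time
        return new List<string> { "record1", "record2" };
    }

    // Step 3: close the connection.
    public void Close() => connected = false;
}
```

The calling code then mirrors the pseudo-code exactly: await db.ConnectAsync(); then var records = await db.FetchRecordsAsync(); then db.Close(); - each step only runs once the previous one has completed.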
Hopefully by this point the goals of asynchronous programming should be clear. The goal of asynchronous programming is to allow our code to wait for responses from I/O or CPU bound resources such as files, networks, databases etc.
Asynchronous programming with C#
Now that we understand the principles and goals behind asynchronous programming, how do we write asynchronous code in C#?
Asynchronous programming is implemented in C# using Task and Task<T>. These model asynchronous operations, and are supported by the keywords async and await. Task and Task<T> are the return values of asynchronous operations and can be awaited.
Here's a function that POSTs data to a RESTful endpoint, and does so asynchronously. For the purposes of simplicity I have removed all authentication etc from the code samples I will use.
public async Task<HttpResponseMessage> PostData(string url, HttpContent content)
{
    using (var client = new HttpClient())
    {
        return await client.PostAsync(new Uri(url), content);
    }
} Things to note.
- The method returns a Task of type HttpResponseMessage to the calling program i.e. the method is returning an instance of HttpResponseMessage (e.g. an HTTP 200 if the method was successful).
- The async keyword in the method signature is required because the method invokes the PostAsync() method in the method body i.e. the method needs to await the response from the RESTful API before the response can be handed back to the calling program.
To call this function we write the following code.
var response = await PostData(url, content); The calling code (above) needs to await the response from the PostData() method and does so using the await keyword. Whenever you invoke an asynchronous method such as PostAsync(), you need to await the response. The two keywords go hand in hand. Asynchronous methods need to be awaited when they are invoked.
Here's another RESTful API method that fetches some data from a RESTful endpoint. The RESTful endpoint returns data in the form of a serialised JSON string (which the calling program will then de-serialise back into an object).
public async Task<string> GetData(string url)
{
using (var client = new HttpClient())
{
using (var r = await client.GetAsync(new Uri(url)))
{
string result = await r.Content.ReadAsStringAsync();
return result;
}
}
}
Things to note.
- The method returns a Task of type string to the calling program (the JSON serialised response from the RESTful endpoint).
- The async keyword in the method signature is required because the method awaits the GetAsync() and ReadAsStringAsync() calls in the method body.
To call this function we write the following code.
string response = await GetData(url);
if (!string.IsNullOrEmpty(response))
{
}
The calling code (above) needs to await the response from the GetData() method and does so using the await keyword.
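As mentioned above, the calling program will typically de-serialise the JSON response back into an object. A minimal sketch of that step (the Customer type and its properties are hypothetical; System.Text.Json is assumed to be available, as it is from .NET Core 3.0 onwards — Newtonsoft.Json's JsonConvert.DeserializeObject<T>() is an alternative):

```csharp
using System.Text.Json;

// Hypothetical type matching the shape of the JSON returned by the endpoint.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class JsonExample
{
    // De-serialises the JSON string returned by GetData() back into an object.
    public static Customer Deserialise(string json)
    {
        return JsonSerializer.Deserialize<Customer>(json);
    }
}
```

The calling code would then simply be `var customer = JsonExample.Deserialise(response);`.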
Key takeaways
- Async code can be used for both I/O bound as well as CPU bound code
- Async code uses Task and Task<T> which represent asynchronous operations and are the return values from asynchronous methods (as we saw in the PostData() and GetData() methods)
- The async keyword turns a method into an async method which then allows you to use the await keyword in its method body (as we saw in the PostData() and GetData() methods).
- Using the await keyword on an asynchronous method suspends the calling method and yields control back to its caller until the awaited task is complete
- The await keyword can only be used within an async method
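One practical benefit that follows from these takeaways: because asynchronous methods return tasks, several independent operations can be started first and then awaited together. A hedged sketch of the idea (the method names are hypothetical, and Task.Delay stands in for a real I/O call such as client.GetAsync()):

```csharp
using System.Linq;
using System.Threading.Tasks;

public static class ConcurrentCalls
{
    // Hypothetical stand-in for an I/O bound call such as client.GetAsync();
    // Task.Delay simulates waiting on the network.
    public static async Task<string> GetDataAsync(string url)
    {
        await Task.Delay(50);
        return "response from " + url;
    }

    // Starts all the requests, then awaits them together with Task.WhenAll,
    // so the total wait is roughly that of the slowest call, not the sum.
    public static async Task<string[]> GetAllAsync(params string[] urls)
    {
        var tasks = urls.Select(GetDataAsync).ToArray(); // all calls in flight here
        return await Task.WhenAll(tasks);
    }
}
```

This is the same async/await machinery as in the PostData() and GetData() examples, just applied to more than one task at a time.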
This is the first in what will hopefully be a series of articles I intend to write on asynchronous programming. I will cover other areas of asynchronous programming in future articles (giving tips, advice, advanced scenarios and even Javascript). Hopefully for now, this has given a taster of how to implement the basics of asynchronous programming with C#. Watch this space for further articles.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
I recently came across some strange behaviour in our ASP.NET Core 2.2 web application. A colleague of mine who was working on some new functionality, had checked in several Javascript files. These were 3rd party Javascript files to add support for drag & drop. The majority of the files for this 3rd party library were already minified, with the exception of one.
For some reason this one particular Javascript file was not minified. So we added the file to bundleconfig.json in Visual Studio so that our build process would minify the file. The bundleconfig.json minifies several Javascript files and outputs the aggregated file as site.min.js. Whilst I was testing the latest version of the app I was getting all sorts of errors in the browser as many of the Javascript functions were not being found. This seemed strange, as everything had been working perfectly, and all we had done was check in a few Javascript files.
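For reference, a bundleconfig.json entry takes roughly this shape (the file paths here are hypothetical):

```json
[
  {
    "outputFileName": "wwwroot/js/site.min.js",
    "inputFiles": [
      "wwwroot/js/site.js",
      "wwwroot/js/third-party-library.js"
    ],
    "minify": {
      "enabled": true
    }
  }
]
```

All the inputFiles are minified and concatenated into the single outputFileName during the build.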
Looking at the site.min.js file that was on the build and test servers, it became apparent that the site.min.js file contained only the contents of the un-minified 3rd party Javascript file. All of the other files we were minifying had somehow been removed from the resultant site.min.js file.
After much investigation I narrowed down the issue to the following command in our build pipeline.
dotnet publish -c release
This command was recreating the site.min.js file, but including only the un-minified 3rd party Javascript file and omitting all of the other files specified in bundleconfig.json. I excluded this step from the build process to check, and sure enough, the culprit was definitely this build command.
I managed to solve the problem by manually minifying the culprit Javascript file and adding it to the project in its minified form. I then excluded it from the bundleconfig.json minification process. This has now solved the problem, and everything works perfectly again.
So basically, if you're including 3rd party Javascript files, make sure you add them to your Visual Studio project in minified form (unless you're using a CDN of course). Don't attempt to minify 3rd party files in your build process. Only minify your own Javascript files in your build process. It took me a few hours to diagnose and fix the problem, so hopefully by reading this, I may save someone else the same pain I went through.
Whenever I'm mentoring a more junior member of the software development team, there are always two primary traits that I encourage them to learn. These are traits that transcend programming language, methodology, technical stack or anything else that may be relevant to their role. These are structure and diligence. Both of these should permeate through everything they do in their everyday work. Being mindful of these will help them become better software developers. I will explain why these traits are so important to the software developer.
Structure
Approaching your work with a structured mindset allows you to demarcate and separate out the various elements of the problem you are solving. From grouping the different areas of the requirements specification, to grouping the components and classes in the class hierarchy, to grouping the related unit tests... having structure allows you to demarcate the boundaries between these different elements. Everything has a structure. The trick is to clearly define it and communicate this to the rest of the team. If you are documenting the requirements for a piece of functionality, clearly structure the document to demarcate these different areas e.g. functional requirements, non-functional requirements, UI considerations etc. If developing a new component, the class structure should clearly demarcate the different behaviours and areas of responsibility of the classes and their interactions. Anyone reading through the code should be able to quickly determine what the different classes do and how they relate to each other from the structure you have implemented. Group similarly related elements together and enforce this in your coding standards document. Everything you do should be structured, logical, and consistent.
Diligence
Approaching your work with due care and diligence will help in eliminating mistakes and make you a better developer. Be conscientious and mindful of what you are doing at all times. Before checking in that code, make sure you do a diff, run a full rebuild and execute all dirty unit tests. This may take additional time, but it will always be quicker than the time it will take to fix a broken build. If writing a document such as a requirements specification, take the time to proofread it, checking it over for spelling and grammar as well as accuracy. Work smart, not fast. Reducing the number of mistakes you make by being more diligent will earn you a reputation as someone who is dependable, produces high quality work and takes their role seriously. Don't be that person who is known to constantly make mistakes, break the build or submit code that doesn't work because they didn't test it sufficiently.
Applying structure and diligence to everything you do will have positive benefits on your work. These traits can be applied irrespective of your particular role (developer, tester, designer) or what tools and / or technologies you use. I would prefer to work with a developer who took these traits seriously than a developer who thinks producing more lines of code than the next developer makes them more productive. I would always pick quality over quantity. A customer is far more likely to forgive the slippage of a deadline if they eventually get something that is of high quality, rather than something delivered on time that contains bugs.
Be structured and diligent and apply these traits with rigour, and I can guarantee that the quality of the code produced by yourself and your team will increase.
At what point should you consider rewriting a software application? Just because an application is old, doesn't mean it should be rewritten. So what factors ought to be considered when attempting to justify rewriting an application? There are many things to consider, and rewriting any application should never be taken lightly. In fact, doing a cost-benefit analysis is probably a good starting point.
In today's fast paced software development environment, today's latest fad can quickly become tomorrow's long forgotten hype. What does it even mean to be legacy? According to Wikipedia
Quote: a legacy system is an old method, technology, computer system, or application program "of, relating to, or being a previous or outdated computer system"
This article is not intended to be a detailed discussion of the considerations to take into account when looking to rewrite a legacy application. That would be a considerably lengthy article. Rather, it is to look at some examples from my own experiences involving legacy applications. Like most developers, I thrive on working with the latest shiny new tools, but there are also times when you need to work with that legacy application too. I have heard many developers berating these legacy applications. Sometimes for good reason too. But quite often, the legacy application has been working away for years, quietly, solidly and without causing a fuss.
I've worked with many legacy applications over the years. Some were surprisingly good, some just plain awful. Some of them, despite their age, were rock solid and were capable of running far into the future. Others spluttered and juddered their way along and needed a lot of man-handling to keep them running.
Just because an application is legacy is not reason enough to justify a rewrite. I remember working for one particular company where the business critical back-end application was developed in COBOL. It was over twenty years old but rock solid. It rarely caused problems or generated errors. It just worked.
Another company I worked for many years ago also had a lot of legacy code (and according to sources at the company, much of the legacy code is still there to this day). The code was part of their core business logic and had been around for over a decade. This was accountancy and financials logic, and whilst the code had been updated with bug fixes over the years, it didn't require much man-handling to keep it up and running. In fact, when they decided to upgrade the application to use newer development environments and tooling, they kept much of the legacy code as they knew it worked. They didn't want to risk screwing up their core business logic by rewriting it.
Age alone is not a deciding factor when considering whether to rewrite an application. There are many legacy applications that run just fine with few problems. Alternatively, there are a great many applications developed with modern technology and tooling that are plain awful.
A few things to consider.
- Is the application code buggy and / or cause regular problems or errors?
- Does it require man-handling to keep it up and running?
- Does it meet non-functional requirements i.e. is it secure, performant etc.?
- Is it easy to extend and add new features?
- Does it require legacy hardware that may be insecure?
- What are the running costs of the application (development costs fixing bugs, server / hardware costs, third-party costs etc.)?
- Does it interact with third-party applications that may have updated their APIs?
- Has it been developed using outdated environments or tools?
Not all of these considerations will be applicable to every scenario, so don't take the list in its entirety. They are merely intended to be conversation starters to elicit further discussion. Deciding whether or not to rewrite an application is not a decision that should ever be taken lightly, but equally you need to take into account many different pieces of information and assess them in the context of the bigger picture. Age alone is not a compelling argument for a rewrite, but taken in the context of other factors, it may form part of one.
There's an approach that I have been using for several years now that has helped me improve and simplify my stored procedures. This is for stored procedures that return data i.e. SELECT stored procedures as opposed to INSERT or UPDATE stored procedures. This approach is particularly useful where a stored procedure needs to reference more than one table i.e. where there is a JOIN between one or more tables.
Firstly I create a VIEW of the data that I want to query. The VIEW contains all the tables, columns, JOINs etc as necessary. It is from this VIEW that the stored procedure will SELECT its data as necessary. All the stored procedure needs to do then is filter the data from the VIEW with a WHERE clause.
The advantage of this approach is that the VIEW hides the underlying details of all the JOINs. The stored procedures then become simple affairs as they simply SELECT from the VIEW. This leads to simpler stored procedures, and allows a VIEW to be reused across multiple stored procedures. Therefore you don't need to repeat the same complicated JOINs in each of your stored procedures.
Example VIEW
CREATE VIEW [dbo].[v_CardDefinitions] AS
SELECT
CardDefinitions.*,
Cards.ID AS CardID,
Cards.ParentID,
Cards.[Index],
Cards.UserID,
Cards.CardDefinitionID,
Users.Email AS UserEmail,
Modules.Name AS ModuleName
FROM
CardDefinitions
LEFT JOIN
Cards ON CardDefinitions.ID = Cards.CardDefinitionID
JOIN
Modules ON CardDefinitions.ModuleID = Modules.ID
LEFT JOIN
Users ON Cards.UserID = Users.ID
WHERE
CardDefinitions.Active = 1
Example stored procedure
CREATE PROCEDURE [dbo].[Cards_GetById]
@cardId INT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SELECT
DISTINCT ID, Name, [Permissions]
FROM
v_CardDefinitions
WHERE
ID = @cardId
END
So to summarise the approach.
- Create a VIEW of the data that JOINs all the necessary tables
- Create a stored procedure that SELECTs data from the VIEW by filtering the VIEW using WHERE clauses
This is an approach that I use regularly as it simplifies the stored procedures I need to create.
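To make the reuse point concrete, here is a second (hypothetical) stored procedure that SELECTs from the same VIEW with a different filter — note that none of the JOINs need repeating:

```sql
CREATE PROCEDURE [dbo].[Cards_GetByUser]
	@userId INT
AS
BEGIN
	SET NOCOUNT ON;

	-- All the JOIN complexity lives in the VIEW; we only filter here.
	SELECT DISTINCT
		ID, Name, [Permissions]
	FROM
		v_CardDefinitions
	WHERE
		UserID = @userId
END
```

Each new stored procedure is reduced to a SELECT and a WHERE clause against the VIEW.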
I recently had a requirement to update multiple tables with the same value. We have a table that stores information about documents (Excel documents, Word documents, text documents, images, reports etc). Every document has an owner associated with it. This person has admin privileges over the document. After a discussion with one of our users, they wanted the ability to change the owner of a document. Doing this at the level of a single document is straight forward. However, the user wanted this for multiple documents. For example, if a user is due to leave the business, they wanted the ability to change the owner of all their documents to a new owner.
I therefore needed the ability to pass a list of document IDs into a stored procedure. The stored procedure would then change the owner for all the documents in the list to the specified owner. Passing in the comma-delimited list of document IDs wouldn't be difficult, as this is essentially a long string. The tricky part would be to iterate through the items in the list i.e. to fetch each document ID from the comma-delimited list so that the owner can be updated.
The first thing I needed to do was to create a function that could iterate through the list. I created a Table-Valued-Function (TVF) called Split to achieve this. If you don't already know, a TVF is a function that returns a table (as the name suggests). In our case, we will return a two-column table containing a unique ID and an item from the list. So if there are 10 items in the list, then there will be 10 rows in the table returned by our TVF.
CREATE FUNCTION [dbo].[Split]
(
@List nvarchar(2000),
@SplitOn nvarchar(5)
)
RETURNS @RtnValue table
(
Id int identity(1,1),
Value nvarchar(100)
)
AS
BEGIN
While (Charindex(@SplitOn,@List)>0)
Begin
Insert Into @RtnValue (value)
Select
Value = ltrim(rtrim(Substring(@List,1,Charindex(@SplitOn,@List)-1)))
Set @List = Substring(@List,Charindex(@SplitOn,@List)+len(@SplitOn),len(@List))
End
Insert Into @RtnValue (Value)
Select Value = ltrim(rtrim(@List))
Return
END
The function has two parameters. The first is the comma-delimited list of document IDs
@List = '1, 2, 3, 4, 5'
The second parameter is the delimiter. In this case we are passing a comma-delimited list, hence the delimiter is a comma.
@SplitOn = ','
The function loops through the list locating the next item by searching for the next occurrence of the delimiter. It keeps doing this until it cannot find any more occurrences of the delimiter. Each item it finds between the current and next delimiter is inserted into the table that will be returned by the TVF.
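A quick illustration of what the function returns (and as an aside, from SQL Server 2016 onwards the built-in STRING_SPLIT function offers similar behaviour, albeit without the Id column):

```sql
SELECT Id, Value FROM dbo.Split('1, 2, 3', ',');
-- Id  Value
-- 1   1
-- 2   2
-- 3   3
```

Each item is trimmed of leading and trailing spaces as it is inserted, which is why ' 2' comes back as '2'.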
We next need to write a stored procedure that invokes our Split Table-Valued-Function.
CREATE PROCEDURE [dbo].[Documents_UpdateOwner]
@owner INT,
@documentids NVARCHAR(1000)
AS
BEGIN
UPDATE
Documents
SET
UploadedBy = @owner
WHERE
ID IN (SELECT CONVERT(INT, Value) FROM Split(@documentids, ','))
END
There are two parameters to the stored procedure. The first one is the ID of the new owner for the documents. The second parameter is a comma-delimited list of document IDs for which we wish to change the owner. The items returned from the Split TVF are stored in string format. Therefore if we need to compare against data in another format we need to do a conversion. In our case, we are comparing against an INT column and therefore need to convert each item from an NVARCHAR to an INT. Obviously we wouldn't need to do any conversion if we were comparing against string data.
I have since used this Table-Valued-Function in other stored procedures where I need to iterate through a list of items. It's a very efficient way of updating multiple rows. Instead of having to make multiple calls to a stored procedure to update each document owner, I can instead make one call and update all of them at once. This is a neat way to handle those scenarios where you need to update data from a list of items.
I recently had some plumbing work done in my house that made me think of a similarity between software development and plumbing. I realise they are fundamentally different beasts, but bear with me. Whilst talking to my plumber, he was showing me the differences between the work he had done, and the work done on one of the other houses in the street where I live. Even as a complete novice I could see the differences he was describing. He wasn't trying to be disrespectful or mean to the other plumber (he didn't know him as he had never met him), but merely demonstrating how high his quality of work was using a direct example.
- The holes made in the brickwork in my house were neat and the pipes fitted tightly through with no gaps. In the other house they were rough and there were gaps where the pipes came through.
- Where my brickwork needed replacing outside my house, the bricks had been replaced with identically coloured bricks and you couldn't see any differences when looking at the wall. On the other house, the bricks had been replaced with differently coloured ones, and in such a way that the interlacing (bricks are laid in an overlapping manner vertically for strength) had been broken.
- There were no pipes running outside my house. The pipes running outside the other house were left totally exposed to the elements as they were not protected with lagging.
I'm sure there were similar differences inside the houses too.
The point I am making is that my plumber showed care. His work was of a very high standard and demonstrated diligence and work ethic. The other plumber was satisfied with far lower standards. For him, close was good enough.
This same comparison can also be made with software development. When I write code, I take care to ensure that my code is well organised, structured and readable. I ensure that there are unit tests that exercise an adequate level of code coverage. I implement best practices and aim to be consistent.
When I look at a piece of code, I can very quickly determine if there was care put into it. Sloppy, ill thought out code that is inconsistent and unstructured is amongst the signals that reveal such a lack of care. Even as a novice, you can still demonstrate a level of care within your work. This is not about how knowledgeable or experienced you are, but how diligent you are. It is still entirely possible to write code with care and attention to detail despite being inexperienced.
As a professional software engineer, I want others to look at my code and think "Hey this guy has put a lot of effort and care into writing this". It will have my name against it. I have high standards, and I expect the same from every other developer on the team. I have taken it upon myself to write the coding standards document that we all follow as a team. Not by dictatorship, but by democracy.
When you have checked in your code, take a moment to reflect on what another developer would think of it. What would they think when looking at your code? What does your code say about you and your work ethic? Our bread and butter is our code. The care, love and diligence that we use to craft it speaks volumes about us as professional software developers. Make sure that when another developer looks at your code, at the very least they will say that its author cared about what they were doing.
Following on from an earlier article[^] I wrote about versioning a .NET Core 2.0 application, I have now had to revise this since the method I used for that version of the application is not supported in .NET Core 2.2. In that article, I demonstrated how to use a tool called setversion[^] for versioning a .NET Core 2.0 application. After upgrading our application to .NET Core 2.2, I found that this tool is no longer supported.
Instead of using the setversion tool, I am using the dotnet publish command-line utility. When using this command-line utility, you are able to specify a version number.
I am still using the same build script as described in my previous article, and this is invoked from our TFS build server in the same manner. Just to reiterate, within TFS you have the ability to pass arguments to your Windows batch files. I am passing the build version number $(Build.BuildNumber) as the argument.
I then invoke my Windows batch file (called setversion.bat)
@echo off
cls
ECHO Setting version number to %1
cd <projectFolder>
dotnet restore
dotnet publish <project>.csproj --configuration Release /p:Version=%1
This all works perfectly, and the deployed application assemblies are stamped with the correct version number.
In my previous article Sending Push Notifications with Azure Notification Hub[^] I briefly described our rationale for selecting Azure Notification Hub over alternatives. I have now fully implemented an ASP.NET Web API service for sending push notifications as well as managing their associated tags.
The service provides the following functionality.
- Send push notifications to either Android or iOS devices (with or without tags)
- Adds tags
- Removes tags
If you aren't familiar with the concept of tags where push notifications are concerned, you aren't alone. I hadn't heard of them either until I started working with push notifications. The concept is surprisingly simple, yet provides great flexibility in how you target where your push notifications are sent.
When a device is registered for push notifications (via code running on the device), you can optionally assign tags with the device registration. This is a list of characteristics (or interests) that the device wishes to receive push notifications about. Tags can either be set by the user (perhaps via a system preferences page where they can tick boxes to select the items they wish to receive push notifications about) or by the backend (where we can set characteristics to allow us to target specific devices(s) when sending push notifications).
In our case, we have implemented the latter i.e. we are adding tags that relate to the user's device to allow us to send targeted push notifications. For example, we have added tags that specify the user's ID, their company ID etc. This allows us to send a push notification to a specific user's device (by specifying the user's ID) or to all the users for a specific company (by specifying the company ID).
When a push notification is sent, you can specify a tag alongside your push notification message. The push notification is then only sent to any registered devices that have expressed an interest in that particular tag. So in our case, we can send a message to a specific user by supplying their ID as the tag. Or we can send a push notification and supply the company ID, thus ensuring that the push notification is only sent to users of that specific company. We can slice and dice the demographics of our user base in any way that we find meaningful by simply registering the device with the desired tag(s).
This is a powerful way of decomposing the demographics of your user base. You can now explicitly categorise your user base by the tags they have registered with. By doing so, this then allows us to send targeted push notifications, right the way down to a specific user's device.
The service that I have implemented manages these tags, as well as providing the ability to send the push notifications themselves. The service therefore allows the backend to add and / or remove tags from a user's device. For example, when a user logs in on a device, the service is invoked to register them with various tags according to the information we hold on them. Likewise, we will remove those tags when they sign out.
This process is very straight forward, yet gives us an incredible level of flexibility for sending targeted push notifications to our users. If you haven't already looked into the concept of push notification tags, then I'd definitely have a look at them. They're a great idea.
In the latest version of the Xamarin Forms app that I am working on, we wanted to send push notifications to the devices. There were a couple of approaches that we could have taken. The key ones being Twilio (which we are already using for sending SMS messages) and Azure Notification Hub. After some initial exploration, the clear choice was Azure Notification Hub. Unsurprisingly it had tight integration with Xamarin Forms and the Microsoft ecosystem, and was very straight-forward to configure and get working.
There were also very good examples of how to make the necessary code changes to the respective Android and iOS projects to ensure we got this working quickly.
The beauty of working with Azure Notification Hub, is that this abstracts us away from the underlying details of the Android and iOS platforms. Instead, once we had made the necessary configurations and setup changes to enable push notifications for each platform, we then integrated the platform specific push notification engines into Azure Notification Hub. From this point onwards, we only have to work with Azure Notification Hub. This gives us a far simpler and cleaner abstraction onto our notification setup.
It is very simple to set up and send test push notifications to your registered devices using Azure Notification Hub. We have also integrated App Center event tracking for all device registrations and sending of push notifications. This gives us a helicopter view of what our code is doing under the hood, and helps us diagnose any errors should they arise.
The step-by-step tutorials I used can be found here[^].
So if you're looking to implement push notifications in your mobile app, give Azure Notification Hub a try.
With the imminent release of our latest mobile app, I thought I'd summarise how we ensured high levels of quality, and proved that the software was correct. I'm not going to write an article justifying the case for unit testing (it should go without saying that unit testing is a fundamental part of the development process - if not, you're doing it wrong), but rather explain how we implemented unit testing within the software for the app.
The architecture I favour when designing an application, is to firstly reduce the surface area of the client[^]. Simply put, this entails keeping the UI code as sparse as possible, and removing any / all code that is involved with the domain. The UI should ONLY contain code that relates to the UI. While this sounds straight forward, I have lost count of the number of times I've come across code bases where the UI contains code from the domain and / or the data layer.
In relation to a Xamarin Forms mobile app, you should keep the code in the Views as sparse as possible. The UI code should only invoke your domain code, it should NEVER implement it. Your Xamarin Views should contain code for manipulating the various UI controls, populating them with data etc. As soon as there is a need for anything beyond this, then refactor the code and place this code in a completely separate layer of the app. Within the context of a Xamarin Forms app, I created separate folders for such things as the models, services, entities etc. These were completely separate to the Views.
To enforce this separation of concerns, we adopted the MVVM design pattern. I won't go into great detail about the pattern here (there are many articles out there already). MVVM stands for
Model-View-ViewModel
More correctly it could be named VVMM (View -> View-Model -> Model) as this is the order in which they relate to each other (in terms of dependency). The Model should have no knowledge of the View-Model. The View-Model should have no knowledge of the View. This is important when implementing an MVVM application, as it reduces the dependencies between the various parts of the application.
The View in a MVVM designed app is the UI element, or in the case of a Xamarin Forms app, they are the Views. Only UI code should be placed in the Views.
The View-Model is the place where domain logic will reside. All UI controls should be bound to properties in the View-Model. The code that provides your UI controls with data, hides or shows UI elements and so on should all be implemented here. This way, you can unit test those rules and ensure that they are correct, and this is done without the need for the UI to be present. This means you don't have to keep using the simulator or physical device to test the domain rules of your app. You should be able to unit test these rules in the absence of the UI, and in complete isolation from other parts of the application. The unit tests should require minimal setup, and any dependencies should be injected to remove hard-wired coupling. This is good old-fashioned dependency injection, and it is a vital design pattern when implementing unit tests. This ensures the correctness of your domain.
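As a sketch of this approach (the service interface and view-model below are hypothetical illustrations, not code from the actual app), a view-model with an injected dependency can be exercised by a unit test with no simulator or device involved:

```csharp
using System;
using System.Globalization;

// Hypothetical service dependency; a unit test supplies a fake implementation.
public interface IOrderService
{
    decimal GetOrderTotal(int orderId);
}

// The View binds to TotalText and ShowFreeDelivery; it contains no logic itself.
public class OrderViewModel
{
    private readonly IOrderService _orderService;

    // The dependency is injected rather than hard-wired, so tests can replace it.
    public OrderViewModel(IOrderService orderService)
    {
        _orderService = orderService;
    }

    public string TotalText { get; private set; }
    public bool ShowFreeDelivery { get; private set; }

    public void Load(int orderId)
    {
        decimal total = _orderService.GetOrderTotal(orderId);
        // Invariant culture keeps the formatted output deterministic for tests.
        TotalText = total.ToString("0.00", CultureInfo.InvariantCulture);
        ShowFreeDelivery = total >= 50m;
    }
}
```

A test injects a fake `IOrderService` that returns a known total, calls `Load`, and asserts on `TotalText` and `ShowFreeDelivery` — the UI is never needed.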
The Model is concerned with the data, and therefore maps your data entities into classes. The Model will contain such things as definitions for customer, order, supplier etc. The Model should not be concerned with how it is used by the View-Model or View. For example, you may have an Order class which contains an Order-date. This is stored within the Model as a Date type. The fact that this date is displayed as a string in the UI is of no concern to the Model. Any conversions needed to map Model properties into UI elements should be implemented by the View-Model (you may have a conversion needed by several elements or Views, so it makes sense to place this conversion code within a View-Model where it can be invoked from multiple places). Again, these conversions can be unit tested with complete independence from the UI by placing them in the View-Model. You can write unit tests against the Model to ensure that the values you set against it match those that are returned. So if you set the Order-date of your Order to a specific date, you can assert that this date is returned by the unit test. This ensures the correctness of your underlying data.
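A minimal sketch of that split (the class and method names are illustrative, not from the actual app): the Model stores the order date as a `DateTime`, while the View-Model owns the conversion to display text so it can be shared by several Views and unit tested in isolation:

```csharp
using System;
using System.Globalization;

// The Model stores the order date as a date type; it knows nothing about
// how the UI displays it.
public class Order
{
    public int Id { get; set; }
    public DateTime OrderDate { get; set; }
}

// The View-Model owns the conversion to display text, keeping the Model
// free of presentation concerns.
public class OrderDisplayViewModel
{
    public string FormatOrderDate(Order order)
    {
        // Invariant culture keeps the output deterministic for the test;
        // a real app would more likely use the user's current culture.
        return order.OrderDate.ToString("dd MMM yyyy", CultureInfo.InvariantCulture);
    }
}
```

A test sets a known `OrderDate` on the Model and asserts on the string the View-Model produces, with no UI present.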
Unit testing a mobile app need not be difficult as long as you have carefully designed and architected the various moving parts and separated the key concerns. Implementing an architecture that supports separating out the various concerns is vital (layering). It's also useful to implement a design pattern that enforces such layering (such as MVC, MVVM). You should aim to keep your UI as sparse as possible, and place all code that is not involved in the UI elsewhere within the application.
I've been developing mobile apps for the Android and iOS platforms for several years now. I have used both Telerik Platform (now retired) and Xamarin Forms. Both of these are excellent development platforms. Most recently, I have been developing apps using Xamarin Forms. Most of the code for the app in a Xamarin Forms app is contained within a single, shared project. This code is shared between both the Android and iOS apps. When you require platform specific behaviour, you place this code in the Android or iOS specific project as required.
During the development of the latest app, we have hit several issues as you would expect. Some small, some not so small. Android development is pretty painless, intuitive, conforms to well defined best practices and standards. We have hit a few snags with Android, but these have been relatively small and easy to fix.
Apple however is a whole different can of worms. Nothing they do seems to conform to any well defined standard or best practice. They have this habit of almost deliberately ignoring the well defined and understood patterns and practices from other development platforms, and doing it "their way". It's fair to say that the "Apple way" is usually vastly more time consuming, complicated and error prone. The Apple motto seems to be the total inverse of Occam's razor.
When given two or more ways of solving a problem, always choose the worst option.
From provisioning profiles and certificates to asset catalogues (I have never encountered a worse way of storing images than this), the "Apple way" is never simple, straightforward or intuitive.
Nearly every issue or bug we have encountered has been with the iOS version of the app (on both Telerik Platform and Xamarin Forms). The Apple platform just doesn't seem as robust as Android (which just works).
I am assuming that the majority of Apple developers don't get much exposure to other development environments, and probably build mainly Apple apps. They therefore never get to experience how things "should" be. If you only know the "Apple way" of doing things, then you have nothing else for comparison.
I have worked within development for approaching 20 years now, and in that time have used pretty much every platform, tool and technology at some point. I therefore have a broad knowledge of what is considered "best practice" by my exposure to the huge number of technologies over the years. I know what works, and how things ought to work. I can spot efficiency, good design, simplicity and elegance from afar.
This is why I am of the opinion that the Apple way just sucks. Doing something differently merely for the sake of it is not innovative. There are very good reasons why certain ideas become best practice within the development field. It's because they work. And not just work, but are well understood and accepted by those working within the industry. They have been put to the test, and been successful.
In all my years as a professional software developer, engineer and architect, I can honestly say that I have never come across a development platform as poor as that provided by Apple. If you genuinely think Apple make great development products, then I'd suggest having a look at how everyone else builds their development tools. Microsoft and Google for example build excellent development tools, and they employ industry best practices and standards in their processes and workflows.
Unfortunately, while Apple remains a player in the mobile app space, developers such as myself will just have to put up with the "Apple way" of doing things. I think Apple would do well to take a look around at the other players in their industry and take some inspiration from them. Until they do, they will continue to frustrate developers who find the "Apple way" cumbersome, time consuming and inefficient.
Whenever I hear discussions relating to the prevalent censorship and bias at the hands of the tech giants (Facebook, Twitter, Google et al), an argument I hear repeated is that they're private companies and can do whatever they want. Yes they are private companies, but I don't think that's a sufficiently powerful nor persuasive argument for letting them off the hook. If you're unaware of the bias and censorship within Silicon Valley then read my article[^] where I cover these issues.
Here's why I think anyone proposing that particular argument is wrong.
- Google is the number one search engine across the entire planet, and as such has a large share of the internet-search market. They can control (and censor / filter) their searches to disseminate their own political narrative with ease. Unlike going to the local baker's to buy a cake, if you get refused for some reason, you can just go to the baker next door and try again. Saying Google is a private company and can therefore have total control over what they do is a little naive. Google are very secretive about how their algorithms work and will no doubt refute any claim that their searches are biased. But you only need to compare the results from Google with that of a neutral search engine (such as DuckDuckGo) and you will see the stark contrast when comparing searches for political terms (I covered this in my previous article).
- The tech giants are more than just tech companies. They are highly influential agents that shape our cultural, political and social landscapes. They step far outside the technical arena in how they shape and influence our day-to-day lives. Many people today get their news from their social media platform of choice e.g. Facebook, Twitter or via organic search via Google. This places them in very influential positions. Rather than merely informing us about the state of current events, they can influence them to fit their own political agenda. This is no longer acting as a neutral observer, but an agent of change and influence.
- As we have recently seen with the de-platforming of Gab.com, the tech giants will collude to crush their competitors. Gab has been de-platformed by (amongst others) Microsoft, Apple, Google, Paypal and Patreon. If this happened in any other industry, there would quite rightly be a public outcry. For some reason, this behaviour seems to be accepted within the tech industry (but only if you have the "right" politics). You can't have choice in the marketplace when the technical oligarchs of Silicon Valley will actively crush that competition. So the argument that "private companies can do what they want" only really applies when there is true competition and an open and fair marketplace. Silicon Valley provides none of these.
So stating that the tech giants are private companies, for me at least, doesn't constitute a valid argument when considered against the points I've made here. They do not operate within the boundaries of a market where there is anything approaching competition. They have huge power and influence that they wield to perpetuate their political agenda. It is this same power that they use (in collusion with other tech giants) to silence and crush their competitors.
I'll keep posting my usual technical articles, but from time to time I will continue to delve into the political side of things with articles such as these. I'm genuinely interested to hear other people's opinions on these matters so feel free to share and discuss your own views on these topics.
The latest version of the app (which will replace the current app that is in the app stores) is nearing completion. We are into user-acceptance testing with key stakeholders from around the business. The journey from beginning the app several months ago to now has involved a great deal of learning. Although we had an existing app on which to base our development efforts, that's where the similarities ended. Many of the technologies used for the new app were either brand new, or very different from when we last used them.
- Xamarin: Although I have used Xamarin previously (long before Microsoft decided to acquire it), it is vastly different now than it was then. It's fair to say that in its current Microsoft incarnation, much of the Android and iOS specifics are abstracted away from the developer, and it bears little resemblance to the version I used all those years ago. So whilst I needed to refresh my knowledge of Xamarin as it had changed substantially since I had last used it, it was brand new to the rest of the development team.
- App Center: This is Microsoft's build / test / deploy centre for mobile apps. This is an absolutely brilliant tool. We used this throughout our development lifecycle for all of our diagnostics and debugging. We added tracking for all our events, service calls and exception handling. App Center allows you to set up and configure analytics for your crash reporting as well as for event tracking. This was very useful when we needed to diagnose exceptions and errors during the development cycle. We also configured our Azure DevOps build to deploy to App Center. So with each code check-in, upon a successful build, we would have an Android and iOS release ready for testing.
- Telerik DataForm: This is a means of simplifying the development of your data-entry forms. You define the properties of your data-entry form in your model class (and decorate your properties with the necessary validation rules and label text). This model then forms the basis of your data-entry form. Telerik DataForm takes your model and generates the necessary UI controls for it, and hence generates your data-entry form, including the validation rules and label text. Your UI is therefore built from the programmatic definition of the underlying model. This is an incredibly powerful paradigm. It frees up the developer to focus on the model's rules and validation, and delegates the building of the UI to Telerik. This paradigm is not suitable for every form, but for simple, static data-entry forms it is perfect. Telerik DataForm implements the MVVM design pattern, thus your forms consist of the following logical pieces.
- View (the XAML layout and code-behind)
- View-Model (where you define the rules for your data-entry form)
- Model (where you define the data to which your UI elements are to be bound)
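As an illustrative sketch of the model-first idea, here is a decorated model using the standard System.ComponentModel.DataAnnotations attributes. Note this is an assumption for illustration: Telerik DataForm ships its own, similar annotation attributes, so consult the Telerik documentation for the exact names — but the principle is the same: label text and validation rules live on the model's properties, and the form is generated from them.

```csharp
using System.ComponentModel.DataAnnotations;

// Hypothetical data-entry model: the form is generated from this class, so
// label text and validation rules are defined here rather than in the UI.
public class CustomerEntryModel
{
    [Display(Name = "Full name")]
    [Required(ErrorMessage = "Please enter a name")]
    public string Name { get; set; }

    [Display(Name = "Email address")]
    [Required(ErrorMessage = "Please enter an email address")]
    [EmailAddress(ErrorMessage = "Please enter a valid email address")]
    public string Email { get; set; }
}
```

Because the rules are programmatic metadata on the model, they can be exercised by unit tests (for example via `Validator.TryValidateObject`) without rendering any form at all.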
- Azure AD B2C (Identity Provision): We previously set up Azure AD B2C (Business-to-Consumer) for one of our line-of-business web apps. This allowed us to delegate the login functionality to Azure. Rather than implementing our own login functionality, we configured the web app to use Azure AD B2C instead. This gives us an incredibly secure app as you would expect. We are leveraging the same login infrastructure used daily by millions of Office 365 users. We decided to use the same Azure AD B2C functionality in our mobile app. This gives us far higher security and scalability, and we don't have to write a single line of code. Perfect!
We also trialled Azure DevOps for this project. All our source code, build and release definitions were defined here. Although I have used Team Foundation Server previously, this was my first time using Azure DevOps, and my first time defining builds and releases for Android and iOS.
So it's fair to say that we had many (steep) learning curves on this project. Despite that though, they were the right decisions, as the new app puts us in a far stronger position both technically and strategically. From the development platform to the technology ecosystem, the new app is a far stronger proposition.