I actually really enjoy writing stuff like this. I wrote an article about writing flexible RESTful services that was very similar to GraphQL (having since checked out GraphQL I can see the similarity). GraphQL was developed by the multi-billion dollar Facebook empire; my version was developed by little ol' me.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
Following on from a previous article[^] I wrote as an introduction to writing asynchronous code with .NET, I want to describe a common problem I see when developers move beyond the basics: they write blocking asynchronous code. I've seen this problem on Stackoverflow and with developers I have worked with directly (both junior and senior).
Rather than try to explain the problem, I'll give some example code that should hopefully highlight the problem. Here's an ASP.NET Web API RESTful service being invoked from a client application.
The back-end service code is taken from one of our RESTful services that returns vehicle telemetry to a client application. For the purposes of clarity, I have omitted all logging, error checking and authentication code.
public async Task<string> Get(string subscriber, string trackertype)
{
var response = await this.GetData(subscriber, trackertype);
return response;
}
And here is a client that invokes the RESTful service. In this example the client is a unit test.
[TestMethod]
public async Task GetVehicleTests()
{
TrackerController controller = new TrackerController();
string subscriber = "testsubscriber";
string vehicle = "testvehicle";
var response = controller.Get(subscriber, vehicle);
Assert.IsNotNull(response);
Assert.IsNotNull(response.Result.ToString());
}
The above unit test code will deadlock. Remember that after you await a Task, the method resumes in the context that was captured at the await.
1. The unit test calls the Get() RESTful service (within the ASP.NET Web API context).
2. The Get() method in turn calls the GetData() method.
3. The GetData() method returns an incomplete Task, indicating that its work has not yet completed.
4. The Get() method awaits the Task returned by the GetData() method (the context is saved and can be re-instated later).
5. The unit test synchronously blocks on the Task returned by the Get() method which in turn blocks the context thread.
6. Eventually the GetData() operation will complete. This in turn completes the Task that GetData() returned.
7. The continuation for Get() is now ready to run, and it waits for the context to be available to allow it to execute in the context.
8. Deadlock. The unit test is blocking the context thread, waiting for the Get() method to complete, while the continuation of Get() is waiting for the context to become available so that it can complete.
How can this situation be prevented? Simple. Don't block on Tasks.
1. Use async all the way down
2. Make (careful) use of ConfigureAwait(false)
For the first suggestion, awaitable code should always be executed asynchronously. So given the example code here, the unit test was not correctly awaiting the result from the RESTful service. The unit test code should be modified as follows.
[TestMethod]
public async Task GetVehicleTests()
{
TrackerController controller = new TrackerController();
string subscriber = "testsubscriber";
string vehicle = "testvehicle";
var response = await controller.Get(subscriber, vehicle);
Assert.IsNotNull(response);
Assert.IsNotNull(response.ToString());
}
Like a handshake, whenever you have an await at one end of a service call, you should have async at the other.
The use of ConfigureAwait(false) is slightly more complicated. When an incomplete Task is awaited, the current context is captured so that the method can resume in that context when the task eventually completes i.e. for the code after the await keyword. If the await happens on a thread with no synchronisation context (for example a worker thread), the continuation simply runs on the thread pool; otherwise the captured context is platform specific e.g. the request context in ASP.NET, the UI context in WinForms etc. It is this constant switching between the captured context and worker threads that can cause performance issues and lead to a less responsive application, especially as the amount of async code (and with it the volume of context switching) grows. Yet responsiveness is exactly what we are trying to achieve by using asynchronous code in the first place.
There are a few rules to bear in mind when using ConfigureAwait(false)
- The UI should always be updated on the UI thread i.e. you should not use ConfigureAwait(false) when the code immediately after the await updates the UI
- Each async method has its own context which means that the calling methods are not affected by ConfigureAwait()
- Even with ConfigureAwait(false), the continuation can still run on the original thread if the awaited task has already completed (in that case no continuation needs to be scheduled).
A good rule of thumb would be to separate out the context-dependent code from the context-free code. The goal is to reduce the amount of context-dependent code (which can typically include event handlers).
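As a hedged sketch of this separation (the handler name, FetchDataAsync and the URL are hypothetical, not taken from our services), the event handler below keeps the single line that touches the UI and stays on the captured context, while the context-free work uses ConfigureAwait(false) throughout.
// Context-dependent: runs on the UI thread, so no ConfigureAwait(false) here.
private async void RefreshButton_Click(object sender, EventArgs e)
{
    string data = await FetchDataAsync(); // resumes on the UI context
    statusLabel.Text = data;              // safe: we are back on the UI thread
}

// Context-free: no UI access, so every await can use ConfigureAwait(false).
private async Task<string> FetchDataAsync()
{
    using (var client = new HttpClient())
    {
        var response = await client.GetAsync(new Uri("https://example.com/api/data")).ConfigureAwait(false);
        return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    }
}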
We can modify the Get() RESTful service as follows.
public async Task<string> Get(string subscriber, string trackertype)
{
var response = await this.GetData(subscriber, trackertype).ConfigureAwait(false);
return response;
}
Deadlocks such as this arise from not fully understanding asynchronous code, leaving the developer with code that is partly synchronous and partly asynchronous.
By following the suggestions in this article, you should see performance gains in your own code, as well as gain a better understanding of how asynchronous code works under the hood.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
Following on from a couple of my previous articles, I would like to reinforce and consolidate the ideas I laid out in them. In an article[^] from October 2017 I described a pattern I use for designing and implementing RESTful APIs, specifically with regards to implementing RESTful GET APIs. In an article[^] from July 2018 I described the principle of reducing the client surface area, and how this leads to cleaner, simpler and less complex code, particularly with regards to implementing RESTful APIs.
Where I currently work, we have a library of RESTful ASP.NET Web APIs that our web and mobile applications consume. These cover many different types of query as they are used in many different ways by the particular applications. For example the mobile app (which is aimed at fleet drivers) fetches data for the currently signed-in user, their latest mileage updates, their account manager, journeys they have made etc. The web application fetches data relating to users, roles, permissions, documents etc.
These are all GET methods that perform a variety of different queries against different data types. When designing the client API surface required for all these APIs I wanted to make them all consistent, irrespective of what data was being returned, or what query filters were being specified.
To clarify the problem a little further, I wanted to use the same client API for all data types e.g. mileage, user, company, journey etc. Further to this, I wanted the way in which the data was queried to be consistent. Example queries are listed below.
- Fetch me the mileage data for this user
- Fetch me the mileage data for this date range
- Fetch me the journey data for this date
- Fetch me the journey data for this user
- Fetch me the permissions for this user
- Fetch me the documents for this user
These are all queries that work on different data (mileage data, journey data, permissions data, documents data) and interrogate the data in different ways (by user, by date). Crucially, I wanted all of these queries to map onto a single GET API for consistency, and to reduce the complexity of the client (by reducing the client facing API to one API instead of multiple APIs). Reducing the client facing API is the principle of reducing the surface area of the client.
I finally came up with the following API design.
- I have a single controller with a GET method that accepts two parameters.
- The first parameter is a string that designates the type of query e.g. "getmileagebyuser", "getjourneybydate" etc
- The second parameter is a serialised query object that contains the values needed to query (or filter) the data e.g. the user ID, the date or whatever filters are required to satisfy the request.
- All queries must return their data as a serialised string (which the client can de-serialise back into an object).
For the purposes of clarity, the code examples used here omit error checking, logging, authentication etc to keep the code as simple as possible. In my own library of RESTful APIs I have separated out the requests made by the mobile app from those made by the web app. I therefore have two controllers, each with a single GET method that does all the heavy lifting of fulfilling the many different query requests. I have created a different controller for each type of client so as to prevent the controllers from bloating. You can separate out the requests any way you want. If you don't have many queries in your application, then you could simply place all of these query requests in a single GET method in a single controller. That is a design decision only the developer can make.
The controllers are called MobileTasksController and WebTasksController. For the purposes of this article I will focus on the latter controller only, although they both employ the same design pattern that I am about to describe.
First let's define our basic controller structure.
public class WebTasksController : BaseController
{
public WebTasksController()
{
}
public string WebGetData(string queryname, string queryterms)
{
}
}
You will need to decorate the WebGetData() method for CORS to allow the clients to make requests to your GET method.
[HttpGet]
[EnableCors(origins: "*", headers: "*", methods: "*")]
public string WebGetData(string queryname, string queryterms)
{
}
Enable CORS with the appropriate settings for your own particular application.
As we can see, the WebGetData() method has two parameters.
- queryname is a string that designates the type of query e.g. "getmileagebyuser", "getjourneybydate" etc
- queryterms is a serialised query object that contains the values needed to query (filter) the data e.g. the user ID, the date or whatever filters are required to satisfy the query request
Here's the class that I use for passing in the query filters.
[DataContract]
public class WebQueryTasks
{
[DataMember]
public Dictionary<string, object> QuerySearchTerms { get; set; }
public WebQueryTasks()
{
this.QuerySearchTerms = new Dictionary<string, object>();
}
}
At its core it comprises a dictionary of named objects. A dictionary of objects allows us to pass in filters for any type of data e.g. dates, ints, strings etc, and to pass in as many filters as we need e.g. fetch all the journeys for a specific user for a specific date. In this example, we pass in two filters.
- The user ID
- The date
Once the query is serialised we have a string such as the following (here showing a single email filter), which is then passed as the second parameter to the RESTful GET method.
{"QuerySearchTerms":{"email":"test@mycompany.co.uk"}}
Implementing our queries in this way makes for very flexible code that allows us to query our data in any way we want. For example, to fetch the journeys made by a specific user today:
var user = GetUser(emailaddress);
WebQueryTasks query = new WebQueryTasks();
query.QuerySearchTerms.Add("userid", user.Id);
query.QuerySearchTerms.Add("journeydate", Datetime.Now);
string queryterms = ManagerHelper.SerializerManager().SerializeObject(query);
The WebGetData() method then needs to deserialise this object and extract the filters from within. Once we have extracted the filters we can then use them to fetch the data as required by the request.
WebQueryTasks query = ManagerHelper.SerializerManager().DeserializeObject<WebQueryTasks>(queryterms);
if (query == null || !query.QuerySearchTerms.Any())
{
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to deserialise search terms.")));
}
The core of the WebGetData() method is a switch statement that takes the queryname as its input. Then, depending on the type of query, the method will extract the necessary filters from the WebQueryTasks parameter.
The names of the queries are stored as constants but could equally be implemented as an enum if preferred. We don't want to hard-code the names of our queries into the method, so any approach that separates these out is fine.
In the example below there are two queries. One returns company data for a specified user. The second returns company data for a specified company ID. In each case the code follows the same pattern.
- select the appropriate case statement in the switch
- extract the filters from the query
- invoke the appropriate backend service to fetch the data using the extracted filters (after first checking that the filter(s) are not empty)
- serialise the data and return it to the client
object temp;
string webResults;
switch (queryname.ToLower())
{
case WebTasksTypeConstants.GetCompanyByName:
webResults = this._userService.GetQuerySearchTerm("name", query);
temp = this._companiesService.Find(webResults);
break;
case WebTasksTypeConstants.GetCompanyById:
webResults = this._userService.GetQuerySearchTerm("companyid", query);
int companyId = Convert.ToInt32(webResults);
temp = this._companiesService.Find(companyId);
break;
default:
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError($"Unknown query type {queryname}.")));
}
We then need to serialise the results and return these to the client.
var result = ManagerHelper.SerializerManager().SerializeObject(temp);
return result;
In the production version of this controller, I have implemented many more queries in the switch statement, but for clarity I have only implemented two for the purposes of this article.
Here is the full code listing.
[HttpGet]
[EnableCors(origins: "*", headers: "*", methods: "*")]
public string WebGetData(string queryname, string queryterms)
{
WebQueryTasks query = ManagerHelper.SerializerManager().DeserializeObject<WebQueryTasks>(queryterms);
if (query == null || !query.QuerySearchTerms.Any())
{
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to deserialise search terms.")));
}
object temp;
string webResults;
switch (queryname.ToLower())
{
case WebTasksTypeConstants.GetCompanyByName:
webResults = this._userService.GetQuerySearchTerm("name", query);
temp = this._companiesService.Find(webResults);
break;
case WebTasksTypeConstants.GetCompanyById:
webResults = this._userService.GetQuerySearchTerm("companyid", query);
int companyId = Convert.ToInt32(webResults);
temp = this._companiesService.Find(companyId);
break;
default:
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError($"Unknown query type {queryname}.")));
}
var result = ManagerHelper.SerializerManager().SerializeObject(temp);
return result;
}
Just to repeat, for the purposes of this article, the method above has had all error checking, logging, authentication etc removed for the sake of clarity.
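The GetQuerySearchTerm() helper used in the switch statement above is not shown in this article; as a minimal sketch of what such a helper might look like (the real implementation may well differ - this simply mirrors how the method is called):
// Hypothetical helper: pulls a single named filter out of the deserialised query object.
public string GetQuerySearchTerm(string key, WebQueryTasks query)
{
    object value;
    if (query == null || !query.QuerySearchTerms.TryGetValue(key, out value) || value == null)
    {
        return string.Empty;
    }
    return value.ToString();
}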
I have implemented this pattern in all my GET APIs to great success. It is very flexible and allows me to query the data in multiple ways as necessary. It also allows the client code to be simpler, by reducing the client surface area (the client only needs to interact with a single endpoint / controller), and enforces consistency by ensuring that all queries are similar to one another (they must all pass in two parameters - the first designating the query type, the second containing the query filters).
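For completeness, here is a hedged sketch of how a client might invoke this endpoint over HTTP (the base address, route and query name are assumptions and depend entirely on your routing configuration):
public static async Task<string> GetCompanyAsync(string queryterms)
{
    using (var client = new HttpClient())
    {
        // queryname selects the query; queryterms carries the serialised WebQueryTasks object.
        string url = "https://example.com/api/WebTasks/WebGetData"
                   + "?queryname=" + Uri.EscapeDataString("getcompanybyid")
                   + "&queryterms=" + Uri.EscapeDataString(queryterms);

        // The response is the serialised result, which the client then de-serialises back into an object.
        return await client.GetStringAsync(new Uri(url));
    }
}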
This pattern of API design achieves the following benefits:
- Simpler server side code by producing substantially less code due to the generic nature of the pattern
- Simpler client side code by only having a single endpoint to interact with
- High degree of flexibility by allowing the APIs to filter the data any way the application requires
- Consistency by ensuring that all requests to the RESTful API are the same
I have been using this pattern in my own RESTful APIs for several years, including several production mobile apps (that are available in the stores) and line-of-business web apps. With the pattern in place, I can quickly and easily add new RESTful APIs. This makes adding new services to the apps more timely, and makes the process of adding value to the apps much quicker and simpler.
Feel free to take this idea and modify it as necessary in your own RESTful APIs.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I've been writing asynchronous code with the .NET Framework for several years now, and find that the .NET Framework does a good job of hiding the underlying conceptual details. The concepts are pretty straightforward if you understand how asynchronicity works. As I've found over the years though, these concepts are not always well understood or applied by less experienced developers. By less experienced, I don't always mean junior developers. I've come across senior developers who have struggled with asynchronous code too.
I've helped several developers fix issues in their code caused by misunderstandings of asynchronicity, and during code reviews I've found issues that highlighted basic misunderstandings of how to implement asynchronous code using the .NET Framework.
In this article I want to go through the basics of writing asynchronous code using the .NET Framework. I'll use C# to illustrate all examples, but conceptually the code will work the same when transposed to VB.NET or any other .NET language. I'll use examples from our ASP.NET Web API services code base, which makes extensive use of asynchronicity to give performant and responsive code. Our mobile apps all rely on these services for delivering functionality to the end user's device, so it is essential that they are highly responsive and performant. I have therefore made all of these services asynchronous to meet these requirements. I may follow this article up in the future with more advanced scenarios, but for now, I will stick to the basics.
What is Asynchronous Programming?
Let's start with some basic understanding of asynchronous programming. Most code gets executed in a sequential manner i.e.
- execute line 1
- execute line 2
- execute line 3
We have 3 lines of code that each execute some command, and they each run one after the other i.e. "execute line 1" is executed first. When this has finished execution then "execute line 2" gets executed. When this has finished executing then "execute line 3" is executed. These commands are run sequentially, one after another. This can also be referred to as synchronous code. The next line of code can only be executed when the previous line of code has completed.
var myList = new List<string>();
myList.Add("item1");
myList.Add("item2");
myList.Add("item3");
myList.Remove("item1"); A trivial example could be the code above. The first line creates a string list called myList. When this has completed the next 3 lines then add items to the string list (item1, item2 and item3). Finally, we remove item1 from the list. These lines of code are executed one after the other in a sequential (synchronous) manner.
When code is executed sequentially like this, one command after the other, we say that it has been executed synchronously.
We need to write our code differently when we interact with any kind of I/O device such as a file, a network or database. The same applies when we execute any CPU bound operations such as rendering high-intensity graphics during a game. We cannot make any guarantees about how quickly the device or operation may respond to our request, so we need to factor in waiting time when making requests to I/O devices or CPU intensive requests.
An analogy may be making a telephone call to book an appointment to have your car serviced. Immediately after making your booking, you need to write down the date and time of the booking. You may get straight to the front of the telephone queue if you're lucky. Alternatively, you may find you are further down the telephone queue and have to wait to get through to the garage. Either way, you cannot write down the date and time of the booking until you have got through to the garage.
In this scenario you don't know exactly when you can write down the date and time of the booking, as you may have to wait to get through to the garage.
And this is exactly how asynchronous code works.
When your code accesses I/O devices such as accessing a file, network or database (or makes a request to a CPU intensive operation) you cannot guarantee when your request will be serviced. For example, if you are accessing a database, there may be latency on the network, it may be hosted on legacy hardware, the record you are accessing may be locked and so on. Any one of these will affect the timeliness (or otherwise) of the response to your request.
If your network or database is busy and under extreme load, any request sent over it will be slower than requests made during less busy times. So it should be obvious that executing a command that relies on an I/O device immediately after submitting a request to that I/O device is likely to fail, as you may not have received any response from the I/O device.
Example
- connect to database
- fetch records from database
- close database connection
If you were to execute the above code synchronously, you could easily run into the situation where you are trying to fetch the database records before you have fully connected to the database. This would fail resulting in an exception being thrown. What you instead need to do is attempt to connect to the database, and ONLY when that has succeeded should you attempt to fetch the records from the database. Once you have fetched the records from the database, then you can close the database connection.
This is exactly how asynchronous code works. We can rewrite the above pseudo-code asynchronously.
- connect to the database
- wait for connection to database to be established
- once connected to the database fetch records from database
- close database connection
The two sets of pseudo-code look very similar, with the key difference being that the latter waits for the connection to the database to be established BEFORE making any attempts to fetch records from the database.
Hopefully by this point the goals of asynchronous programming are clear. The goal of asynchronous programming is to allow our code to wait for responses from I/O or CPU bound resources such as files, networks, databases etc.
Asynchronous programming with C#
Now that we understand the principles and goals behind asynchronous programming, how do we write asynchronous code in C#?
Asynchronous programming is implemented in C# using Task and Task<T>. These model asynchronous operations, and are supported by the keywords async and await. Task and Task<T> are the return values of asynchronous operations and can be awaited.
Here's a function that POSTs data to a RESTful endpoint, and does so asynchronously. For the purposes of simplicity I have removed all authentication etc from the code samples I will use.
public async Task<HttpResponseMessage> PostData(string url, HttpContent content)
{
using (var client = new HttpClient())
{
return await client.PostAsync(new Uri(url), content);
}
}
Things to note.
- The method returns a Task of type HttpResponseMessage to the calling program i.e. the method is returning an instance of HttpResponseMessage (e.g. an HTTP 200 if the method was successful).
- The async keyword in the method signature is required because the method body uses await i.e. the method needs to await the response from PostAsync() before that response can be handed back to the calling program.
To call this function we write the following code.
var response = await PostData(url, content);
The calling code (above) needs to await the response from the PostData() method and does so using the await keyword. Whenever you invoke an asynchronous method such as PostAsync(), you need to await the response. The two keywords go hand in hand. Asynchronous methods need to be awaited when they are invoked.
Here's another RESTful API method that fetches some data from a RESTful endpoint. The RESTful endpoint returns data in the form of a serialised JSON string (which the calling program will then de-serialise back into an object).
public async Task<string> GetData(string url)
{
using (var client = new HttpClient())
{
using (var r = await client.GetAsync(new Uri(url)))
{
string result = await r.Content.ReadAsStringAsync();
return result;
}
}
}
Things to note.
- The method returns a Task of type string to the calling program (the JSON serialised response from the RESTful endpoint).
- The async keyword in the method signature is required because the method body awaits the GetAsync() and ReadAsStringAsync() calls.
To call this function we write the following code.
string response = await GetData(url);
if (!string.IsNullOrEmpty(response))
{
}
The calling code (above) needs to await the response from the GetData() method and does so using the await keyword.
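To tie this back to the database pseudo-code from earlier, here is a minimal, hedged sketch of the same flow using System.Data.SqlClient (the connection string and query are placeholders, not taken from our services):
using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

public async Task<int> CountOrdersAsync(string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    {
        // Wait for the connection to be established before issuing the query.
        await connection.OpenAsync();

        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            // Wait for the records to be fetched before using the result.
            object result = await command.ExecuteScalarAsync();
            return Convert.ToInt32(result);
        }
    }
}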
Key takeaways
- Async code can be used for both I/O bound as well as CPU bound code
- Async code uses Task and Task<T>, which model asynchronous operations and are the return values from asynchronous methods (as we saw in the PostData() and GetData() methods)
- The async keyword turns a method into an async method which then allows you to use the await keyword in its method body (as we saw in the PostData() and GetData() methods).
- Awaiting an asynchronous method suspends the calling method and yields control back to its caller until the awaited task is complete
- The await keyword can only be used within an async method
This is the first in what will hopefully be a series of articles I intend to write on asynchronous programming. I will cover other areas of asynchronous programming in future articles (giving tips, advice, advanced scenarios and even Javascript). Hopefully for now, this has given a taster of how to implement the basics of asynchronous programming with C#. Watch this space for further articles.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I recently came across some strange behaviour in our ASP.NET Core 2.2 web application. A colleague of mine who was working on some new functionality, had checked in several Javascript files. These were 3rd party Javascript files to add support for drag & drop. The majority of the files for this 3rd party library were already minified, with the exception of one.
For some reason this one particular Javascript file was not minified. So we added the file to bundleconfig.json in Visual Studio so that our build process would minify it. The bundleconfig.json minifies several Javascript files and outputs the aggregated file as site.min.js. Whilst I was testing the latest version of the app I was getting all sorts of errors in the browser, as many of the Javascript functions were not being found. This seemed strange, as everything had been working perfectly, and all we had done was check in a few Javascript files.
Looking at the site.min.js file that was on the build and test servers, it became apparent that the site.min.js file contained only the contents of the un-minified 3rd party Javascript file. All of the other files we were minifying had somehow been removed from the resultant site.min.js file.
After much investigation I narrowed down the issue to the following command in our build pipeline.
dotnet publish -c release
This command was recreating the site.min.js file, but the only file it was including from bundleconfig.json was the un-minified 3rd party Javascript file; all the others were being dropped. I excluded this step from the build process to check, and sure enough, the culprit was definitely this build command.
I managed to solve the problem by manually minifying the culprit Javascript file and adding it to the project in its minified form. I then excluded it from the bundleconfig.json minification process. This has now solved the problem, and everything works perfectly again.
So basically, if you're including 3rd party Javascript files, make sure you add them to your Visual Studio project in minified form (unless you're using a CDN of course). Don't attempt to minify 3rd party files in your build process. Only minify your own Javascript files in your build process. It took me a few hours to diagnose and fix the problem, so hopefully by reading this, I may save someone else the same pain I went through fixing the problem.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
Whenever I'm mentoring a more junior member of the software development team, there are always two primary traits that I encourage them to learn. These are traits that transcend programming language, methodology, technical stack or anything else that may be relevant to their role. They are structure and diligence. Both of these should permeate everything they do in their everyday work. Being mindful of these will help them become better software developers. I will explain why these traits are so important to the software developer.
Structure
Approaching your work with a structured mindset allows you to demarcate and separate out the various elements of the problem you are solving. From grouping the different areas of the requirements specification, to grouping the components and classes in the class hierarchy, to grouping the related unit tests... having structure allows you to demarcate the boundaries between these different elements. Everything has a structure. The trick is to clearly define it and communicate it to the rest of the team. If you are documenting the requirements for a piece of functionality, clearly structure the document to demarcate the different areas e.g. functional requirements, non-functional requirements, UI considerations etc. If you are developing a new component, your class structure should clearly demarcate the different behaviours and areas of responsibility, and how the classes interact. Anyone reading through the code should be able to quickly determine what the different classes do and how they relate to each other from the structure you have implemented. Group similarly related elements together and enforce this in your coding standards document. Everything you do should be structured, logical, and consistent.
Diligence
Approaching your work with due care and diligence will help eliminate mistakes and make you a better developer. Be conscientious and mindful of what you are doing at all times. Before checking in that code, make sure you do a diff, run a full rebuild and execute all dirty unit tests. This may take additional time, but it will always be quicker than the time it takes to fix a broken build. If you are writing a document such as a requirements specification, take the time to proof-read it, checking it over for spelling and grammar as well as accuracy. Work smart, not fast. Reducing the number of mistakes you make by being more diligent will earn you a reputation as someone who is dependable, produces high quality work and takes their role seriously. Don't be that person who is known to constantly make mistakes, break the build or submit code that doesn't work because they didn't test it sufficiently.
Applying structure and diligence to everything you do will have positive benefits on your work. These traits can be applied irrespective of your particular role (developer, tester, designer) or what tools and / or technologies you use. I would prefer to work with a developer who took these traits seriously than a developer who thinks producing more lines of code than the next developer makes them more productive. I would always pick quality over quantity. A customer is far more likely to forgive a slipped deadline if they eventually get something that is of high quality, rather than something delivered on time that contains bugs.
Be structured and diligent, apply these traits with rigour, and I can guarantee that the quality of the code produced by yourself and your team will increase.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
At what point should you consider rewriting a software application? Just because an application is old, doesn't mean it should be rewritten. So what factors ought to be considered when attempting to justify rewriting an application? There are many things to consider, and rewriting any application should never be taken lightly. In fact, doing a cost-benefit analysis is probably a good starting point.
In today's fast-paced software development environment, the latest fad can quickly become tomorrow's long-forgotten hype. What does it even mean to be legacy? According to Wikipedia
Quote: a legacy system is an old method, technology, computer system, or application program "of, relating to, or being a previous or outdated computer system"
This article is not intended to be a detailed discussion of the considerations to take into account when looking to rewrite a legacy application. That would be a considerably lengthy article. Rather, it looks at some examples from my own experiences with legacy applications. Like most developers, I thrive on working with the latest shiny new tools, but there are also times when you need to work with that legacy application too. I have heard many developers berating these legacy applications. Sometimes for good reason too. But quite often, the legacy application has been working away for years, quietly, solidly and without causing a fuss.
I've worked with many legacy applications over the years. Some were surprisingly good, some just plain awful. Some of them, despite their age, were rock solid and were capable of running far into the future. Others spluttered and juddered their way along and needed a lot of man-handling to keep them running.
Just because an application is legacy is not reason enough to justify a rewrite. I remember working for one particular company where the business critical back-end application was developed in COBOL. It was over twenty years old but rock solid. It rarely caused problems or generated errors. It just worked.
Another company I worked for many years ago also had a lot of legacy code (and according to sources at the company, much of the legacy code is still there to this day). The code was part of their core business logic and had been around for over a decade. This was accountancy and financials logic, and whilst the code had been updated with bug fixes over the years, it didn't require much man-handling to keep it up and running. In fact, when they decided to upgrade the application to use newer development environments and tooling, they kept much of the legacy code as they knew it worked. They didn't want to risk screwing up their core business logic by rewriting it.
Age alone is not a deciding factor when considering whether to rewrite an application. There are many legacy applications that run just fine with few problems. Alternatively, there are a great many applications developed with modern technology and tooling that are plain awful.
A few things to consider.
- Is the application code buggy and / or cause regular problems or errors?
- Does it require man-handling to keep it up and running?
- Does it meet non-functional requirements i.e. is it secure, performant etc.?
- Is it easy to extend and add new features?
- Does it require legacy hardware that may be insecure?
- What are the running costs of the application (development costs fixing bugs, server / hardware costs, third-party costs etc)?
- Does it interact with third-party applications that may have updated their APIs?
- Has it been developed using outdated environments or tools?
Not all of these considerations will be applicable to every scenario, so don't take the list in its entirety. They are merely intended to be conversation starters to elicit further discussion. Deciding whether or not to rewrite an application is not a decision that should ever be taken lightly, but equally you need to take into account many different pieces of information and assess them in the context of the bigger picture. Age alone is not a compelling argument for a rewrite, but taken in the context of other factors, it may form part of one.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
There's an approach that I have been using for several years now that has helped me improve and simplify my stored procedures. This is for stored procedures that return data i.e. SELECT stored procedures as opposed to INSERT or UPDATE stored procedures. This approach is particularly useful where a stored procedure needs to reference more than one table i.e. where there is a JOIN between one or more tables.
Firstly I create a VIEW of the data that I want to query. The VIEW contains all the tables, columns, JOINs etc as necessary. It is from this VIEW that the stored procedure will SELECT its data as necessary. All the stored procedure needs to do then is filter the data from the VIEW with a WHERE clause.
The advantage of this approach is that the VIEW hides the underlying details of all the JOINs. The stored procedures then become simple affairs, as they simply SELECT from the VIEW. This also allows a VIEW to be reused across multiple stored procedures, so you don't need to repeat the same complicated JOINs in each of your stored procedures.
Example VIEW
CREATE VIEW [dbo].[v_CardDefinitions] AS
SELECT
CardDefinitions.*,
Cards.ID AS CardID,
Cards.ParentID,
Cards.[Index],
Cards.UserID,
Cards.CardDefinitionID,
Users.Email AS UserEmail,
Modules.Name AS ModuleName
FROM
CardDefinitions
LEFT JOIN
Cards ON CardDefinitions.ID = Cards.CardDefinitionID
JOIN
Modules ON CardDefinitions.ModuleID = Modules.ID
LEFT JOIN
Users ON Cards.UserID = Users.ID
WHERE
CardDefinitions.Active = 1
Example stored procedure
CREATE PROCEDURE [dbo].[Cards_GetById]
@cardId INT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SELECT
DISTINCT ID, Name, [Permissions]
FROM
v_CardDefinitions
WHERE
ID = @cardId
END
So to summarise the approach.
- Create a VIEW of the data that JOINs all the necessary tables
- Create a stored procedure that SELECTs data from the VIEW by filtering the VIEW using WHERE clauses
This is an approach that I use regularly as it simplifies the stored procedures I need to create.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I recently had a requirement to update multiple rows with the same value. We have a table that stores information about documents (Excel documents, Word documents, text documents, images, reports etc). Every document has an owner associated with it. This person has admin privileges over the document. After a discussion with one of our users, they wanted the ability to change the owner of a document. Doing this at the level of a single document is straightforward. However, the user wanted this for multiple documents. For example, if a user is due to leave the business, they wanted the ability to change the owner of all of that user's documents to a new owner.
I therefore needed the ability to pass a list of document IDs into a stored procedure. The stored procedure would then change the owner for all the documents in the list to the specified owner. Passing in the comma-delimited list of document IDs wouldn't be difficult, as this is essentially a long string. The tricky part would be to iterate through the items in the list i.e. to fetch each document ID from the comma-delimited list so that the owner can be updated.
The first thing I needed to do was to create a function that could iterate through the list. I created a Table-Valued Function (TVF) called Split to achieve this. If you don't already know, a TVF is a function that returns a table (as the name suggests). In our case, we will return a two column table containing a unique ID and an item from the list. So if there are 10 items in the list, there will be 10 rows in the table returned by our TVF.
CREATE FUNCTION [dbo].[Split]
(
@List nvarchar(2000),
@SplitOn nvarchar(5)
)
RETURNS @RtnValue table
(
Id int identity(1,1),
Value nvarchar(100)
)
AS
BEGIN
While (Charindex(@SplitOn,@List)>0)
Begin
Insert Into @RtnValue (value)
Select
Value = ltrim(rtrim(Substring(@List,1,Charindex(@SplitOn,@List)-1)))
Set @List = Substring(@List,Charindex(@SplitOn,@List)+len(@SplitOn),len(@List))
End
Insert Into @RtnValue (Value)
Select Value = ltrim(rtrim(@List))
Return
END
The function has two parameters. The first is the comma-delimited list of document IDs.
@List = '1, 2, 3, 4, 5'
The second parameter is the delimiter. In this case we are passing a comma-delimited list, hence the delimiter is a comma.
@SplitOn = ','
The function loops through the list, locating the next item by searching for the next occurrence of the delimiter. It keeps doing this until it cannot find any more occurrences of the delimiter. Each item it finds between the current and next delimiter is inserted into the table that will be returned by the TVF.
We next need to write a stored procedure that invokes our Split Table-Valued-Function.
CREATE PROCEDURE [dbo].[Documents_UpdateOwner]
@owner INT,
@documentids NVARCHAR(1000)
AS
BEGIN
UPDATE
Documents
SET
UploadedBy = @owner
WHERE
ID IN (SELECT CONVERT(INT, Value) FROM Split(@documentids, ','))
END
There are two parameters to the stored procedure. The first one is the ID of the new owner for the documents. The second parameter is a comma-delimited list of document IDs for which we wish to change the owner. The items returned from the Split TVF are stored in string format. Therefore if we need to compare against data in another format we need to do a conversion. In our case, we are comparing against an INT and therefore need to convert the item from an NVARCHAR to an INT. Obviously we wouldn't need to do any conversion if we were comparing against string data.
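As a hedged illustration of how the stored procedure might be invoked from C# (the connection string and method name are placeholders; plain ADO.NET is used here purely for illustration):
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static void UpdateDocumentOwner(string connectionString, int newOwnerId, IEnumerable<int> documentIds)
{
    // Build the comma-delimited list the stored procedure expects e.g. "1,2,3".
    string documentIdList = string.Join(",", documentIds);

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("Documents_UpdateOwner", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@owner", newOwnerId);
        command.Parameters.AddWithValue("@documentids", documentIdList);

        connection.Open();
        command.ExecuteNonQuery();
    }
}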
I have since used this Table-Valued Function in other stored procedures where I need to iterate through a list of items. It's a very efficient way of updating multiple rows. Instead of having to make multiple calls to a stored procedure to update each document owner, I can make one call to a stored procedure and update all of them at once. This is a neat way to handle those scenarios where you need to update data from a list of items.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I recently had some plumbing work done in my house that made me think of a similarity between software development and plumbing. I realise they are fundamentally different beasts, but bear with me. Whilst talking to my plumber, he was showing me the differences between the work he had done, and the work done on one of the other houses in the street where I live. Even as a complete novice I could see the differences he was describing. He wasn't trying to be disrespectful or mean to the other plumber (he didn't know him as he had never met him), but merely demonstrating how high his quality of work was using a direct example.
- The holes made in the brickwork in my house were neat and the pipes fitted tightly through with no gaps. In the other house they were rough and there were gaps where the pipes came through.
- Where brickwork needed replacing outside my house, the bricks had been replaced with identically coloured ones and you couldn't see any difference when looking at the wall. On the other house, the bricks had been replaced with differently coloured bricks, and in such a way that the interlacing (bricks are laid in an overlapping manner vertically for strength) had been broken.
- There were no pipes running outside my house. The pipes running outside the other house were left totally exposed to the elements as they were not protected with lagging.
I'm sure there were similar differences inside the houses too.
The point I am making is that my plumber showed care. His work was of a very high standard and demonstrated diligence and work ethic. The other plumber was satisfied with far lower standards. For him, close was good enough.
This same comparison can also be made with software development. When I write code, I take care to ensure that my code is well organised, structured and readable. I ensure that there are unit tests that exercise an adequate level of code coverage. I implement best practices and aim to be consistent.
When I look at a piece of code, I can very quickly determine whether care was put into it. Sloppy, ill-thought-out code that is inconsistent and unstructured is among the signals that reveal such a lack of care. Even as a novice, you can still demonstrate a level of care within your work. This is not about how knowledgeable or experienced you are, but how diligent you are. It is still entirely possible to write code with care and attention to detail despite being inexperienced.
As a professional software engineer, I want others to look at my code and think "Hey this guy has put a lot of effort and care into writing this". It will have my name against it. I have high standards, and I expect the same from every other developer on the team. I have taken it upon myself to write the coding standards document that we all follow as a team. Not by dictatorship, but by democracy.
When you have checked in your code, take a moment to reflect on what another developer would think of it. What would they think when looking at your code? What does your code say about you and your work ethic? Our bread and butter is our code. The care, love and diligence that we use to craft it speaks volumes about us as professional software developers. Make sure that when another developer looks at your code, at the very least they will say that this guy cared about what they were doing.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
modified 8-Mar-19 12:24pm.
|
|
|
|
|
Following on from an earlier article[^] I wrote about versioning a .NET Core 2.0 application, I have had to revise my approach, since the method I used for that version of the application is not supported in .NET Core 2.2. In that article, I demonstrated how to use a tool called setversion[^] for versioning a .NET Core 2.0 application. After upgrading our application to .NET Core 2.2 I found that this is no longer supported.
Instead of using the setversion tool, I am using the dotnet publish command-line utility. When using this command-line utility, you are able to specify a version number.
I am still using the same build script as described in my previous article, and this is invoked from our TFS build server in the same manner. Just to reiterate, within TFS you have the ability to pass arguments to your Windows batch files. I am passing the build version number $(Build.BuildNumber) as the argument.
I then invoke my Windows batch file (called setversion.bat)
@echo off
cls
ECHO Setting version number to %1
cd <projectFolder>
dotnet restore
dotnet publish <project>.csproj --configuration Release /p:Version=%1
This all works perfectly, and the deployed application assemblies are stamped with the correct version number.
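If you want to confirm the stamped version at runtime (for example on a diagnostics page), a small hedged sketch along these lines can read it back via reflection; this is purely illustrative and not part of the build script above:
using System.Reflection;

public static class VersionInfo
{
    // Reads back the version stamped by dotnet publish /p:Version=...
    public static string GetRunningVersion()
    {
        var assembly = Assembly.GetEntryAssembly() ?? Assembly.GetExecutingAssembly();

        var informational = assembly
            .GetCustomAttribute<AssemblyInformationalVersionAttribute>()
            ?.InformationalVersion;

        return informational ?? assembly.GetName().Version?.ToString() ?? "unknown";
    }
}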
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
In my previous article Sending Push Notifications with Azure Notification Hub[^] I briefly described our rationale for selecting Azure Notification Hub over alternatives. I have now fully implemented an ASP.NET Web API service for sending push notifications as well as managing their associated tags.
The service provides the following functionality.
- Send push notifications to either Android or iOS devices (with or without tags)
- Adds tags
- Removes tags
If you aren't familiar with the concept of tags where push notifications are concerned, you aren't alone. I hadn't heard of them either until I started working with push notifications. The concept is surprisingly simple, yet provides great flexibility in how you target where your push notifications are sent.
When a device is registered for push notifications (via code running on the device), you can optionally assign tags with the device registration. This is a list of characteristics (or interests) that the device wishes to receive push notifications about. Tags can either be set by the user (perhaps via a system preferences page where they can tick boxes to select the items they wish to receive push notifications about) or by the backend (where we can set characteristics to allow us to target specific devices(s) when sending push notifications).
In our case, we have implemented the latter i.e. we add tags that relate to the user's device to allow us to send targeted push notifications. For example, we have added tags that specify the user's ID, their company ID etc. This allows us to send a push notification to a specific user's device (by specifying the user's ID) or to all the users for a specific company (by specifying the company ID).
When a push notification is sent, you can specify a tag alongside your push notification message. The push notification is then only sent to registered devices that have expressed an interest in that particular tag. So in our case, we can send a message to a specific user by supplying their ID as the tag. Or we can send a push notification and supply the company ID, thus ensuring that the push notification is only sent to users of that specific company. We can slice and dice the demographics of our user base in any way that we find meaningful by simply registering the device with the desired tag(s).
This is a powerful way of decomposing the demographics of your user base. You can now explicitly categorise your user base by the tags they have registered with, which allows us to send targeted push notifications, right the way down to a specific user's device.
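As a hedged illustration of what a tag-based send looks like with the Microsoft.Azure.NotificationHubs package (the hub name, connection string, payloads and tag format are placeholders, and this is not our actual service code):
using Microsoft.Azure.NotificationHubs;
using System.Threading.Tasks;

public class PushSender
{
    // Placeholder connection details - use your own hub's values.
    private readonly NotificationHubClient _hub = NotificationHubClient.CreateClientFromConnectionString(
        "<DefaultFullSharedAccessSignature connection string>", "<hub name>");

    public async Task SendToUserAsync(string userId, string message)
    {
        // Only devices registered with this tag (an assumed "user:{id}" format) receive the notification.
        string tag = "user:" + userId;

        // iOS (APNs) payload.
        string applePayload = "{\"aps\":{\"alert\":\"" + message + "\"}}";
        await _hub.SendAppleNativeNotificationAsync(applePayload, tag);

        // Android (FCM) payload. On older package versions this method is SendGcmNativeNotificationAsync.
        string androidPayload = "{\"data\":{\"message\":\"" + message + "\"}}";
        await _hub.SendFcmNativeNotificationAsync(androidPayload, tag);
    }
}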
The service that I have implemented manages these tags, as well as providing the ability to send the push notifications themselves. The service therefore allows the backend to add and / or remove tags from a user's device. For example, when a user logs in on a device, the service is invoked to register them with various tags according to the information we hold on them. Likewise, we will remove those tags when they sign out.
This process is very straightforward, yet gives us an incredible level of flexibility for sending targeted push notifications to our users. If you haven't already looked into the concept of push notification tags, then I'd definitely have a look at them. They're a great idea.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
In the latest version of the Xamarin Forms app that I am working on, we wanted to send push notifications to the devices. There were a couple of approaches that we could have taken, the key ones being Twilio (which we are already using for sending SMS messages) and Azure Notification Hub. After some initial exploration, the clear choice was Azure Notification Hub. Unsurprisingly it had tight integration with Xamarin Forms and the Microsoft ecosystem, and was very straightforward to configure and get working.
There were also very good examples of how to make the necessary code changes to the respective Android and iOS projects to ensure we got this working quickly.
The beauty of working with Azure Notification Hub is that it abstracts us away from the underlying details of the Android and iOS platforms. Once we had made the necessary configuration and setup changes to enable push notifications for each platform, we integrated the platform specific push notification engines into Azure Notification Hub. From this point onwards, we only have to work with Azure Notification Hub. This gives us a far simpler and cleaner abstraction over our notification setup.
It is very simple to set up and send test push notifications to your registered devices using Azure Notification Hub. We have also integrated App Center event tracking for all device registrations and push notification sends. This gives us a helicopter view of what our code is doing under the hood, and helps us diagnose any errors should they arise.
The step-by-step tutorials I used can be found here[^].
So if you're looking to implement push notifications in your mobile app, give Azure Notification Hub a try.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
With the imminent release of our latest mobile app, I thought I'd summarise how we ensured high levels of quality and proved that the software was correct. I'm not going to write an article justifying the case for unit testing (it should go without saying that unit testing is a fundamental part of the development process - if not, you're doing it wrong), but rather explain how we implemented unit testing within the software for the app.
The architecture I favour when designing an application is to first reduce the surface area of the client[^]. Simply put, this entails keeping the UI code as sparse as possible, and removing any / all code that is involved with the domain. The UI should ONLY contain code that relates to the UI. While this sounds straightforward, I have lost count of the number of times I've come across code bases where the UI contains code from the domain and / or the data layer.
In relation to a Xamarin Forms mobile app, you should keep the code in the Views as sparse as possible. The UI code should only invoke your domain code, it should NEVER implement it. Your Xamarin Views should contain code for manipulating the various UI controls, populating them with data and so on. As soon as there is a need for anything beyond this, refactor the code and place it in a completely separate layer of the app. Within the context of a Xamarin Forms app, I created separate folders for such things as the models, services and entities. These were kept completely separate from the Views.
To enforce this separation of concerns, we adopted the MVVM design pattern. I won't go into great detail here about this pattern (as there are many articles out there already). The MVVM pattern stands for
Model -> View -> View-Model
More correctly it could be named VVMM (View -> View-Model -> Model) as this is the order in which they relate to each other (in terms of dependency). The Model should have no knowledge of the View-Model. The View-Model should have no knowledge of the View. This is important when implementing an MVVM application, as it reduces the dependencies between the various parts of the application.
The View in a MVVM designed app is the UI element, or in the case of a Xamarin Forms app, they are the Views. Only UI code should be placed in the Views.
The View-Model is the place where domain logic will reside. All UI controls should be bound to properties in the View-Model. The code that provides your UI controls with data, hides/shows the UI element etc should all be implemented here. This way, you can unit test those rules and ensure that they are correct. And this is done without the need for the UI to be present. This means you don't have to keep using the simulator or physical device to test the domain rules of your app. You should be able to unit test these rules in the absence of the UI, and in complete isolation from other parts of the application. The unit tests should require minimal setup, and any dependencies should be injected into the methods to remove hard-wired dependencies. This is good old fashioned Dependency-Injection, and it is a vital design pattern when implementing unit tests. This ensures the correctness of your domain.
The Model is concerned with the data, and therefore maps your data entities into classes. The Model will contain such things as definitions for customer, order, supplier etc. The Model should not be concerned with how it is used by the View-Model or View. For example, you may have an Order class which contains an Order-date. This is stored within the Model as a Date type. The fact that this date is displayed as a string in the UI is of no concern to the Model. Any conversions needed to map Model properties into UI elements should be implemented by the View-Model (you may have a conversion needed by several elements or Views, so it makes sense to place this conversion code within a View-Model where it can be invoked from multiple places). Again, these conversions can be unit tested with complete independence from the UI by placing them in the View-Model. You can write unit tests against the Model to ensure that the values you set against it match those that are returned. So if you set the Order-date of your Order to a specific date, you can assert that this date is returned by the unit test. This ensures the correctness of your underlying data.
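To make that concrete, here is a minimal, hypothetical sketch (the Order class, the date format and the test cases are purely illustrative) showing a Model holding its natural data type, the View-Model performing the conversion, and both being unit tested without any UI present.
using System;
using System.Globalization;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Model: holds the data in its natural type.
public class Order
{
    public DateTime OrderDate { get; set; }
}

// View-Model: converts the Model's value into what the View will display.
public class OrderViewModel
{
    private readonly Order _order;

    // The Model is injected, so the View-Model can be tested in isolation.
    public OrderViewModel(Order order)
    {
        _order = order;
    }

    public string OrderDateDisplay =>
        _order.OrderDate.ToString("dd MMM yyyy", CultureInfo.InvariantCulture);
}

[TestClass]
public class OrderViewModelTests
{
    [TestMethod]
    public void OrderDateDisplay_FormatsTheModelDate()
    {
        var order = new Order { OrderDate = new DateTime(2018, 11, 5) };
        var viewModel = new OrderViewModel(order);
        Assert.AreEqual("05 Nov 2018", viewModel.OrderDateDisplay);
    }

    [TestMethod]
    public void Model_ReturnsTheValueThatWasSet()
    {
        var order = new Order { OrderDate = new DateTime(2018, 11, 5) };
        Assert.AreEqual(new DateTime(2018, 11, 5), order.OrderDate);
    }
}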
Unit testing a mobile app need not be difficult as long as you have carefully designed and architected the various moving parts and separated the key concerns. Implementing an architecture that supports separating out the various concerns is vital (layering). It's also useful to implement a design pattern that enforces such layering (such as MVC, MVVM). You should aim to keep your UI as sparse as possible, and place all code that is not involved in the UI elsewhere within the application.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I've been developing mobile apps for the Android and iOS platforms for several years now. I have used both Telerik Platform (now retired) and Xamarin Forms, and both are excellent development platforms. Most recently, I have been developing apps using Xamarin Forms. Most of the code in a Xamarin Forms app is contained within a single, shared project. This code is shared between both the Android and iOS apps. When you require platform-specific behaviour, you place this code in the Android or iOS specific project as required.
During the development of the latest app, we have hit several issues as you would expect. Some small, some not so small. Android development is pretty painless and intuitive, and conforms to well-defined best practices and standards. We have hit a few snags with Android, but these have been relatively small and easy to fix.
Apple however is a whole different can of worms. Nothing they do seems to conform to any well defined standard or best practice. They have this habit of almost deliberately ignoring the well defined and understood patterns and practices from other development platforms, and doing it "their way". It's fair to say that the "Apple way" is usually vastly more time consuming, complicated and error prone. The Apple motto seems to be the total inverse of Occam's razor.
When given two or more ways of solving a problem, always choose the worst option.
From provisioning profiles and certificates to asset catalogues (I have never encountered a worse way of storing images than this), the "Apple way" is never simple, straight-forward or intuitive.
Nearly every issue or bug we have encountered has been with the iOS version of the app (on both Telerik Platform and Xamarin Forms). The Apple platform just doesn't seem as robust as Android (which just works).
I am assuming that the majority of Apple developers don't get much exposure to other development environments, and probably build mainly Apple apps. They therefore never get to experience how things "should" be. If you only know the "Apple way" of doing things, then you have nothing else for comparison.
I have worked within development for approaching 20 years now, and in that time have used pretty much every platform, tool and technology at some point. I therefore have a broad knowledge of what is considered "best practice" by my exposure to the huge number of technologies over the years. I know what works, and how things ought to work. I can spot efficiency, good design, simplicity and elegance from afar.
This is why I am of the opinion that the Apple way just sucks. Doing something differently merely for the sake of it is not innovative. There are very good reasons why certain ideas become best practice within the development field. It's because they work. And not just work, but are well understood and accepted by those working within the industry. They have been put to the test, and been successful.
In all my years as a professional software developer, engineer and architect, I can honestly say that I have never come across a development platform as poor as that provided by Apple. If you genuinely think Apple make great development products, then I'd suggest having a look at how everyone else builds their development tools. Microsoft and Google for example build excellent development tools, and they employ industry best practices and standards in their processes and workflows.
Unfortunately, while Apple remains a player in the mobile app space, developers such as myself will just have to put up with the "Apple way" of doing things. I think Apple would do well to take a look around at the other players in their industry and take some inspiration from them. Until they do, they will continue to frustrate developers who find the "Apple way" cumbersome, time consuming and inefficient.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Whenever I hear discussions relating to the prevalent censorship and bias at the hands of the tech giants (Facebook, Twitter, Google et al), an argument I hear repeated is that they're private companies and can do whatever they want. Yes they are private companies, but I don't think that's a sufficiently powerful or persuasive argument for letting them off the hook. If you're unaware of the bias and censorship within Silicon Valley then read my article[^] where I cover these issues.
Here's why I think anyone proposing that particular argument is wrong.
- Google is the number one search engine across the entire planet, and as such has a large share of the internet-search market. They can control (and censor / filter) their searches to disseminate their own political narrative with ease. Unlike going to the local baker's to buy a cake, if you get refused for some reason, you can just go to the baker next door and try again. Saying Google is a private company and can therefore have total control over what they do is a little naive. Google are very secretive about how their algorithms work and will no doubt refute any claim that their searches are biased. But you only need to compare the results from Google with that of a neutral search engine (such as DuckDuckGo) and you will see the stark contrast when comparing searches for political terms (I covered this in my previous article).
- The tech giants are more than just tech companies. They are highly influential agents that shape our cultural, political and social landscapes. They step far outside the technical arena in how they shape and influence our day-to-day lives. Many people today get their news from their social media platform of choice e.g. Facebook, Twitter or via organic search via Google. This places them in very influential positions. Rather than merely informing us about the state of current events, they can influence them to fit their own political agenda. This is no longer acting as a neutral observer, but an agent of change and influence.
- As we have recently seen with the de-platforming of Gab.com, the tech giants will collude to crush their competitors. Gab has been de-platformed by (amongst others) Microsoft, Apple, Google, Paypal and Patreon. If this happened in any other industry, there would quite rightly be a public outcry. For some reason, this behaviour seems to be accepted within the tech industry (but only if you have the "right" politics). You can't have choice in the marketplace when the tech oligarchs of Silicon Valley will actively crush that competition. So the argument that "private companies can do what they want" only really applies when there is true competition and an open and fair marketplace. Silicon Valley provides none of these.
So stating that the tech giants are private companies, for me at least, doesn't constitute a valid argument when considered against the points I've made here. They do not operate within the boundaries of a market where there is anything approaching competition. They have huge power and influence that they wield to perpetuate their political agenda. It is this same power that they use (in collusion with other tech giants) to silence and crush their competitors.
I'll keep posting my usual technical articles, but from time to time I will continue to delve into the political side of things with articles such as these. I'm genuinely interested to hear other people's opinions on these matters so feel free to share and discuss your own views on these topics.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
The latest version of the app (which will replace the current app that is in the app stores) is nearing completion. We are into user-acceptance with key stake-holders from around the business. The journey from beginning the app several months ago, to now, has involved a great deal of learning. Although we had an existing app on which to base our development efforts, that's where the similarities ended. Many of the technologies used for the new app were either brand new, or very different from when we last used them.
- Xamarin: Although I have used Xamarin previously (long before Microsoft decided to acquire it), it is vastly different now than it was then. It's fair to say that in its current Microsoft incarnation, much of the Android and iOS specifics are abstracted away from the developer, and it bears little resemblance to the version I used all those years ago. So whilst I needed to refresh my knowledge of Xamarin as it had changed substantially since I had last used it, it was brand new to the rest of the development team.
- App Center: This is Microsoft's build / test / deploy center for mobile apps. This is an absolutely brilliant tool. We used this throughout our development lifecycle for all of our diagnostics and debugging. We added tracking for all our events, service calls and exception handling (a rough sketch of these tracking calls follows this list). App Center allows you to set up and configure analytics for your crash reporting as well as for event tracking. This was very useful when we needed to diagnose exceptions and errors during the development cycle. We also configured our Azure DevOps build to deploy to App Center. So with each code check-in, upon a successful build, we would have an Android and iOS release ready for testing.
- Telerik DataForm: A means of simplifying the development of your data-entry forms. You define the properties of your data-entry form in your model class (and decorate your properties with the necessary validation rules and label text). This model then forms the basis of your data-entry form. Telerik DataForm takes your model and generates the necessary UI controls for it, and hence generates your data-entry form, including the validation rules and label text. Your UI is therefore built from the programmatic definition of the underlying model. This is an incredibly powerful paradigm. It frees up the developer to focus on the model's rules and validation, and delegates the building of the UI to Telerik. This paradigm is not suitable for every form, but for simple, static data-entry forms it is perfect. Telerik DataForm implements the MVVM design pattern, thus your forms consist of the following logical pieces.
- View (the XAML layout and code-behind)
- View-Model (where you define the rules for your data-entry form)
- Model (where you define the data to which your UI elements are to be bound)
- Azure AD B2C (Identity Provision): We had previously set up Azure AD B2C (Business-to-Consumer) for one of our line-of-business web apps. This allowed us to delegate the login functionality to Azure. Rather than implementing our own login functionality, we configured the web app to use Azure AD B2C instead. This gives us an incredibly secure app as you would expect. We are leveraging the same login functionality that is used daily by millions of Office 365 users. We decided to use the same Azure AD B2C functionality in our mobile app. This gives us far higher security and scalability, and we don't have to write a single line of code. Perfect!
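As a rough sketch of the App Center tracking mentioned above, the calls look something like this (the packages are Microsoft.AppCenter, Microsoft.AppCenter.Analytics and Microsoft.AppCenter.Crashes; the app secrets, event names and properties below are placeholders).
using System;
using System.Collections.Generic;
using Microsoft.AppCenter;
using Microsoft.AppCenter.Analytics;
using Microsoft.AppCenter.Crashes;

public static class Telemetry
{
    public static void Initialise()
    {
        // One call switches on analytics and crash reporting for both platforms.
        AppCenter.Start("android=<android app secret>;ios=<ios app secret>",
                        typeof(Analytics), typeof(Crashes));
    }

    public static void TrackServiceCall(string serviceName)
    {
        Analytics.TrackEvent("ServiceCall",
            new Dictionary<string, string> { { "Service", serviceName } });
    }

    public static void TrackHandledException(Exception ex, string context)
    {
        Crashes.TrackError(ex,
            new Dictionary<string, string> { { "Context", context } });
    }
}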
We also trialled Azure DevOps for this project. All our source code, build and release definitions were defined here. Although I have used Team Foundation Services previously, this was my first time using Azure DevOps, and was my first time defining builds and releases for Android and iOS.
So it's fair to say that we had many (steep) learning curves on this project. Despite that though, they were the right decisions, as the new app puts us in a far stronger position both technically and strategically. From the development platform to the technology ecosystem, the new app is a far stronger proposition.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
For the record, and before I embark on this article, I would like it noted that I am a professional software engineer who works within the field of software development. I have done so for nearly two decades. I am a geek with a genuine passion for technology. I get enthused by technology, and wouldn't want to be in any other field.
With that out of the way, let's get on with the article. I don't generally write about politics, and for very good reason. Like religion, politics can be a very controversial subject. It can be polemic and can often escalate to hyperbolic arguments. I have my political views, but don't wish to use this platform to air them. I do, however, from time to time, voice them over on my Twitter and Gab feeds. Over the last decade, I have seen many small, incremental changes from many of the tech giants that have made me question whether they provide a net positive for the world. Unless you have lived under a rock for the past few decades, you cannot have failed to realise how immersive technology is in our everyday lives. We use technology for our personal lives, social lives, communications, gaming, entertainment, searching for news and information and so on.
Over the past decade, the tech giants including Google, Facebook and Twitter have come to dominate not just the technical arena, but the social, cultural and political ones as well. It is no secret that these technical corporations are liberal and left leaning in their political makeup. How can an organisation that is composed of thousands of people be said to have a single political bias? Surely with so many people working for them, you would think there would be large variation in political diversity? It would seem that this is far from the case. Despite being told that "Diversity is our strength" by those on the political left, this doesn't apply to political diversity. Yes there may be gender, religious and racial diversity, but there is very little in the way of political diversity. And herein lies the problem.
Twitter CEO Jack Dorsey has openly admitted that there is 'left leaning bias' within Twitter, but then goes on to state that this doesn't influence company policy. I think Jack is being more than a little economical with the truth if he thinks Twitter's left leaning bias doesn't affect company policy. If you're a conservative, a Trump supporter, Republican, or right-of-centre in your political compass, it is fair to say that Twitter can be a very unwelcoming place. In fact, it can often be a downright hostile place. Many right leaning Twitter users have faced bans, shadow bans or been outright kicked off the platform (Alex Jones, Milo Yiannopoulos, Gavin McInnes, James Woods (the actor, who has since been reinstated) and Jesse Kelly, to name just a few). Even President Trump is not immune from the threat of being kicked off the platform[^].
New York Times op-ed writer Sarah Jeong made many openly anti-white, anti-male tweets[^] earlier this year but didn't receive a ban or even a suspension. Some of her tweets included:
- “#cancelwhitepeople”
- “1. White men are bulls—. 2. No one cares about women. 3. You can threaten anyone on the internet except cops.”
- “Oh man. It’s sick how much joy I get from being cruel to old white men”
- “Dumba— f—ing white people marking up the internet with their opinions like dogs pissing on fire hydrants.”
It should be noted that Sarah Jeong's account is a verified, blue check-marked account. So whilst Twitter bans people from its platform for wrong-think in many other areas (particularly identity politics), it rewards people like Sarah Jeong by verifying their accounts. As long as your racism is towards white people, and your sexism is towards men, then you're all good. In the world of Twitter, hate speech does not include white men.
Back in 2017 Google sacked one of its software engineers - James Damore - for sending out a memo that related to Google's diversity policies. Specifically, it related to the gender differences between men and women, and why women were under-represented in the field of software engineering. To anyone who has read (and understood) the science of gender differences, it won't come as any surprise that men have a greater interest in this field than women. Men (on average) have a greater interest in "things" (cars, computers etc) and will tend to gravitate to those professions, including STEM (science-technology-engineering-mathematics). Whereas women (on average) have a greater interest in "people" and tend to gravitate to professions such as law, medicine, social care etc. There is nothing inherently wrong with any of this. If you accept that men and women are different (and there are many who don't accept this self-evident premise), then it stands to reason that their biological differences will lead to differences in their average proclivities and interests. Google, however, doesn't seem to accept this. It is this hive mind that has been referred to as Google's Ideological Echo Chamber[^].
Other examples of Google's bias include the fact that they recognise International Women's Day (by displaying an appropriate image on their home page), but don't recognise International Men's Day. There are more virtue signalling points to be gained from recognising the former than the latter.
Google searches are notoriously biased in the search results they return. In just one specific example, when asked to define the term "nationalism", the results between Google (politically biased) and DuckDuckGo (politically neutral) couldn't be more stark[^]. This was just for a single term. Imagine scaling this up to the millions of searches carried out on the Google platform every day. At this point Google stops being a search engine, and instead becomes a political tool, giving you the results it wants you to have. To me this is terrifying. Google is the most powerful internet platform on the planet (forget Twitter, Facebook, Microsoft). Google owns the internet. The fact that it is so blatantly partisan reminds me of Big Brother in 1984. I no longer use Google for my search engine. I now use DuckDuckGo.
In the US, free speech is protected under the First Amendment. This covers speech that could be defined by some as offensive. However, none of the tech giants allow free speech on their platforms. All of them have very strict policies that set out rules for what is permissible speech. These are, in fact, rules for policing speech. I am an ardent advocate of free speech. I would much rather all ideas (both good and bad) were transparent, and out in the open in the marketplace of ideas. Not all ideas or ideologies are equal, and the best way to counter the bad ideas is to subject them to public criticism and ridicule. I think the US First Amendment protecting free speech is one of the greatest inventions of our time. Something I would dearly love to see protected in the UK (where I live).
The problem with defining hate speech and / or offensive speech is that hate and offence are very subjective terms. And who gets to decide what is hateful / offensive? What one person may find offensive, another person may not. To my mind at least, the best way to counter this is to allow all speech (apart from speech that directly advocates violence). Then allow people to exercise their free speech to criticise and ridicule that idea or ideology. Protecting certain ideas whilst allowing criticism of others is both prejudicial and counter to free speech, not to mention utterly hypocritical. But this is exactly where all social media platforms are right now. The worst offender for this is surely Twitter.
Enter Gab. Gab is a social media platform not too dissimilar to Twitter. It hit the headlines recently when it came to light that the Pittsburgh shooter had vented many of his extreme views on the platform before going on his shooting rampage[^] at a synagogue, killing 11 people. Gab attracted a lot of controversy over the events, and the entire tech industry promptly rounded on it. Its hosting providers (including Microsoft) dropped it, its app was de-platformed by both Google and Apple, payment processor Paypal cut it off, and the list goes on. Gab advocates free speech (and is the only social media platform that does), but it certainly does NOT advocate violence. Its creator Andrew Torba is very clear on this. I suspect that many of the tech giants were simply looking for a reason to de-platform Gab, and the shootings played right into their hands. It is worth noting that the shooter also had accounts on Facebook and Twitter too. Having a competitor that advocated free speech (when they don't) was always going to end in a retaliatory strike from the elites of Silicon Valley. In my opinion, the (over) reaction from the Silicon Valley tech giants was unfair, unjust and completely unfounded.
There's a famous phrase that states "If you're not the one paying for the service, then you're not the customer". And this phrase could almost be Facebook's mission statement. What started as an ambitious social media platform with some great features and concepts has, over the years, transformed into little more than a marketing tool for businesses to sell us their products and services. It's impossible to scroll through your timeline without being bombarded with ads, many of which, it is worth noting, come directly from your Google searches. It was reported in early 2018 that the big data company Cambridge Analytica had harvested the personal data of millions of Facebook profiles without their consent[^] and used the data for political purposes. The scandal eventually led to Facebook founder Mark Zuckerberg appearing before the United States Congress to testify. However, as this was a voluntary agreement on his part, many simply dismissed the hearing as a dog and pony show which was never going to trigger any criminal proceedings. Are social media giants held to different standards than everyone else? I wonder what the outcome would have been had the scandal involved a tobacco company, for example. It's easy to see how coming down hard on a tobacco company could generate much kudos and back patting.
In a recent survey it was found that a majority of Americans don't think social networks are good for the world[^].
Quote: "the number of people who think social media is a net positive for society is down to 40 percent." This is not entirely unexpected. Many people are beginning to see how much power these tech giants wield, and how much influence they hold. Not just politically, but socially and culturally. They dominate our landscape and every part of our lives. I recognise and appreciate the technical advances made by the tech giants, but I have genuine concerns that they are now overstepping their boundaries of responsibility. We are slowly and inexorably sleepwalking into a dystopian, Orwellian world where we are under constant surveillance. Where our personal data reduces us to mere commodities. Where we are told what to think and what to say. Where the social, cultural and political norms are dictated to us. Free thought and free expression are being eroded by the tech industry. They promulgate their own political narratives, and destroy all those they disagree with. They don't take kindly to any form of competition, and will beat into submission anyone that dares to create a competitive technology. Is this really where we want to be? Technology naturally has a part to play in shaping our social and cultural fabric, but that should not include dictating it by force. We are giving far too much power and influence to the Silicon Valley elites. It is high time we put ourselves back in charge.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
We are currently in the middle of re-building our existing mobile app. Probably the most important form in the app is the Vehicle Inspection form. This form allows a driver to fill out a vehicle inspection from their mobile device and to submit the results. In our current app (which is an Apache Cordova hybrid app developed using Javascript in conjunction with Kendo UI controls) we generate an HTML page from the inspection metadata. This allows us to use all the HTML controls such as
- textboxes
- checkboxes
- dates
- radiobuttons
- dropdowns
We then capture the driver's responses using Javascript, and submit these responses to our backend system.
The new mobile app, however, is being developed using Xamarin Forms. All of our form controls use Telerik UI controls. We knew we wanted to replicate as closely as possible the implementation of the current app. The vehicle inspection is a critical piece of functionality, and it works extremely well. The challenge therefore was to find something that replicated this same implementation in Xamarin Forms.
Whilst investigating how we would reproduce this, I came across the WebView. This is a view for displaying HTML content inside the app. Unlike the OpenUri() method, which navigates the user to a web page using the app's in-built browser, the WebView displays HTML content "inside" the app. This sounded like exactly what I needed.
Generating the HTML to render the vehicle inspection was the easy part. I had this working quite quickly. Using the same logic for creating the HTML controls as in our existing app (which uses Javascript), I was able to mimic this using C# to achieve exactly the same output in the new app. The problem came when I wanted to submit my responses. I looked at the simple example in the Microsoft documentation, but this didn't provide nearly enough clarity on how to proceed. I tried injecting Javascript functions into the generated HTML, but this only seemed to work for functions that didn't interact with the DOM. However, retrieving the responses required interaction with the DOM.
There doesn't seem to be much information anywhere on this particular topic. I looked through the usual suspects (Stackoverflow, Xamarin forums) but to no avail.
I then stumbled across an article that went into a lot more detail on how to Use Javascript with a WebView[^]. Reading through this and looking at the example code gave me sufficient knowledge to work out how to retrieve the responses from the HTML generated vehicle inspection.
Here are the functions I wrote that enable me to retrieve the responses.
// Reads the value of a textbox in the generated HTML via the WebView.
private async Task<string> GetValueFromTextbox(string controlId)
{
return await WebView.EvaluateJavaScriptAsync($"document.getElementById('{controlId}').value;");
}
// Reads the checked state of a checkbox.
private async Task<string> GetValueFromCheckbox(string controlId)
{
return await WebView.EvaluateJavaScriptAsync($"document.getElementById('{controlId}').checked;");
}
// Reads the value of the currently selected radio button in a named group.
private async Task<string> GetValueFromRadioButton(string controlName)
{
return await WebView.EvaluateJavaScriptAsync($"document.querySelector('input[name=\"{controlName}\"]:checked').value;");
}
// Reads the value of the selected option in a dropdown.
private async Task<string> GetValueFromDropdown(string controlId)
{
return await WebView.EvaluateJavaScriptAsync($"document.getElementById('{controlId}').options[document.getElementById('{controlId}').selectedIndex].value;");
}
I have now got this working and am able to submit the responses that have been entered into the HTML generated vehicle inspection.
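For completeness, a caller might gather the driver's responses using these helpers before posting them to the backend. The control ids and the dictionary shape below are made up purely for illustration; in reality the form is driven by the inspection metadata.
// Hypothetical usage of the helper methods above (requires System.Collections.Generic).
private async Task<Dictionary<string, string>> CollectResponsesAsync()
{
    var responses = new Dictionary<string, string>();
    responses["mileage"] = await GetValueFromTextbox("mileage");
    responses["lightsWorking"] = await GetValueFromCheckbox("lightsWorking");
    responses["tyreCondition"] = await GetValueFromRadioButton("tyreCondition");
    responses["fuelLevel"] = await GetValueFromDropdown("fuelLevel");
    return responses;
}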
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
This article assumes that the reader is already familiar with the MVVM software design pattern. If you are not familiar with this design pattern, then it's worth reading up on it first, before proceeding with this article. There are many descriptions of this design pattern, including this one[^]. It is useful to understand the design pattern from a purely conceptual perspective, before looking at the various technical implementations of it. By understanding the design pattern at a conceptual level, you will find it far easier to comprehend its implementation details.
I have used the MVVM design pattern previously. In fact, I have used the MVVM pattern within our current mobile app. For this, I used Kendo UI controls in conjunction with Javascript. This particular implementation uses what is known as an observable. An observable (which is based on the Observer design pattern[^]) is an object that maintains a list of dependents (called observers) and notifies them of any changes in state. It is this notification system that provides the two-way notification (or binding) that is essential to the MVVM design pattern.
With our latest incarnation of the mobile app now well underway, we have come to the point where we can start building our data entry forms. I have so far implemented the underpinning infrastructure and architecture which enables the app to consume our services, save data to local storage using SQLite and send emails from the app. All of this is now fully implemented and working.
We have several data entry forms within our app that allow the user to submit data to our backend services. These include forms for submitting:
- mileages
- service, repair and MOT bookings
- vehicle inspections
As we have already done so in our previous mobile app, we will be using the MVVM design pattern to implement these data entry forms.
We will implement the data entry forms using XAML and Telerik controls. We could have used the native Xamarin UI controls, but there is a greater selection of Telerik controls, and they provide a consistent API and are easily themeable. Although the implementation uses Telerik controls and XAML, the underlying concepts can be applied with any UI technology.
I'll use an example that refers to a simple data entry form that allows a user to enter a message which is sent to the backend service. The message may be to request information for example. This trivial example containing just the one UI control should suffice to demonstrate how the MVVM pattern can be implemented.
I tend to begin the development of a new data entry form from the Model and work backwards from there i.e. Model -> ViewModel -> View.
All Models inherit from the same base Model class. This base Model class inherits from NotifyPropertyChangedBase, which is a Telerik class that supports behaviour similar to INotifyPropertyChanged.
public class BaseFormModel : NotifyPropertyChangedBase
{
}
This ensures that all Models used by the data entry forms will support the ability to raise events when a property on the Model changes. These changes to the Model will be notified to the ViewModel.
Models used by the data entry forms also implement the following interface.
public interface IFormData<T>
{
T CreateDefaultModel();
}
By implementing this interface, the Model must contain the method CreateDefaultModel(). This method is used by the ViewModel to supply a default Model (containing default values) which can be used when the View (the XAML form) is first displayed to the user. The interface is generic, which allows it to work with any type of Model.
Here's the Model for the "Message Us" data entry form. For the purposes of this simple example I have removed much of the code for clarity.
public class MessageUsModel : BaseFormModel, IFormData<MessageUsModel>
{
private string _messageToSend;
[DisplayOptions(Header = MessageUsModelConstants.MessageHeader)]
[NonEmptyValidator(MessageUsModelConstants.MessageError)]
public string MessageToSend
{
get => _messageToSend;
set
{
if (_messageToSend == value) return;
_messageToSend = value;
OnPropertyChanged();
}
}
public MessageUsModel CreateDefaultModel()
{
return new MessageUsModel
{
_messageToSend = ""
};
}
}
The decorations on the public property MessageToSend are Telerik specific and define the validation rules / messages for the property. These rules / messages are then enforced by the View. Using this particular implementation of MVVM, the data rules are therefore defined at the level of the Model (which makes sense). Whenever a new value is set on the MessageToSend property, the OnPropertyChanged() event is raised. This updates the state of the ViewModel that is bound to the Model.
Moving onto the ViewModel, we define the base behaviour for all our ViewModels in our base class.
public abstract class ViewModelBase<T> : NotifyPropertyChangedBase where T : new()
{
public T FormModel = new T();
public abstract Task PostCompleteTask();
}
I have used an abstract class that inherits from the same Telerik class as the base Model class i.e. NotifyPropertyChangedBase. The public property FormModel is a reference to the Model. This property is used by the ViewModel when it needs to refer to the Model. The method PostCompleteTask() is invoked by the ViewModel when the form is ready to be submitted. As this is an abstract method, it must therefore be implemented by each inheriting subclass. This provides consistency to all of our ViewModels. The actual work performed by each ViewModel will always be defined within this method.
Here's the ViewModel for the "Message Us" class. For the purposes of this simple example I have removed much of the code for clarity.
public class MessageUsViewModel : ViewModelBase<MessageUsModel>
{
public MessageUsModel MessageUsModel;
public MessageUsViewModel()
{
this.MessageUsModel = this.FormModel.CreateDefaultModel();
}
public override async Task PostCompleteTask()
{
}
}
The public property MessageUsModel is the reference to our Model. This is initially populated with a default instance in the class constructor by invoking the method CreateDefaultModel() (which we saw earlier) using the public property FormModel (which we also saw earlier).
this.MessageUsModel = this.FormModel.CreateDefaultModel();
When the user has finished entering their message and is ready to submit the form, clicking on the form's submit button will invoke the PostCompleteTask() method, which will perform whatever processing is necessary (in our case all form data is submitted to our backend services using RESTful Web API services).
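As a hedged illustration of what might go inside that method, here is a sketch that resolves a service client via Xamarin.Forms' DependencyService and posts the message. The IMessageService interface and its PostMessageAsync method are hypothetical stand-ins for our real RESTful Web API client, and error handling is omitted.
// Sketch only: IMessageService / PostMessageAsync are hypothetical stand-ins
// for the real service client. DependencyService is Xamarin.Forms' service locator.
public override async Task PostCompleteTask()
{
    var messageService = DependencyService.Get<IMessageService>();
    await messageService.PostMessageAsync(this.MessageUsModel.MessageToSend);
}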
Finally, here's the XAML for the View and the code-behind.
[XamlCompilation(XamlCompilationOptions.Compile)]
public partial class MessageUsView : ContentPage
{
public MessageUsViewModel Muvm;
public MessageUsView()
{
InitializeComponent();
this.Muvm = new MessageUsViewModel();
this.BindingContext = this.Muvm;
}
private async void DataFormValidationCompleted(object sender, FormValidationCompletedEventArgs e)
{
// This handler runs once the form's validation has completed; unsubscribe so it only fires for this commit.
dataForm.FormValidationCompleted -= this.DataFormValidationCompleted;
if (e.IsValid)
{
await this.Muvm.PostCompleteTask();
}
}
private void CommitButtonClicked(object sender, EventArgs e)
{
// Commit the form; this triggers validation, and the handler above submits if the form is valid.
dataForm.FormValidationCompleted += this.DataFormValidationCompleted;
dataForm.CommitAll();
}
}
And the XAML code.
<input:RadDataForm x:Name="dataForm" CommitMode="Immediate" />
<input:RadButton x:Name="CommitButton" Text="Save" Clicked="CommitButtonClicked" IsEnabled="True"/>
The important parts to note are the setting up of the binding between the View and the ViewModel in the constructor. This sets up the two-way binding, such that any changes in the View are reflected in the ViewModel and vice-versa. These changes are also reflected in the underlying Model (if that wasn't already clear).
this.Muvm = new MessageUsViewModel();
this.BindingContext = this.Muvm;
When the user clicks the Submit button, the actions implemented within the ViewModel's PostCompleteTask() method are invoked.
This is a fairly simple example. In a real world use case there would undoubtedly be more complexity, but this should serve as a useful example of using the MVVM design pattern within a Xamarin mobile app. The fact that we are using Telerik UI controls doesn't change the core concepts discussed. The MVVM design pattern is a very powerful design pattern that is perfect for use within data entry forms.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I recently had a need to consume a private nuget feed in one of our Azure DevOps build pipelines. This was for our Xamarin Forms mobile app's build pipeline. We wanted to use a Telerik UI nuget package in our app. In order to add a reference to this nuget package to your project, you firstly need to add your Telerik credentials into Visual Studio. This ensures that you are a fully paid up Telerik subscriber with access to the nuget package.
I therefore needed to update the build pipeline to fetch this private nuget package. After a bit of trial and error (and a few failed builds) I got this working. In Azure DevOps I needed to update the nuget restore build task to also fetch the Telerik nuget package.
- Add a Nuget restore task to your build pipeline (if you don't already have one). This task needs to come before you build the project.
- Set the path to the project in the relevant textbox
- Set the option for Feeds in my Nuget.config (this is important as this allows you to specify credentials for consuming external nuget packages)
You should now see a Manage link which will allow you to configure the credentials to your private nuget package. Clicking on this link opens up the Service Connections that are available for your build pipeline. Add a new service connection of type Nuget. In the dialog box that is now displayed click the option for Basic Authentication and enter the following information.
- Connection name
- Feed URL
- Username
- Password
Click OK to save these credentials.
Back in your build pipeline's nuget restore task, you should now be able to select these credentials in the dropdown. What Azure DevOps will now do is merge these credentials into its default nuget.config file (or into the one you have specified under the Path to Nuget.config). Either way, whatever credentials you have specified will be merged into the nuget.config file.
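For reference, the kind of nuget.config those credentials end up in looks roughly like this. The feed name, feed URL and credential values below are placeholders for illustration.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- Placeholder private feed name and URL -->
    <add key="TelerikFeed" value="https://nuget.telerik.com/nuget" />
  </packageSources>
  <packageSourceCredentials>
    <!-- The element name must match the private feed's key above -->
    <TelerikFeed>
      <add key="Username" value="your-telerik-account-email" />
      <add key="ClearTextPassword" value="your-telerik-password" />
    </TelerikFeed>
  </packageSourceCredentials>
</configuration>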
And that's basically all there is to it. Your build pipeline is now able to consume nuget packages from private feeds.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I have been setting up the Azure DevOps builds required for our new mobile app. One for Android and one for iOS. In this article I will focus on the iOS app as this is the one that caused me the most difficulty. There is a degree more difficulty when developing for the Apple platform as you need to have a Mac, certificates and provisioning profiles, so configuring a build for iOS is a little more complex. This is definitely borne out by the number of Stackoverflow posts I found on the various issues I encountered.
Before proceeding, I want to fully clarify something that caught me out. This may seem self evident, but judging by the posts I came across on this issue, perhaps not so much. When running your iOS app from Visual Studio there are two methods of provisioning the app.
- Automatic provisioning - This is useful during development. You need a Mac on your network that is visible to your Visual Studio environment, and you need to pair with it. Your Visual Studio environment will then read the necessary provisioning information directly from the Mac (be sure to disable the screen-saver on the Mac or else you'll lose your pairing with it).
- Manual provisioning - This is needed when you intend to build your app from a build server. Unlike automatic provisioning (where your Visual Studio environment just fetches what it needs from the paired Mac), you instead enter the necessary signing identity and provisioning profile information into Visual Studio.
So if you are setting up your iOS app to be built on a build server such as Azure DevOps, you will need to use manual provisioning.
When setting up an iOS build you firstly need to select the correct agent pool from Azure DevOps. In this case select the Hosted macOS agent pool. Selecting this provides you with a template consisting of the core tasks necessary for building your iOS app.
- Install an Apple certificate
- Install an Apple provisioning profile
- Build Xamarin.iOS solution
- Copy files to the artifacts staging directory
- Publish the build artifacts
We are also using Visual Studio App Center so I have the following task defined too.
- Deploy to Visual Studio App Center
We intend to use App Center for testing but we haven't set this up just yet.
Installing the Apple certificate and provisioning profile
=========================================================
The Apple certificate and provisioning profile can both be downloaded from your Apple developer account and uploaded to your Azure DevOps build pipeline. The certificate should be in the form of a .p12 file, which differs from the .cer file. You may need to import the certificate into the keychain on a Mac and export it to generate the required .p12 file. Either way, once you have these files, they need to be uploaded to Azure DevOps. Your build will fail without them.
Build the Xamarin.iOS solution
==============================
Before you proceed to this step, ensure you have set your Xamarin.Forms iOS project to use manual provisioning, and set values for the Signing Identity and Provisioning Profile (and that these match the certificate and provisioning profile uploaded earlier). On the build task, check the box Create app package if you want to create an .ipa file (which is the file that is actually installed onto the devices). If you intend to test your app in any way, then presumably this needs to be checked.
The output from this task should be the required .ipa file.
Copy files to the artifacts staging directory
=============================================
The template does a good job of this, so this task should need very little configuration. Basically, all the task is doing is copying the generated .ipa file from the build folder to the artifacts folder, from where it can be used by subsequent build tasks.
- Source folder - $(system.defaultworkingdirectory)
- Contents - **/*.ipa
- Target folder - $(build.artifactstagingdirectory)
Publish the build artifacts
===========================
This task simply publishes the contents of the artifacts folder from above - $(build.artifactstagingdirectory)
At this point we now have a complete build process that has generated an .ipa file using the latest code changes, and published that .ipa file so that it is available for subsequent build processes such as testing and / or deployment. So at this point you can use your preferred testing / deployment tools of choice. In my case, I have deployed the generated .ipa file to App Center for testing and deployment.
Deploy to Visual Studio App Center
==================================
You will need to configure your build with an App Center token. This authorises your build process to access App Center studio on your behalf. I will write a future article on App Center, but for now it is sufficient to know that I have two apps configured in App Center - one for iOS and one for Android. Once configured, enter the name of the App Center connection into your Azure DevOps task.
If you are using App Center as part of a team then it's a good idea to create an organisation, and assign your developers to the organisation. Then in the App slug you would enter {organisation-name}/{app-name} e.g. myOrganisation/MyAppName.
Now for each build that is triggered, we have a full build pipeline that builds the iOS app and deploys it to App Center, from where we can deploy it to actual physical devices (allowing us to monitor analytics, crashes and push notifications).
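Pulling the tasks above together, a rough YAML equivalent of this classic-editor pipeline might look something like the sketch below. The task versions, secure file names, variable names and app slug are assumptions for illustration rather than our actual definition.
# Rough sketch of the iOS pipeline described above (details are illustrative).
pool:
  vmImage: 'macOS-latest'

steps:
- task: InstallAppleCertificate@2
  inputs:
    certSecureFile: 'distribution.p12'
    certPwd: '$(P12Password)'

- task: InstallAppleProvisioningProfile@1
  inputs:
    provProfileSecureFile: 'MyApp_Distribution.mobileprovision'

- task: XamariniOS@2
  inputs:
    solutionFile: '**/*.sln'
    configuration: 'Release'
    packageApp: true          # produces the .ipa

- task: CopyFiles@2
  inputs:
    sourceFolder: '$(system.defaultworkingdirectory)'
    contents: '**/*.ipa'
    targetFolder: '$(build.artifactstagingdirectory)'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'

- task: AppCenterDistribute@3
  inputs:
    serverEndpoint: 'AppCenterConnection'
    appSlug: 'myOrganisation/MyAppName'
    appFile: '$(build.artifactstagingdirectory)/**/*.ipa'
    releaseNotesOption: 'input'
    releaseNotesInput: 'Automated build from Azure DevOps'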
Setting up this build process has been far from straightforward. I encountered several problems along the way, and didn't always find answers to my questions. Many times it was down to good old-fashioned trial and error, along with a dash of perseverance.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
As part of our new mobile app offering that we are busy developing, I need to deploy the backend Azure mobile app services. These are the backend services that will provide all the main business logic that the app will need to function and provide value to the user. The app itself will essentially be a dumb set of screens that will have no smarts in and of themselves. The smarts will come from the backend services that the app will consume. And these services will be hosted on Azure in the form of mobile app services.
I have previously written about how I set up the build pipeline using Azure DevOps[^]. The next step was therefore to deploy the build artifacts to Azure using the same Azure DevOps pipeline.
The main steps needed to deploy your app to Azure are actually defined in your build pipeline:
- create a zip file containing the build artifacts to be deployed
- publish the zip file so it is available for the release pipeline
In my build pipeline I have these two tasks defined as the last tasks in the pipeline. To create the zip file I use MSBuild with the following parameters.
WebPublishMethod=Package;
PackageFileName=$(Build.ArtifactStagingDirectory)\package.zip;
DesktopBuildPackageLocation=$(Build.ArtifactStagingDirectory)\package.zip;
PackageAsSingleFile=true;
PackageLocation=$(Build.ArtifactStagingDirectory)\package.zip;
DeployOnBuild=true;
DeployTarget=Package
I therefore added an MSBuild task to the build pipeline. You may also need to add other build parameters for specifying the OutputPath, Configuration and Platform, and any other parameters as necessary.
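In the MSBuild task's arguments box, those parameters end up as a single set of /p: switches, something along these lines (the exact switches and values are illustrative and will depend on your project):
/p:DeployOnBuild=true /p:DeployTarget=Package /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:PackageLocation="$(Build.ArtifactStagingDirectory)\package.zip" /p:Configuration=Release /p:Platform="Any CPU"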
You will then need to add a Publish Build Artifacts task to your build pipeline. This makes your zip file available to the release pipeline. In the textbox for Path to publish I have entered
$(Build.ArtifactStagingDirectory) as this is where I want the zip file to be published.
There are various templates you can use for setting up your release pipeline. For the purposes of this article I will keep it simple and refer to the Deploy Azure App Service template. Here you will need to authorise your Azure subscription. Once this has been completed you will need to enter other details including:
- app type
- app service name
- package folder (the filename and path where the zip file is located)
- optionally you can specify a slot if you are deploying to slots (which I highly recommend you do)
There are some subtle differences between how TFS handles deployments to Azure and how Azure DevOps handles them. For example, ensuring the release pipeline has access to the zip file threw me at first, until I discovered that you need to publish it for it to be available to the release pipeline.
Other than that, the process itself is fairly straightforward and I didn't encounter any major problems.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I have long had an interest in the DevOps side of the software development lifecycle. As much as I love to write software, I love to build and deploy it too. After all, it doesn't really matter how great your code is; if people aren't using it, then it's an irrelevance. The process of building and deploying software has long been a passion of mine. I love to set up and configure build processes. DevOps automates and simplifies the process by which software can be deployed onto users' machines. From the developer checking in their code, the process of versioning the software, executing and publishing unit tests, analysing code coverage, and deploying the final article onto a release server are all part of the DevOps process. These can (and should) all be automated. The developer shouldn't have to worry about any of this (unless, like me, they actually enjoy setting up these processes). I have previously used CruiseControl, TeamCity, Team Foundation Server and most recently Azure DevOps.
We have recently begun the task of re-building our next generation mobile app. For this we are using Xamarin in conjunction with Azure services. We currently use Team Foundation Server (TFS) for all of our DevOps processes. This is a brilliantly simple, yet very flexible and powerful build tool. I haven't found anything that I haven't been able to do with it. For our new project though, I wanted to make use of Microsoft's replacement for Visual Studio Team Services (VSTS), which is now branded as Azure DevOps.
A new project seemed the perfect time to start using Azure DevOps. I have no intention of migrating our existing projects, so this new project was the ideal opportunity to get my hands on it.
First off, for anyone who has previously used TFS or VSTS, Azure DevOps (which is really a re-branding of VSTS) should look and feel very familiar. As its name suggests, it is powered by Azure infrastructure, meaning it will scale up and out as your build process grows.
We have separated our new mobile app into two distinct solutions. One is the Xamarin Forms app, and this will unsurprisingly contain the actual app itself. The other is the Azure backend server that will provide all the functionality to the app (business logic, notifications, service requests etc). It is this latter solution that I have been focussed on moving into Azure DevOps. At the time of writing, I have set up the pipeline to include the following (a rough sketch of the pipeline follows the list):
- versioning the assembly
- restoring the Nuget packages
- building the solution
- executing the unit tests
- publishing the code coverage
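Expressed as a YAML pipeline, that rough sketch might look something like this. The task versions, file patterns and the versioning script are assumptions for illustration; our actual definition was put together in the visual designer.
# Rough sketch of the backend build pipeline described above (details are illustrative).
pool:
  vmImage: 'windows-latest'

steps:
- powershell: .\build\Set-AssemblyVersion.ps1 -Version 1.0.$(Build.BuildId)
  displayName: 'Version the assembly'      # hypothetical helper script

- task: NuGetCommand@2
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'

- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
    configuration: 'Release'

- task: VSTest@2
  displayName: 'Run unit tests and publish code coverage'
  inputs:
    testAssemblyVer2: '**/*Tests*.dll'
    codeCoverageEnabled: true              # publishes coverage alongside the test results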
There are literally hundreds of in-built tasks for building, testing, packaging and deploying your software. You also have access to the Marketplace where you can find hundreds more tasks developed by the community. Even big players such as JetBrains have free tasks available in the Marketplace. So if you can't find the task you want built in, you can probably find one that matches in the Marketplace. If not, you can always develop your own and publish it in the Marketplace yourself.
My first impression of Azure DevOps is that it's quite simply a brilliant tool. It reduces our reliance on our on-premise infrastructure and allows us to fully build and deploy our applications on rock solid infrastructure in the Microsoft cloud. If you currently use TFS then it's worth spending the time to explore Azure DevOps. Unless your business already has a large investment in IT infrastructure, you'll be very hard pushed to beat the Azure stack. If you're currently using VSTS then you'll be automatically migrated to Azure DevOps anyway. Even if you don't currently use TFS or VSTS, it doesn't matter. You can build, package and deploy your application using Azure DevOps regardless. It has support for every platform and technology. So whether you're brand new to DevOps and don't have anything currently configured, or you're currently using an alternative, it's worth checking out Azure DevOps.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
Having developed several mobile apps and deployed them to both Google Play and the Apple App Store, we have been forced to re-write them. The reason is that our development platform - Telerik Platform - has been retired, probably so that Progress can focus their development efforts on their other cross-platform mobile technology, NativeScript.
That leaves us in the position of having our apps in the app stores, but with no means by which to update them. We can update the RESTful services used by the apps, as these are completely separate from the apps themselves (thank goodness for good architecture), but we can't make any changes to the apps. That puts us in a vulnerable position, as we can't respond to customer suggestions or changes in market forces (or even change the branding / look and feel).
We have therefore been forced to re-evaluate the technologies available to us for developing the mobile apps, and we have looked at several. I have previously written about why I think Building native enterprise apps is (probably) the wrong approach[^]. For enterprise apps, there is very little (if any) benefit to going native: longer development cycles, greater expense, and bigger teams with bigger skill sets, all for little overall benefit. Cross-platform is therefore the only approach on the table.
We first looked at NativeScript, the natural successor to Telerik Platform, as both are owned by Progress. This looked like a great development platform. Progress have made big strides in easing the migration path of existing Telerik Platform users to NativeScript. You can choose from JavaScript, TypeScript or Angular in which to build your apps, and it comes with a companion application called Sidekick that simplifies many of the development processes. It has the support of a large community and is backed by Progress (giving peace of mind). It also renders truly native components on the device, making it a genuinely cross-platform development environment.
The only other alternative I considered seriously was Xamarin. I had used it previously (before the Microsoft acquisition) and so was already familiar with it, and I was intrigued as to how it may have changed since. The first thing I noticed when looking through the documentation and examples was the tight integration with Azure. We already make substantial use of Azure with our other mobile and web apps, so it was great to see the same design philosophy applied to Xamarin. In fact, the overall architecture used by Xamarin is not too dissimilar to the one I developed for our existing apps and current web app. This was a huge benefit right out of the box, as I was already familiar with the architecture and the key moving parts of building a mobile app with Xamarin. Like NativeScript, Xamarin also renders truly native components on the device.
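To illustrate that architecture from the app's side, here is a minimal sketch of a Xamarin.Forms service class calling an Azure-hosted RESTful backend over HttpClient. The class name, base address and route are hypothetical and not taken from our actual codebase.

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Illustrative only: the service name, base address and route below are
// placeholders, not our production backend.
public class VehicleService
{
    private static readonly HttpClient client = new HttpClient
    {
        BaseAddress = new Uri("https://example-backend.azurewebsites.net/")
    };

    // The Xamarin.Forms app awaits the Azure-hosted RESTful backend
    // asynchronously - async all the way down, with no blocking on Tasks.
    public async Task<string> GetVehiclesAsync(string subscriber)
    {
        var response = await client.GetAsync($"api/vehicles/{subscriber}");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}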
I spent considerable time looking at both offerings, as well as taking into consideration the skill set of the team. In the end we have decided to go with Xamarin. I am far more familiar with C# and Azure (as well as the architecture used by Xamarin apps), and this played a part in the final decision. NativeScript would have required us to learn TypeScript. Although that is not necessarily a barrier on its own, the reality is that I will be up and running far quicker with Xamarin than with NativeScript.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|