
YouConf - Your Live Online Conferencing Tool

17 Nov 2013 · CPOL · 167 min read
A site for managing and delivering virtual conferences - complete with embedded live streaming and chat. Showcasing the best Azure has to offer!

Image 1

Introduction

Building on Scott Hanselman's excellent post regarding hosting a two-day virtual conference in the cloud for under $10, why not make the ability to host a conference and stream it live available to everyone? We're going to use the same principle as dotnetConf, but build on it so that anyone can create their own conference with speakers and presentations, then record and stream to a live audience. We'll also include built-in chat to allow interaction with the presenter, plus membership, search, and plenty of other useful features to make the site easy to use.

Website Details and Source Code  

Website: The YouConf web site is publicly available at http://youconf.azurewebsites.net/  
Source code: All source code, including history, is available at my GitHub repository - https://github.com/phillee007/youconf. I tagged the code at the end of each challenge so you can view the code as it was at the end of each stage. In addition, I've uploaded a copy of my solution to CodeProject just in case you can't get to GitHub - Download youconf-final.zip  

Video: Following the competition, I was fortunate enough to be interviewed by Brian Hitney and Chris Caldwell for a Microsoft DevRadio episode. For an overview of the whole competition, the YouConf solution, and my thoughts on Azure, check out the video (34 min) 

Background

When I visited the dotNetConf site and saw Scott Hanselman's blog post about it, I thought that this could be useful to a much wider audience, particularly smaller groups such as .Net User groups who might like to record and stream their presentations. Having seen the Azure developer contest brief, I figured it would be a good chance to learn more about Azure, MVC, and the .Net technology stack. Hence my entry into this competition. 

I hope that this article can serve as a guide to others, and help all developers get started with the Azure platform. I also aim to provide solutions to common issues one might face when getting started with Azure, and show how I dealt with them.

How will Azure Benefit me?

Azure allows me to focus on what I do best - development - and not have to worry about the intricacies of infrastructure or hosting concerns. It provides a robust, scalable platform which allows me to scale up my app/site as needed, and stay in control of costs through increased visibility. With Azure and automated deployments from GitHub I've been able to streamline the development process so I can make and deploy changes rapidly, with much less overhead than in the past. Finally, it provides a full capability set to help grow my applications, such as cloud services, virtual machines, storage, and more.

The Challenges 

The contest involves five separate challenges, and this article contains a separate section for each challenge. Each section contains:

  • An introduction describing my goals for the challenge
  • A full description of how I achieved those goals, including screenshots, source code, issues I encountered, whether/how I overcame them, and references where appropriate.
  • A conclusion, and possible ideas for future challenges   

The challenges themselves are listed below. Please click on the links to go directly to the section for that challenge. 

  • Challenge One - This involved describing what I was going to do for the competition, as I did in the opening paragraphs above. I was a bit slow off the mark and didn't include much for this challenge, as I didn't realize there was a prize!
  • Challenge Two - Azure Websites. This involved building the YouConf website and deploying it to Azure, using GitHub. It also covered the use of SignalR, Azure Table Storage, Elmah, embedded Google videos and Twitter chat, and a whole heap of other goodies. 
  • Challenge Three - SQL Azure. This included moving data into SQL, creating SQL databases in Azure, adding membership and emails, creating a separate development source branch in GitHub, and deploying to a separate test environment in Azure. 
  • Challenge Four - Azure VMs. For this challenge I added search functionality to the YouConf site, and setup an Ubuntu VM running Apache SOLR to handle performing searches. I also created a separate worker role to handle background tasks such as sending emails, and adding documents to the Solr search index.  
  • Challenge Five - Mobile access. For this challenge I learned all about responsive design as I tested and refined the YouConf site so it would work on desktop, tablet, and mobile devices. I also explained some of the rationale for choosing responsive design vs a separate mobile site/app, along with the pros/cons of each approach. 

Note that in addition to the sections above, I've recorded daily progress as I go, in the History section of this article. For more detail on some of the daily items I covered, please read that section as I'll be referring to it in other parts of this article. 

Architectural Overview  

The following diagram gives a high-level overview of the components involved in the final version (i.e. at the end of challenge five) of the YouConf web application.

 

Image 2 

As you can see, the solution takes advantage of a number of Azure capabilities. This is not an exhaustive list, but highlights the main components and their relevance to each individual challenge. 

 

  • Azure websites - The YouConf website is a hosted Azure website, built on the Asp.Net MVC4 web application template that comes with Visual Studio 2012. Other relevant technologies include jQuery, Ninject (for IOC), Elmah (for logging), and SignalR for realtime updates. See Challenge Two for more details on this.
  •  SQL Azure - All site conference data, and membership data, is stored in a SQL Azure database. Asp.Net Entity Framework Code First was used as a persistence mechanism for the site. See Challenge Three for more details on this. 
  • Azure Table Storage - All error log data is persisted to Azure Table Storage, so it can be analyzed as needed. Logging is taken care of by Elmah. See Challenge Two for more details on this.
  • Azure Cloud Services and worker roles - A worker role is used to perform offline tasks such as sending email and updating the search index, which makes the web site more robust. It also takes advantage of strongly typed messaging, poison message dead-lettering, and other best practices. See Challenge Four for more details on this. 
  • Azure Service Bus - This is used for inter-role communication between the web site and worker role, and also for scale-out with SignalR. See Challenge Three and Four for more details on this.
  • Azure Virtual Machines - An Apache SOLR VM instance powers the YouConf site search. This runs on top of an Azure Virtual machine. See Challenge Four for more details on this.
  • Users and multiple devices - The YouConf website uses responsive web design to make it usable on a range of devices including desktops, tablets, and mobiles. See Challenge Five for more details on this.

 

A few additional points to note: 

 

  • I haven't tried to show all of the various routers/firewalls etc within the Azure environment as that's not under my control, however, I have shown an obvious firewall for any traffic entering the Azure environment for illustrative purposes. 
  • I would have liked to include a larger image, but unfortunately we're restricted to 640px max!

 

 

From here on, we'll go over the individual challenges, and how I completed each one. If you have any questions or comments, please feel free to point them out using the comments section. Let's begin! 

Challenge Two - Build a website   

Introduction  

For this challenge I built the YouConf website, and deployed it to Azure using automated deployments from GitHub.  The application had a number of initial goals, which I managed to achieve as follows: 

  • Allow users to create conferences, including both presentation and speaker details 
  • Give them a nice SEO-friendly url that they can direct their audience to, so they can view conference and session details before the conference begins
  • Provide attractive pages for audiences to view conference details
  • Provide an embedded Google Hangout video feed on either the conference page, or an auxiliary page, so users can view presentations in realtime (and also view the relevant YouTube videos once the conference has finished).
  • When users are viewing a conference live, ensure they always have the most up-to-date feed url by using SignalR to push updates directly to their browser
  • Allow users to chat and interact with each other, and also the presenter, via a chat window on the same page as the live video feed
  • Implement some basic responsive design features (although not to the point of perfection as it takes a long time, and I have to do that in challenge 5!)
  • Technical - Host the site using Azure Web sites    
  • Technical - Store conference data using Azure Table Storage for durable, rapid data access  
  • Technical - Store my source code in a publicly accessible repository on GitHub where anyone can view it, so they can see how I went about my tasks 
  • Technical - Allow me to push changes directly to Azure from my source-control repository without having to prepare a release package, with the additional option of deploying from my local machine if needed
  • Technical - Implement error logging with logs stored in Azure Table Storage 
  • Financial - Try and minimize hosting costs by reducing the amount of outbound data where possible, and only scaling up when necessary 
  • Plus one more little secret, which you'll have to read to the end of this section to find out.... 

By the end of this challenge the site was up & running in Azure. Here's a screenshot of the YouConf homepage:

Image 3

Note - If you'd like more details on how I completed some of the tasks for challenge two, please have a look through the History section.

The rest of this section explains how I achieved the goals above. Please follow along and see how I went!

Let's get started!

Creating the website  

The first thing I needed was a website. I opened up Visual Studio 2012, and followed along with the following tutorial on how to build an MVC4 website, naming my project/solution YouConf. Note that since I'm not using SQL for this part of the competition I left the membership parts out (by commenting out the entire AccountController class so it doesn't try to initialize the membership database). Whilst this means that users won't be able to register, they will still be able to create and edit conferences, it's just that they will all be publicly available for editing. More detail on this is in my daily progress report.

Once I had it building locally, the next step was to get it into Azure. To do this, I went to the Azure Management Portal, selected the Web Sites node, and hit the New button. I wanted the url to start with YouConf, so I entered youconf in the url field, and selected West US as the region since it's closest to me (I'm in New Zealand!), as per the screenshot below:

Image 4

Once I'd hit the Create site button I had a new site up & running just like that!

Next up I wanted to deploy to it, which required me to download the publish profile and import it into Visual Studio. To do so, I clicked on my YouConf site in the Azure Management Portal, then selected the Download the publish profile link. This opened up a window with the publish profile details, which I saved locally.

I then right-clicked on my YouConf web project in Visual Studio, and hit Publish. In the Publish dialog, I selected Import publish profile, and selected the .publishsettings file I'd saved earlier. I validated the connection using the button, chose Web Deploy as the publishing option, hit Next, and in the Settings section chose Release as the Configuration. I hit Next again, then hit Publish, and after about a minute I was able to browse my site in Azure. Now wasn't that easy?!

Image 5

Source Control Integration

Next up was getting source-control in place so that it would deploy automatically to Azure. I chose to use Git, mainly because I haven't used it before and thought this would be a good opportunity to learn about it. I also wanted to be able to have a publicly-available source repository available for anyone to view, and having seen GitHub being used for this by others, thought I'd give it a go. Make no mistake, I love TFS, and use it on every other project, but for this I really wanted to push myself (although Azure makes it so easy that this wasn't quite the case as you'll see).

In order to get this working, I downloaded the Git explorer from http://windows.github.com/, and set up a local youconf repository. I committed my changes locally, then synced them to GitHub using the explorer. My Git repository is available at https://github.com/phillee007/youconf/ if you'd like to see the code.

Rather than pushing local changes directly to Azure, I wanted them first to go to GitHub so they'd be visible to anyone else who might want to have a poke around. To accomplish this I followed the steps in this article under the heading "Deploy files from a repository web site like BitBucket, CodePlex, Dropbox, GitHub, or Mercurial".

*IMPORTANT* After publishing my changes to Git I realised that I'd included all of my publish profile files as well, which contained some sensitive Azure settings (not good). To remove them, I did a quick search and found the following article http://dalibornasevic.com/posts/2-permanently-remove-files-and-folders-from-a-git-repository. The commands I ran in the Git shell were as follows:

Image 6
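
In case the screenshot is hard to read, the commands were along the lines of the following (reconstructed from the article above - the exact path depends on where your publish profiles live, so treat this as a template rather than something to paste verbatim):

git filter-branch --index-filter "git rm -r --cached --ignore-unmatch YouConf/Properties/PublishProfiles" --prune-empty -- --all
git push origin master --force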

I also added an entry to my .gitignore file so that I wouldn't accidentally checkin anything in the Publish profile folder again:

Image 7

After fixing those, I clicked on my website in the Azure portal, clicked the link under Integrate Source Control, and followed the steps, selecting my youconf repository in GitHub. About 20 seconds later - voila! - my site has been deployed to Azure from GitHub. Seriously, how easy was that?!! It took next to no time, and left me set to focus on development, as I'd set out to do from the beginning.

Building the application

From here on, most of my time was spent on building the functionality of the web app, which as I mentioned earlier was an MVC 4 web application. I started building some basic forms for creating/editing/deleting a conference, and was faced with my next challenge - where to store data? I wanted persistent storage with fast access, and an easy API to use. Since SQL wasn't available (till challenge 3), Azure Table Storage seemed like the logical option. See this daily progress update for more on why I chose this.

Azure Table Storage, so many options....

As per this daily progress update, I got set up and read about Partition and Row Keys, and found this article very helpful - http://cloud.dzone.com/articles/partitionkey-and-rowkey . There are plenty of tutorials available about Azure Table storage, which was helpful, and I created a table as per http://www.windowsazure.com/en-us/develop/net/how-to-guides/table-services/.

Azure allows you to use the storage emulator when developing locally, and then update your settings for Azure so that your app will use Azure Table storage when deployed to the cloud. I added the following line to my appsettings in web.config to tell Azure to use the development storage account locally:

<add key="StorageConnectionString" value="UseDevelopmentStorage=true" /> 

 I created a YouConfDataContext class (link to GitHub) and accessed this connection string using the following code:

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString")); 

Things seemed to be going well, but once I tried to save a conference I soon realized that I didn't understand table storage quite as well as I'd thought! Basically I planned to store each conference, including speakers and presentations, as a single table entity, so that I could store/retrieve each conference in one go (as you would in a document-oriented database). I started out writing code for the Conference class as below:

 

public class Conference
{
	public Conference()
	{
		Presentations = new List<Presentation>();
		Speakers = new List<Speaker>();
	}
	public string HashTag { get; set; }
	public string Name { get; set; }
	public string Description { get; set; }
	public IList<Presentation> Presentations { get; set; }
	public IList<Speaker> Speakers { get; set; }
	public string Abstract { get; set; }
	public DateTime StartDate { get; set; }
	public DateTime EndTime { get; set; }
	public string TimeZone { get; set; }
} 

When I tried to save one of these I ran into a bit of a roadblock though... Unfortunately you can only store primitive properties on a table entity, not child collections or complex child objects. DOH! So, how could I work around this? I found a number of options:

  • Store each object type as a separate entity, E.g. Conference, Speaker, Presentation all get their own rows in the table. I wasn't too keen on this as it seemed like more work than it was worth. Plus it seemed far more efficient to retrieve the whole conference in one hit rather than having to retrieve each entity separately then combine them in the UI.
  • FatEntities - https://code.google.com/p/lokad-cloud/wiki/FatEntities - this looked very thorough, although I don't think it was up to date with the latest Azure Table storage API
  • Lucifure - http://lucifurestash.codeplex.com/ - this also looked like it wasn't up to date with the latest Azure Table storage api
  • Use an object deriving from TableEntity, with a single property containing the Conference serialized as a JSON string. In the end I chose this option as it was easy to implement and allowed me to store the whole conference in a single table row. I used JSON.Net as it's already included in the default MVC4 project, and allows me to serialize/deserialize in one line.

Some sample code from my YouConfDataContext.cs class for doing Inserts/Updates is below:

public void UpsertConference(Conference conference)
{
	//Wrap the conference in our custom AzureTableEntity
	var table = GetTable("Conferences");
	var entity = new AzureTableEntity()
	{
		PartitionKey = "Conferences",
		RowKey = conference.HashTag,
		Entity = JsonConvert.SerializeObject(conference)
	};
	TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
	// Insert or update the conference
	table.Execute(upsertOperation);
} 

 where AzureTableEntity is just a wrapper class for a Table Entity:

public class AzureTableEntity : TableEntity
{
    public string Entity { get; set; }
} 
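
Reading a conference back out is the reverse operation: fetch the row by its keys, then deserialize the JSON. Here's a sketch of what that looks like (method and helper names follow the UpsertConference example above, but treat this as illustrative rather than the exact repository code):

public Conference GetConference(string hashTag)
{
	var table = GetTable("Conferences");
	//Point query by partition key + row key - the cheapest type of table storage read
	TableOperation retrieveOperation = TableOperation.Retrieve<AzureTableEntity>("Conferences", hashTag);
	var entity = table.Execute(retrieveOperation).Result as AzureTableEntity;
	if (entity == null)
	{
		return null;
	}
	//Deserialize the stored JSON back into our domain object
	return JsonConvert.DeserializeObject<Conference>(entity.Entity);
}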

 

An advantage of this approach is that it makes it easy to visualize conference data as well. To view my data in the Azure storage emulator, I downloaded the wonderful Azure Storage Explorer and viewed my Conferences table as shown below (note that I can see each conference serialized as JSON easily):

Image 8

So now that I had my data being stored using Azure Table Storage locally, how could I get it working when deployed in the cloud? I just had to set up a storage account and update my Azure cloud settings as per http://www.windowsazure.com/en-us/develop/net/how-to-guides/table-services/

I created a storage account named youconf, then copied the primary access key. I then went to the websites section, selected my youconf site, clicked Configure, then added my StorageConnectionString to the app settings section with the following value:

DefaultEndpointsProtocol=https;AccountName=youconf;AccountKey=[Mylongaccountkey]  

Now when I deployed to Azure I could save data to table storage in the cloud.

Note that I ran into an issue when updating a conference's hashtag, as this is also used for the rowkey in Azure Table storage, and in order to make an update I first had to delete the existing record, then insert the new one (with the new hashtag/rowkey). See this daily progress report for more details.
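
Since the row key can't be changed in place, the fix boils down to a delete of the old row followed by an insert under the new key. A rough sketch (simplified from the actual code, e.g. no error handling):

public void ChangeConferenceHashTag(Conference conference, string oldHashTag)
{
	var table = GetTable("Conferences");
	//Delete the old row - the row key is immutable, so we can't update it directly.
	//The "*" ETag means "delete regardless of concurrent changes".
	var oldEntity = new AzureTableEntity
	{
		PartitionKey = "Conferences",
		RowKey = oldHashTag,
		ETag = "*"
	};
	table.Execute(TableOperation.Delete(oldEntity));
	//Re-insert the conference under its new hashtag/rowkey
	UpsertConference(conference);
}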

Site Features 

As mentioned earlier, most of my time was spent on working with MVC and finding/fixing issues with the site as they arose, rather than having any issues with Azure itself. The following section outlines some of the application highlights, and how they address the goals described in the introduction. Feel free to go to the YouConf site and create your own conference if you'd like to give it a try. 

Viewing Conferences - for participants

The conference listing page - http://youconf.azurewebsites.net/Conference/All - lists available conferences, and allows users to drill into the conference/speaker/presentation details if they wish to. It also provides users with an SEO-friendly url for their conference, based on their chosen conference hashtag. In order to achieve this I had to add a custom route for conferences which automatically routed the request to the Conference Controller when applicable, and also a route constraint to ensure that this didn't break other controller routes. The code for adding my custom route is below (from the /App_Start/RouteConfig.cs file - abbreviated for brevity):

public static void RegisterRoutes(RouteCollection routes)
{
	routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
	routes.MapRoute(
		name: "ConferenceFriendlyUrl",
		url: "{hashTag}/{action}",
		defaults: new { controller = "Conference", action = "Details" },
		constraints: new { hashTag = new IsNotAControllerNameConstraint() }
	);
}
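
The IsNotAControllerNameConstraint in the constraints above isn't listed in this article, so here's a minimal sketch of how such a constraint can be implemented (the controller names are illustrative; the real class may build the list differently):

public class IsNotAControllerNameConstraint : IRouteConstraint
{
	//Hashtags matching one of the site's controller names can't be used as conference URLs
	private static readonly HashSet<string> ControllerNames = new HashSet<string>(
		StringComparer.OrdinalIgnoreCase) { "Home", "Conference", "Account", "Help" };

	public bool Match(HttpContextBase httpContext, Route route, string parameterName,
		RouteValueDictionary values, RouteDirection routeDirection)
	{
		var hashTag = values[parameterName] as string;
		return hashTag != null && !ControllerNames.Contains(hashTag);
	}
}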

The end result, at https://youconf.azurewebsites.net/dotNetConf-April2013, is shown below:

Image 9

Easy to use conference management/maintenance screens

I used a number of techniques to help make it easier for those running conferences to maintain them. For example:

  • Inline tooltips using the jQuery Tools Tooltip functionality
  • jQuery Date/time picker for easy date/time selection (see daily progress report for detail)
  • Help and FAQ pages
  • Inline validation, including a dynamic lookup on the conference creation page to show whether a conference hashtag is available or not
  • A right-hand sidebar containing tips for end-users 
Embedded videos and Twitter Chat

Both of these involved obtaining code from Google/Twitter which created an embedded widget on the conference live page, based on the hangout id/twitter widget id associated with the conference. The dotNetConf site uses Jabbr for chat, however, I thought that I'd try and go for something that allowed for chat to be on the same page as the video feed. One of the commenters on my article suggested Twitter, which seemed like a good choice as it's already so widely used. In the next stage I might also look at using SignalR for this if time permits.

The image below shows an example of a page with embedded video and chat (note that I used the hangout id for one of the dotNetConf videos for demonstration, and had to shrink the screenshot to fit into the CodeProject window):

Image 10

Keeping the live video feed url up to date with SignalR

SignalR is a great tool for providing realtime updates to clients, and the Jabbr chat site provides a great example of how to harness this technology. For the live conference page I used SignalR in a similar way to dotNetConf to ensure that if a conference presenter updated the Google Hangout id for the conference, viewers would be provided with the updated url without having to refresh their page.

To install SignalR, I installed the SignalR Nuget package as below:

Image 11
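
(The screenshot shows the NuGet dialog; the equivalent Package Manager Console command would be something like the following, with the version pinning optional:)

PM> Install-Package Microsoft.AspNet.SignalR -Version 1.0.1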

I then set about building a SignalR hub and client. My main issue came with how to push the notification to my SignalR hub from the Conference Controller. To give some context, here's my YouConfHub class:

public class YouConfHub : Hub
{
    public Task UpdateConferenceVideoUrl(string conferenceHashTag, string url)
    {
        //Note: this broadcasts to all connected clients; the group-specific
        //notification is actually sent from the ConferenceController (see below)
        return Clients.All.updateConferenceVideoUrl(url);
    }
    public Task Join(string conferenceHashTag)
    {
        return Groups.Add(Context.ConnectionId, conferenceHashTag);
    }
}  

and my client javascript code:

<script src="http://www.codeproject.com/ajax.aspnetcdn.com/
          ajax/signalr/jquery.signalr-1.0.1.min.js"></script>
    <script>$.signalR || document.write('<scr' + 
      'ipt src="~/scripts/jquery.signalr-1.0.1.min.js")></sc' + 'ript>');</script>
    <script src="~/signalr/hubs" type="text/javascript"></script>
    <script>
        $(function () {
            $.connection.hub.logging = true;
            var youConfHub = $.connection.youConfHub;
            youConfHub.client.updateConferenceVideoUrl = function (hangoutId) {
                $("#video iframe").attr("src", 
                  "http://youtube.com/embed/" + hangoutId + "?autoplay=1");
            };
            var joinGroup = function () {
                youConfHub.server.join("@Model.HashTag");
            }
            //Once connected, join the group for the current conference.
            $.connection.hub.start(function () {
                joinGroup();
            });
            $.connection.hub.disconnected(function () {
                setTimeout(function () {
                    $.connection.hub.start();
                }, 5000);
            });
        });
    </script>
}  

See the UpdateConferenceVideoUrl method in the Hub? I wanted to call that from my ConferenceController when a user updated the conference hangout id/url, and thought I could do so by getting an instance of the Hub, then calling the method on it. E.g.

var context = GlobalHost.ConnectionManager.GetHubContext<YouConfHub>();
context.UpdateConferenceVideoUrl("[conference hashtag]", "[new hangout id]"); 

Sadly, it turns out that you can't actually call methods on the hub from outside the hub pipeline. You can, however, call methods on the Hub clients, and groups. So, in my conference controller's edit method, I was able to use the following code to notify all clients for the specific conference that they should update their url:

if (existingConference.HangoutId != conference.HangoutId)
{
    //User has changed the conference hangout id, so notify any listeners/viewers
    //out there if they're watching (e.g. during the live conference streaming)
    var context = GlobalHost.ConnectionManager.GetHubContext<YouConfHub>();
    context.Clients.Group(conference.HashTag).updateConferenceVideoUrl(conference.HangoutId);
} 

 Not too bad in the end eh?

Responsive design - Basic features

Responsive design is all the rage these days, and fair enough too given the proliferation of web-enabled devices out there. I won't spend too long on this, except to say I've implemented a number of specific styles using media queries to make the bulk of the site look good at desktop, tablet, and mobile device resolutions. There's a huge amount of information out there about responsive design, and I found articles by the Filament Group and Smashing Magazine very helpful in both understanding and fixing some of the issues. An example of one of my media queries for devices with widths below 760px (mobiles or small tablets) is below:

/********************
*   Mobile Styles   *
********************/
@media only screen and (max-width: 760px) { 
    .main-content aside.sidebar, .main-content .content {
        float: none;
        width: auto;
    } 
    .main-content .content {
        padding-right: 0;
    } 
}  

I've included a screenshot below to show the homepage on a mobile device. It looks good, but there's still work to do for future challenges....

Image 13

Financial - reducing outbound traffic and scaling up only when necessary

For Azure websites, you're only charged for outbound traffic, hence it makes sense both financially, and for usability, to reduce the amount of bandwidth your site consumes. I used a number of techniques to achieve this:

  • CSS and Javascript bundling/minification using the System.Web.Optimization framework provided by MVC
  • Using CDN-hosted JavaScript libraries where possible

For example, in the code below I try to load the jQuery library from the Microsoft Ajax CDN if possible, but if it's not available, fallback to a local copy, which has already been minified to reduce bandwidth:

<script src="http://www.codeproject.com/ajax.aspnetcdn.com/ajax/jQuery/jquery-1.8.2.min.js"></script>
    <script>window.jQuery || document.write('<scr' + 'ipt src="@Scripts.Url("~/bundles/jquery")></sc' + 'ript>');</script>  

 I do the same for other CSS/Javascript too - see my code on GitHub for examples.
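
For reference, the bundles themselves are registered in /App_Start/BundleConfig.cs. The MVC4 template generates something like the following (an abbreviated sketch; my actual bundle list is longer):

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        //Scripts in a bundle are combined and minified into a single request
        bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
                    "~/Scripts/jquery-{version}.js"));

        //Stylesheets get the same treatment via StyleBundle
        bundles.Add(new StyleBundle("~/Content/css").Include("~/Content/site.css"));
    }
}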

Logging

Being able to log and retrieve exceptions for further analysis is key for any application, and it's easy to get good quality logging setup in Azure, along with persistent storage of the logs for detailed analysis.

I've written up quite a large piece on how I implemented logging in this daily progress report, so please see it for further technical details. In brief, I used Elmah for logging errors, with a custom logger that persists errors to Azure Table storage. This means I can view my logs both on the server, and locally using Azure Storage Explorer. Awesome!
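
To give a flavour of what the custom logger involves, here's a heavily simplified sketch of an Elmah ErrorLog that writes to table storage (class and table names are mine, and the real implementation also fills in GetError/GetErrors so the Elmah log viewer can read entries back):

public class TableStorageErrorLog : ErrorLog
{
    private readonly CloudTable _table;

    //Elmah passes the attributes from the web.config errorLog element in via config
    public TableStorageErrorLog(IDictionary config)
    {
        var account = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));
        _table = account.CreateCloudTableClient().GetTableReference("Errors");
        _table.CreateIfNotExists();
    }

    public override string Log(Error error)
    {
        //Reuse the same AzureTableEntity wrapper as the conference data, with the
        //error serialized using Elmah's built-in XML format
        var entity = new AzureTableEntity
        {
            PartitionKey = "Errors",
            RowKey = Guid.NewGuid().ToString(),
            Entity = ErrorXml.EncodeString(error)
        };
        _table.Execute(TableOperation.Insert(entity));
        return entity.RowKey;
    }

    //Omitted for brevity - these are needed by the Elmah log viewer
    public override ErrorLogEntry GetError(string id) { throw new NotImplementedException(); }
    public override int GetErrors(int pageIndex, int pageSize, IList errorEntryList) { throw new NotImplementedException(); }
}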

... and the little secret - a blog! http://youconfblog.azurewebsites.net/

Since about day 3 I'd been thinking of moving the posts on my daily progress into a separate blog, as there's enough information to make some of them worth an entire entry. I was also aware that one of the competition points for this section was around what we do with our other 9 websites. So I figured I'd see if it really was as easy to setup a blog as they made out in http://www.windowsazure.com/en-us/develop/php/tutorials/website-from-gallery/.

As with logging, the bulk of the implementation details are included in this daily progress report. I managed to get the blog up & running without too much fuss, but thought I'd better not move all my content there, as it would mean having to cross-post content, and possibly make my article harder to assess if content was spread across different places. Here's a screenshot from http://youconfblog.azurewebsites.net/:

Image 14

In Conclusion

It's been quite an adventure this far, but I think I've managed to complete what I set out to achieve for challenge two, namely getting the site up & running in Azure with source control integration, and delivering the main features it was required to. I've used table storage both in the emulator and the cloud, and become much more familiar with the Azure platform as a whole. I've also gone through the process of setting up a blog, which was even easier than I thought it would be.

Finally - where are my tests? You may have noticed a distinct lack of unit tests, which I'm ashamed to say is at least partially intentional. Thus far my api has been changing so often that I felt adding tests would slow me down more than it was worth. I know this would drive TDD purists insane, but in my experience it's sometimes helpful to wait till one's api is more stable before adding tests, particularly when it comes to testing controllers. In addition to this, I'm going to be swapping out my table-based data access layer for SQL in challenge 3, so things are likely to change a lot more throughout the application. I will, however, at least add tests for my controllers at the start of challenge 3, so that I can verify I haven't broken anything once I start adding SQL membership etc.

So what's next? 

Future challenges 

At the end of challenge two, there were a number of additional features I had in mind:

  1. Adding membership and registration, so users can manage their own conferences privately. This is reliant on having SQL available, which ties into challenge 3 nicely
  2. Adding unit and integration tests, particularly for the controllers
  3. Adding the ability to upload Speaker photos and store them in BLOB storage
  4. Add SSL to secure the registration and authentication process
  5. Adding live feeds of the slides, possibly using SlideShare
  6. Doing further testing to ensure the site is fully responsive across desktop, tablet, and mobile devices, which will be the focus for challenge 5
  7. Add the ability to setup reminders either using vcards or SMS so a user can register interest in a session, then be reminded when it's coming up.
  8. Perform further security hardening, such as removing parameter-tampering and possible XSS issues associated with MVC model-binding

Challenge Three - SQL Azure

Introduction 

The goal of this challenge is to learn as much as possible about SQL Azure, and use it to power the YouConf website. My initial plans for using SQL were as follows: 

  1. Add membership and registration functionality, so users can register for the site and manage their own conferences privately, without others having edit access. This will use the SimpleMembership functionality that comes with the Asp.Net MVC 4 framework, which uses SQL to store membership details. 
  2. After getting membership working locally, create a database in SQL Azure so that when I deploy to the live site I can access the Azure database and store real data  
  3. Find out how to manage and run ad-hoc queries against the Azure database once deployed to the cloud 
  4. Find out how to perform database backups so I can restore them locally if needed for debugging etc 
  5. Store conference data in SQL Azure, rather than Azure Table Storage (as it was in challenge two) 
  6. Use the Asp.Net Entity Framework + Code First for database access 

These may not seem like very ambitious goals, however, as the competition brief mentioned, "It's not just about data access", and indeed a large part of this challenge actually involved other tasks associated with building a web app and hosting it in Azure. I think many of these are applicable to most web apps that I or anyone else builds (particularly those hosted in Azure), and hence I think they are worth covering. These included:

  1. Adding a service bus queue and topics so that SignalR can distribute messages to all servers in the web farm (if/when I deploy to have multiple servers)     
  2. Setting up a development branch of my source code in GitHub, so that I could continue to make changes and build the application without affecting the live site (so I didn't risk breaking it while it was being judged!) 
  3. Creating a separate test environment for the site in Azure, which replicates the current production environment (including website, database, storage etc), so that I can deploy my development code there and test it in the cloud before pushing to production  
  4. Adding SSL to secure the registration and authentication process  
  5. Setting up a custom domain name and configuring the site to use it  
  6. Adding additional membership features such as password reset functionality, including sending emails with SendGrid
  7. Restricting access to the error log admin page so it's only visible to administrators, using role-based membership
  8. Adding unit and integration tests, particularly for the controllers    
  9. Investigating web and worker roles and whether they are applicable 
  10. A whole heap of other features such as keeping config secrets out of source control, unit tests vs integration tests, repositories vs direct access to the DB context, AutoMapper, Ninject, etc.

By the end of this challenge, the solution will make use of the following Azure features:

  • Azure Websites  
  • SQL Azure (for conference and registration data) 
  • Azure Service bus and topics (for SignalR) 
  • Table storage (for error logs)  
  • Automated deployment to Azure websites from GitHub 

I've provided details of the discoveries I made, and issues encountered, in the sections below. As with challenge two, I've been recording daily progress as I go, in the History section of this article.  For more detail on the daily items I covered, please read that section as I'll be referring to it in other parts of this article. Note that in order to help those viewing this article for the first time get up-to-speed, I've left the daily progress reports for challenge two intact. I've also added a separate history section for Challenge three - click here to go straight to it.

Let's get started! 

SimpleMembership

SimpleMembership comes baked into MVC 4, making it really easy to get started with. It uses SQL to store membership data, and with the MVC 4 Internet Application template is automatically setup to store data using SQL Server LocalDB. Why SimpleMembership you ask? Well, doing authentication/authorization from scratch is hard, and I didn't want to have to go creating functions for encrypting/salting passwords, doing oauth etc, as you get all that for free with SimpleMembership, so why make it any harder than it has to be?

If you recall from challenge 2, I commented out the entire AccountController class as I didn't want it to be used, since I wasn't implementing Membership. I left the /views/account/*.cshtml files alone, however, as I knew I'd need them for this part. This time, I uncommented the AccountController code again and dived on in. Let's open up the AccountController class and see what it does... 

The first thing you might notice is the InitializeSimpleMembership attribute. Since it's applied at class level, it applies to ALL public action methods on the Account controller. If you go to the /filters/InitializeSimpleMembershipAttribute.cs class you'll find the code for this attribute. When you first access one of the AccountController public action methods (e.g. when someone tries to log in to the site), the SimpleMembershipInitializer() constructor will fire. This will only run once, and will not be executed again unless your application is recycled. 

There are plenty of articles online about Simple Membership so I won't go into the code in too much detail, except to summarise the main code block as shown below:

C#
using (var context = new UsersContext())
{
    if (!context.Database.Exists())
    { 
        // Create the SimpleMembership database without Entity Framework migration schema
        ((IObjectContextAdapter)context).ObjectContext.CreateDatabase();
    }
}
WebSecurity.InitializeDatabaseConnection("DefaultConnection", 
  "UserProfile", "UserId", "UserName", autoCreateTables: true);   

The UsersContext that you see refers to a class that implements DbContext. This means that it represents an Entity Framework Data Context (code first), which is used for accessing the database using the Asp.Net Entity Framework. When this code runs, it checks if the membership database exists, and if it doesn't, creates it using the connection string named DefaultConnection. In the web.config, there is (by default) a connection string with that name, as follows: 

C#
<connectionStrings>
    <add name="DefaultConnection" 
      connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-YouConf-
        20130430121638;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\
        aspnet-YouConf-20130430121638.mdf" providerName="System.Data.SqlClient" />
</connectionStrings>  

So, when I fire up my app and access the /Account/Login page for the first time, this code will run and create a localdb database in my /App_Data/ folder named aspnet-YouConf-20130430121638.mdf, with tables as shown below (using Server Explorer in Visual Studio):

Image 15

It also initializes SimpleMembership to use the DefaultConnection to access the db, and the UserProfile table to store user data. The beauty of this is that it takes away the pain of us having to create the tables ourselves and allows us to get up & running very rapidly. I'll make a few modifications to the AccountController and UserProfile classes as I go, depending on the data I need to store for users.

Note that I don't want this localdb file to be checked in to source control, so I added an entry to my gitignore file to exclude the whole /App_Data folder from Git.

External authentication providers

I won't go into too much detail here, but please see my daily progress update for details on how I went about setting up external authentication for both Microsoft and Google, and took advantage of the OAuth and OpenId features that come with SimpleMembership. Please also see this article, which covers the whole topic of external authentication really well. 

The end result is a login screen that looks like this: 

Image 16

Now that authentication is working locally, how about getting it working with a real SQL Azure database?  

SQL Azure - your database in the cloud 

As I mentioned earlier, when I'm developing locally I can use localdb for my database (I could also use SQL Server, SQL Express, or SQL CE if I really wanted to). However, when deploying to Azure I need access to a real database in the cloud, so before pushing my source code to GitHub and triggering a deployment, I went about setting one up. With the Free Azure trial, you get to create one database, which is all I need at this stage.

I started by going to the Azure Management Portal, selecting SQL Databases, and clicking Create a SQL database. I then proceeded as follows:

Image 17

On the next screen, I chose a username and password, and chose US West as that's where my website lives (it's recommended to keep your site and database in the same region for both performance and cost - see this article for details). I completed the fields as below: 

Image 18

So now I had a new database named YouConf - easy as pie

I wanted to have the connection string available to the website when deployed to Azure, so I needed to add the connectionstring for my cloud database to the configuration for my website. To do this, I selected the database, then selected View SQL database connection strings as shown in the bottom-right corner below:

Image 19

I copied the connection string value for ADO.Net, then went back to my YouConf website in the management portal, clicked Configure, scrolled down to Connection Strings, and added an entry for DefaultConnection with the value of the connection string that I'd copied earlier (and updated with my real password in the Your_Password_Here part), as shown below:

Image 20

Now my YouConf web site can access the database when it's deployed to Azure. What if I want to access it myself and run queries though? It turns out that you can access the database both from SQL Server Management Studio, and from within the Azure Management Portal.

This was particularly important to me, as I wanted to secure the error log viewer page (created in challenge 2) so that only users in the Administrators role could view it. In order to make myself a member of that role, I needed to run a SQL query against my Azure database. I'll show you how I did it using both the portal and SQL Server Management Studio. You can find more details on how I secured the error logs with Elmah in this daily progress update.
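
On the application side, once the role exists, restricting the page is a one-liner. Assuming the log viewer is exposed through an MVC controller (a sketch; the actual wiring in the repository may differ), it looks like:

C#
//Only members of the Administrators role can reach actions on this controller;
//everyone else gets redirected to the login page
[Authorize(Roles = "Administrators")]
public class ErrorLogController : Controller
{
    public ActionResult Index()
    {
        return View(); //Renders the error log page
    }
}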

Managing your Azure database directly from the Azure Management Portal 

The tools that come built-in to the management portal enable you to run queries and view database statistics within your web browser, which makes it easy to run quick queries etc without having to leave the portal. To connect to the database from the management portal, I selected my database (YouConf) in the SQL Databases section, then clicked Manage. I allowed the management portal to automatically add an IP restriction for my ip address, by confirming the dialog that popped up. I then logged in as follows: 

Image 21

Once connected, I hit the New query button and was able to run the following query to add the Administrators role, and add my user account to it (Note that before doing this, I had registered for the site, and since I'm currently the only user *sad face* my id in the UserProfile table is 1)   

Image 22
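
The query in the screenshot isn't reproduced as text, but against the SimpleMembership schema it would have been along these lines (with 1 being my UserId):

SQL
INSERT INTO webpages_Roles (RoleName) VALUES ('Administrators');

INSERT INTO webpages_UsersInRoles (UserId, RoleId)
SELECT 1, RoleId FROM webpages_Roles WHERE RoleName = 'Administrators';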

I'm now an administrator, so if all goes to plan I should be able to view the error logs page remotely. Let's give it a try: 

Image 23 

Lovely! I had to login before getting to this screen, which is exactly what I was hoping for. I could also have done this using good old SQL Server 2012 Management Studio, which is what I'll show you next.. 

Managing your Azure database from SQL Server Management Studio 

Since I have SQL Server 2012 Management Studio on my machine, I thought I'd see if I could connect to my cloud database directly from my machine. Turns out it wasn't too tricky either.... 

  • First, I made sure that I'd connected at least once using the management portal as shown earlier, so an ip restriction was added for the machine that I'm on. Next, I selected my database (YouConf) in the SQL Databases section, then down the bottom of the screen I copied the value in the Server field as shown below:
    Image 24 
  • Next, I opened SQL Server Management Studio on my local machine, and copied the value into the Server Name field, selected SQL Server Authentication, and entered the username and password that I'd selected when I created the database as below (Note that if I'd forgotten what these were I could have retrieved them using the management portal):

     Image 25 
  • I had a quick look to see if my tables were all there (indeed they were)...

    Image 26 
  • Then I ran the same query as I had using the web-based manager in the portal (I had to remove myself and the administrators group first, otherwise the query wouldn't run)

    Image 27

Two ways of achieving the same goal, once again made very easy by the dedicated folk who built SQL Azure - I raise my glass to you ladies and gentlemen!  

Backing up and exporting your cloud database 

Azure allows you to backup your database to your cloud storage account, so that you can then download a copy and restore it locally if needed (or do whatever else you might need to with it). This was particularly useful for me when debugging data-related issues. In order to take a backup of my database, I followed the tutorial in this article. The steps involved: 

  • Selected my YouConf database in the management portal and hit Export 
  • Entered the credentials for my storage account (that I setup in challenge two for table storage)
  • Hit Finish and waited for the export to complete
  • Opened Azure Storage Explorer on my local machine and connected to my storage account
  • Downloaded the .bacpac file that had been exported
  • Imported and restored the database from the .bacpac file in SQL Server Management Studio, and viewed what little data I had (hopefully that will go up if others start using the site!)

By the end of all this I was pretty familiar with SQL Azure and how it worked, and was confident that it would meet the needs of my application. I found it easy to setup and access my cloud databases, and was also able to take backups when I needed. Needless to say that knowing there are multiple redundant copies of my database in the cloud should one node fail also leaves me feeling pretty confident that my database is in good hands - Go SQL Azure! Now, what else did I learn during this challenge? Read on... 

Moving conferences from Azure Table Storage to SQL Azure  

I decided early on that I'd like to move the conference data to SQL storage as soon as possible, and that's what I did. I used Entity Framework + code first migrations and found them very helpful to get up & running fast, and also to keep the database in-sync as I made updates to my model classes. I've covered the detailed steps I went through in this progress update, so please read this for the full details. A few highlights are as follows:

  • I took advantage of the migrations functionality that comes with Entity Framework Code First, and configured EF to automatically migrate the database to the latest version on application startup. This meant that I didn't have to manually run any SQL to keep the cloud database schema in-sync with my local database, a huge productivity boost! If you're interested in EF Code First Migrations, this is a good demo to follow. In my case, the code to configure this feature is straightforward - for example, here's the code from my global.asax.cs class (a sketch of the Migrations Configuration class it refers to follows this list): 
Database.SetInitializer(new System.Data.Entity.MigrateDatabaseToLatestVersion<YouConfDbContext, YouConf.Migrations.Configuration>());  
  • I was able to use the same data model and entity classes with Entity Framework as I had for table storage e.g. The Conference, Presentation, and Speaker classes. What I did first was add more validation attributes such as Max length validators, so these would automatically be applied when the tables were being created. 
  • I made sure to add bi-directional navigation properties where they were needed. For example, at the end of challenge two, the Conference class contained a list of speakers, and a list of presenters, however, there was no Conference property in either the Speaker or Presentation class to navigate the other way. In order to get Entity Framework to generate the tables as I'd like them, I had to add the properties on both ends. Likewise for the relationship between Speaker and Presentation where a presentation can have 0....* presenters.
  • IMPORTANT: This gets me every time!!!! Make sure you mark your navigation properties as virtual, otherwise EF won't be able to lazy-load them! I got bitten by this yet again as I hadn't set them up as virtual, and as a result was wondering why my presentations had no speakers.... Hopefully I don't forget again...   
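
The YouConf.Migrations.Configuration class referenced in the Database.SetInitializer call above is tiny. It typically looks like this (a sketch matching the standard EF Code First scaffolding; my settings may vary slightly):

C#
namespace YouConf.Migrations
{
    using System.Data.Entity.Migrations;

    internal sealed class Configuration : DbMigrationsConfiguration<YouConfDbContext>
    {
        public Configuration()
        {
            //Let EF generate and apply schema changes automatically on startup
            AutomaticMigrationsEnabled = true;
        }
    }
}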

As an example, here's the Presentation class after making the above modifications. Note that it's not very different to how it was at the end of challenge two, but just has the additional validation attributes and navigation properties:

C#
public class Presentation{
    public Presentation()
    {
        Speakers = new List<Speaker>();
    }
    public int Id { get; set; }
    [Required]
    [MaxLength(500)]
    public string Name { get; set; }
    [Required]
    [DataType(DataType.MultilineText)]  
    public string Abstract { get; set; }
    [Required]
    [DataType(DataType.DateTime)]
    [Display(Name = "Start Time")]
    [DisplayFormat(NullDisplayText = "", 
       DataFormatString = "{0:yyyy-MM-dd HH:mm}", 
       ApplyFormatInEditMode = true)]
    public DateTime StartTime { get; set; }
    [Required]
    [Display(Name = "Duration (minutes)")]
    public int Duration { get; set; }
    [Display(Name = "YouTube Video Id")]
    [MaxLength(250)]
    public string YouTubeVideoId { get; set; }
    [Display(Name="Speaker/s")]
    public virtual IList<Speaker> Speakers { get; set; }
    [Required]
    public int ConferenceId { get; set; }
    public virtual Conference Conference { get; set; }
} 

Service bus queues, topics, and SignalR 

If you recall from challenge 2, I'm using SignalR to keep users' live video feeds up to date. If I were to scale the site onto multiple servers, I need to make sure that SignalR can communicate with all the servers in order to broadcast messages to all users regardless of which server they're connected to.  

In order to transmit messages between server nodes in an Azure web farm, SignalR uses service bus topics. This requires you to set up a service bus in the management portal, which I managed to do without too much fuss. See https://github.com/SignalR/SignalR/wiki/Windows-Azure-Service-Bus for more details, but the configuration is fairly simple. Here's what I did: 

Added the service bus namespace in the Azure Management Portal (See http://msdn.microsoft.com/en-us/library/windowsazure/hh690931.aspx for specific details):

Image 29  

Added SignalR Service bus to my website project via NuGet in Visual Studio:

Image 30 

Copied the value of the service bus connection string from the management portal as below: 

Image 31

pasted it into my web.config file 

<add key="Microsoft.ServiceBus.ConnectionString" 
  value="Endpoint=sb://yourservicebusnamespace.servicebus.windows.net/;
    SharedSecretIssuer=owner;SharedSecretValue=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" />  

IMPORTANT - don't forget this, especially if you're using more than one Azure environment (e.g. for testing vs production)  - updated the application settings for my cloud website in the management portal, so that it would use the correct settings (note the Microsoft.ServiceBus.ConnectionString key):    

Image 32

and finally, in global.asax.cs:

C#
//SignalR
var serviceBusConnectionString = 
  CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
GlobalHost.DependencyResolver.UseServiceBus(serviceBusConnectionString, "YouConf");
RouteTable.Routes.MapHubs(); 

Now if I scale out to multiple instances, my SignalR notifications should get broadcast regardless of which server a user's browser is connected to. SignalR takes care of creating the service bus topic and adding the necessary subscriptions, so I don't have to worry about doing that in the management portal. If you're interested in how it does this, you can always check out the SignalR source code on GitHub, or look at this Azure how-to guide

Source control revisited - branching and tagging in Git  

I'm sure everyone is familiar with branching and merging in their chosen source control system. If you're not, make sure to read http://en.wikipedia.org/wiki/Branching_(revision_control) which describes what it is. In short, branching allows you to work on separate versions of your codebase, and merge the changes together as you see fit. It also allows you to keep untested development changes for an iteration separate from the main production codebase. This was particularly applicable after challenge two, where I wanted to start development for challenge 3, but still wanted my source from challenge two to be available for the judges and anyone else to look at. I also didn't want to introduce breaking changes into the live site when I checked changes in. So, what did I do? 

  • Tagging - Firstly, I 'tagged' my current master branch with the tag v1.0, so that it was clear that all the code up until that point was part of the v1.0 release (i.e. for challenge 2). I'm using GitHub for source control, and learned how to do tagging using this article. I did this in the GitHub shell (which I got to by opening GitHub explorer > Tools > open a shell here), firstly creating the label, then pushing it up to GitHub using the commands shown in the following screenshots (reconstructed as text after this list): Image 33
    Image 34
  • Branching - Second, I created a dev branch, which I will use for making my changes during challenge 3. When I'm happy with my changes, I'll merge them into the master branch so they become part of the main code and also get deployed to the live site. I used this video for help on how to do it. My commands for creating the branch and pushing it to GitHub are shown below:
    git branch dev
    git checkout dev
    git push -u origin dev
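
For reference, since the tagging commands in the first bullet are only shown as screenshots, they were along these lines (a reconstruction, so treat as approximate):

    git tag -a v1.0 -m "Code at the end of challenge two"
    git push origin v1.0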

Note: Remember how for challenge 2 I setup source control to auto-deploy to the live YouConf site in challenge two? Well, I set that up to auto-deploy off the master branch. Thus if I check-in changes to my dev branch, it won't affect the master branch or the live site, which is exactly what I need. Once again Azure has made what seems like a complex task very easy - fantastic! 

So now I have a master and a dev branch. One thing I found interesting is that when you branch in TFS, it creates a whole separate copy of your source tree in the file system, whereas Git doesn't - it stores branches as lightweight pointers into the commit history and switches your working directory in place. To switch to my dev branch and start developing, I opened the git shell, ran the command git checkout dev, then opened the solution in VS2012 and started working! 

I also found out how to exclude NuGet packages from source control, which allowed me to drastically reduce the size of my public source control repository and make it easier for others to download. See this daily progress update for details

 

Setting up a separate test environment in Azure 

At this stage I'd set up Git so that I have a Master and Dev branch, with Master being configured to auto-deploy to the live site at http://youconf.azurewebsites.net. Initially I'd been merging my dev changes into Master and testing locally before pushing them to GitHub, which was all well and good, but it still didn't feel quite right that I wasn't testing my dev changes in the cloud environment. What I needed to do was set up a test environment in Azure that was connected to my dev branch in source control, so I could test dev changes in the cloud before deploying them to production. The good news is that it really isn't that hard to set up a replica environment in Azure. You just have to make sure that you have the same services (e.g. database, storage, queues etc.) set up as you do in your live environment. So, what did I do? 

You've seen the detailed steps I went through to setup my production Azure environment in challenge two, and in earlier progress reports, so I won't repeat them in detail here.  I will, however, summarize the steps I went through to create a replica test environment below. I: 

  • Created another Windows live account and signed up for the Azure free trial (Note that if I was doing this for a real client, I'd probably use the same Azure account, but since I'm trying to do this all for free till the last possible moment, I did it this way. This also has the added benefit of reducing the risk of me contaminating the production environment with test environment settings)
  • Signed into the management portal using my new credentials, and created a replica version of the:
    - web site (named youconftest)
    - database (youconftest)
    - storage account (youconftest)
    - service bus namespace (youconftest)
    The All Items views for the two environments are shown below; first, the test environment:
    Image 35

    and this is production (the only difference being the youconfblog site from challenge two):
    Image 36
     
  • Updated the configuration settings and connection strings for the test version of the web site to use the correct test environment settings, such as the test database connection string and service bus account (the code reads these at runtime, as shown in the sketch after this list). An example of the configuration settings for the test site at the end of challenge 3 is shown below:

    Image 37
     
  • Setup automated deployments from my dev branch in Git to deploy to the test version of the site  
  • Waited for the first deployment to complete, as you can see below:
     Image 38
  • Viewed the site at http://youconftest.azurewebsites.net/ and breathed a sigh of relief when it worked!!!!

    Image 39
     
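On the code side nothing needs to change between environments, because the site just reads whichever values the current environment supplies. Here's a minimal sketch of the pattern (the setting name is illustrative); CloudConfigurationManager checks the Azure-supplied configuration first and falls back to web.config:

using Microsoft.WindowsAzure;

public static class AppConfig
{
    // In Azure, the value configured in the management portal overrides the one
    // in web.config, so test and production can differ without any code changes
    public static string StorageConnectionString
    {
        get { return CloudConfigurationManager.GetSetting("StorageConnectionString"); }
    }
}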

So now I can make dev changes locally and deploy them to the dev site for testing. I can then merge those changes into the Master branch, retest them locally, and push them to the production site. Awesome! Once again the benefits of cloud hosting on Azure shine through!  

All the source for my dev branch is available on GitHub, so feel free to view it at https://github.com/phillee007/youconf/tree/dev (but please don't use that branch when assessing challenge 3, as the Master branch is where the stable code for the live site lives). 

Custom domain names and SSL in Azure web sites, doh!  

You can't really have a serious web app without a proper domain right? At least that was my thinking, so I went and bought the youconf.co domain name, along with an SSL certificate. I figured I'd be able to map it onto my web site in its current state, however, I made a few discoveries:

  • You have to be running an Azure web site in Shared mode or above to map a custom domain onto it. This isn't too much of an issue, as running a shared site is pretty cheap - $9.36/month - but the real trouble comes with SSL... read on... 
  • You can't map an SSL certificate directly onto an Azure web site, regardless of the mode it is running in. The only way to do it properly is to change from a web site to a web role. There is a workaround at http://www.bradygaster.com/running-ssl-with-windows-azure-web-sites, however, that involves creating a cloud service and using SSL forwarding. From what I've read, support for SSL on Azure web sites is coming soon (yay!), but in the meantime I'm stuck. If I used the SSL forwarder I'd have to pay for an additional cloud service in the long run, so I might as well switch over to using a web role for my site and skip the additional steps, BUT  
  • You can't deploy from GitHub to Azure web/worker roles - only to Azure web sites. 
  • You can, however, use hosted TFS to auto-deploy to cloud services... 

This left me in a tricky position, as one of my original goals was to ensure that all my source was available for everyone to see on GitHub, and I also wanted automatic deployments. I have plans to use a worker role for email sending and other background tasks, but if I converted over to web/worker roles, I'd have to use TFS in order to get automated deployments. As I mentioned in challenge 2, I love using TFS, but this was a tough decision as I really do want my source to be available to the public. For now I'll leave things as they are, since I already have a recognizable domain at youconf.azurewebsites.net. Another good thing is that Azure automatically provides an SSL certificate for *.azurewebsites.net, which gives us the security we require for logins etc.

In future I think I might move my source code into TFS and start using a web and worker role, but you'll have to wait till challenge four for that! I really hope that SSL becomes available out-of-the-box for Azure web sites soon too! 

Securing the authentication process  

Regardless of whether I used a custom domain name and SSL certificate or not, I wanted to secure the authentication process for my site. So I added a URL rewrite rule to my web.config (yes, URL Rewrite 2.0 comes built in to Azure web sites!) to force all AccountController requests to be redirected to https, as follows:

XML
<rewrite>
  <rules>
    <rule name="HTTP to HTTPS redirect for account/admin pages" stopProcessing="false" enabled="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTPS}" pattern="^OFF$" />
        <add input="{PATH_INFO}" pattern="^/account(.*)" />
        <add input="{SERVER_PORT}" pattern="80" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite> 

I also updated the authentication element to only issue cookies over SSL:

XML
<authentication mode="Forms">
  <forms loginUrl="~/Account/Login" timeout="2880" path="/" requireSSL="true" />
</authentication> 
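As a belt-and-braces measure (or an alternative if you'd rather not touch web.config), MVC also ships with a RequireHttps attribute that enforces the same thing at the controller level. A minimal sketch:

using System.Web.Mvc;

[RequireHttps] // Redirects HTTP GET requests to HTTPS, and rejects other HTTP verbs
public class AccountController : Controller
{
    public ActionResult Login()
    {
        return View();
    }
}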

If you're serious about securing your .Net web apps, I would highly recommend this Pluralsight course, which covers many scenarios and how to protect against them.  

Password Reset Functionality and Emails 

An essential requirement when implementing membership is having some sort of password reset functionality available for users who forget their passwords. I managed to implement basic functionality for this on top of SimpleMembership. I won't go into all the details here, but if you're interested, you can find them all in this daily progress update.  
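The core of the flow is SimpleMembership's WebSecurity class, which can generate a time-limited token and later redeem it. Stripped of validation and error handling, and with illustrative action names and parameters, it looks roughly like this:

using System.Web.Mvc;
using WebMatrix.WebData; // SimpleMembership

public class PasswordResetSketchController : Controller
{
    [HttpPost]
    public ActionResult ForgotPassword(string userName)
    {
        // Generate a single-use, time-limited token for this user...
        string token = WebSecurity.GeneratePasswordResetToken(userName);

        // ...and build a link containing it, which gets emailed to the user
        // (see the SendGrid sketch below)
        string resetUrl = Url.Action("ResetPassword", "Account", new { token }, Request.Url.Scheme);

        return View("CheckYourEmail");
    }

    [HttpPost]
    public ActionResult ResetPassword(string token, string newPassword)
    {
        // Redeem the token; returns false if it has expired or was already used
        bool succeeded = WebSecurity.ResetPassword(token, newPassword);
        return View(succeeded ? "ResetSuccess" : "ResetFailure");
    }
}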

One thing I would like to mention is that part of this involved setting up emails using SendGrid. I ended up sending the password reset emails in-process, i.e. directly from my controller method, rather than using a separate Windows service/console app/worker role. This is not ideal from a reliability or scalability point of view, due to the possibility of SMTP errors disrupting the user, so in future (challenge 4) I'll move this into a worker role and just use the controller to generate the email body and put it on the Azure Service Bus for sending. As I mentioned earlier, I'm likely to move my web site into a web role in future too, so challenge four would be a good time to do all of this. If you'd like to see how I plan to do it, have a look at this amazing article. Seriously, if that article were part of this competition I think it would win! 
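SendGrid exposes a standard SMTP endpoint, so the sending itself is just System.Net.Mail. A minimal sketch (the sender address and credential setting names are illustrative, and in real code the credentials live in Azure configuration, not in source control):

using System.Net;
using System.Net.Mail;
using Microsoft.WindowsAzure;

public class EmailSender
{
    public void Send(string to, string subject, string body)
    {
        using (var message = new MailMessage("noreply@youconf.example", to, subject, body))
        using (var client = new SmtpClient("smtp.sendgrid.net", 587))
        {
            client.EnableSsl = true;
            // SendGrid credentials - read from config, never hard-coded
            client.Credentials = new NetworkCredential(
                CloudConfigurationManager.GetSetting("SendGridUserName"),
                CloudConfigurationManager.GetSetting("SendGridPassword"));
            client.Send(message);
        }
    }
}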

Adding unit and integration tests, particularly for the controllers     

As I mentioned in the introduction, after challenge 2 I wanted to add more tests, particularly for my controllers, to make the app more robust. Check out this daily progress update for how I went about implementing unit tests initially, and also this progress update for how I set up integration tests using SQL CE to create a dummy database. I'm still not quite there yet, and have a lot more tests to write before this competition is over. Thankfully there are still two more challenges to come, so I'll continue my testing efforts, including adding some UI/smoke tests which I can run after each deployment to verify that the test and production sites are working as expected. 
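To give a flavour of what these tests look like, here's a minimal sketch (assuming MSTest and Moq; ConferenceController and its dependencies come from the YouConf solution, and the constructor it exercises appears later in this article) that verifies the controller's guard clauses:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using SolrNet;

[TestClass]
public class ConferenceControllerTests
{
    [TestMethod]
    [ExpectedException(typeof(ArgumentNullException))]
    public void Constructor_Throws_WhenDbContextIsNull()
    {
        // The controller should refuse construction without its dependencies
        var solr = new Mock<ISolrOperations<ConferenceDto>>();
        new ConferenceController(null, solr.Object);
    }
}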

All this and a whole lot more! 

As I mentioned earlier, I made plenty of other discoveries during this challenge, all of which are documented in my daily progress reports.

I'll leave it at that as this article is already turning into an epic! 

In Conclusion  

As with challenge two, what started out as a fairly focused challenge (SQL Azure) ended up becoming quite a large exercise in how best to develop and maintain an Azure solution. As I completed various tasks, my understanding of both SQL Azure and the Azure environment as a whole improved greatly. I've moved all my conference data into SQL, along with membership data, and now have a site that allows for secure user registration, including emails. I've also found what I think is a good solution for setting up a test version of the site and using a separate development branch in source control, allowing me to develop AND test without impacting the production site.  

In addition, I made a few discoveries about the limitations of Azure web sites regarding custom domain-name mapping and SSL certificates. I'm fine with leaving the site on the .azurewebsites.net domain for now though. I'm still not 100% sure what I'll do about my dilemma regarding publicly available source code and the issues involved with deploying to Azure cloud services from GitHub; however, I suspect that in the next challenge I'll:

  • Move my source code to hosted TFS 
  • Create a cloud service for my solution
  • Move my Azure web site into a web role
  • Setup a worker role to send emails and use service bus to communicate between the two roles

Thanks for reading this far and I hope I've given you a few tips on how to develop with Azure!  

Future challenges 

For future challenges, there are a number of additional features to focus on:

  1. Moving to an Azure cloud service as mentioned above 
  2. Adding the custom domain name and SSL certificate
  3. Learning about and using hosted VMs (I can't divulge what I'm going to use them for yet, but will do so in due course) 
  4. Adding more unit and integration tests, particularly for the controllers
  5. Adding the ability to upload Speaker photos and store them in BLOB storage 
  6. Adding live feeds of the slides, possibly using SlideShare
  7. Doing further testing to ensure the site is fully responsive across desktop, tablet, and mobile devices, which will be the focus for challenge 5
  8. Add the ability to setup reminders either using vcards or SMS so a user can register interest in a session, then be reminded when it's coming up.
  9. Perform further security hardening, such as removing parameter-tampering and possible XSS issues associated with MVC model-binding   


Challenge Four - Azure Virtual Machines   

Introduction   

This challenge focused on using Virtual Machines (VMs) in Azure, which tied in nicely to one of the functions I'd been keen to build for my site for some time now - search. In the past I've used SQL Server full-text search, and also Lucene.Net, to provide search capability for my apps. However, for this challenge I wanted to try out Apache Solr, which is a well-known, high performance search engine built on top of Lucene. In the event that the YouConf site starts to gain popularity, I wanted to ensure that I had a robust search solution in place, and using an Azure VM running Apache Solr is a great way to help achieve that.  

Having worked in Windows-only environments for a long time, I haven't been able to use Solr before, as it requires running Apache on a Linux operating system. The good news is that Azure VMs allow you to run not only Windows systems, but also Linux-based ones such as Ubuntu, Debian, and others! This means I could create an Ubuntu VM on Azure, install Apache/Solr on it, and then call into it from my app to add documents and perform searches. I'll describe how I went about doing this in the following sections, and also how I was able to add documents (with the help of the SolrNet library) and perform searches directly from the client's browser (using the Ajax Solr library). 

In addition to adding search functionality, I also created a separate worker role to handle sending emails, which I was doing in-process at the end of section three. Once this was in place, I updated the web project to use Azure service bus to communicate with the worker role. I also moved the functionality for adding/updating/removing documents in the Solr index into the worker role, so as to improve the performance and robustness of the YouConf web site.  

Finally, if you recall from challenge three, I was planning on moving my app to TFS for source control. I decided against this, as it would have meant squeezing even more into the fairly short window for the challenge, and I didn't want to run the risk of not completing my article on-time due to source control issues. 

As with previous sections, if you'd like more details on how I completed some of the tasks for this challenge, please have a look through the History section. It's a bit light for this part, as I spent most of my time on the article content, and less on the daily progress reports...  

Let's get started!   

Azure Virtual Machines

As per the Azure documentation, with Azure Virtual Machines "you get choice of Windows Server and Linux operating systems in multiple configurations on top of the trustworthy Windows Azure foundation". This is great, as I needed to set up an Ubuntu VM running Apache Solr. Initially I thought about creating one from scratch; however, after some searching I found that there were already a couple of pre-built solutions to help you get started with Apache Solr on Azure fast: 

  • A solution that provisions a trio of VMs to host Solr
  • The pre-built BitNami Apache Solr image, available in the Azure VM Depot

I chose the second option, as it looked very easy to set up. I also thought that the first option was a bit of overkill for my needs, as it created three VMs - however, if the site absolutely takes off, then maybe I'll revisit it in future. So, how did I set up my VM? I followed along with the article at http://wiki.bitnami.com/Azure_Cloud/Getting_Started, and have documented my steps below. 

 

Creating the VM image 

Firstly, I needed to create my own VM image based on the one in the VM depot. This would later be used as the basis for the actual virtual machine instance. To do this, I logged into the Azure Management portal and selected the Virtual Machines tab as below: 

 Image 42

 I selected the Images tab, then hit the Browse VM Depot button: 

 Image 43

 I selected the Apache Solr image from Bitnami:

Image 44 

 ... selected my YouConf storage account (the one I'd set up in challenge two): 

Image 45 

 ...and clicked the tick button to confirm. I then had to wait a while as Azure copied the VM image (30 GB) from the VM image gallery to my storage account, as shown below:

Image 46 

 

Once the copy was complete, the screen refreshed and told me that I needed to register the image:

 Image 47

So I hit the Register button at the bottom of the screen, as follows:  

 Image 48

Now that my image was registered, I needed to create a VM based on it.

Creating the VM  

I selected the New button in the bottom-left of the screen, and chose Virtual Machine > From Gallery as follows:

Image 49 

I named it youconf-solr, and made it extra small so it would cost as little as possible to run. A good thing about Azure VMs is that you can always change their size later on, so if I find it's running slowly, I can scale up to a larger instance. I also selected a username, and chose the Provide Password option so I could log in to the machine once it was created, as follows (Note: make sure you remember your username and password, as you'll need them to log in later on): 

Image 50

 

Next I needed to configure DNS. At this stage I'm just using a stand-alone machine, so I gave it the DNS name youconfsearch, which seemed like a logical name for it. 

Image 51

 

 

When asked to configure availability, I didn't create an availability set, as I just want my machine to run on its own for now, and am not quite as concerned about high availability as I would be if I were running a large app for a client. Again, if the site were to suddenly become popular, I would likely revisit this and configure an availability set so the site is more fault-tolerant. 

Image 52 

After confirming this, Azure began provisioning the VM, which took a few minutes to complete, as below:

Image 53 

Endpoints 

Once provisioning was complete, my VM was up & running, with a single endpoint for SSH as shown below (after selecting the Endpoints tab): 

Image 54 

 

To enable access to the VM from the web, I needed to add a public endpoint on port 80, which I did by clicking Add, and then completing as below:

Image 55   

Image 56

Now my additional endpoint was displayed as below:  

Image 57

 

At this stage I should have been able to browse to my VM on the web, which I did using the URL provided on the Dashboard screen for my VM - http://youconfsearch.cloudapp.net. And it worked!

Image 58 

 

I then selected the Access my application link, and was taken to the Solr management screen for my Solr instance: 

Image 59 

 

As I've said before in this competition - isn't it nice when things just work! I had a fully functional Solr instance up & running on Azure, and it really wasn't that tricky to complete, thanks to the BitNami tutorial I referred to earlier, and the Azure Management Portal's ease-of-use. For more information on setting up Azure VMs, I'd recommend reading some of the documentation at http://www.windowsazure.com/en-us/documentation/ 

Note: I used the Azure Management Portal GUI to perform the setup tasks above such as creating the VM image. You can also perform all of the same steps via the command-line if you prefer. If you'd like to find out more, I'd recommend reading this blog post by Scott Hanselman where he creates a VM server farm, all via the command line. 

If you're interested in learning more about Apache Solr, I'd recommend reading about it on the Apache website. You can use the admin interface shown above to query documents and manage the Solr instance, which makes it really easy to use. Now that I had my Solr instance running, the next step was to add some documents to the index.  

Adding documents to the Solr Index 

I wanted to be able to connect to the Solr VM from my Azure site and add conference data, which could later be searched on. Rather than writing my own code to handle raw HTTP requests to the Solr instance, I used the SolrNet library, a .NET wrapper that makes it easy to manage Solr documents from .NET code. I initially tried installing the NuGet package for it; however, I discovered that the NuGet package doesn't contain the latest version of SolrNet, which I needed in order to connect to my Solr instance (v4.3.0). Thankfully, the source code for SolrNet is available on GitHub, and since I already had GitHub explorer installed on my machine, I was able to:

 

  • Clone the repository 
  • Download the source code
  • Build the solution (in release mode)
  • Copy the relevant binaries to a lib folder and reference them in my solution
  • Start writing code using SolrNet

I've shown a screenshot below of my lib folder, which contains the SolrNet binaries that I copied (note I included the pdb and xml files as well):

 

 Image 60

If you're interested in using SolrNet, I'd recommend reading the documentation at https://code.google.com/p/solrnet/. I'll show you how I set it up below.

After adding references to the above binaries in my web project in Visual Studio, I first added a ConferenceDto class as per the SolrNet documentation, which would represent the data I'd be sending to the Solr index for later retrieval. The class is as follows: 

public class ConferenceDto
{
    [SolrUniqueKey("id")]
    public int ID { get; set; }

    // Note: only id is the Solr unique key; hashtag is an ordinary field
    [SolrField("hashtag")]
    public string HashTag { get; set; }

    [SolrField("title")]
    public string Title { get; set; }

    [SolrField("content")]
    public string Content { get; set; }

    [SolrField("cat")]
    public ICollection<string> Speakers { get; set; }
}

The attributes on each property correspond to fields in the Solr index. Note that by default, the Solr index already contains all of the fields above, except for the hashtag field. I needed to add it to Solr manually, and will come back to that later to show you how I did it by modifying the Solr schema.xml file...

I wanted to add/update the conference data in the Solr index whenever a conference was changed in the YouConf site, so I updated my ConferenceController to do that. First, I updated the constructor:

ISolrOperations<ConferenceDto> Solr { get; set; }

public ConferenceController(IYouConfDbContext youConfDbContext, ISolrOperations<ConferenceDto> solr)
{
    if (youConfDbContext == null)
    {
        throw new ArgumentNullException("youConfDbContext");
    }
    if (solr == null)
    {
        throw new ArgumentNullException("solr");
    }
    YouConfDbContext = youConfDbContext;
    Solr = solr;
}

Next I added an AddConferenceToSolr method to add/update the conferences in Solr, which I called from my Create and Edit methods. The code is as follows: 

private void AddConferenceToSolr(Conference conference)
{
    // Map the conference to a DTO and add/update it in the Solr index
    Solr.Add(new ConferenceDto()
    {
        ID = conference.Id,
        HashTag = conference.HashTag,
        Title = conference.Name,
        Content = conference.Abstract + " " + conference.Description,
        Speakers = conference.Speakers
            .Select(x => x.Name)
            .ToList()
    });
    Solr.Commit();
}

Finally, since I'm using Ninject for dependency injection, I added an entry to my Ninject bootstrapper file in /App_Start/NinjectWebCommon.cs to create an instance of the ISolrOperations interface, which would be passed into the controller:

kernel.Load(new SolrNetModule("http://youconfsearch.cloudapp.net/solr")); 

In the above I provide the URL for the Solr VM I configured earlier. Now, when conferences are added/updated, the changes automatically propagate to the search index in Solr. 
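As an aside, although YouConf does its searching from the browser (more on that shortly), SolrNet can also query the index server-side, which is handy for a quick sanity check that documents are being indexed. A sketch, using the same injected Solr instance:

// Solr is the injected ISolrOperations<ConferenceDto> shown above
SolrQueryResults<ConferenceDto> results = Solr.Query(new SolrQuery("azure"));

// results enumerates the matching documents; NumFound holds the total hit count
foreach (var dto in results)
{
    System.Diagnostics.Debug.WriteLine("{0}: {1}", dto.HashTag, dto.Title);
}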

Note: As I mentioned in the introduction, by using the AddConferenceToSolr method in my controller, I made the UI less responsive, as this was making an external call to Solr in-process. Later on I'll show you how I moved this to a separate worker process to speed things up again.  

At this stage, conference data was ready to be propagated to Solr. I also updated the Speaker and Presentation controllers to update Solr when they were changed. I still had work to do though before I could actually save some conferences, because if I tried to run the above code, Solr would complain that it didn't know about the hashtag field. Remember how I mentioned earlier that I needed to update the Solr schema.xml file to add the hashtag field? That's what I'll show you next.   

Using SSH to connect to the Azure VM and update the Solr Schema file 

Since I was running an Ubuntu VM, I couldn't simply remote desktop into it with a GUI like I would with a Windows Server VM (actually, it turns out it is possible - I just didn't get onto it early enough!). Instead, I had to use SSH from the command line. The article at http://www.windowsazure.com/en-us/manage/linux/how-to-guides/log-on-a-linux-vm/ does a really good job of showing how to use SSH, and I'd recommend reading it.

In my case, I went to the management portal and obtained the SSH details for the VM, then used KiTTY (a variant of PuTTY) to log in using the username/password I'd created when setting up the VM, as shown below:  

 Image 61

 

Now I was connected to the VM and could make changes via the console. To update the Solr Schema.xml file, I needed to find and open the file, then add an additional field for the hashtag. Once I'd navigated to the right directory, I used the command sudo nano schema.xml to open the file in the nano text editor, with administrative rights, as shown below: 

Image 62 

 

I then added the hashtag field (a simple string field, along the lines of <field name="hashtag" type="string" indexed="true" stored="true" />): 

Image 63 

 

...then hit Ctrl+X (confirming the save when prompted) to exit the file. For the changes to take effect, I needed to restart Solr as follows:

 Image 64

 

I then exited KiTTY, opened up a web browser, went to my Solr instance, and confirmed that the hashtag field had been added successfully, as shown below:

Image 65 

 

Great - it's there! Now when I performed any CRUD operations on the YouConf website, the Solr index would be kept up-to-date automatically. There was one thing missing though - a search page! I'll show you how I added that now... 

Adding a search page 

I wanted a nice simple search page, which retrieved results directly from my Solr instance, rather than going via the YouConf website. This would mean better performance for the YouConf site, as it didn't have to relay query requests to the Solr VM.  To get started, I added a new SearchController, which simply displayed a plain page with a search box on it as shown below: 

 Image 66

 
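The controller behind this page is deliberately minimal - something along these lines (the real code is in the GitHub repo):

public class SearchController : Controller
{
    // Just renders the search page; the querying itself is performed in the
    // browser by ajax-solr, talking directly to the Solr VM
    public ActionResult Index()
    {
        return View();
    }
}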

Next, I wanted to add the functionality to perform the actual search. As it turns out, there's already a library to do just that - https://github.com/evolvingweb/ajax-solr. To use it, I downloaded the relevant javascript files, and followed the Reuters tutorials at https://github.com/evolvingweb/ajax-solr/wiki/reuters-tutorial. Using this, I was able to add basic search, paging, and hit highlighting. I won't go into all of the technical details here, as the tutorial I mentioned does a better job of that, and I recommend reading it if you're looking to implement this functionality yourself. If you'd like to dig into the source code that drives the YouConf search page functionality, it's all available either via download at the top of this article, or on GitHub. A good starting point is the search page code - https://github.com/phillee007/youconf/blob/master/YouConf/Views/Search/Index.cshtml 

The end result was the search page with results displayed below, which I thought was quite nice.  

 

 Image 67 

 

Note: I really wanted to add autocomplete, and I found a number of tutorials on how to use NGram and EdgeNGram filters to accomplish this, however, after a few goes at it I still couldn't get it working, and I felt that my time would be better spent on the rest of this article. In future I'll look to incorporate this though for sure!  

Feel free to try out the search page at http://youconf.azurewebsites.net/search (hint: search for terms such as Azure, or other terms that appear in the conference descriptions). Just make sure to access it over http, because if you load the page over https your browser will likely block the AJAX search requests to the Solr VM (http://youconfsearch.cloudapp.net) as mixed content, since the Solr endpoint is plain http. In the next section I'll discuss how I had planned to fix this, and how I didn't quite get there in the end...  

Securing the Solr Instance   

At this point I thought about security, and wanted to secure my Solr instance so only authorized users could make edits/updates and access the admin ui, whilst still leaving the querying functionality available to be consumed by the client-side javascript on the YouConf search page. I also wanted to add SSL to it. I read the Apache documentation at http://httpd.apache.org/docs/current/howto/auth.html, and also the BitNami documentation at http://wiki.bitnami.com/Components/Apache#How_to_create_a_password_to_protect_access_to_apache.3f, and tried adding and modifying .htaccess files, and also the Solr configuration, but still couldn't get it to work! I've shown a screenshot below of my updated Solr configuration file which I thought would work, but didn't:

 Image 68

I'm no Apache expert, and I'm guessing there's something simple that I was missing here. I'll leave it as an exercise for the reader for now, but if anyone reading this knows what's wrong, please let me know by posting in the comments section. In future I'll keep trying to get this working, as it's an essential part of securing the app so malicious users can't get in and fiddle with my Solr index.  

Another thing I'd like to do (once I've setup auth and SSL on the VM) is to take an image of it, so that in the event I want to create a new Solr VM, I can use my pre-configured image to get started quickly. To do so, I read over the steps in the article at http://www.windowsazure.com/en-us/manage/linux/how-to-guides/capture-an-image/. The process is fairly simple, so I'll leave it to you the reader to have a go at doing it once you've setup your VM.  

Moving on - I now had a fully functional search implementation using Apache Solr, which was working like a charm. I could have left it like this, however as I mentioned earlier, all of the updating of the Solr index was being performed from within the web app in-process, and I wanted to move it into a background worker process to make it more robust. That's what I'll show you next.    

Adding a Worker Role to handle background tasks such as sending email and updating Solr

Using background services to perform offline tasks can help improve both the performance and robustness of your application, and Azure makes it easy to add these background services using Worker Roles. I initially thought about adding another Windows VM to perform this functionality, however, worker roles are perfectly suited to this task, and I wouldn't feel comfortable trying to shoehorn this into a VM-based solution, as it wouldn't fit with my goal of giving proper guidance to readers of this article.

If I did use a VM, it would also mean I couldn't have automated deployments (if/when I move to TFS), and it would be harder to keep track of monitoring data etc., which comes out-of-the-box with worker roles. That said, if you require complete control over the operating system your background service runs on, or you need to move an existing background service to the cloud quickly without creating a worker role, then VMs might be the option for you. The beauty of the Azure platform is that it gives you the power to choose whichever option best suits your needs. 

As I've mentioned earlier, the article at http://www.windowsazure.com/en-us/develop/net/tutorials/multi-tier-web-site/1-overview/ gives a comprehensive rundown on how worker roles can be used to perform background tasks, and I highly recommend reading it if you're not familiar with the concept of worker roles or why you would need them. The YouConf web site had two tasks that were candidates for offline processing:

 

  • Sending emails (generated during the password reset process)
  • Updating the Solr search index (which happens whenever a CRUD operation is performed on a Conference, Speaker, or Presentation) 

I'll go over the steps I took to create and deploy the worker role below. Note that I won't go into all the details as that would make this article a bit too lengthy, however, if you're interested, all the source code is available on GitHub. I've also written a separate article about best practices when using Azure Service bus with strongly typed messages, which I thought was worth an article in its own right!

 

Creating the worker role in Visual Studio  

To add a worker role, I opened the YouConf solution in Visual Studio, and then hit Add New > Windows Azure Cloud Service as shown below: 

 Image 70

I selected Worker Role with service bus, and named the project YouConfWorker: 

 Image 71  

 

With the worker role project in place, I went about moving the functionality for sending emails and updating the Solr index out of the web project, and into the worker role. Note: As mentioned earlier, I've written a separate article describing the best practices I followed when creating the worker role for:  

  • Smart backoff when no messages are retrieved within the current polling interval 
  • Sending and retrieving strongly typed messages   
  • Exception logging and handling, including poison messages 

If you're interested, please check out my other CodeProject article - http://www.codeproject.com/Articles/603504/Best-practices-for-using-strongly-typed-messages-w 

As an example, I'll show you how I moved the Solr index updating functionality across into the worker project.  

Updating the web site to use Azure Service Bus   

The first thing to do was to remove the code that updates the Solr Index from the web project, and replace it with code that simply puts a message on a service bus queue, with the details of the update to be made. To accomplish this, I added a new base controller class, with common functionality for sending queue messages, and updated the ConferenceController to inherit from it. The code for the BaseController class was as follows: 

public class BaseController : Controller
{
    const string QueueName = "ProcessingQueue";

    protected void SendQueueMessage<T>(T message)
    {
        // Create the queue if it does not exist already
        string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
        if (!namespaceManager.QueueExists(QueueName))
        {
            namespaceManager.CreateQueue(QueueName);
        }

        // Initialize the connection to the Service Bus queue
        var client = QueueClient.CreateFromConnectionString(connectionString, QueueName);

        // Create the message, stamping the payload's CLR type name on it so the
        // worker role knows how to deserialize the body
        BrokeredMessage brokeredMessage = new BrokeredMessage(message);
        brokeredMessage.Properties["messageType"] = message.GetType().AssemblyQualifiedName;
        client.Send(brokeredMessage);
    }

    protected void UpdateConferenceInSolrIndex(int conferenceId, SolrIndexAction action)
    {
        var message = new UpdateSolrIndexMessage()
        {
            ConferenceId = conferenceId,
            Action = action
        };
        SendQueueMessage(message);
    }
}

Note the UpdateConferenceInSolrIndex method, which simply creates a new UpdateSolrIndexMessage which specifies which conference needs to be updated, and the action to be performed (either Update or Delete).  The SendQueueMessage<T> method is responsible for creating an actual BrokeredMessage and putting it on the queue, and specifying the type of the message to help with retrieval in the worker role. 

The relevant code in the Create method of the ConferenceController then became: 

// ... save conference to db etc. ...

UpdateConferenceInSolrIndex(conference.Id, Common.Messaging.SolrIndexAction.Update);

// ...

Now that I had this in place, I needed to add the actual Solr Index update functionality into the worker role.

Adding Solr update functionality to the worker role 

You've seen the pattern I used for accessing the Azure Queue and accessing strongly-typed messages in the article I referred to earlier, so I won't repeat it here. What I will do, however, is show you the code from the message handler responsible for pushing updates to Solr, stored in the YouConfWorker/MessageHandlers/UpdateSolrIndexMessageHandler.cs class:

namespace YouConfWorker.MessageHandlers
{
    public class UpdateSolrIndexMessageHandler : IMessageHandler<UpdateSolrIndexMessage>
    {
        public IYouConfDbContext Db { get; set; }
        public ISolrOperations<ConferenceDto> Solr { get; set; }

        public UpdateSolrIndexMessageHandler(IYouConfDbContext db, ISolrOperations<ConferenceDto> solr)
        {
            Db = db;
            Solr = solr;
        }

        public void Handle(UpdateSolrIndexMessage message)
        {
            if (message.Action == SolrIndexAction.Delete)
            {
                // Remove the conference's document from the index (id is the unique key)
                Solr.Delete(message.ConferenceId.ToString());
            }
            else
            {
                // Re-fetch the latest conference data and add/update it in the index
                var conference = Db.Conferences.First(x => x.Id == message.ConferenceId);
                Solr.Add(new ConferenceDto()
                {
                    ID = conference.Id,
                    HashTag = conference.HashTag,
                    Title = conference.Name,
                    Content = conference.Abstract + " " + conference.Description,
                    Speakers = conference.Speakers
                        .Select(x => x.Name)
                        .ToList()
                });
            }
            Solr.Commit();
        }
    }
}

This code is very similar to the original code in the ConferenceController, and simply calls the Solr VM using SolrNet. It also retrieves the conference from the database if needed when doing updates. 
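For completeness, here's a simplified sketch of a single iteration of the worker's receive loop, showing how it uses the messageType property stamped on by SendQueueMessage to deserialize the body and resolve the right handler. It assumes client is a QueueClient for the same queue and kernel is the worker's Ninject kernel; the full pattern, including back-off and poison-message handling, is in the separate article I mentioned:

BrokeredMessage brokered = client.Receive(TimeSpan.FromSeconds(5));
if (brokered == null)
{
    return; // Nothing to process - back off before polling again
}

try
{
    // The web site stamped the CLR type name on the message (see SendQueueMessage)
    var messageType = Type.GetType((string)brokered.Properties["messageType"]);
    var body = typeof(BrokeredMessage)
        .GetMethod("GetBody", Type.EmptyTypes)
        .MakeGenericMethod(messageType)
        .Invoke(brokered, null);

    // Resolve the IMessageHandler<T> for the concrete message type and invoke it
    var handlerType = typeof(IMessageHandler<>).MakeGenericType(messageType);
    dynamic handler = kernel.Get(handlerType);
    handler.Handle((dynamic)body);

    brokered.Complete();  // Success - remove the message from the queue
}
catch (Exception)
{
    brokered.Abandon();   // Failure - unlock the message so it can be retried
}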

In addition to the changes above, I moved some of the common messaging and database-related functionality into a new project called YouConf.Common. I'd recommend checking out the source code if you'd like to find out more.  After debugging locally, I just needed to deploy the role to Azure... 

 Deploying the worker role to Azure 

As I mentioned in challenge three, I initially thought of moving the whole solution to TFS, as this allows integrated continuous deployments to cloud services; when using GitHub, this is only available for Azure web sites. After some thinking I decided to leave the code in GitHub for now, as it means it will remain available for everyone to access. That meant that in order to deploy the worker role to Azure, I had to publish directly from my local machine.   

To do this, I first created a local deployment package, by right-clicking on my YouConfCloudService project in Visual Studio, and hit Package (selecting Cloud and Release in the service and build configuration boxes), which created an initial deployment package on my machine. I then went to the Azure management portal and created a new cloud service as shown below:

Image 72 

Since I'd selected the Deploy a cloud service package box, I was able to select my local package that I'd created for deployment to the cloud. 


Image 73

 

This then created the cloud service in Azure and deployed my package to it, as shown below: 

Image 74 


I was also able to publish directly from Visual Studio after downloading the publish profile (which I originally did in challenge two), by selecting the cloud service project in Visual Studio, and selecting Publish, as shown below:

Image 75


This then published the cloud service to Azure, with the result shown in the output window:

Image 76 

 

So there you have it - offline processing and background tasks being performed in an Azure worker role - a win for performance, robustness, and appropriate use of the technologies available! Note that I selected Extra Small as the virtual machine size, as I didn't want to go over the usage limit for my Azure subscription (bear in mind that I'm also running an extra-small VM, which means I'm just within the monthly limit for the free Azure subscription). I can always scale this up if needed though.   

Conclusion  

 

What a challenge this was! By the end of it, the application was finally at a state where all the pieces were together, with the right tools being used for the right tasks. My Apache Solr VM was running well in an Extra small instance, and easily handled all the search requests I threw at it. If needed, I could easily have scaled it up to a larger instance depending on the load. The functionality for sending emails and updating the Solr index was sitting within a worker role, where it should be, and the web app was communicating with the worker role using Azure service bus, making it robust and reliable. 

I didn't manage to solve the authentication/authorization issues with Apache, but this will be an ongoing task, which hopefully feedback from others will help solve. I also had to deploy the worker role directly from Visual Studio, rather than having it auto-deploy from GitHub, however this was fairly easy to manage. I just had to make sure that I didn't accidentally check-in my production connection strings. 

You might also recall from challenge three that I was keen on adding SSL and a custom domain name to my Azure web site. The good news is that SSL has recently become available for Azure websites, however, it requires running your site in reserved mode, and is fairly costly to use (at least I thought so). Thus, my view at the end of challenge three - that it would be better to move the web site into a dedicated web role - remains the same. I may decide to implement this in the next challenge, however, at this stage I'm happy with the site remaining on the .azurewebsites.net domain till it starts getting more traffic.

One last item worth mentioning - in challenge three I discussed my method for keeping sensitive configuration settings out of GitHub with Azure Web Sites. I felt it was worth an article in its own right, so if you're interested, please check it out at http://www.codeproject.com/Articles/602146/Keeping-sensitive-config-settings-secret-with-Azur

Future challenges 

Only one to go now! The focus for challenge five is mobile access, so that's what I'll target next. I'll also aim to complete some of the outstanding tasks from earlier challenges, namely:

  1. Adding more unit and integration tests, particularly for the controllers
  2. Adding the ability to upload Speaker photos and store them in BLOB storage 
  3. Adding live feeds of the slides, possibly using SlideShare
  4. Doing further testing to ensure the site is fully responsive across desktop, tablet, and mobile devices, which will be the focus for challenge 5
  5. Add the ability to setup reminders either using vcards or SMS so a user can register interest in a session, then be reminded when it's coming up.
  6. Perform further security hardening, such as removing parameter-tampering and possible XSS issues associated with MVC model-binding   


Challenge Five - Mobile Access

Introduction

It's the final stage of the contest, and what better to work on than one of the major challenges facing all web developers building modern websites - mobile access. My main goal for this challenge was to make the YouConf website usable across a range of devices, including smartphones, tablets, desktops, and all things in-between. With this in mind, I chose to use responsive design to optimize the site to give the best user experience regardless of the device on which the user was browsing. In the following sections I'll explain why I chose this approach, and also look at the pros/cons of other approaches available. I'll then look at responsive design in detail, and go through the steps I took to make the YouConf site responsive - including how to test it, and the issues I had to overcome for specific pages. Finally, I'll wrap up with a few highlights from my time in this competition, and a list of future steps that I plan to take for the YouConf web application. 

One point worth mentioning is that whilst I didn't end up needing to make use of Azure Mobile Services for this challenge, I still did some research into their usage, and can see how useful they would be if you were building a mobile app and required features such as push notifications, backend services, and scheduling. Maybe I'll have the chance to build a mobile app and try them out in a future competition... 

As with previous sections, if you'd like more details on my day-to-day progress, please have a look through the History section. I admit it is very light for this part, as I spent nearly all my time on the article content due to time constraints.  

For those who can't wait, here's a sneak peek at the finished product: 

Image 78 

Designing for mobile - what are our options?

Before I go into the details of how I went about completing the tasks and building a responsive site, let's first look at the options available when designing for mobile. I did some research on building mobile-capable websites, and found that there are three main options available when you want to start developing for mobile:

  • Responsive web design (RWD) - using CSS3 media queries and/or javascript to display an optimized version of the existing YouConf web site, without having to create a separate site or app.
  • Create a separate mobile version of the website - E.g. m.youconf.azurewebsites.net, which is tailored specifically at mobile devices. This may or may not offer the same content/features as the desktop version of the site.
  • Create separate mobile app/s - such as a Windows Phone app and/or iPhone app and/or Android app. As with the separate mobile site, this may or may not offer the same content/features as the desktop version of the site, but is likely to take advantage of specific features only available on the given mobile device.

Each of the above options has its own pros/cons, some of which I'll outline below. Note that if you'd like further details on the pros/cons of each, please check out this article.

Responsive design

Pros:

  • One only has to maintain a single codebase, which can serve both desktop and mobile users.
  • Costs are lower due to being able to use the same technologies for desktop/mobile users, and hence fewer separate developer skillsets are required
  • Your site should be usable across the broadest possible range of devices, due to wide support for CSS3 and media queries in mobile and desktop browsers

Cons:

  • You may have to make user experience compromises if user goals vary widely between mobile and desktop.
  • A responsive design requires more front-end code to work across all common browsers and operating systems—particularly Internet Explorer 6 through 8. This can lead to slower load times for mobile and desktop users compared with separate sites that are highly optimized for their intended screen size.
  • Responsive design can compromise the desktop user experience. Certain layouts are very difficult to make responsive (the most notorious being tables with lots of columns).

Separate mobile site

Pros:

  • With a separate site, you can more fully tailor the browsing experience for mobile users, including things that are not just HTML and CSS. For instance, your user research may indicate that the content, navigation, or writing style/length should be different for mobile users.
  • You can load only the assets that your mobile users need (e.g. smaller images, JavaScript, CSS files), resulting in faster load times.
  • Your separate mobile site can use more modern front-end technologies like HTML5 and WebKit features when you don’t have to plan for backward compatibility with older desktop browsers.

Cons:

  • You have to manage cross-linking and redirecting of users between the two sites. This can be tricky to get right and will negatively affect page load speeds.
  • Having two separate front-end codebases to maintain may increase maintenance costs.
  • Some users don’t want a separate experience, especially on tablets. Any time you have separate mobile and desktop sites you must provide easy, obvious ways for users to navigate from one site to the other.

Separate mobile app/s

Pros: These are similar to the advantages of building a separate mobile site, with the additional benefit of allowing users to take advantage of specific features for the particular device that the mobile devices is targeting. For example, taking a picture and uploading directly, accessing local file storage, performing actions offline etc.

Cons: Again, these are similar to those for a separate mobile site, with the additional cons of:

  • Further costs due to having to build/support/test multiple codebases - one for each app that you build.
  • Reduced audience reach as your app is only available for users that use the specific devices that your app targets

So, where does that leave us?

As I said earlier, I wanted the YouConf site to be usable on a range of devices, not just those with a specific brand or screen size. I also wanted the same features of the desktop website to be available on mobile devices. Finally, since there's only one of me, and my resources (both dev and testing) are severely limited, I wanted to choose the option that gave the greatest return on investment. Hence my decision to use responsive web design! Note: if you're building a web application and considering what to do to make your site available on mobile, it's worth reviewing the articles I mentioned earlier, as your goals may be different to mine, and hence you may need to evaluate the other two approaches in more detail. In my case, however, responsive design seemed like the obvious choice.

Now that I'd decided on my approach, let's take a look at responsive design in detail.

Getting Started with responsive design

What are our goals? How will we test the site? 

Responsive design involves making your site flexible so that it presents the best user experience possible in any context, be it desktop or mobile. If you're new to the whole concept, I'd recommend reading Ethan Marcotte's 2010 article, which gives a great explanation of what responsive design is, along with a quick demo of how to make an example site responsive. Responsive design is a huge topic, and though I'll only cover a small portion of it in this article, I hope to give you an indication of some of the steps involved in making a site responsive, and also show the great results you can achieve without having to make too many changes.

Sometimes when building a responsive site for public usage, you might be given a specific device to target, such as "we want it to work on mobile, so we'll just test it on the iPad". Whilst this allows you to focus your efforts on a specific device, it doesn't necessarily mean your site is the best it could be, as there are so many devices out there that it's impossible to code for every specific one, not to mention every orientation on every device! A better approach, as outlined in this article, is to take a device-agnostic approach and focus on how your key content displays at a range of screen sizes. You can then see where content breaks, and adjust your design to make the site usable across a whole range of screen sizes, rather than just on a single device.

This is the approach we'll take with the YouConf site, and we'll start by testing across multiple resolutions/devices to see what happens to our content as the browser is resized. From there we'll identify some possible breakpoints (based on our key content/navigation) where we need to adjust our layout to optimize the site for the given resolution. Note: We'll still need to test the site in specific devices to make sure it displays properly, and also to get an idea of the common resolutions that we'll need to pay attention to if we want to make the site usable for the widest possible audience. However, by making the site flexible based on target screen sizes (as opposed to target devices) we should end up with a result that naturally works well across a range of devices. Note also that I'm not an expert in this field, and I'd recommend doing plenty of your own research and reading the articles that I've mentioned thus far. 

Where to start? 

For the YouConf site, we're lucky that there aren't a huge number of pages we need to work on. However, there are a sufficient number of elements (images, video, multiple column layouts etc) that we'll need to address each one and ensure that it works across a range of screen sizes. Before we go any further, we need to figure out how to test our existing site on multiple devices, and figure out what we need to work on. That's what I'll show you next.   

Testing our site in multiple devices/screen sizes 

The first thing we'll do is look at the site as it appears today, at a number of screen sizes on a number of devices. Often it's easy to test different browser sizes by simply resizing our browser window, as this gives a quick guide as to whether the site is performing as we expect in our chosen browser at a given resolution. However, if we want to test specific screen sizes/devices, we need to use something that resizes to those specific screen sizes and is as close to the native device as possible (of course, we could go and purchase a heap of mobile devices, but that could get pretty costly!). It would also be useful if we could test using the same OS as users will have on their devices. Thankfully, there's already a solution for this - BrowserStack. Scott Hanselman has a great blog post on BrowserStack integration with Visual Studio, which I thoroughly recommend reading to familiarize yourself with it. In short, BrowserStack provides a virtual environment that allows you to test your site on various devices and operating systems, meaning you can test more reliably and efficiently.

To get started with BrowserStack, I first visited http://www.modern.ie/ and clicked on the link to 'Try BrowserStack'. (As an aside, the Modern.Ie site has some useful links for testing your site, and is well worth a read. Chris Maunder has a good write up on it as well). 

I signed up for the free trial, which provides 3 months of free testing in Windows-based environments, and 30 minutes of non-windows testing time. I've shown screenshots of the signup process below: 

Image 79 

Image 80


As per Scott's post, I then installed the Visual Studio extension which allows me to debug using BrowserStack right from Visual Studio 2012, as shown below:

Image 81 

 

Once I started debugging, it allowed me to choose which OS/browser I'd like to test in: 

Image 82


At the next step I chose to debug an internal url using a tunnel, and received a warning saying I had to install Java in order for it to run, as shown below: 

 Image 83

I then clicked on the Download the latest java version link and installed java: 

 

Image 84

 

After reloading the page, I was able to specify the local url and port to test (which is what I've been using already when debugging locally) and then let BrowserStack do its magic:

 Image 85

Image 86 

Image 87

 

 

The end result is shown below - I'm remotely debugging code that's running on my local machine, from a cloud-hosted Windows 8/IE10 environment - which is pretty amazing IMHO! 

 Image 88

 

Now that we've got that working, what's next? Well, let's go back and test on some devices that have screen sizes in the range that we want to support. 

Target browsers and IE8 support

Before we look at mobiles and tablets, let's look at the desktop site and the browsers we want to support. Ideally, I'd like the site to be fully functional in all of the latest Chrome/Safari/Firefox variants, plus IE8/9/10; however, given the competition time constraints, there's no way I could test the site thoroughly in all of them. IE9 and IE10 support CSS3 media queries (as do the latest versions of Chrome/Firefox/Safari), which allow us to apply specific CSS styles depending on the browser viewport width. IE8, however, does not. To fix that, I included respond.js, a script that helps make media queries work in IE6/7/8. Note that I'm not supporting IE6/7, as past experience tells me that coding for those two rogues is more pain than it's worth. Plus, coding for them would go against the guidance from Microsoft, which encourages users to upgrade their browser to IE8 if they're on XP, or IE10 if they're on Windows 7.  

It turns out that respond.js was already included in the default Visual Studio MVC 4 internet application template I'd started with back in challenge two, in the /scripts/modernizr-2.6.2-respond-1.1.0.min.js file. So all I needed to do was update my /Views/Shared/_Layout.cshtml file to reference this script file instead of the default modernizr.js file, as follows: 

<head>
    ..... 
    <script src="/scripts/modernizr-2.6.2-respond-1.1.0.min.js"></script>
</head>  

Now IE8 will support media queries, which is essential for IE8 users with desktop resolutions below 1024x768.  

Testing on mobile devices and tablets 

I'd like the site to work on tablets (big and small), smaller desktops, and mobile devices. When viewing on an iPad or a Microsoft Surface in portrait mode, the viewport width is usually 768px, so I'll test that. There are also a range of other tablets out there, not to mention mobile phones in landscape mode, which we want to accommodate. Finally, many mobile devices have screen widths of 320px, so I'll test at that width too. Once we get below 320px things get pretty cramped, and given that most of the newer mobile devices out there are 320px and above, I won't test at screen sizes below this. 

Remember what I said earlier about the content being the most important part to focus on when using responsive design? Well, on our site we have a number of key content pages we'll focus on: 

  • The live conference video page - arguably the most important of any page in the whole site, as this is where viewers can tune in to watch their conference in realtime 
  • The conference detail page - which contains conference, presentation and speaker information 
  • The home page - if we want to drive site usage, we really need to have this looking sharp across the board  

For each of the above pages, I'll test it at various screen sizes, and then apply specific responsive design techniques in order to make it work properly in tablets/mobiles. Note that whilst I'll focus on the above pages, I'll also look at the site as a whole, and make modifications to other pages if required.

With that in mind, let's have a look at the live conference video page in an iPad 2 (768px wide)

 Image 89

As you can see, when looking on a large tablet such as the iPad 2, things aren't too bad. There are a number of issues though:  

  1. The nav is getting quite close to the logo   
  2. There is no margin or padding on the outside of the page, resulting in the content being squeezed against the sides of the browser window
  3. The h1 heading is very large relative to the rest of the content

Once we get down to the small tablets, however, it's another story. For quick testing on my local machine, I've downloaded a browser extension for Chrome called Viewport, which allows me to automatically resize the browser window to common device widths. At 480px wide (the same as an iPhone in landscape mode) the result is as below: 

Image 90

 Now things are starting to look a bit off. We have the following issues: 

  1. The nav is now floating up against the right-hand side 
  2. The video, being a fixed size, has extended beyond the page border, and thus caused a horizontal scroll.  
  3. The h1 heading is even larger, relative to the rest of the page, than it was on the iPad

 

Finally, let's have a look at what the page above looks like on an iPhone 4 (320px)

 

Image 91

Oh dear, it looks like we have a few issues to fix, which are much the same as those we found at 480px. Note that the phone has adjusted the scale of the page to compensate for the large video, and the result isn't very nice! I know that, due to bandwidth limitations (at least in NZ), users are less likely to be viewing a video on their mobile. However, as mobiles become more powerful and networks improve, this will become more popular over time, so I'd like to get the video working at 320px as well if possible. What can we do? Let's look at each specific page I mentioned earlier and fix the issues. 

Making the site responsive 

If I were using a responsive CSS framework such as Twitter Bootstrap, Skeleton, or Foundation, some of the boilerplate CSS required to make the site scale properly on different devices would already be included. However, since I started with the vanilla MVC 4 internet application template, there is only a small amount of code included for responsive design by default. Whilst this could be seen as a bad thing, I see it as a positive, as it means that I will have to learn more about using CSS media queries properly to achieve a website that will be truly responsive. It also means I'll be in control and have a better understanding if things don't work quite as intended.  

Right, enough talk, let's get down to the code and start addressing the issues mentioned earlier!   

Fixing the live video page  

First things first - adding a viewport meta tag 

As mentioned in this incredibly helpful article, most mobile browsers lay HTML pages out on a wide viewport and then scale them down to fit the screen. You can use the viewport meta tag to override this. The tag below tells the browser to use the device width as the viewport width and set the initial scale to 1.   

HTML
<meta name="viewport" content="width=device-width, initial-scale=1.0" /> 

I included the above code in the <head> section of my master view template, to prevent mobile devices trying to scale the page in the event that it exceeded the available viewport width.  

Making the video scale 

The code for embedding the YouTube video was initially as follows:

HTML
<div id="video">
    <iframe width="630" height="473" src="//youtube.com/embed/@Model.HangoutId" frameborder="0" allowfullscreen></iframe>
</div>  

Note the fixed width of the iframe, which is what causes the video to break the layout once the screen width drops down below 630px. To fix this, we'll update the code so that the video automatically scales to fill all of the available screen width. That will not only make it work properly on mobile, but also give desktop and tablet users a larger video area to watch.  One of THE most helpful sites I found to help get up-to-speed quickly on responsive design was http://webdesignerwall.com/, and I found this tutorial particularly useful when trying to implement proper scaling for video and images.  

First, we add the following CSS classes to our stylesheet:

CSS
#video {
    position: relative;
    padding-bottom: 56.25%;
    padding-top: 30px;
    height: 0;
    overflow: hidden;
}
#video iframe,
#video object,
#video embed {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
} 

We also remove the width and height declarations on the iframe, so it becomes: 

HTML
<div id="video">
    <iframe src="//youtube.com/embed/@Model.HangoutId" frameborder="0" allowfullscreen></iframe>
</div>  

 Now the video will scale to fit the screen automatically. Onto our next issue:

Adding a margin to the outside of the page

The width of our main content area when viewing the page on a desktop is 960px. As long as the user's browser is wider than this, the page will have a margin on the left and right-hand sides to stop the content pushing up against the edges of the browser. However, once the browser window is smaller than 960px wide, the content hits the edges of the screen, as shown in the mobile screenshots above. To fix this, we'll add a media query to our stylesheet, so that if the browser width is less than 960px, a small margin will be added to the outside of the main content blocks, as follows: 

CSS
@media only screen and (max-width: 960px) {
    .content-wrapper {
        margin: 0 1%;
    }    
} 

Note that rather than setting a fixed-width margin in pixels, we're following the guidance from this article and making our margin fluid so it will scale with the browser window. Onto our next issue:

The h1 heading is not scaling with the browser window

As the browser gets smaller, the h1 element containing the conference title looks proportionately larger. To fix this, we'll add some more entries after our .content-wrapper class mentioned above, to reduce the size of the headings on screens less than 960px wide, as follows:

CSS
h1 {
        font-size: 1.6em;
    }
    h2 {
        font-size: 1.3em;
    }
    h3 {
        font-size: 1.1em;
    } 

With that in place, the headings should still stand out from the body text, but not take up too much space on smaller devices. I also added an additional media query to reduce the font size further for mobile devices with a width of less than 480px, as follows:

CSS
 @media only screen and (max-width: 479px) {
    h1 {
        font-size: 1.3em;
    }
    h2 {
        font-size: 1.2em;
    }
} 
The navigation bar

Whilst the header navigation links don't look too bad on the iPad 2, on smaller devices they float up against the right-hand side of the screen and drop below the banner logo. To fix this, I initially looked at creating a dropdown or expandable menu for mobile devices, such as the one discussed in this article, or the one that comes out of the box with Twitter Bootstrap. However, at this stage there are only a few items in the navigation, and they all fit on one line even on mobile devices at 320px width. Hence, all I did was make them center-aligned and remove the floats, along with the login links and logo. Note: if I need to add any more items to the navigation in future, I'll revisit this and go with one of the pulldown menu options mentioned above.

For clarity, the html for the header and nav is included below:  

HTML
 <header class="site-header">
        <div class="content-wrapper clear-fix" style="position: relative;">
            <section id="login">
                @Html.Partial("_LoginPartial")
            </section>
            <div class="float-left">
                <a href="/" title="Home">
                    <img src="/images/logo-full.png" alt="YouConf logo" /></a>
            </div>
            <div class="float-right">
                <nav>
                    <ul id="menu">
                        <li>@Html.ActionLink("Home", "Index", "Home")</li>
                        <li>@Html.ActionLink("Conferences", "All", "Conference")</li>
                        <li>@Html.ActionLink("Help", "Index", "Help")</li>
                        <li><a href="/search" title="Search"><img src="~/images/search-nav.png" class="search-button" alt="Search" /></a></li>
                    </ul>
                </nav>
            </div>
        </div>
    </header> 

And the code to achieve the center-aligned menu, login links, and logo was as follows:

CSS
@media only screen and (max-width: 767px) {
    /* header
    ----------------------------------------------------------*/
    header .float-left,
    header .float-right {
        float: none;
        text-align: center;
    }
    /* logo */
    header .site-title {
        margin: 10px;
        text-align: center;
    }
    /* login */
    #login {
        font-size: .85em;
        margin: 0 0 12px;
        text-align: center;
    }
    /* menu */
    ul#menu {
        margin: 0;
        padding: 0;
        text-align: center;
    }
    ul#menu li {
        margin: 0;
        padding: 0;
    }
}  

Note the additional media query, which will only apply these styles if the browser width is less than 768px.  

While I was doing that, I also added additional code so that any images would scale to fit the browser window, as discussed in this article. I didn't need to add a media query for this, and simply added it to the main section of my stylesheet as follows:

CSS
img {
    max-width: 100%;
    height: auto;
}
@media \0screen {
    img { 
        width: auto; /* for IE8 */
    }
}  

Finally, as the logo was taking proportionately more vertical space once the screen width dropped below about 480px, I added an additional rule so that once the screen width was less than 480px, it would take up a maximum of 80% of the screen width (with height scaling automatically) as follows:

CSS
@media only screen and (max-width: 479px) {
    .logo {
        max-width: 80% !important;
    }
} 

The results

As I made each change, I resized my desktop browser for quick feedback to see if it had worked or not. Once I had all of the changes in place, I went back to BrowserStack and checked again. As you can see, things were looking a lot better.

On the iPad 2: 

 Image 92

 

... and the iPhone 4:

 Image 93

 

Not too bad eh? The h1 text was still a bit too large on the iPhone 4; however, the video itself was still visible above the fold, which I felt was satisfactory. With that page working, let's look at our other two key pages. 

The Conference Detail page

Believe it or not, after making those adjustments to the header, nav, and headings for the live video page, the conference detail page actually looked pretty good on the iPad, iPhone, and desktop alike, so I didn't have to do anything to it! Fingers crossed I wasn't just dreaming... One more page to go... 

The Home page

From the outset I thought this would be the page that required the most work, as it has a number of large block elements, including the hero banner with two links on the right-hand side, and also the three info tiles in the center. With the header and nav already taken care of, I just had to take care of the remaining unique elements on this page. First, let's see what it looked like before I started on it. 

This time on an iPad 3 at 768px wide: 

 Image 94

 

Next - on a Samsung Galaxy Note 2 at 480px wide (note I scrolled down to show the hero banner and info tiles): 

 Image 95

 

and finally, on an iPhone 4S at 320 * 480:

 Image 96

This time it was mainly the iPhone (320px wide) device that had issues, namely:

 

  • The hero banner text is far too big 
  • The info tiles are floating side by side, and squash together with the text overflowing
  • Whilst the tile images scaled down and don't overflow, they were very hard to read and understand at such a small size 

Note that the info tile content actually overflowed vertically on the Galaxy Note 2 as well - I just couldn't capture a proper screenshot of it. And even though the tiles still looked OK on the Galaxy Note 2 at 480px, I think we can make them look even better (read on to find out how!).

 

Once again, let's go about fixing the issues one by one:

Hero banner text

The html for the hero banner is as follows, with two main columns:

HTML
<section class="content-wrapper hero-wrapper clear-fix">
        <div id="hero" class="hero box">
            <div class="grid">
                <div class="col-7-10">
                    <div class="teaser">Prepare. Present. Engage.</div>
                    <h1>YouConf - your conference online</h1>
                </div>
                <div class="col-3-10">
                    <ul>
                        <li><a href="@Url.Action("Index", "Help")" class = "button"><span class="arrow">Get started</span></a></li>
                        <li><a href="@Url.Action("All", "Conference")" class = "button"><span class="info">More info</span></a></li>
                    </ul>
                </div>
            </div>
        </div>
    </section> 

To improve the look of this on smaller devices, we'll first reduce the font size of the text if the browser width is less than 960px, as follows:

CSS
.hero {
        font-size: 0.8em;
        padding: 3%;
    } 

Note that I added the above CSS rule to the existing section of my stylesheet targeting browsers with a max-width of less than 960px. 

Before I took the above screenshots, I'd also already added some code to remove the float so the buttons move below the heading text once the browser width falls below 768px, and adjusted the padding to scale with the browser, by adding additional rules to the section of the stylesheet targeting browsers with max-width 767px:

CSS
.hero .grid div {
        width: auto;
        float: none;
    }
    .hero .button {
        padding: 3%;
    } 

Now the hero banner should adjust to fit the window nicely on mobile devices.

Fixing the info tiles

The tiles are arranged so each one takes up one-third of the available space and sits alongside the others. The html for the tiles is as follows:

HTML
 <section class="content-wrapper clear-fix">
        <div class="grid landing-panels">
            <div class="col-1-3">
                <div class="box clearfix">
                    <img src="/images/conferencescreenshot.png" alt="Setup and manage your conference screen" />
                    <div>
                        <h2>Prepare</h2>
                        <p>Setup your conference, and invite people to visit your conference page with a recognizable url. We make it easy to manage presentations, speaker, and conference details.</p>
                    </div>
                    <div style="clear: both;"></div>
                </div>
            </div>
            <div class="col-1-3">
                <div class="box clearfix">
                    <img src="/images/vsscreenshot.png" alt="Live embedded video feeds with Google Hangouts" />
                    <div>
                        <h2>Present</h2>
                        <p>Broadcast live using Google Hangouts. With YouConf you can embed your live video feeds for both conferences and individual presentations, providing an integrated viewing experience.</p>
                    </div>
                    <div style="clear: both;"></div>
                </div>
            </div>
            <div class="col-1-3">
                <div class="box">
                    <img src="/images/twitterscreenshot.png" alt="Integrated chat with Twitter alongside your video" />
                    <div>
                        <h2>Engage</h2>
                        <p>Engage and interact with your audience using Twitter live chat feeds alongside your video. Viewers can comment on your presentation in realtime!</p>
                    </div>
                    <div style="clear: both;"></div>
                </div>
            </div>
        </div>
    </section> 

What I thought would work best was the following:

 

  • If the screen width is > 768px, show the tiles side-by-side, as they currently appear on the desktop.
  • If the width is between 480px (mobile landscape) and 767px, stack the tiles vertically, with the image on the left-hand side of the tile, and the heading/text on the right-hand side 
  • If the width is < 480px, stack the tiles vertically, and hide the image (as the images are not essential content, and can be hard to make out on smaller devices)

To achieve this, I first added the following CSS targeting widths < 768px:

 

CSS
.landing-panels [class*='col-']{
        width: auto;
        float: none;
        padding: 0;
    }
    .landing-panels > div{
        padding: 0 !important;
    }
    .landing-panels .box {
        padding: 1em;
        height: auto;
    }
    .landing-panels [class*='col-'] img {
        float: left;
        width: 40%;
        margin-right: 1em;
    }
    .landing-panels .box div {
        overflow: auto;
        padding: 0;
    } 

 I then added selectors to the CSS section targeting browsers < 480px, as follows:

CSS
@media only screen and (max-width: 479px) {
    .landing-panels [class*='col-'] img {
        display: none;
    }
    .landing-panels .box div {
        margin-left: 0;
    }
} 

 With these rules in place, let's see the results!

On the iPad 3: 

 Image 97

 

On the Galaxy Note 2 (note the tiles with images alongside the text):

Image 98 

 

And finally, the iPhone 4S (note the tile images are now hidden):

Image 99 

 

Fantastic! I have to say I was pretty chuffed with the end result for the three key pages, particularly given that prior to this competition I had no idea of the possibilities for multi-device support that responsive web design offers. Now that you've seen how I went about testing and adjusting the key pages, I hope you have an appreciation for some of the items I had to deal with when designing for desktop, tablet, and mobile. Note that I could still continue to refine the site at various resolutions to make it look even better, but thought it would be better to focus on getting this article completed! 

What about the rest of the site? 

I had to make a few other adjustments to the site elsewhere, but I won't go into detail on those, as the steps I took were much the same as the ones above. Some notable mentions were:

 

  • Turning off the input box jQuery tooltips on mobile (as they really looked terrible, and the native tooltip was still available to the browser) 
  • Moving the right-hand sidebar on the conference/speaker/presentation admin pages so it sat below the rest of the form content.    

Remember, if you'd like to see the full source code, you can do so using the links at the top of this article.

 

Conclusion

We now have a site that's responsive and works across desktop, tablet, and mobile devices. What's more, once I got my head around the main concepts of responsive web design and CSS3 media queries, making the necessary changes wasn't too difficult. I had a number of key takeaways from this challenge. In spite of the proliferation of mobile devices out there, with varying screen sizes, resolutions, and underlying operating systems: 

 

  1. It is possible to have a site that performs well in most of them (just make sure you have plenty of time available for testing!) 
  2. There are great tools available for testing, including BrowserStack, browser add-ons, and plain old manual resizing of your browser window     
  3. There are already many well-documented responsive design techniques we can use to make our sites work across the broadest possible range of devices  

Finally - as I mentioned in the introduction, I didn't need to create a specific Azure mobile service for this challenge; however, I am looking at possibly using one in future to handle scheduled tasks such as sending conference reminders (once I add that feature). 

Future features 

Most of these are ongoing tasks, and rest assured, this isn't the end for YouConf! I still have a few things to work on...:

  1. Add more unit and integration tests, particularly for the controllers
  2. Do further testing to ensure the site is fully responsive across a broad range of desktop, tablet, and mobile devices 
  3. Add the ability to set up reminders, either using vCards or SMS, so a user can register interest in a session and be reminded when it's coming up. 
  4. Add scheduled-tasks functionality, possibly making use of Azure mobile services, to send reminders and perform other maintenance tasks 
  5. Perform further security hardening, such as removing parameter-tampering and possible XSS issues associated with MVC model-binding 
  6. Move the web sites to an Azure web role, and add a custom domain name and SSL
  7. Configure authentication and SSL on the Apache SOLR VM 
  8. Add the ability to upload speaker photos and store them in BLOB storage - no longer required due to ease of integration with avatars.io and gravatar 
  9. Add live feeds of the slides, possibly using SlideShare - this is no longer required 

  

Final Thoughts  

What a competition this has been! I initially set out to try and replicate the dotNetConf website, with a few additional features to make it available to the public, and didn't really think too far beyond that (as evidenced by my poor showing in challenge one). However, I ended up becoming well and truly absorbed in Windows Azure and the competition itself. I learned a tonne about the many aspects of Azure, and with each challenge I found myself appreciating the Azure platform more and more, due to its ability to support every development scenario that I could throw at it. I can't recommend it highly enough! 

I hope that this article will live on beyond this competition and provide guidance to anyone who is trying out Azure - both those trying it for the first time, and those looking for additional tips on how to perform certain tasks. It's taken enough blood, sweat, and tears on my part that it would be a shame if it didn't help a few folk out there. If you're reading this now and you've learned a thing or two that's helpful, please let me know, as it will put a smile on my face and inspire me to continue on with similar conquests like this :) 

Finally - A slightly scary thought is that in spite of spending the last two months learning everything I possibly can about it, I've still only just scratched the surface in terms of what Azure is capable of. There's so much more to learn, and I encourage you to go and learn for yourself. If you're a developer, be it .Net, PHP, RoR, C++, or anything else, and you haven't tried out Azure, get on there and give it a try! You won't regret it. 

Thanks for reading this far - I hope you've learned something useful!  


History

Part one: I've registered for the free Azure offering of up to 10 websites (http://www.windowsazure.com/en-us/pricing/free-trial/) and just realised how generous the offer really is. Up to 10 websites!!! Hopefully we won't need all of those, but you never know....

*I'll try and post daily project updates, but if there are no entries for a given day, I either didn't find time to work on the project, or was so caught-up in working on the project that I forgot to post an update.

Challenge 2   

Day one (April 29)

Was a bit worried about what I'd gotten myself into, thinking things like - "You mean you're trying to improve on something the Hanselman built? Are you crazy?!" I then thought about what a good opportunity to learn that this competition is, and calmed down a little... Spent the rest of the day reading up on Google Hangouts and how they work, SignalR, TFS and Git integration into Azure using VS2012.

Day two (April 30)

Time to build a website! I'm following the tutorial on how to build an MVC4 website but since I'm not going to be using SQL for this part of the competition I'll leave the membership stuff out for now (by commenting out the entire AccountController class so it doesn't try to initialize the membership database).
Managed to deploy the sample MVC4 website to Azure - http://youconf.azurewebsites.net/ - using the built-in Visual Studio publishing mechanism after I'd downloaded my publish profile from Azure.

Image 100

Note: I had a bit of an issue at one stage with Azure, as per the screenshot - I couldn't seem to access my website, even though I could see the site live at http://youconf.azurewebsites.net/... After about half an hour this seemed to go away, so I'm not quite sure what was happening...

Image 101

Now let's setup Git so I can publish directly to Azure when I checkin, using the steps in this article.

I've downloaded the Git explorer, set up a local youconf repository, and published my local changes to GitHub (https://github.com/phillee007/youconf/). Rather than pushing local changes directly to Azure, I'd rather they were first pushed to my GitHub repository so they're visible to anyone else who might want to have a poke around. To accomplish this I'm following the steps in the article under the heading "Deploy files from a repository web site like BitBucket, CodePlex, Dropbox, GitHub, or Mercurial"

*IMPORTANT* After publishing my changes to Git I realised that I'd included all of my publish profile files as well, which contained some sensitive Azure settings (not good). To remove them, I did a quick search and found the following article http://dalibornasevic.com/posts/2-permanently-remove-files-and-folders-from-a-git-repository. The commands I ran in the Git shell were as follows:

Image 102

I also added an entry to my .gitignore file so that I wouldn't accidentally checkin anything in the Publish profile folder again:

Image 103

After fixing those, I clicked on my website in the Azure portal, clicked the link under Integrate Source Control, and followed the steps, selecting my youconf repository in GitHub. About 20 seconds later - voila! - my site had been deployed to Azure from GitHub. Seriously, how easy was that?! Now we're all set up to write some code...

Next up: What will my site look like?

I want a site that looks good, so will do a bit of searching and see if I can find something that's nice, and free (Creative Commons licence or similar).

Day three (May 1st):

I've decided to build the next Facebook! Just kidding, but dreams are free right? Happy May day!

After looking at various free CSS templates on http://html5up.net/ yesterday, I got a bit stuck as I wasn't sure whether I should go for the one that looked really good, but which had some very complicated-looking CSS that I couldn't get my head around in a short time, OR whether I should start with something simple like a grid layout and build up from there. My dilemma is that I'm not very good at producing graphics, and they take me a long time to make, so by getting a pre-built template I can avoid all the hassle. In saying that, I like to know what's going on in the CSS in case I need to modify it. Maybe I'll have to use a bit of both? Moving on...

Membership (Not for now)

I'd like users to register in order to create conferences, however, since we're not going to be building any sort of membership mechanism for this part, I'm going to allow anyone to create conferences without registration. I did find a membership provider for table storage in the Azure code samples, however, it didn't include the Facebook/Twitter authentication which I know comes bundled with SimpleMembership. I think I'll wait till I get SQL before I go further with membership.

Conferences

I need to be able to record conference details (sessions, speakers etc.) and will try to get something working for that today. In Scott's article he mentioned using an XML file stored in Dropbox for the data, which seemed like a pretty good idea for a single conference. However, given that we're building a site to (hopefully) host lots of online conferences, and because this is a good chance for me to learn about Azure, I'm going to look into using one of the Azure storage options (Queue, Table, SQL). Since I'm trying to avoid using SQL till the third part of this challenge, I'll go with table storage, as it's fast, easy to set up, and gives me a chance to learn a new tool.

I started reading up on Partition/Row keys, and found this article very helpful - http://cloud.dzone.com/articles/partitionkey-and-rowkey

Azure Table Storage, so many options....

I got set up and created a table as per http://www.windowsazure.com/en-us/develop/net/how-to-guides/table-services/, however, I soon realized that I didn't understand table storage quite as well as I'd thought! Basically I planned to store each conference, including speakers and presentations, as a single table entity, so that I could store/retrieve each conference in one go. I started out writing code for the Conference class as below:

C#
public class Conference
{
    public Conference()
    {
        Presentations = new List<Presentation>();
        Speakers = new List<Speaker>();
    }
    public string HashTag { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public IList<Presentation> Presentations { get; set; }
    public IList<Speaker> Speakers { get; set; }
    public string Abstract { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public string TimeZoneId { get; set; }
}

When I tried to save one of these I ran into a bit of a roadblock though... Unfortunately you can only store primitive properties on a table entity, not child collections or complex child objects. DOH! So, how could I work around this? I found a number of options:

  • Store each object type as a separate entity, E.g. Conference, Speaker, Presentation all get their own rows in the table. I wasn't too keen on this as it seemed like more work than it was worth. Plus it seemed far more efficient to retrieve the whole conference in one hit rather than having to retrieve each entity separately then combine them in the UI.
  • FatEntities - https://code.google.com/p/lokad-cloud/wiki/FatEntities - this looked very thorough, although it didn't appear to be up to date with the latest Azure Table storage API
  • Lucifure - http://lucifurestash.codeplex.com/ - this also looked like it wasn't up to date with the latest Azure Table storage API
  • Serialize the Conference as a JSON string. In the end I chose this option as it was easy to implement and allowed me to store the whole conference in a single table row. I used JSON.Net as it's already included in the default MVC4 project, and allows me to serialize/deserialize in one line.

Some sample code from my YouConfDataContext.cs class for doing Inserts/Updates is below:

C#
public void UpsertConference(Conference conference)
{
    //Wrap the conference in our custom AzureTableEntity
    var table = GetTable("Conferences");
    var entity = new AzureTableEntity()
    {
        PartitionKey = "Conferences",
        RowKey = conference.HashTag,
        Entity = JsonConvert.SerializeObject(conference)
    };
    TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
    // Insert or update the conference
    table.Execute(upsertOperation);
}  

where AzureTableEntity is just a wrapper class for a Table Entity:

C#
public class AzureTableEntity : TableEntity
{
    public string Entity { get; set; }
} 
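
For completeness, the GetTable helper that UpsertConference calls isn't shown at this point in the article; a minimal sketch, assuming it resolves the storage account from the same StorageConnectionString setting used elsewhere (much like the Elmah logger near the end of this article) and creates the table on first use:

C#
private CloudTable GetTable(string tableName)
{
    // Resolve the storage account from the connection string in config
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting("StorageConnectionString"));
    var tableClient = storageAccount.CreateCloudTableClient();
    var table = tableClient.GetTableReference(tableName);
    // Create the table on first use so inserts against a fresh storage account don't fail
    table.CreateIfNotExists();
    return table;
}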

Progress at end of day: I've managed to insert some conferences, but am still a bit stuck on getting the UI looking nice, as I'm flip-flopping between using a custom CSS template and just building from scratch with a grid layout like http://960.gs/

Day 4 (May 2nd)

Currently working on the input screens for conferences and speakers. I really love the MVC framework, both how easy it is to use for common scenarios such as validation, and also how easy it is to extend through ModelBinders, DisplayTemplates etc. Some cool things I've discovered:

Display Templates/Editor Templates

Each conference has a TimeZoneId, such as (UTC-04:00) Atlantic Time (Canada). This is stored as a string property on the Conference, e.g.

C#
public class Conference
{ 
public string TimeZoneId { get; set; }
...
} 

The advantage of just storing this as a string rather than a TimeZoneInfo is that I don't need to write a custom modelbinder or custom validator as it's just a plain old string, so the framework can take care of binding and validating it when it's a mandatory field etc.

When adding/editing a conference I want to be able to display a dropdown list of all timezones, and have the selection automatically bound to the conference. To achieve this, I used code from http://romikoderbynew.com/2012/03/12/working-with-time-zones-in-asp-net-mvc/ and omitted the custom ModelBinder as I didn't need it. I created a new editor template named TimeZone in /Views/Shared/EditorTemplates, and a matching display template in /Views/Shared/DisplayTemplates, as follows:

C#
@* Thanks to http://romikoderbynew.com/2012/03/12/working-with-time-zones-in-asp-net-mvc/*@
@model string
@{
    var timeZoneList = TimeZoneInfo
        .GetSystemTimeZones()
        .Select(t => new SelectListItem
        {
            Text = t.DisplayName,
            Value = t.Id,
            Selected = Model != null && t.Id == Model
        });
}
@Html.DropDownListFor(model => model, timeZoneList)
@Html.ValidationMessageFor(model => model) 

This will handle displaying a dropdown with all timezones; however, I needed to tell the framework that when rendering the TimeZoneId property on a Conference it should use this template... and it turned out to be really easy! I just had to add a UIHint to the TimeZoneId property and it automagically wired it up. E.g.

C#
[Required]
[UIHint("TimeZone"), Display(Name = "Time Zone")]
public string TimeZoneId { get; set; } 

And that's it! Now when I call .DisplayFor or .EditorFor in my views for the TimeZoneId property it automatically renders this template. In the view it looks like this:

C#
<div class="editor-label">
  @Html.LabelFor(model => model.TimeZoneId) 
</div>
<div class="editor-field">
  @Html.EditorFor(model => model.TimeZoneId)
  @Html.ValidationMessageFor(model => model.TimeZoneId)
</div> 

and on-screen:

Image 104

BOOM!!!

Validation

Well that turned out to be as easy as adding the right attributes to the properties I wanted to validate. You'll see above I added the [Required] attribute to the TimeZoneId property, which ensures a user has to enter it. I also added the [Display] attribute with a more user-friendly property name.

Azure Table storage issues when updating a conference

When storing conferences, I used "Conferences" as the PartitionKey, and the conference HashTag as the RowKey, as each conference should have a unique HashTag. My UpsertConference code is as follows:

C#
public void UpsertConference(Conference conference)
{
    //Wrap the conference in our custom AzureTableEntity
    var table = GetTable("Conferences");
    var entity = new AzureTableEntity()
    {
        PartitionKey = "Conferences",
        RowKey = conference.HashTag,
        Entity = JsonConvert.SerializeObject(conference)
    };
    TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
    // Insert or update the conference
    table.Execute(upsertOperation);
} 

Unfortunately this means that if I were to update a conference's HashTag, a new record would be inserted, as the InsertOrReplace code thinks it's a completely new entry. To work around this, I had to find the old conference record first using the old HashTag, delete it, then insert the conference again with the new HashTag. It feels a bit clunky, especially since it's not wrapped in a transaction or batch, but as I mention in my comments, this is something I'll be refactoring to use SQL Server in part 3 of the competition, so I'm not stressing too much over it at the moment. The updated code is as follows:

C#
public void DeleteConference(string hashTag)
{
    var table = GetTable("Conferences");
    TableQuery<AzureTableEntity> query = new TableQuery<AzureTableEntity>();
    TableOperation retrieveOperation = 
      TableOperation.Retrieve<AzureTableEntity>("Conferences", hashTag);
    TableResult retrievedResult = table.Execute(retrieveOperation);
    if (retrievedResult.Result != null)
    {
        TableOperation deleteOperation = TableOperation.Delete((AzureTableEntity)retrievedResult.Result);
        // Execute the operation.
        table.Execute(deleteOperation);
    }
}
/// <summary>
/// Inserts or updates a conference
/// </summary>
/// <param name="hashTag">The hashTag of the existing conference
/// (for updates) or the hashTag of the new conference (for inserts)</param>
/// <param name="conference">The conference itself</param>
public void UpsertConference(string hashTag, Conference conference)
{
    //Wrap the conference in our custom AzureTableEntity
    var table = GetTable("Conferences");
    //We're using the HashTag as the RowKey, so if it gets changed
    // we have to remove the existing record and insert a new one
    //Yes I know that if the code fails after the deletion we could be left
    // with no conference.... Maybe look at doing this in a batch operation instead?
    //Once I move this over to SQL for part 3 we can wrap it in a transaction
    if (hashTag != conference.HashTag)
    {
        DeleteConference(hashTag);
    }
    var entity = new AzureTableEntity()
    {
        PartitionKey = "Conferences",
        RowKey = conference.HashTag,
        Entity = JsonConvert.SerializeObject(conference)
    };
    TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
    // Insert or update the conference
    table.Execute(upsertOperation);
}
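
As the comments in UpsertConference admit, the delete-then-insert isn't atomic. Since the old and new rows both live in the "Conferences" partition, the batch operation the comments allude to would actually work here; a minimal sketch of that refactoring (not code from the actual project):

C#
// Delete the row keyed by the old HashTag and insert the renamed one atomically.
// Entity group transactions only span a single partition, which both rows
// satisfy here since everything uses the "Conferences" partition key.
var batch = new TableBatchOperation();
batch.Delete(existingEntity); // the entity retrieved via the old HashTag
batch.Insert(newEntity);      // the entity keyed by the new HashTag
table.ExecuteBatch(batch);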

CRUD

I've found it fairly easy to perform simple CRUD operations using Table storage thus far, with the minor issue relating to updating an entity's RowKey. While developing locally I used development storage by setting my web.config storage connection string as follows: <add key="StorageConnectionString" value="UseDevelopmentStorage=true" />. In order to get this working in the cloud I just had to set up a storage account and update my Azure cloud settings as per http://www.windowsazure.com/en-us/develop/net/how-to-guides/table-services/

I created a storage account named youconf, then copied the primary access key. I then went to the websites section, selected my youconf site, clicked Configure, then added my StorageConnectionString to the app settings section with the following value:

DefaultEndpointsProtocol=https;AccountName=youconf;AccountKey=[Mylongaccountkey] 

Date/time and TimeZone fun

I've had to do a bit more work than expected with the date/times, given that when a conference is created, the creator can select a start/end date/time, and also a timezone. The same goes for a Presentation, which has a start date/time, duration, and timezone.

Initially I was going to store them in local format, along with the timezone Id (as they appear to be stored in dotNetConf, from reading Scott's blog post). However, after doing some reading on the subject of storing date/time information, I gathered that it's best to store datetimes in UTC, then convert them into either the user's timezone or your chosen timezone (such as the event timezone) as close to the UI as possible. This allows for easier comparisons in server-side code, and also makes it easy to order conferences and presentations by date/time, e.g.

@foreach (var presentation in Model.Presentations.OrderBy(x => x.StartTime)) 

http://stackoverflow.com/questions/2532729/daylight-saving-time-and-timezone-best-practices seems to be an article that I keep coming back to whenever I do anything involving datetimes and different timezones, and I read it once again to re-familiarise myself with how to go about things.

So, a user enters the datetime in their chosen timezone, selects the timezone from a dropdown list, and hits Submit. In order to store the date in UTC, I have code such as this in the controller - though it could also live in a custom ModelBinder, which I haven't tried yet (see the sketch below the snippet):

C#
var conferenceTimeZone = TimeZoneInfo.FindSystemTimeZoneById(conference.TimeZoneId);
conference.StartDate = TimeZoneInfo.ConvertTimeToUtc(conference.StartDate, conferenceTimeZone);
conference.EndDate = TimeZoneInfo.ConvertTimeToUtc(conference.EndDate, conferenceTimeZone); 
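
For reference, here's a minimal sketch of the custom ModelBinder alternative mentioned above - it's not in the actual codebase, but shows where the conversion could live so it isn't repeated in every controller action:

C#
public class ConferenceModelBinder : DefaultModelBinder
{
    protected override void OnModelUpdated(ControllerContext controllerContext,
        ModelBindingContext bindingContext)
    {
        base.OnModelUpdated(controllerContext, bindingContext);
        var conference = bindingContext.Model as Conference;
        if (conference == null)
        {
            return;
        }
        // Convert the user-entered local times to UTC in one central place
        var timeZone = TimeZoneInfo.FindSystemTimeZoneById(conference.TimeZoneId);
        conference.StartDate = TimeZoneInfo.ConvertTimeToUtc(conference.StartDate, timeZone);
        conference.EndDate = TimeZoneInfo.ConvertTimeToUtc(conference.EndDate, timeZone);
    }
}

It would then be registered in Application_Start with ModelBinders.Binders.Add(typeof(Conference), new ConferenceModelBinder());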

Then, to render it back out again in the local timezone, I created a custom editor template called LocalDateTime.cshtml. Note that I also add a date class onto the input field, so that I can identify date fields with jQuery when wiring up a date time picker (more on that later).

C#
@model DateTime
@{
    var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById((string)ViewBag.TimeZoneId);
    var localDateTime = Model.UtcToLocal(localTimeZone);
}
@Html.TextBox("", localDateTime.ToString(), 
  new { @class = "date", 
  @Value = localDateTime.ToString("yyyy-MM-dd HH:mm") }) 
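
The UtcToLocal call above is an extension method that isn't shown in the article; a minimal sketch, assuming it simply wraps TimeZoneInfo.ConvertTimeFromUtc (normalising the Kind first, since dates deserialized from table storage can come back as Unspecified):

C#
public static class DateTimeExtensions
{
    public static DateTime UtcToLocal(this DateTime utcDateTime, TimeZoneInfo timeZone)
    {
        // ConvertTimeFromUtc throws if the Kind is Local, so pin it to Utc first
        var utc = DateTime.SpecifyKind(utcDateTime, DateTimeKind.Utc);
        return TimeZoneInfo.ConvertTimeFromUtc(utc, timeZone);
    }
}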

To use this template, I can either decorate the relevant properties on my Conference/Presentation classes with a UIHint, or specify the editor template directly from another view. For example, here's some of the code from /Views/Conference/Edit.cshtml:

C#
@Html.LabelFor(model => model.StartDate) 
@Html.EditorFor(model => model.StartDate, "LocalDateTime", 
  new { TimeZoneId = Model.TimeZoneId }) @Html.ValidationMessageFor(model => model.StartDate)

Note the second parameter, which specifies the editor template that I want to use. I also pass in the TimeZoneId of the conference as a parameter to the LocalDateTime editor template.

The UI - How to display date/times?

I was investigating how best to render date/times, and was initially looking at using dual input boxes, with one holding the date, and one holding the time, as per yet another of Scott's articles. However, after getting partway through implementing that, I discovered an amazing jQuery datetimepicker plugin at http://trentrichardson.com/examples/timepicker/ which extends the existing jQuery datepicker.

By using that I was able to get away with having a single input box containing both the date AND time, along with a nice picker to help users. It really is cool, and only takes a single line of code to add:

JavaScript
$(function () {
    $(".date").datetimepicker({ dateFormat: 'yy-mm-dd' });
}); 

... and the resulting UI looks pretty good to me!

Image 105

Day 5 (May 3rd)

Not too much to report today as I've been hacking away at the stylesheet to try and make the site look nice. I had a go with the Twitter Bootstrap CSS template, but eventually decided not to use it as it might not work well with jQuery UI (and validation etc). Still struggling away at the end of the day....

Day 6 (May 4th)

More CSS and UI tidy-up. Things are starting to look better now - have a look at the live site to see it coming together.

JSON Serialization

When adding the functionality to delete a speaker, I ran into an issue where I would delete the speaker, but they would not be removed from the actual presentation. Here's a snippet of the code from the Presentation class:

C#
...
[Display(Name="Speaker/s")]
        public IList<Speaker> Speakers { get; set; } 
... 

Now in the Delete method of my speaker controller, I have code like this to delete the speaker:

C#
...
//Remove the speaker
conference.Speakers.Remove(currentSpeaker);
//Also remove them from any presentations...
foreach (var presentation in conference.Presentations)
{
    var speaker = presentation.Speakers.FirstOrDefault(x => x.Id == currentSpeaker.Id);
    presentation.Speakers.Remove(speaker);
}
YouConfDataContext.UpsertConference(conferenceHashTag, conference);
return RedirectToAction("Details", "Conference", new { hashTag = conferenceHashTag }); 
... 

Note the line which says presentation.Speakers.Remove(speaker)... with my default setup this wasn't actually deleting the speaker, because by default JSON.Net serializes each object occurrence by value rather than preserving references (remember that we're serializing the entire conference when we save it to table storage, then deserializing it on the way back out). This means that after a round-trip, the speaker object I retrieved on the line beforehand is not the same instance as the one in the presentation.Speakers collection.

Initially I was going to override Equals on the Speaker class to have it compare by Id, but then I did some googling and found that, sure enough, others had already run into this problem. And it turns out JSON.Net (written by the coding fiend aka James Newton-King, who also happens to be in Wellington, NZ) already handles this situation and allows you to preserve object references! See http://johnnycode.com/2012/04/10/serializing-circular-references-with-json-net-and-entity-framework/ for more. Basically I just had to specify the right option when serializing the conference before saving, in my UpsertConference method, as follows:

C#
var entity = new AzureTableEntity()
{
    PartitionKey = "Conferences",
    RowKey = conference.HashTag,
    //When serializing we want to make sure that object references are preserved
    Entity = JsonConvert.SerializeObject(conference, 
      new JsonSerializerSettings { 
      PreserveReferencesHandling = PreserveReferencesHandling.Objects })
};
TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);

Setting the width of textareas in MVC

Remember earlier how I said I could make MVC automatically render a textarea for a property by simply decorating the property with the [DataType(DataType.MultilineText)] attribute? Well, what if I want to specify the height/width of the textarea? CSS to the rescue!

The framework automatically adds the multi-line class to any textareas that it renders using the default editortemplate, which means I was able to add a style for this class and achieve the desired result. E.g.

.multi-line { height:15em; width:40em; }     

Day 7 - 10 (May 5 - 8)

Spent most of my time doing CSS and UI enhancements, and making the homepage look pretty. I tend to struggle with CSS and making things look beautiful at the best of times, particularly when I start to run into cross-browser issues. However, I think that I've come up with something that looks quite nice now - check it out at http://youconf.azurewebsites.net/

A few things I found helpful along the way...

jQuery buttons - make your buttons and links look pretty

jQuery UI comes with a button widget, which "Enhances standard form elements like buttons, inputs and anchors to themeable buttons with appropriate hover and active styles." It makes them look quite nice, and since I already had jQuery UI included in the project (it comes bundled with the MVC4 Internet web application template) I thought I'd use it. One line of javascript was all that was needed:

$("#main-content input[type=submit], #main-content a:not(.no-button), #main-content button")
.button(); 

Note that I've scoped it to only include items within the main-content element to improve selector performance. The before > after is below:

Image 106

Nice Icons

On the subject of buttons, it's often nice to have icons for various buttons, not to mention in either your header or logo. I found a couple of sites that provide free icons released under the Creative Commons attribution licence, and so I used a few of them (and included the relevant link-back in my site footer). The sites were:

Find Icons - http://findicons.com/
Icon Archive - http://www.iconarchive.com/

I also found a very cool logo generator at http://cooltext.com/, which I used to generate the text in the YouConf logo.

Social links - Twitter, Google Plus, Facebook

It's fairly easy to include links for the three big boys above, since they provide code that you can embed either via iframe or JavaScript. Unfortunately they seem to take quite a long time to load, which can result in flickering as one icon populates after the other. To hide this, I hacked away and ended up hiding the section containing the buttons until 5 seconds after the DOM had loaded. E.g.

JavaScript
setTimeout(function () {
    $("#social").show();
}, 5000); 

I'm sure there's a better way to do this, but I'm not sure I have time to find out just yet! Thanks to http://www.noamdesign.com/3-ways-to-integrate-social-media/ for the idea anyhow...

Azure + Source Control = a match made in heaven

Isn't it nice when things just work? All this time that I've been stressing away fixing bugs and getting my site looking nice, I haven't had a single issue with Git publishing to Azure. I simply check my changes in to my local repository as I complete features, and sync to GitHub a few times a day. Each time I sync, my changes are automatically pushed to my Azure website, usually within a few minutes. I've been able to focus on building my website and not fret over versioning or deployment issues. Phew!

Day 11 (May 9th)

Quite a bit to report on today....

Setting up a Drupal blog website

Since about day 3 I'd been thinking of moving the posts on my daily progress into a separate blog, as there's enough information to make some of them worth an entire entry. I was also aware that one of the competition points for this section was around what we do with our other 9 websites. So I figured I'd see if it really was as easy to setup a blog as they made out in http://www.windowsazure.com/en-us/develop/php/tutorials/website-from-gallery/.

The posts I found all set up WordPress blogs, so I thought why not try a different one to make things a bit more interesting. In the end I went with Drupal, as an old workmate of mine used to rave about it, and used the WordPress article at http://www.windowsazure.com/en-us/develop/php/tutorials/website-from-gallery/ as a guide. Here's what I did:

  1. Selected the web sites node in the Azure management screen and clicked New
  2. Selected Compute > Web site > From gallery
  3. Selected Acquia Drupal 7 (Note that later I realized there were specific blog applications, so if doing this again I would use one of those...)
    Image 107
  4. Chose the url for my blog - youconfblog - and chose to create a new mysql database.
    Image 108
  5. Followed the rest of the prompts and provided my email address etc, and let it complete. I was then able to browse to my vanilla Drupal installation at http://youconfblog.azurewebsites.net/
  6. I then wanted to install a blog theme, and I found a nice looking one at http://drupal.org/project/responsive_blog
  7. To install it on my site, I found the .tar.gz url for the installation package - http://ftp.drupal.org/files/projects/responsive_blog-7.x-1.6.tar.gz - and in the admin section of my Drupal site, selected Appearance from the top menu, then Install new theme.
  8. I provided the url to the responsive blog package, and then let Drupal do its thing and complete the installation.
  9. I then configured the theme by going to the Settings page for the theme, and added my own YouConfBlog logo, and disabled the slideshow on the homepage.
    Image 109

And now I have a nice themed Drupal site! http://youconfblog.azurewebsites.net

I then added a couple of blog entries for day one & two, by copying & pasting the html code from my CodeProject article into the blog entry.

What, wait a minute, aren't we supposed to avoid duplication?

After getting my second day's progress blog post into my Drupal site, I realized that if I were to copy & paste all the articles:

  1. It could take a while
  2. I'd have to do the same in future for all my other daily progress updates
  3. If I changed one, I'd have to update the other
  4. I wouldn't be keeping in line with the CodeProject terms, which discourage you from posting content from CodeProject elsewhere
  5. I might make it harder for the judges to assess my article, as now they'd have to look in two places

In light of the above, I left my two initial blog posts intact, and decided that for now I'll only post updates in my CodeProject article, since the goal of setting up the blog was to see if it really was as easy as others had made out (whilst learning along the way), which indeed it was. I'll leave the blog in place though, as it deserves to be part of my entry for challenge two as one of the other 9 websites.

Error Logging

Usually one of the first things I do when creating a project is setting up Error Logging. Sometimes it's to a text file, sometimes to xml, sometimes to a database, depending on the application requirements. My favourite logging framework for .Net web apps is Elmah, as it takes care of catching unhandled exceptions and logging them to a local directory right out-of-the-box. It has an extension for MVC too, which is awesome.

Elmah allows you to specify the route url you're like to use for viewing errors in your web.config. It also allows you to restrict access to the log viewer page if needed, using an authorization filter so you can specify which user roles should have access. At this stage I haven't implemented membership, and so can't restrict access via roles. Thus I'm going to leave remote access to the logs off (which it is by default). For part 3 when I implement membership I'll update this. Note that for any production application I'd never leave the error log page open to the public, as it would give away far too much to anyone who happens to come snooping.

Right - to setup Elmah logging I did the following:

  1. Opened the nuget package manager for my YouConf project in Visual Studio, and searched for Elmah as below
    Image 110
  2. Selected the Elmah.Mvc package and installed it. This added an <elmah/> section to my web.config, and also some appsettings for configuring Elmah.
  3. Opened up my web.config and (using the ol' security-through-obscurity mantra) updated my elmah.mvc.route appsetting value to be a long complicated URL - superdupersecretlogdirectorythatwillbeprotectedonceweimplementregistrationwithSqlandsimplemembership
  4. Fired up the local debugger and navigated to http://localhost:60539/superdupersecretlog directorythatwillbeprotectedonceweimplementregistrationwithSqlandsimplemembership
  5. Voila - we have an error viewer!
    Image 111
  6. Now if I trigger an error by going to a dodgy url e.g. http://localhost:60539/<script>alert(0);</script> I should see an error appear in my list. Image 112
  7. And voila - there it is!
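
For reference, the appsetting from step 3 ends up looking like this in web.config (the elmah.mvc.route key is added by the Elmah.Mvc package; the value is my obfuscated route):

XML
<add key="elmah.mvc.route" value="superdupersecretlogdirectorythatwillbeprotectedonceweimplementregistrationwithSqlandsimplemembership" />
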
Logging to persistent storage

By default Elmah logs exceptions in-memory, which is great when you're developing, but not so good when you deploy to another environment and want to store your errors so you can analyze them later. So, how do we setup persistent storage?

In the past I've used a local xml file, which is really easy to configure in Elmah by adding the following line to the <elmah></elmah> section of your web.config:

XML
<elmah>
  <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data" />
</elmah> 

This is fine if you're working on a single server, or can log to a SAN or similar and then aggregate your log files for analysis. However, in our case we're deploying to Azure, which means there are no guarantees that our site will stay on a single server for its whole lifetime. Not to mention that the site will be cleared each time we redeploy, along with any local log files. So what can we do?

One option is to set up Local Storage in our Azure instance. This gives us access to persistent storage that will not be affected by things like web role recycles or redeployments. To use this, we would need to do the following (a sketch follows the list):

  1. Setup local storage as per the following article (http://msdn.microsoft.com/en-us/library/windowsazure/ee758708.aspx)
  2. Configure our error logger to use this directory instead of App_Data.
  3. Sit back and relax
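
Though I didn't take this route in the end, steps 1 and 2 would look something like the sketch below: declare a local storage resource in ServiceDefinition.csdef, then resolve its physical path at runtime (the resource name and size here are purely illustrative):

XML
<LocalResources>
  <LocalStorage name="ErrorLogs" sizeInMB="100" cleanOnRoleRecycle="false" />
</LocalResources>

C#
//Hypothetical: resolve the physical path of the local storage resource,
//then point the error logger at it instead of ~/App_Data
var errorLogPath = RoleEnvironment.GetLocalResource("ErrorLogs").RootPath;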

The above solution would work fine; however, since I'm already using Azure Table storage, I thought why not use it for storing errors as well? After some googling I came upon a package for using table storage with Elmah (see the link in the code comments below), but upon downloading the code I realized it wasn't up-to-date with the Azure Storage v2 SDK. It was easy to modify though, with the end result being the class below.

C#
//Usings required by the logger (Elmah, Azure storage, and configuration)
using System;
using System.Collections;
using Elmah;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

namespace YouConf.Infrastructure.Logging
{
    /// <summary>
    /// Based on http://www.wadewegner.com/2011/08/
    ///        using-elmah-in-windows-azure-with-table-storage/
    /// Updated for Azure Storage v2 SDK
    /// </summary>
    public class TableErrorLog : ErrorLog
    {
        private string connectionString;
        public const string TableName = "Errors";
        private CloudTableClient GetTableClient()
        {
            // Retrieve the storage account from the connection string.
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
               CloudConfigurationManager.GetSetting("StorageConnectionString"));
            // Create the table client.
            return storageAccount.CreateCloudTableClient();
        }
        private CloudTable GetTable(string tableName)
        {
            var tableClient = GetTableClient();
            return tableClient.GetTableReference(tableName);
        }
        public override ErrorLogEntry GetError(string id)
        {
            var table = GetTable(TableName);
            //Errors are stored with the table name as their partition key,
            //so use that (not an empty string) when retrieving by row key
            TableOperation retrieveOperation = TableOperation.Retrieve<ErrorEntity>(TableName, id);
            TableResult retrievedResult = table.Execute(retrieveOperation);
            if (retrievedResult.Result == null)
            {
                return null;
            }
            return new ErrorLogEntry(this, id, 
              ErrorXml.DecodeString(((ErrorEntity)retrievedResult.Result).SerializedError));
        }
        public override int GetErrors(int pageIndex, int pageSize, IList errorEntryList)
        {
            var count = 0;
            var table = GetTable(TableName);
            TableQuery<ErrorEntity> query = new TableQuery<ErrorEntity>()
            .Where(TableQuery.GenerateFilterCondition(
              "PartitionKey", QueryComparisons.Equal, TableName))
            .Take((pageIndex + 1) * pageSize);
            //NOTE: Ideally we'd use a continuation token
            // for paging, as currently we're retrieving all errors back  
            //then paging in-memory. Running out of time though
            // so have to leave it as-is for now (which is how it was originally)
            var errors = table.ExecuteQuery(query)
                .Skip(pageIndex * pageSize);
            foreach (var error in errors)
            {
                errorEntryList.Add(new ErrorLogEntry(this, error.RowKey,
                    ErrorXml.DecodeString(error.SerializedError)));
                count += 1;
            }
            return count;
        }
        public override string Log(Error error)
        {
            var entity = new ErrorEntity(error);
            var table = GetTable(TableName);
            TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
            table.Execute(upsertOperation);
            return entity.RowKey;
        }
        public TableErrorLog(IDictionary config)
        {
            Initialize();
        }
        public TableErrorLog(string connectionString)
        {
            this.connectionString = connectionString;
            Initialize();
        }
        void Initialize()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
               CloudConfigurationManager.GetSetting("StorageConnectionString"));
            var tableClient = storageAccount.CreateCloudTableClient();
            CloudTable table = tableClient.GetTableReference("Errors");
            table.CreateIfNotExists();
        }
    }
    public class ErrorEntity : TableEntity
    {
        public string SerializedError { get; set; }
        public ErrorEntity() { }
        public ErrorEntity(Error error)
            //Reverse-chronological row key so the newest errors sort first
            : base(TableErrorLog.TableName, 
              (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks).ToString("d19"))
        {
            this.SerializedError = ErrorXml.EncodeString(error);
        }
    }
} 

This will log all errors to the Errors table in Azure table storage, and also take care of reading them back out again.

I also had to update my web.config to use the new logger class as follows:

XML
<elmah>
    <errorLog type="YouConf.Infrastructure.Logging.TableErrorLog, YouConf" />
</elmah> 

Now if I generate an error I'll still see it on the Elmah log viewer page, but I can also see it in my table storage. I'm using dev storage locally, so I can fire up the wonderful Azure Storage Explorer and view my Error Log table as shown below:

Image 113

and also on-screen:

Image 114

Lovely!

Day 12 (May 10th)

Today I spent most of my time writing up the final article content for challenge two. I also implemented the SignalR functionality for keeping the live video url up-to-date, as described below.

SignalR

Keeping the live video feed url up to date with SignalR

SignalR is a great tool for providing realtime updates to clients, and the Jabbr chat site is a great example of how to harness this technology. For the live conference page I used SignalR in a similar way to dotNetConf, to ensure that if a conference presenter updated the Google Hangout id for the conference, viewers would be given the updated url without having to refresh their page.

To install SignalR, I installed the SignalR Nuget package as below:

Image 115

I then set about building a SignalR hub and client. My main issue came with how to push the notification to my SignalR hub from the Conference Controller. To give some context, here's my YouConfHub class:

C#
public class YouConfHub : Hub
{
    public Task UpdateConferenceVideoUrl(string conferenceHashTag, string url)
    {
        //Only update the clients for the specific conference
        //(the group name is the conference hashtag - see Join below)
        return Clients.Group(conferenceHashTag).updateConferenceVideoUrl(url);
    }
    public Task Join(string conferenceHashTag)
    {
        return Groups.Add(Context.ConnectionId, conferenceHashTag);
    }
}  

and my client JavaScript code:

JavaScript
<script src="http://ajax.aspnetcdn.com/ajax/signalr/jquery.signalr-1.0.1.min.js"></script>
    <script>$.signalR || document.write('<scr' + 
      'ipt src="~/scripts/jquery.signalr-1.0.1.min.js"></sc' + 'ript>');</script>
    <script src="~/signalr/hubs" type="text/javascript"></script>
    <script>
        $(function () {
            $.connection.hub.logging = true;
            var youConfHub = $.connection.youConfHub;
            youConfHub.client.updateConferenceVideoUrl = function (hangoutId) {
                $("#video iframe").attr("src", 
                   "http://youtube.com/embed/" + hangoutId + "?autoplay=1");
            };
            var joinGroup = function () {
                youConfHub.server.join("@Model.HashTag");
            }
            //Once connected, join the group for the current conference.
            $.connection.hub.start(function () {
                joinGroup();
            });
            $.connection.hub.disconnected(function () {
                setTimeout(function () {
                    $.connection.hub.start();
                }, 5000);
            });
        });
    </script>

See the UpdateConferenceVideoUrl method in the Hub? I wanted to call that from my ConferenceController when a user updated the conference hangout id/url, and thought I could do so by getting an instance of the Hub, then calling the method on it. E.g.

var context = GlobalHost.ConnectionManager.GetHubContext<YouConfHub>();
context.UpdateConferenceVideoUrl("[conference hashtag]", "[new hangout id]"); 

Sadly, it turns out that you can't actually call methods on the hub itself from outside the hub pipeline. You can, however, call methods on the Hub's clients and groups. So, in my conference controller's edit method, I was able to use the following code to notify all clients for the specific conference that they should update their url:

C#
if (existingConference.HangoutId != conference.HangoutId)
{
    //User has changed the conference hangout id, so notify any listeners/viewers
    // out there if they're watching (e.g. during the live conference streaming)
    var context = GlobalHost.ConnectionManager.GetHubContext<YouConfHub>();
    context.Clients.Group(conference.HashTag).updateConferenceVideoUrl(conference.HangoutId);
} 

Not too bad in the end eh?

Article progress

My article is now almost complete, with just a few touchups required. I'll probably spend the next day or two tidying up the site's css, javascript etc and making sure I haven't missed anything!

Day 13 (May 11th)

Carried on updating my article and tidying up my code to fix all those little things such as extraneous files that were no longer necessary. Also added an Easter Egg for the spot challenge!!

Day 14 (May 12th)

Continued to make a few text changes and minor tidyups, as I realised that in NZ we're actually 16 hours ahead of the timezone the contest is being judged in. Thus the challenge two deadline for me was actually about 4pm on May 13th NZT! 

Challenge Three Begins! 

Day 15 - 18 (May 13 - 16)

Since I'd been spending quite a lot of time on this during challenge two, I thought I'd have a bit of a break and keep away from the computer for a bit. I kept up with the comments and forum, but didn't do any development work. I did learn how to do tagging (or 'labelling' in TFS-speak) and branching in Git though....

Source control revisited - branching and tagging in Git 

I'm sure everyone is familiar with branching and merging in your chosen source control system. If you're not, make sure to read http://en.wikipedia.org/wiki/Branching_(revision_control) which describes what it is. In short, branching allows you to work on separate versions of your codebase, and merge the changes together as you see fit. It also allows you to keep untested development changes for an iteration separate from the main production codebase. This is particularly applicable in my current situation, where I'd like to start development for challenge 3, but still want my source from challenge two to be available for the judges and anyone else to look at. I also don't want to introduce breaking changes into the live site when I check changes in. So, what did I do?

  • Tagging - Firstly, I 'tagged' my current master branch with the tag v1.0, so that it was clear that all the code up until that point was part of the v1.0 release (i.e. for challenge 2). I'm using GitHub for source control, and learned how to do tagging using this article. I did this in the GitHub shell (which I got to by opening GitHub explorer > Tools > open a shell here), firstly creating the tag, then pushing it up to GitHub using the commands shown in the following screenshots (and reproduced after this list):
    Image 117
    Image 118
  • Branching - Second, I created a dev branch, which I will use for making my changes during challenge 3. When I'm happy with my changes, I'll merge them into the master branch so they become part of the main code and also get deployed to the live site. I used this video for help on how to do it. My commands for creating the branch and pushing it to GitHub are shown below:
    git branch dev
    git checkout dev
    git push -u origin dev
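
In case the screenshots above are hard to read, the tagging commands were along these lines (the exact tag message is indicative rather than verbatim):

    git tag -a v1.0 -m "Code as at the end of challenge two"
    git push origin v1.0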

Note: Remember how I setup source control to auto-deploy to the live YouConf site in challenge two? Well, I set that up to auto-deploy off the master branch. Thus if I check in changes to my dev branch, it won't affect the master branch, and thus won't change the live site, which is exactly what I need. Once again Azure has made what seems like a complex task very easy - fantastic!

So now I have a master and a dev branch. One thing I found interesting is that when you branch in TFS, it creates a whole separate copy of your source tree in the file system, whereas Git doesn't. I'm not sure quite how it does this, but it seems to be working fine so I won't question it! To switch to my dev branch and start developing, I opened the git shell, ran the command git checkout dev (checkout, not branch, since the branch already exists), then opened the solution in VS2012 and started working!

Unit Tests

One of my tasks from last time was to add some tests for my controllers, to ensure that they're doing what they're meant to be doing. Given that most of the logic is in the Conference, Speaker, and Presentation controllers, I'll start with them. I'm not too keen on testing to the n-th degree when it comes to what is essentially a CRUD-based system, however, there is some specific logic that I think should be tested so we can be confident we're not breaking anything going forward...

To get started, I installed a couple of packages that I find useful for testing, as shown in the screenshots below.

  • Fluent Assertions MVC - to make assertions for our MVC controller actions easier
    Image 119
  • Moq - for mocking/stubbing.
    Image 120


Note: I would have liked to use the Visual Studio Fakes Framework, and I did try; however, every time I've used it in the recent past I've ended up in situations where something can't be mocked for whatever reason, and I'm left without a clue how to fix it. For example, I followed the steps in http://msdn.microsoft.com/en-us/library/hh549174.aspx and added the fakes assembly for YouConf, but after building it couldn't generate a fake for the IYouConfDataContext. Given the tight timeframes for this competition, I really didn't have time to look any further, so I went with Moq, which I knew would work.

I tend to use either Rhino Mocks or Moq on my projects, because they both do what they're supposed to do and have lots of useful help and tutorials available. As an aside, now that Ayende won't be actively maintaining Rhino Mocks, I wonder who will?.....

I won't go into the details of the tests too much, except to say that I'll try and add tests for what I see as the important bits of my controllers as I go. You can always check the source code if you'd like to see more. An example test for the All() method on my ConferenceController is as follows:

C#
[TestClass]
public class ConferenceControllerTests
{
    [TestMethod]
    public void All_Should_ReturnOnlyPublicConferences()
    {
        //Setup a stub repository to return three public conferences and one private
        var stubRepository = new Mock<IYouConfDataContext>();
        stubRepository
            .Setup(x => x.GetAllConferences())
            .Returns(new List<Conference>(){
                new Conference(){ AvailableToPublic = true},
                new Conference(){ AvailableToPublic = true},
                new Conference(){ AvailableToPublic = true},
                new Conference(){ AvailableToPublic = false}
            });
        var conferenceController = new ConferenceController(stubRepository.Object);
        var result = conferenceController.All()
            .As<ViewResult>();
        result.Model
            .As<IEnumerable<Conference>>()
            .Should().HaveCount(3);           
    } 
}

Source control - Excluding NuGet packages

I downloaded the source for my project from GitHub at the end of challenge one (to make sure it worked), and found (to my surprise) that the download size was ~60mb! After checking where most of the files were, I found that it was due to the large number of NuGet packages in my solution. I thought to myself "wouldn't it be nice if these could be automagically downloaded during the build process", and it turns out that this problem has already been solved! See this article for details on how to instruct the build server to automatically download missing NuGet packages - in my case I had to do the following:

  • In Visual Studio, with the solution open and my YouConf web project selected, I opened the Project menu and selected "Enable NuGet Package Restore" as shown below:
    Image 121

This added a new .nuget folder to my solution, as below:

Image 122

Finally, I updated my .gitignore file to exclude the entire packages folder from source control (note that there was already a line for it in there by default, so I just uncommented it):

Image 123

And now we no longer have packages getting checked in to source control! I also ran the following command to delete the packages folder from my GitHub remote repository (but not my local machine, as I wanted to keep the existing packages in my dev environment), and then checked in my changes:

git rm -r --cached packages 

Day 19 (May 17)

Today I'll try and get the basic membership functionality that comes with the example MVC 4 Web application template working. In order to do that, I'll need SQL.... and guess what - that's the focus for challenge 3!

SimpleMembership

SimpleMembership comes baked into MVC 4, making it really easy to get started with. Why SimpleMembership you ask? Well, doing authentication/authorization from scratch is hard, and I didn't want to go creating functions for encrypting/salting passwords, doing OAuth etc., as you get all that for free with SimpleMembership - so why make it any harder than it has to be?

In challenge 2, I commented out the entire AccountController class as I didn't want it to be used, since I wasn't implementing Membership. I left the /views/account/*.cshtml files alone, however, as I knew I'd need them for this part. I've now uncommented the AccountController code again, and want to point out a few things. Let's open up the AccountController class and see what it does...

The first thing you might notice is the InitializeSimpleMembership attribute, which is applied at class level and therefore applies to ALL public action methods on the Account controller. If you go to the /filters/InitializeSimpleMembershipAttribute.cs class you'll find the code for this attribute. When you first access one of the AccountController public action methods (e.g. when someone tries to login to the site), the SimpleMembershipInitializer() constructor will fire. This will only run once, and will not be executed again unless your application is recycled.
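
For context, the relevant part of the default template looks something like this (a trimmed-down sketch, not the full template code):

C#
[Authorize]
[InitializeSimpleMembership]
public class AccountController : Controller
{
    //Login, Register, ExternalLogin etc. action methods live here
}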

There are plenty of articles online about Simple Membership so I won't go into the code in too much detail, except to summarise the main code block as shown below:

C#
using (var context = new UsersContext())
{
    if (!context.Database.Exists())
    { 
        // Create the SimpleMembership database without Entity Framework migration schema
        ((IObjectContextAdapter)context).ObjectContext.CreateDatabase();
    }
}
WebSecurity.InitializeDatabaseConnection("DefaultConnection", 
  "UserProfile", "UserId", "UserName", autoCreateTables: true);   

The UsersContext that you see refers to a class that inherits from DbContext. This means that it represents an Entity Framework (code-first) data context, which is used for accessing the database via Entity Framework. When this code runs, it checks whether the membership database exists, and creates it if it doesn't, using the connection string named DefaultConnection. In the web.config, there is (by default) a connection string with that name, as follows:

XML
<connectionStrings>
    <add name="DefaultConnection" 
      connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-YouConf-
        20130430121638;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\
        aspnet-YouConf-20130430121638.mdf" providerName="System.Data.SqlClient" />
</connectionStrings> 

So, when I fire up my app and access the /Account/Login page for the first time, this code will run and create a localdb database in my /App_Data/ folder named aspnet-YouConf-20130430121638.mdf, with tables as shown below (using Server Explorer in Visual Studio):

Image 124

It also initializes the built-in Asp.net web security to use the DefaultConnection and the UserProfile table. The beauty of this is that it takes away the pain of having to create the tables ourselves, and allows us to get up & running very rapidly. I'll make a few modifications to the AccountController and UserProfile classes as I go, depending on the data I need to store for users.

Note that I don't want this to be checked in to source control, so I'll add another entry to my .gitignore file to exclude the whole /App_Data folder for now, as shown below.
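
The extra .gitignore entry is just a single line:

    App_Data/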

External authentication providers

I really don't want poor users to have to remember yet another username/password for my site, so I'll allow them to login using external providers as listed below:

  • Microsoft
  • Google
  • Facebook (not right now, but later if time permits)
  • Twitter (not right now, but later if time permits)

Again, support for this comes built-in to MVC 4, and I highly recommend visiting the article at http://www.asp.net/mvc/overview/getting-started/using-oauth-providers-with-mvc as it contains information on how to support all of the above providers. I now have to go and register YouConf with each of the above providers so I can get an api key/secret. Again, the article above shows how to go about that task as well. 

Now that authentication is working locally, how about getting it working with a real SQL Azure database? 

SQL Azure - your database in the cloud

As I mentioned earlier, when I'm developing locally I can use localdb for my database. However, what about when I deploy this to Azure? I'll need to access a real database at that stage, so before I go any further I think I should set one up. With the free Azure trial, you get to create one database, which is all I'll need.

I started by going to the Azure Management Portal, selecting SQL Databases, and clicking Create a SQL database. I then proceeded as follows:

Image 125

On the next screen, I chose a username and password, and selected US West as that's where my website lives (it's recommended to keep your site and database in the same region for both performance and cost - see this article). The result is below:

Image 126

So now I had a new database named YouConf - easy as pie! I wanted to have the connection string available to the website, so first I selected the database, then selected View SQL database connection strings as shown in the bottom-right corner below:

Image 127

I copied the connection string value for ADO.Net, then went back to my YouConf website in the management portal, clicked Configure, scrolled down to Connection Strings, and added an entry for DefaultConnection with the value of the connection string that I'd copied earlier (and updated with my real password in the Your_Password_Here part), as shown below:

Image 128

Now I can access my database from the YouConf web site. I can also manage it and run queries directly from the Azure Management Portal by clicking the Manage button with the database selected. Note that I had to allow my IP address in the firewall rules in order to do this, which I did by simply accepting the prompt that came up when I clicked the Manage button.

I can also access it from my local SQL server management studio if needed! Again, this requires a firewall entry to allow my ip address to have access. More on this later... 

What about my Conferences - will I move them over from table storage?

I certainly plan to, and will do so over the coming days. This will take a bit of work though, so at this stage I'll leave them as-is since they're working fine using table storage. What I will do, however, is rename the UsersContext to YouConfDbContext as below:

C#
public class YouConfDbContext : DbContext
{
   public YouConfDbContext()
        : base("DefaultConnection")
    {
    }
    public DbSet<UserProfile> UserProfiles { get; set; }
} 

I updated references to it, moved it to my /Data folder, and also moved the UserProfile class to its own file. Remember my source is all available on GitHub in the dev branch so feel free to view it - https://github.com/phillee007/youconf/tree/dev

From this point forward, it's important to know a bit about Entity Framework migrations if I need to change my UserProfile class, so I'd recommend the following two articles if you aren't familiar with EF or code-first migrations:

The Repository Pattern - and Entity Framework 

If you've looked at my code, you'll have realised I've used a repository pattern to hide the implementation details of accessing Azure Table storage from my controllers. I felt this was a good approach to take, as there are a number of specific details (managing partition/row keys, the update strategy when the conference hashtag changes, etc.) which are best handled by a dedicated repository class - in this case the YouConfDataContext - and not other classes.

Taking a step back in time, (before I started using NHibernate, and then Entity Framework for db access), I was a big fan of using repositories or other data access strategies such as ActiveRecord or DAOs in order to hide the database implementation details from my UI and other code. However, with the advent of such powerful OR-mapping tools, I often find these days (particularly with small sites like YouConf) that it's easier to avoid having a repository or data access layer, and just use the Entity Framework Data Context (or ISession for NHibernate) directly from the controller.

For those who argue that doing this with EF is not testable - it actually is - you just have to create an interface that your DbContext implements, and pass that as a dependency into your controllers, much like I'm already doing with the IYouConfDataContext. I'm an avid follower of Oren Eini's blog, and admit to having been swayed by some of his views on this, particularly when it comes to eager-loading of object graphs, and the need for this to be transparent so we don't go creating select n+1 issues etc. I'd recommend reading some of his posts if you're interested:

The upshot of all this is that I'll create an IYouConfDbContext interface, which YouConfDbContext implements, and use it from the controllers, without an additional abstraction layer between it and the controllers. You'll see what I mean when I start writing some code - there's a rough sketch below!
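
To give a rough idea ahead of time, here's a minimal sketch of what such an interface could look like (the member names mirror the YouConfDbContext shown further below; treat this as illustrative rather than the final code):

C#
using System.Data.Entity;

public interface IYouConfDbContext
{
    DbSet<UserProfile> UserProfiles { get; set; }
    DbSet<Conference> Conferences { get; set; }
    DbSet<Speaker> Speakers { get; set; }
    DbSet<Presentation> Presentations { get; set; }
    //Exposed so controllers can persist changes without depending on DbContext directly
    int SaveChanges();
}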

Day 19 - 22 (May 17 - 20)

Sorry for the lack of updates, but as I'm sure you're aware, the more stuff I add to the app, the more I have to write about it, and as a result I end up spending far too much time in front of the small screen and not doing other things! I might have to shorten some of the daily updates so I can fit them in more easily, but here's a brief list of the things I've been working on over the past few days...

  • SimpleMembership - setting up Microsoft and Google external providers, adding additional data about users and requesting it on the registration form
  • Adding password reset functionality - including sending emails
  • Adding secret appsettings that won't be checked into source control, by using a separate file and having git ignore it
  • Conferences - Moving them into SQL instead of table storage, and making updates to the data model in order to support this (E.g. virtual properties for lazy-loading, max length validators, bi-directional navigation properties. See http://msdn.microsoft.com/en-US/data/jj591621 for a good tutorial on this, and http://stackoverflow.com/questions/8373050/code-first-causing-required-relation-to-be-optional for an issue I ran into)
  • Updating the controllers based on the new data context that uses Sql (now Conference, Speaker, and Presentation all have their own id field, which makes things easier)
  • Handling updates to properties with AutoMapper
  • Setting up an identical test environment to the production one
  • Custom domain names and SSL
  • Web Roles vs Web sites - which one to use and when?
  • Service bus queues and topics (http://msdn.microsoft.com/en-us/library/windowsazure/hh690931.aspx) particularly for SignalR (https://github.com/SignalR/SignalR/wiki/Windows-Azure-Service-Bus)
  • Built-in chat with SignalR instead of Twitter (although this is not implemented yet!)
  • Worker roles vs VMs and when to use each
  • and a whole heap of the other everyday stuff that you get involved with when building a web app!

Day 23 (May 21)

Right, time to start writing up some of the details of the items I mentioned earlier, here goes...

SimpleMembership

I went to the Microsoft site and set up an external OAuth account for my app, and was going to do the same with Google, but then found that since I didn't need to access any of the user's information in their Google account, I could get away with not setting up an OAuth account with them. Note that in the MVC4 default application, the Google external provider actually uses OpenId for authentication, not OAuth. This wasn't a concern for me so I didn't worry about it, but it's worth noting if you do need OAuth.

I then added the code to enable the Microsoft and Google external providers in my /App_Start/AuthConfig.cs file as follows:

C#
public static void RegisterAuth()
{
    // To let users of this site log in using their accounts
    // from other sites such as Microsoft, Facebook, and Twitter,
    // you must update this site. For more information
    // visit http://go.microsoft.com/fwlink/?LinkID=252166
    Dictionary<string, object> microsoftSocialData = 
             new Dictionary<string, object>();
    microsoftSocialData.Add("Icon", "/images/icons/social/microsoft.png");
    OAuthWebSecurity.RegisterMicrosoftClient(
        clientId: ConfigurationManager.AppSettings["Auth-MicrosoftAuthClientId"],
        clientSecret: ConfigurationManager.AppSettings["Auth-MicrosoftAuthClientSecret"],
        displayName: "Windows Live",
        extraData: microsoftSocialData);
    Dictionary<string, object> googleSocialData = new Dictionary<string, object>();
    googleSocialData.Add("Icon", "/images/icons/social/google.png");
    OAuthWebSecurity.RegisterGoogleClient("Google", googleSocialData);
} 

Note that I've also added an additional piece of data for the icon to display, to make the login page a little prettier by showing an icon for each provider, rather than just a button with text. Thanks to http://icondock.com/free/vector-social-media-icons for the icons! The result is that we get buttons like this on the login screen (note that I may try and remove the gray border at some stage too...):

Image 129

'Secret' appsettings and how to store them

You may have noticed that when setting up my Microsoft external provider, I used code such as ConfigurationManager.AppSettings["Auth-MicrosoftAuthClientId"] to retrieve the private keys from the web.config file. Given that the web.config file is checked into source control, I didn't want the values for these to be publicly available for all to see, so I had to find a way to hide them. Note that these are slightly different to my local db connection strings, as I don't have a problem with other users seeing my local db connection string in the web.config. For settings like this, however, I want them to be available on my local machine, but don't want them going into source control. So, what did I do?

UPDATE: During challenge four I found a better way to handle sensitive config settings with Azure Websites, and keep them out of GitHub. I've documented this in a full article at http://www.codeproject.com/Articles/602146/Keeping-sensitive-config-settings-secret-with-Azur  

As you may be aware, you can store appsettings in a separate file if you wish, using the file attribute on the appsettings element. Any values in the additional file that you specify will overwrite the existing values for the same key in the web.config, or just be added if they weren't present in the web.config. So in my case, I:

  • Added an additional file named HiddenSettings.config, and immediately checked this into source control so it would be deployed whenever the site was published.
  • From then on, I didn't want any of my changes to that file to go into source control so I excluded it (now and in the future) as follows: Image 130
  • Added the secret settings to the HiddenSettings.config file, and also added the same settings with dummy values to the base web.config file, so that if someone was using the code, they could see that they needed to populate these with valid values. E.g. in my web.config I currently have:
XML
<appSettings file="HiddenSettings.config">
<add key="Auth-MicrosoftAuthClientId" value="thisvalueneedstobeupdatedinthecloudconfig"/>
<add key="Auth-MicrosoftAuthClientSecret" value="thisvalueneedstobeupdatedinthecloudconfig"/>
</appSettings> 
  • And in my HiddenSettings.config file I have the same keys, but with the real values that I want to use for local development (see the sketch after this list)
  • To make the live settings available to the web site on Azure, I went to the Azure Management Portal and added the relevant settings to the application settings section, so that when I deploy to the cloud those values will be used rather than the dummy values in the web.config (remembering that our hidden settings don't get checked in) as below:
    Image 131
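
As promised above, here's roughly what my HiddenSettings.config looks like (with placeholder values, for obvious reasons - the real file never leaves my machine):

XML
<appSettings>
  <add key="Auth-MicrosoftAuthClientId" value="my-real-client-id" />
  <add key="Auth-MicrosoftAuthClientSecret" value="my-real-client-secret" />
</appSettings>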

So now I have the correct settings available on my local dev machine, and in the cloud, but not to anyone who decides to go poking around in the source code in GitHub, which is what I need!

 Password Reset functionality

I implemented the standard password reset functionality, where one enters an email address, clicks a button, and then gets sent an email with a reset token in the querystring. Upon clicking the link, they get taken to the site to reset their password. This required me to send emails, and for that I used SendGrid. After setting up an account with them, I copied the username and password values and added them to my HiddenSettings.config file. I also added a system.net entry to my web.config file as follows:

XML
<system.net>
  <mailSettings>
    <!-- Method#1: Configure smtp server credentials -->
    <smtp from="no-reply@youconf.azurewebsites.net">
      <network enableSsl="true" host="smtp.sendgrid.net" 
        port="587" userName="empty@thiswillgetoverwritten" 
        password="thiswillgetoverwritten" />
    </smtp>
  </mailSettings>
</system.net> 

I added an email sender class to send the emails, along with an interface, and configured Ninject to inject it into the constructor of the AccountController. The method for sending emails is as follows (note that I didn't use the Sendgrid library, just plain old .Net code):  

C#
public void Send(string to, string subject, string htmlBody)
{
    MailMessage mailMsg = new MailMessage();
    // To
    mailMsg.To.Add(new MailAddress(to));
    // From
    mailMsg.From = new MailAddress("no-reply@youconf.azurewebsites.net", "YouConf support");
    // Subject and multipart/alternative Body
    mailMsg.Subject = subject;
    string text = "You need an html-capable email viewer to read this";
    string html = htmlBody;
    mailMsg.AlternateViews.Add(
      AlternateView.CreateAlternateViewFromString(text, null, MediaTypeNames.Text.Plain));
    mailMsg.AlternateViews.Add(
      AlternateView.CreateAlternateViewFromString(html, null, MediaTypeNames.Text.Html));
    // Init SmtpClient and send
    SmtpClient smtpClient = new SmtpClient();
    System.Net.NetworkCredential credentials = new System.Net.NetworkCredential(
      CloudConfigurationManager.GetSetting("Sendgrid.Username"), 
      CloudConfigurationManager.GetSetting("Sendgrid.Password"));
    smtpClient.Credentials = credentials;
    smtpClient.Send(mailMsg);
} 

Generating the email body 

I wanted to generate the body of the emails using Razor views, so I could pass in models, parameters etc, and have them nicely formatted. To do that, I added the nuget package for the MvcMailer library, and configured it to have a UserMailer class with a PasswordReset method and corresponding view. Scott Hanselman has a good blog post on this which I recommend reading.

To use the UserMailer class, I simply call it from my controller:

C#
string token = WebSecurity.GeneratePasswordResetToken(user.UserName);
//Send them an email
UserMailer mailer = new UserMailer();
var mvcMailMessage = mailer.PasswordReset(user.Email, token);
MailSender.Send(user.Email, "Password reset request", mvcMailMessage.Body); 

and the code in the UserMailer class...

C#
public virtual MvcMailMessage PasswordReset(string email, string token)
{
    ViewBag.Token = token;
    return Populate(x =>
    {
        x.Subject = "Reset your password";
        x.ViewName = "PasswordReset";
    });
} 

When I complete the forgot password process, I receive an email as follows:

Image 132

Magnifique!!! 

Note: I'm sending the email in-process here, which is not recommended, as it can slow down the user's browsing experience and is less resilient to faults when connecting to smtp etc. In future I'll look to move this into an Azure worker role, but for now I'll leave it and move on, as I have other issues with worker roles which I'll explain later. See my section on the issues I had when looking at domains, SSL, and web/worker roles for more on this....  

Moving Conferences to SQL 

I thought I'd bite the bullet and do this now, so that I didn't end up scrambling to finish it on the last day of the challenge. In the end it wasn't too hard, as I was able to use the same data model and entity classes with Entity Framework as I had for table storage, e.g. the Conference, Presentation, and Speaker classes. The first thing I did was add more validation attributes, such as max length validators, so these would automatically be applied when the tables were being created.

I also made sure to add bi-directional navigation properties where they were needed. For example, at the end of challenge two the Conference class contained a list of speakers and a list of presentations; however, there was no Conference property in either the Speaker or Presentation class to navigate the other way. In order to get Entity Framework to generate the tables as I'd like them, I had to add the properties on both ends. Likewise for the relationship between Speaker and Presentation, where a presentation can have 0..* speakers. To give an example, below is the code for the Presentation and Speaker classes:

C#
public class Presentation{
    public Presentation()
    {
        Speakers = new List<Speaker>();
    }
    public int Id { get; set; }
    [Required]
    [MaxLength(500)]
    public string Name { get; set; }
    [Required]
    [DataType(DataType.MultilineText)]  
    public string Abstract { get; set; }
    [Required]
    [DataType(DataType.DateTime)]
    [Display(Name = "Start Time")]
    [DisplayFormat(NullDisplayText = "", 
      DataFormatString = "{0:yyyy-MM-dd HH:mm}", 
      ApplyFormatInEditMode = true)]
    public DateTime StartTime { get; set; }
    [Required]
    [Display(Name = "Duration (minutes)")]
    public int Duration { get; set; }
    [Display(Name = "YouTube Video Id")]
    [MaxLength(250)]
    public string YouTubeVideoId { get; set; }
    [Display(Name="Speaker/s")]
    public virtual IList<Speaker> Speakers { get; set; }
    [Required]
    public int ConferenceId { get; set; }
    public virtual Conference Conference { get; set; }
}
public class Speaker{
    public int Id { get; set; }
    [Required]
    [MaxLength(200)]
    public string Name { get; set; }
    [Required]
    [DataType(DataType.MultilineText)]  
    public string Bio { get; set; }
    [MaxLength(250)]
    public string Url { get; set; }
    [MaxLength(150)]
    public string Email { get; set; }
    [Display(Name = "Avatar Url")]
    [MaxLength(250)]
    public string AvatarUrl { get; set; }
    [Required]
    public int ConferenceId { get; set; }
    public virtual Conference Conference { get; set; }
    public virtual IList<Presentation> Presentations { get; set; }
} 

IMPORTANT: This gets me every time!!!! Make sure you mark your navigation properties as virtual, otherwise EF won't be able to lazy-load them! I got bitten by this yet again as I hadn't set them up as virtual, and as a result was wondering why my presentations had no speakers.... Hopefully I don't forget again... 

I'm using Code-First, and since the database had already been set up automatically with just the membership tables by the InitializeSimpleMembership attribute, I didn't have to recreate it. What I did do was remove the initializer from the InitializeSimpleMembershipAttribute.cs class, and add one in the Global.asax.cs class to automatically migrate the database to the latest version on app startup as follows:

C#
//Tell Entity Framework to automatically update
// our database to the latest version on app startup
  Database.SetInitializer(
  new System.Data.Entity.MigrateDatabaseToLatestVersion<YouConfDbContext, 
  YouConf.Migrations.Configuration>()); 

As I mentioned earlier, I'd created a YouConfDbContext which inherited from the EF DBContext for accessing the database. The code for this is as follows: 

C#
public class YouConfDbContext : DbContext, IYouConfDbContext
{
   public YouConfDbContext()
        : base("DefaultConnection")
    {
    }
    public DbSet<UserProfile> UserProfiles { get; set; }
    public DbSet<Conference> Conferences { get; set; }
    public DbSet<Speaker> Speakers { get; set; }
    public DbSet<Presentation> Presentations { get; set; }
} 

I had to enable code-first migrations and add my initial migration, as follows (the console commands are reproduced after the screenshot):

Image 133
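
For reference, the Package Manager Console commands behind that screenshot would have been roughly the following (the migration name matches the generated class below):

    PM> Enable-Migrations
    PM> Add-Migration AddConferenceDataToStoreInDatabaseInsteadOfTableStorage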

which resulted in the following code (note that I commented out the UserProfile table as it was already created by SimpleMembership):  

C#
public partial class AddConferenceDataToStoreInDatabaseInsteadOfTableStorage : DbMigration
{
    public override void Up()
    {
        //CreateTable(
        //    "dbo.UserProfile",
        //    c => new
        //        {
        //            UserId = c.Int(nullable: false, identity: true),
        //            UserName = c.String(),
        //        })
        //    .PrimaryKey(t => t.UserId);
        
        CreateTable(
            "dbo.Conferences",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    HashTag = c.String(nullable: false, maxLength: 50),
                    Name = c.String(nullable: false, maxLength: 250),
                    Description = c.String(),
                    Abstract = c.String(nullable: false),
                    StartDate = c.DateTime(nullable: false),
                    EndDate = c.DateTime(nullable: false),
                    TimeZoneId = c.String(nullable: false),
                    HangoutId = c.String(maxLength: 50),
                    TwitterWidgetId = c.Long(),
                    AvailableToPublic = c.Boolean(nullable: false),
                })
            .PrimaryKey(t => t.Id);
            
        CreateTable(
            "dbo.Presentations",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Name = c.String(nullable: false, maxLength: 500),
                    Abstract = c.String(nullable: false),
                    StartTime = c.DateTime(nullable: false),
                    Duration = c.Int(nullable: false),
                    YouTubeVideoId = c.String(maxLength: 250),
                    ConferenceId = c.Int(nullable: false),
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("dbo.Conferences", t => t.ConferenceId, cascadeDelete: true)
            .Index(t => t.ConferenceId);
        
        CreateTable(
            "dbo.Speakers",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Name = c.String(nullable: false, maxLength: 200),
                    Bio = c.String(nullable: false),
                    Url = c.String(maxLength: 250),
                    Email = c.String(maxLength: 150),
                    AvatarUrl = c.String(maxLength: 250),
                    ConferenceId = c.Int(nullable: false),
                    Presentation_Id = c.Int(),
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("dbo.Conferences", t => t.ConferenceId, cascadeDelete: true)
            .ForeignKey("dbo.Presentations", t => t.Presentation_Id)
            .Index(t => t.ConferenceId)
            .Index(t => t.Presentation_Id);
    }
        
    public override void Down()
    {
        DropIndex("dbo.Speakers", new[] { "Presentation_Id" });
        DropIndex("dbo.Speakers", new[] { "ConferenceId" });
        DropIndex("dbo.Presentations", new[] { "ConferenceId" });
        DropForeignKey("dbo.Speakers", "Presentation_Id", "dbo.Presentations");
        DropForeignKey("dbo.Speakers", "ConferenceId", "dbo.Conferences");
        DropForeignKey("dbo.Presentations", "ConferenceId", "dbo.Conferences");
        DropTable("dbo.Speakers");
        DropTable("dbo.Presentations");
        DropTable("dbo.Conferences");
        //DropTable("dbo.UserProfile");
    }
} 

When I fired up the debugger in Visual Studio and ran the app, my tables were automatically created by Entity Framework, and I was able to keep developing using SQL! 

As I made updates to my entity classes I added additional migrations in order for the changes to propagate to the database, such as when I added an Email field to the UserProfile class so I could store the user's email address (see the commands below). 
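
Each of those changes was just another migration from the Package Manager Console; the migration name below is illustrative:

    PM> Add-Migration AddEmailToUserProfile
    PM> Update-Database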

Updating the controllers and AutoMapper  

A common issue when using MVC is how to handle the mapping from form parameters to your domain objects for saving to the database. E.g. You might have the following method signature in a controller: 

public ActionResult Edit(string currentHashTag, Conference conference)

MVC can take care of binding form fields to the conference parameter, but how do you map those values onto the existing entity retrieved from the database? Often in this situation it can be helpful to use viewmodels to restrict the properties that can be updated and to make mapping easier; however, even with viewmodels we still have the same issue.

The good news is that AutoMapper makes this issue easy to resolve! I'd recommend reading the documentation to find out more, but in my case I had to:

  • Add the nuget package for AutoMapper
  • Define my mappings in my global.asax.cs class on app startup
  • Use AutoMapper in my controller Edit methods to map from the input model onto the existing domain model. 

For example, in global.asax.cs I have a method called ConfigureAutoMapper as follows (note that I don't want to override the existing collection properties, so I ignore them):

C#
private static void ConfigureAutoMapper()
{
    Mapper.CreateMap<Speaker, Speaker>()
        .ForMember(x => x.Presentations, x => x.Ignore())
        .ForMember(x => x.Conference, x => x.Ignore());
    Mapper.CreateMap<Presentation, Presentation>()
        .ForMember(x => x.Speakers, x => x.Ignore())
        .ForMember(x => x.Conference, x => x.Ignore());
    Mapper.CreateMap<Conference, Conference>()
        .ForMember(x => x.Presentations, x => x.Ignore())
        .ForMember(x => x.Speakers, x => x.Ignore())
        .ForMember(x => x.Administrators, x => x.Ignore());
} 

and in my ConferenceController edit method: 

C#
public ActionResult Edit(string currentHashTag, Conference conference)
{ 
    ....
    var existingConference = YouConfDbContext.Conferences
        .FirstOrDefault(x => x.Id == conference.Id);
    //Check the entity we loaded, not the incoming parameter
    if (existingConference == null)
    {
        return HttpNotFound();
    } 
    ...
    Mapper.Map(conference, existingConference);
    YouConfDbContext.SaveChanges(); 
    ...
} 

One line of code to do the mapping, and only a few lines to configure it - looks a bit like magic to me!!! 

Service bus queues, topics, and SignalR 

In order to transmit messages between server nodes in an Azure web farm, SignalR uses service bus topics. See https://github.com/SignalR/SignalR/wiki/Windows-Azure-Service-Bus for more details, but the configuration is fairly simple. You just need to create a service bus namespace, add the SignalR service bus package to your project in Visual Studio, and then tell SignalR to use your service bus namespace, as shown below:

Adding the service bus namespace in the Azure Management Portal (See http://msdn.microsoft.com/en-us/library/windowsazure/hh690931.aspx for specific details):

Image 134  

Add SignalR Service bus via NuGet:

Image 135 

Copy the value of the service bus connection string from the management portal as below: 

Image 136

paste it into your web.config file, or in my case, my HiddenSettings.config file.  

XML
<add key="Microsoft.ServiceBus.ConnectionString" 
  value="Endpoint=sb://yourservicebusnamespace.servicebus.windows.net/;
     SharedSecretIssuer=owner;SharedSecretValue=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" />  

Important: Don't forget to update your application settings in your cloud service (note the Microsoft.ServiceBus.ConnectionString key):  

Image 137

and finally, in global.asax.cs:

C#
//SignalR
var serviceBusConnectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
GlobalHost.DependencyResolver.UseServiceBus(serviceBusConnectionString, "YouConf");
RouteTable.Routes.MapHubs(); 

Now if I scale out to multiple instances, my SignalR notifications should get broadcast regardless of which server a user's browser is connected to. SignalR takes care of creating the service bus topic and adding the necessary subscriptions, so I don't have to worry about doing that in the management portal. If you're interested in how it does this, you can always check out the SignalR source code on GitHub, or look at this Azure how-to guide.  

 Custom domain names and SSL, Azure Web/Worker roles, grrrrrr 

You can't really have a serious web app without a proper domain right? At least that was my thinking, so I went and bought the youconf.co domain name, along with an SSL certificate. I figured I'd be able to map it onto my web site in its current state, however, I made a few more discoveries:

  •  You have to be running a site in Shared mode or above to map a custom domain onto it. This isn't too much of an issue, as running a shared site is pretty cheap - $9.36/month - but the real trouble comes with SSL... read on... 
  • You can't map an SSL certificate directly onto an Azure website, regardless of the mode it is running in. The only way to do it properly is to change from a website to a web role. There is a workaround at http://www.bradygaster.com/running-ssl-with-windows-azure-web-sites, however that involves creating a cloud service and using SSL forwarding. From what I've read, support for SSL on Azure websites is coming soon (yay!), but in the meantime I'm stuck. If I used the SSL forwarder, I'd have to pay for an additional cloud service in the long-run, so I might as well switch over to using a webrole for my site and not have to worry about the additional steps, BUT  
  • You can't deploy from GitHub to Azure web/worker roles, only to Azure websites 
  • You can, however, use hosted TFS to auto-deploy to cloud services... 

I'm now in a tricky position, as one of my original goals was to ensure that all my source was available for everyone to see on GitHub, and I also wanted automatic deployments. I also have plans to use a worker role for email sending and other background tasks. But if I go and convert over to web/worker roles, I'd have to use TFS in order to get automated deployments. As I mentioned in challenge 2, I love using TFS, but this is tough for me to decide as I really do want my source to be available to the public. For now I think I'll leave things as they are, since I already have a recognizable domain at youconf.azurewebsites.net, and there's already an SSL certificate for *.azurewebsites.net automatically provided by Azure, which gives us the security we require for logins etc. I suspect I'll have to revisit this in future....

 Day 24 (May 22) 

5000+ hits on the article?!!!! I'm wondering if those stats are right or if maybe someone's playing a trick on me... Anyway, hopefully if you are one of the 5000 mystery viewers you've learned a thing or two, or perhaps learned what not to do! 

 Error Logs - remember them?  

In the 2nd challenge, you might recall that I didn't make the error logs page public, as I couldn't secure it using role-based authentication. Now that I've implemented SimpleMembership, I can enable remote access for administrators. To do so, I first had to update my web.config as follows (note that I've made the Elmah error log url a bit shorter this time). 

First, I enabled remote access:

XML
<elmah>
<errorLog type="YouConf.Infrastructure.Logging.TableErrorLog, YouConf" />
<security allowRemoteAccess="1" />
</elmah> 

Second,  I enabled authentication, set the allowed role to Administrators, and changed the url: 

XML
<add key="elmah.mvc.requiresAuthentication" value="true" />
<add key="elmah.mvc.allowedRoles" value="Administrators" />
<add key="elmah.mvc.route" value="viewerrorlogs" />  

So now any user in the Administrators role can browse to the error logs at https://youconf.azurewebsites.net/viewerrorlogs. Note that I would normally do my best to avoid letting anyone know the url for my error logs/admin pages, but in this case it's worth doing for the benefit of the article.

One thing you might be asking is how does a user become an administrator? In the old-school asp.net web apps, one might have a role management section and be able to assign users to roles using the built-in functionality. However, since we don't have that luxury I'm going to go one better and run some SQL to assign myself to the Administrators group. How do I run SQL against my live database? As I mentioned earlier, you can use the web-based database management tools from within the management portal, or you can connect directly to your database using SQL Management Studio. I'll show you both below.  

Managing your Azure database directly from the management portal 

The built-in management tools enable you to run queries and view database statistics within your web browser, which makes it easy to run quick queries without having to leave the portal. To connect to the database from the management portal, I selected my database (YouConf) in the SQL Databases section, then clicked Manage. I then logged in as follows (note that I'd already allowed the management portal to automatically add an IP restriction for my ip address earlier, so didn't need to add it again):

Image 139

Once connected, I hit the New query button and was able to run the following query (reproduced after the screenshot) to add the Administrators role and add my user account to it. (Note that before doing this I had registered on the site, and since I'm currently the only user *sad face*, my id in the UserProfile table is 1.)   

Image 140
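
If the screenshot is hard to read, the query was along these lines (SimpleMembership keeps roles in the webpages_Roles and webpages_UsersInRoles tables, and my UserId is 1 as noted above):

SQL
INSERT INTO webpages_Roles (RoleName) VALUES ('Administrators');

INSERT INTO webpages_UsersInRoles (UserId, RoleId)
SELECT 1, RoleId FROM webpages_Roles WHERE RoleName = 'Administrators';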

I'm now an administrator, so if all goes to plan I should be able to view the error logs page remotely. Let's give it a try: 

Image 141 

Lovely! I had to login before getting to this screen, which is exactly what I was hoping for. I could also have done this using good old SQL Server 2012 Management Studio, which is what I'll show you next...

Managing your Azure database from SQL Server Management Studio

Since I have SQL Server 2012 Management Studio on my machine, I thought I'd see if I could connect to my cloud database directly from it. It turns out it wasn't too tricky either....

  • First, I made sure that I'd connected via the management portal, so an IP restriction was already in place for the machine I was on. Next, I selected my database (YouConf) in the SQL Databases section, then at the bottom of the screen copied the value in the Server field, as shown below:
    Image 142 
  • Next, I opened SQL Server Management Studio on my local machine, pasted the value into the Server Name field, selected SQL Server Authentication, and entered the username and password I'd chosen when I created the database, as below (note that if I'd forgotten these, I could have retrieved them via the management portal):
    Image 143 
  • I had a quick look to see if my tables were all there (indeed they were)...
    Image 144 
  • Then I ran the same query as I had in the web-based manager in the portal (I had to remove myself and the Administrators role first, otherwise the query wouldn't run):
    Image 145

Two ways of achieving the same goal, once again made very easy by the dedicated folk who built SQL Azure - I raise my glass to you ladies and gentlemen!  
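
As an aside, if you ever need to connect from code or config rather than through SSMS, the ADO.NET connection string for an Azure SQL database follows the standard format sketched below (server, user, and password are placeholders - note the user@server convention for the login):

XML
<connectionStrings>
  <add name="YouConf"
       connectionString="Server=tcp:yourserver.database.windows.net,1433;Database=YouConf;User ID=youruser@yourserver;Password=[your password];Encrypt=True;Connection Timeout=30;"
       providerName="System.Data.SqlClient" />
</connectionStrings>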

Exporting my cloud database backup and importing it locally 

It's getting late and I don't quite have time to get screenshots, but when I wanted to get a copy of the database that I had in production, I followed the tutorial in this article. The steps involved:

  • Selected my YouConf database in the management portal and hit Export 
  • Entered the credentials for my storage account (which I set up in challenge two for table storage)
  • Hit Finish and waited for the export to complete
  • Opened Azure Storage Explorer on my local machine and connected to my storage account
  • Downloaded the .bacpac file that had been exported
  • Imported it into SQL Server Management Studio and viewed what little data I had (hopefully that will go up if others start using the site) :)

Day 25 (May 23)  

I'm going to start writing up my official article for Challenge 3 today, so that I don't end up running out of time in the final days. Before I do, however, there's one more thing I did a while ago that I think is highly relevant to nearly every app you expect to run for real in Azure: setting up a dedicated test environment in Azure, so I can test my dev changes in the cloud before I deploy them to the production site.

Setting up a separate test environment in Azure 

As I mentioned in an earlier post, I set up Git so that I have a Master and Dev branch, with Master configured to auto-deploy to the live site at http://youconf.azurewebsites.net. Initially I'd been merging my dev changes into Master and testing locally before pushing them to GitHub, which is all well and good, but it still didn't feel quite right that I wasn't testing my dev changes in a cloud environment. What I needed was a test environment in Azure connected to my dev branch in source control, so I could test changes in the cloud before deploying them to production. The good news is that it really isn't that hard to set up a replica environment in Azure - you just have to make sure you have the same services (e.g. database, storage, queues etc.) set up. So, what did I do?

You've seen the detailed steps I went through to setup my production Azure environment with website, database etc, so I won't repeat them in detail here, but to summarize, I:

  • Created another Windows Live account and signed up for the Azure free trial (note that if I were doing this for a real client, I'd probably use the same Azure account, but since I'm trying to do this all for free until the last possible moment, I did it this way. This also has the added benefit of reducing the risk of me contaminating the production environment with test environment settings)
  • Signed into the management portal using my new credentials, and created a replica version of the:
    - YouConf website (named youconftest)
    - database (youconftest)
    - storage account (youconftest)
    - service bus namespace (youconftest)
    The All items views for the two environments are shown below. First is the test environment:
    Image 147

    and this is production (the only difference being the youconfblog site from challenge two):
    Image 148
     

Image 149

  • Updated the configuration settings and connection strings for the test version of the web site to use the relevant test settings, such as the test database connection string and service bus account (a sketch of the kinds of settings involved follows below)
  • Set up automated deployments from my dev branch in Git to deploy to the test version of the site
  • Waited for the deployment to complete, as you can see below:
  • Viewed the site at http://youconftest.azurewebsites.net/ and breathed a sigh of relief when it worked!!!!

    Image 150
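
For illustration, the kinds of settings that differ between the two environments look roughly like this in web.config (the key names here are illustrative rather than the exact ones from the project, and the values are placeholders). A nice Azure Websites touch is that app settings and connection strings can also be overridden per-site in the portal's Configure tab, so the real values never need to live in source control:

XML
<appSettings>
  <!-- Points at the youconftest storage account in the test environment -->
  <add key="StorageConnectionString"
       value="DefaultEndpointsProtocol=https;AccountName=youconftest;AccountKey=[your key]" />
  <!-- Points at the youconftest service bus namespace in the test environment -->
  <add key="Microsoft.ServiceBus.ConnectionString"
       value="Endpoint=sb://youconftest.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=[your key]" />
</appSettings>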
     

So now I can make dev changes locally and deploy them to the test site for testing. I can then merge those changes into the Master branch, retest them locally, and push them to the production site. Awesome! Once again the benefits of cloud hosting on Azure shine through! 

Day 26 (May 24)  

Did some tidy-up on the daily progress reports and started preparing the main article for challenge three. I should also mention that yesterday I got the 3rd Easter Egg challenge working, although after some discussion on the forums I'm not sure whether I did it correctly or not. Hopefully I did! 

Set up an integration test project with SQL CE, based on the helpful article at http://www.codeproject.com/Articles/460175/Two-strategies-for-testing-Entity-Framework-Effort. I've been swinging between whether it's better to mock the DB context when using EF, or to use SQL CE or LocalDB and write integration tests. Given that I've been burned in the past by FakeDbSets, where the behaviour wasn't the same as with the real EF SQL provider, I thought I'd go with the SQL CE option this time around and see how it goes. Yes, it means some of the setup is more verbose, given that entities have to be valid before they can be inserted into the test db, but hopefully the tests end up being more reliable. Whilst they're 'integration' tests, since they hit a real (albeit disposable) database, I'm going to treat them as unit tests and use the integration test project for most of my testing. A sketch of the setup is below.
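
To give a flavour of the approach, here's a minimal sketch - not the exact YouConf test code; the context constructor and the entity's properties are assumptions for illustration:

C#
// Each test run gets a fresh, throwaway SQL CE database file, so the tests
// exercise the real EF SQL provider without needing a real database server.
using System.Data.Entity;
using System.Data.SqlServerCe;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ConferenceRepositoryTests
{
    private YouConfDbContext _context;

    [TestInitialize]
    public void Setup()
    {
        // Drop and recreate the database before every test so state can't leak
        Database.SetInitializer(new DropCreateDatabaseAlways<YouConfDbContext>());
        _context = new YouConfDbContext(
            new SqlCeConnection("Data Source=YouConfTest.sdf"), contextOwnsConnection: true);
        _context.Database.Initialize(force: true);
    }

    [TestMethod]
    public void Can_save_and_reload_a_conference()
    {
        // Entities must be fully valid before they can be saved - hence the
        // more verbose setup mentioned above
        _context.Conferences.Add(new Conference { Name = "Test Conf", HashTag = "testconf" });
        _context.SaveChanges();

        Assert.AreEqual(1, _context.Conferences.Count());
    }
}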

I'm hoping to set up some UI/smoke tests as well, which I can run after each deployment to test/production to verify everything is working as expected. 

Day 27 (May 25)   

Spent a fair amount of time writing up my article for challenge three and making sure it all fitted together well. Just putting the finishing touches on it today, and will publish it tomorrow so it gets approved in time for the deadline. Wish me luck!  

Day 28 (May 26)   

Managed to get the article submitted in time - here's hoping I can do well! I also completed the Pascal's Pyramid challenge a while back.

Challenge Four  

 

Day 29-33 (May 27-31)   

Had a bit of a break after challenge 3 and did some thinking about how I'll use Azure VMs.

*WARNING - Begin Rant *
Upon checking in again, I saw that the results for challenge 3 had been announced. Feeling a bit sore after seeing them, to be honest, given the effort I put into challenge 3. Not to take away from the three winners - I read all three articles and they were very well written - good job guys :) However, I felt that my approach of covering not only the SQL side of things, but also a whole heap of other relevant web app issues, would've been worthy of a top-3 spot. Sorry for ranting, but I'm sure anyone who's entered any event, be it IT, sport, or other, and not made it onto the podium understands where I am at the moment.... We'll see how things unfold over the next few days.
* End Rant *  

Day 34-35 (June 1 - 3) 

Given my reservations about entering the next challenge (see my rant above), I still wasn't quite sure what to do. However, after some encouragement from fellow developers (thanks guys - you know who you are!) I snapped out of my melancholy mood, and am back in the game :) It was good to have a few days off anyhow, as it gave me time to think about how I'll implement my VM solution. I've begun work on it and made good progress; however, as I mentioned at the end of challenge 3, I'm going to wait till close to the end of the challenge before I post all of my updates for challenge 4, so as not to give everything away too early. Sorry to those of you who are following along, but I think this is the best option competition-wise..... Onwards and upwards, and good luck to everyone else involved!  

Day 36 (June 4)  

What a day! Three big accomplishments:

  • Added a worker role to move all of my email-sending functionality out-of-process, and updated the web app to send messages to Azure Service Bus to kick off the email sending process (a sketch of the sending side is shown below). Also added other background tasks to the worker role and deployed it 
  • Developed a useful pattern for handling the processing of service bus messages, which I'll document in due course 
  • Documented a new pattern for Keeping sensitive config settings secret with Azure Websites and GitHub - have a look, and if you like the article, vote for it! 
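
Here's a rough sketch of the sending side to give you the idea - the queue name and the EmailMessage DTO are illustrative rather than the exact YouConf types:

C#
// Sends a strongly-typed message to a Service Bus queue. BrokeredMessage
// serializes the payload object with the DataContractSerializer by default.
using Microsoft.ServiceBus.Messaging;

public class EmailMessage
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public class EmailQueueSender
{
    private readonly QueueClient _client;

    public EmailQueueSender(string connectionString)
    {
        // "emails" is an assumed queue name, purely for illustration
        _client = QueueClient.CreateFromConnectionString(connectionString, "emails");
    }

    public void Send(EmailMessage email)
    {
        var message = new BrokeredMessage(email);
        // Stamp the CLR type name on the message so the worker role can
        // dispatch it to the right handler on the receiving side
        message.Properties["MessageType"] = typeof(EmailMessage).Name;
        _client.Send(message);
    }
}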

Also did some more tinkering with my VM, although I haven't yet had a chance to try out the RDP functionality that roscler suggested. Maybe tomorrow...

Day 37 (June 5)   

Had a fairly quiet day today, struggling away with authentication in Apache. I've been trying to configure basic auth and just can't get it to work - doh! Maybe I'll have to run without auth for this challenge, as I need to start writing up the article soon.
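
For the record, the standard Apache basic-auth setup I've been attempting looks roughly like this (paths and names are placeholders) - clearly something in my environment isn't cooperating:

Apache
# Create the password file first: htpasswd -c /etc/apache2/.htpasswd someuser
<Location "/">
    AuthType Basic
    AuthName "Restricted Area"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Location>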

Day 38-39 (June 6-7)   

Wrote up an article describing the pattern I used for sending strongly-typed messages over Azure Service Bus, along with other best practices such as: 

  • Logging exceptions
  • Deadlettering of poison messages
  • Automatic polling backoff when no messages are received
  • Use of IOC with Ninject  

Check it out - http://www.codeproject.com/Articles/603504/Best-practices-for-using-strongly-typed-messages-w. I thought about including it in the body of this article, but figured it might count against me, given that challenge four is supposed to be all about VMs and this isn't a VM solution.  
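
To illustrate a couple of those points - deadlettering and polling backoff - here's a rough sketch of a receive loop. It's not the exact code from the article (for one thing, a production version should check DeliveryCount before deadlettering):

C#
// Polls a Service Bus queue, backing off when idle and deadlettering
// messages whose processing throws, so poison messages can't loop forever.
using System;
using System.Threading;
using Microsoft.ServiceBus.Messaging;

public class EmailQueueWorker
{
    private readonly QueueClient _client;

    public EmailQueueWorker(string connectionString)
    {
        _client = QueueClient.CreateFromConnectionString(connectionString, "emails");
    }

    public void Run()
    {
        var backoff = TimeSpan.FromSeconds(1);
        while (true)
        {
            var message = _client.Receive(TimeSpan.FromSeconds(30));
            if (message == null)
            {
                // Queue is quiet - double the wait (up to a cap) to cut down
                // on pointless polling
                Thread.Sleep(backoff);
                backoff = TimeSpan.FromSeconds(Math.Min(backoff.TotalSeconds * 2, 60));
                continue;
            }

            backoff = TimeSpan.FromSeconds(1); // reset once messages flow again
            try
            {
                ProcessMessage(message);
                message.Complete();
            }
            catch (Exception ex)
            {
                // Log the failure, then deadletter the message so it doesn't
                // get redelivered indefinitely
                message.DeadLetter("ProcessingFailed", ex.Message);
            }
        }
    }

    private void ProcessMessage(BrokeredMessage message)
    {
        // Dispatch based on the MessageType property stamped by the sender
    }
}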

 I also wrote up the bulk of the challenge four article on VMs and my background worker role. 

 

Day 40 (June 8) 

Doing some final tidy-up on the search screen, and having one last stab at configuring authentication on the VM. The article is pretty much ready to publish, so I'll do that sometime soon as well. 

Day 41 (June 9)  

 Time to publish what has become something of an epic! After a few little touch-ups I think it's ready for general availability :P 

Challenge Five  

 

Day 42 - 50 (June 10 - 18)  

Apologies for the lack of updates.... I didn't want to modify the article until the judging for challenge four was complete. In any case, it looks like it was worth staying in the competition for challenge four after all, as my article made it back into the top 3 again - phew!

Now onto challenge 5 - mobile access and responsive design. This is something I've been looking forward to since day one, as up until not long ago I really had no idea what it was or how I could apply it to any of the sites I've worked on. Thankfully there are loads of helpful articles out there on the world wide web, and I think I've managed to grasp some of the fundamentals. I've been trying to think of the best way to document my progress for this section, as it's a bit tricky to decide how much detail to go into. There have also been some discussions around this on the forum, and my interpretation thus far is that you have to go into detail for at least part of your article to help those who are completely new to the topic, but also summarise well for those who just want a quick overview of what you did and don't necessarily care exactly how you went about it. Gosh it's tricky :)

Anyway, I'm going to carry on with updating YouConf to work on the big screen, small screen, and everything in between over the coming days. Hopefully I'll post a few updates as I go!  

Day 51 - 52 (June 19 - 20)

10,000 views on the article - and 23 votes - woohoo! That's exceeded even my wildest expectations (well, almost - dreams are free, right?)

This mobile challenge is a tricky one, as the write-up seems to take much longer than the actual development/testing time. Hopefully I'll have the bulk of the article complete by EOD tomorrow so I don't have to rush over the weekend :) 

Day 53 - 54 (June 21 - 22)

Almost there! Did some tidy-up of various items in the article that I'd been putting off for a while, including writing up some final thoughts on the competition and what it has meant to me.

37,000+ words, 130+ screenshots, and only one more day left :) 

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Software Developer (Senior)
New Zealand
I'm a software developer based in the beautiful city of Wellington, New Zealand. I love experimenting with new .Net technologies as they arrive, and these days there seems to be a lot of choice as there are so many new features in the framework! My current interests are Azure, ASP.Net MVC, SignalR, Knockout, AngularJS, and responsive design (inc. using Bootstrap, Foundation, Skeleton). These change fairly often as I tinker with various new technologies...
