Monday 17 November 2014

Shelveset Comparer Updated

Shelveset Comparer is a Visual Studio extension that I first published at the start of this year. The extension provides functionality that is otherwise missing in both Visual Studio and Team Foundation Server: the ability to compare the contents of two shelvesets. I felt the need for it because our team used shelvesets to pass work around, and tracking what had changed since a shelveset was created was not always obvious.

The extension has proved popular and I have been trying to keep up with the comments and feedback on it. This update has been due for some time. The view of the extension in Team Explorer has changed a little to show options for typing in two user names, which allows shelvesets from two users to be compared. However, there is still a single list displaying all shelvesets; to separate out the shelvesets of the two users, an “Owner” column has been added. The column headers are now clickable as well and will sort the rows by the clicked column.

Screenshot2

Another addition is the Options panel, which lets users choose whether or not the extension appears as a Team Explorer button. There is also an option to hide the second user.

 

Screenshot4

The options are there to allow users to customise the view as per their needs.

 

Apart from the new functionality, several fixes and performance improvements have been made.

 

Going forward, there will be another release by the end of the year, in which I will be adding a feature to search by shelveset name. There will also be further optimisation of the performance of comparing the contents of two shelvesets.

Thursday 15 May 2014

Feature Toggler – a Simple feature toggle library for .Net

So, you have decided to use Feature Toggling as your branching strategy. You don’t want the hassle of merging and branching and are confident that developers and testers can handle the additional complexity that comes with feature toggles. The next step is to decide how to go about using toggles. The simplest and most popular method of doing so is to have feature toggles set in configuration files.

Ideally, you would want a library that takes care of feature toggling. All you would need to do is define the features and their toggle values in the configuration file and then check whether a feature is available with a simple call. Something which, for a configuration like the one below,

<featureConfiguration>
  <features>
    <add name="PrivateProfiles" toggle="on" />
    <add name="Photosharing" toggle="off" />
    <add name="Videos" toggle="1" />
    <add name="bookmarks" toggle="true" />
  </features>
</featureConfiguration>

would allow code like the following:

if (FeatureManager.HasFeature("PrivateProfiles"))
{
    // code for the PrivateProfiles feature goes here
}

Having looked around, I found three libraries of note already available:

  1. NFeature
  2. FeatureToggle, and
  3. FeatureSwitcher

This blog post gives a good comparison of them and their usability. Having used all three, I felt that all of them, though thorough, were overly complicated for the very simple scenario I wanted them for. For example, NFeature requires you to create enumerations for all the features added in the configuration file.

So I decided to create a new, very simple feature toggling library: https://github.com/hamidshahid/FeatureToggler

The library is available as a NuGet package. Simply type “Install-Package FeatureToggler” in the Package Manager Console of your application. It will add the required references, add a configuration section to your configuration file and add a few sample features to it.

Once the reference is added, simply add features to the features collection and check them in your code using the FeatureManager.HasFeature("…") method. Happy coding!!
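
For illustration, below is a minimal sketch of how such a check could be implemented. This is not the actual FeatureToggler source, just the general idea: it reads the featureConfiguration section shown earlier straight from the application's configuration file and treats "on", "1" and "true" as enabled. The real library wires this up through the configuration section it installs, but the call site looks the same.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml;

// Hypothetical sketch only - not the actual FeatureToggler implementation.
public static class SimpleFeatureManager
{
    private static readonly HashSet<string> EnabledFeatures = LoadEnabledFeatures();

    // Returns true if the named feature is toggled on in the configuration file.
    public static bool HasFeature(string name)
    {
        return EnabledFeatures.Contains(name);
    }

    private static HashSet<string> LoadEnabledFeatures()
    {
        // Read the raw app.config/web.config to keep the sketch short;
        // a proper implementation would register a ConfigurationSection instead.
        var document = new XmlDocument();
        document.Load(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);

        var onValues = new[] { "on", "1", "true" };
        var enabled = document.SelectNodes("//featureConfiguration/features/add")
            .Cast<XmlElement>()
            .Where(e => onValues.Contains(e.GetAttribute("toggle"), StringComparer.OrdinalIgnoreCase))
            .Select(e => e.GetAttribute("name"));

        return new HashSet<string>(enabled, StringComparer.OrdinalIgnoreCase);
    }
}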


Thursday 8 May 2014

Feature Toggles and their limitations

 

This month's MSDN magazine contains an article on feature toggles. The subject has been close to my heart over the last few weeks, and I have been weighing up whether they would work for our projects or not.

For those who are unaware of the term, here is a good post by Martin Fowler describing feature toggles and their merits. He is convinced that feature toggles are the way forward and should be used instead of feature branches. Here is another great blog post that explains the differences and recommends using feature toggles.

I love the idea of having no feature branches … it makes life easier. However, my take is that feature toggles are not for everyone and every team. For someone like Pluralsight, who do continuous delivery (and they use feature toggles), the process is simple. Each release results in some new "features" being added. The process is generally additive, with the software becoming more "feature rich", and there is control over the release pipeline.

Now, turn our attention to a simple "message broker" kind of application that interfaces with multiple systems and has no UI. The application receives a message, say M1, from one application, does something to it and passes a message M2 on to another application. Now, let's say the message interface is changing because the sending application is changing. We start a feature to handle the new message interface. Since the change is a few months away, we need to keep supporting the existing interface. In this case, if feature toggling is used, we have to create a parallel code path to handle the new message interface and direct execution to that code path with a feature toggle. If we were using branching instead, the code change would have been much simpler. So in essence, we have replaced the complexity of merging with a more complex code change.
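
To make this concrete, here is a rough C# sketch of the kind of parallel code path the toggle forces on you in this scenario. The message types, handler methods and toggle check are hypothetical, named purely for illustration.

// Hypothetical sketch of the parallel code path needed for the new message interface.
// The types, method names and the toggle check are illustrative only.
public class MessageBroker
{
    private readonly IFeatureToggles toggles;

    public MessageBroker(IFeatureToggles toggles)
    {
        this.toggles = toggles;
    }

    public OutboundMessage Handle(InboundMessage m1)
    {
        // Both paths have to live side by side until the sending
        // application actually moves to the new interface.
        if (toggles.IsEnabled("NewM1Interface"))
        {
            return TransformUsingNewInterface(m1);
        }

        return TransformUsingExistingInterface(m1);
    }

    private OutboundMessage TransformUsingNewInterface(InboundMessage m1)
    {
        // Mapping logic for the new interface goes here.
        return new OutboundMessage();
    }

    private OutboundMessage TransformUsingExistingInterface(InboundMessage m1)
    {
        // The existing mapping logic stays untouched.
        return new OutboundMessage();
    }
}

public interface IFeatureToggles
{
    bool IsEnabled(string feature);
}

public class InboundMessage { }
public class OutboundMessage { }

With a feature branch, the existing transformation would simply have been changed in place and merged when the sending application was ready.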

Take another example: this time we have to delete something from the application, let's say a web service. The feature toggle mechanism would require us to modify the service to error on invocation when the feature is on. Compare that with the alternative of simply removing the service altogether.

Similarly, let's consider a Windows/web UI application where one of the features is a redesign of the screens. The redesign involves rearranging all the form controls and including some new graphics. With the feature toggling approach, we either put a condition on the display of each of these changes or create a new form altogether, choosing between the two based on the toggle value, as in the sketch below.
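
As a rough illustration (the form names are made up, and the toggle value would come from whichever toggle store is in use), that choice ends up looking something like this wherever the screen is created:

using System.Windows.Forms;

// Hypothetical sketch: choosing between the old and the redesigned screen on a toggle.
public class ProfileFormLegacy : Form { }
public class ProfileFormRedesigned : Form { }

public static class ScreenFactory
{
    public static Form CreateProfileScreen(bool redesignEnabled)
    {
        // Every place that shows this screen has to make the same choice
        // until the old design can finally be deleted.
        return redesignEnabled
            ? (Form)new ProfileFormRedesigned()
            : new ProfileFormLegacy();
    }
}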

These are only some of the scenarios where, in my opinion, feature toggles don't really simplify things. Others might disagree, and I would love to hear from them, so please post your comments if you have any.

 


Wednesday 9 April 2014

TF10201 Source control could not start the manual merge tool

A quick post in the “How I got burnt today” category. I was attempting a merge from one TFS branch to another when I started getting the following error:

screenshot

The error is pretty random in that it doesn’t tell you what has gone wrong. However, if you look at the Output window, you will find the real reason for the error, which is that the target merge file doesn’t exist. The error happens when you have a TFS workspace but have deleted the files on your local machine. TFS at this point thinks that you have the latest source and attempts to merge the files. However, since the files are not there, it throws this error. Please note that this error only happens for files that have merge conflicts.

The fix is quick: just do a forced get latest of the files involved (for example, running tf get /force /recursive on the affected path) and the error goes away.


Tuesday 1 April 2014

PowerShell – Log off all remote sessions

I needed a script that would log off all remote sessions from a given machine. The task is simple: the qwinsta command lists all sessions and rwinsta logs a session off. I couldn’t find a script anywhere that used the two together, so I wrote the following. Enjoy!!

param (
    [String]$computer
)

# List all sessions on the remote machine. The first line is the column header, so skip it.
$sessions = qwinsta /server:$computer
$sessions = $sessions[1..($sessions.Count - 1)]

foreach ($result in $sessions) {
    # qwinsta output is fixed width: the user name starts at column 19, the session id at column 41.
    $userName = $result.Substring(19, 22).Trim()
    $id = $result.Substring(41, 7).Trim()

    # Only log off rows that actually have a user logged on.
    if ($userName -ne "") {
        rwinsta /server:$computer $id
    }
}


Tuesday 25 March 2014

ALM with Microsoft Dynamics CRM – Deployment

This is the fourth and final post of a multi-part series suggesting an ALM process in projects where Microsoft Dynamics CRM is used as a data store.

In my previous blog post, I explained how to include Microsoft Dynamics CRM customisations in your Team Build, how to structure CRM customisations and scripts in TFS, and how to produce a deployable managed and/or unmanaged solution as an output. In this post, I will write about a deployment process that enables you to deploy the package produced by the Team Build to a target environment.

 

Deployment Overview

The importance of having a reliable, repeatable and well-documented deployment process cannot be overstated. Deployment should be planned from the very outset of the project, scaling it up from a single-machine environment to test and staging environments and eventually to production. Having a repeatable process prevents surprises in the all-important go-live, and it also allows you to make regular continuous deliveries.

In this scenario, we are considering deploying a new CRM solution to a new target CRM environment; that is to say, we are not upgrading an existing system or deploying to an existing CRM instance. The deployment involves the following steps:

  1. Create new CRM Organisation
  2. Set CRM organisation settings such as Currency, Time Zone, etc.
  3. Import Data Maps required before importing CRM Solution
  4. Import Data required before importing CRM Solution.
  5. Import CRM Solution
  6. Import Data Maps for initial data population.
  7. Import Data for initial data population.
  8. Publish SSRS reports.
  9. Import Team Associations.
  10. Publish workflows.

All the steps apart from steps (1) and (4) are optional and are applicable only if your CRM customisations require them.

In my last post, I suggested structuring the CRM deliverables in the following format, and we will use the same structure when writing our deployment scripts.

[Sample CRM Folder Structure]

For my deployment scripts, I will use MSBuild together with a library called MSBuild Extension Pack. The library provides a rich set of functionality, and the March release includes tasks for Microsoft Dynamics CRM as well.

Sample Deployment

The following is a sample listing of the deployment process described above. For simplicity, I have only included steps 1, 2, 5, 6 and 7.

<Target Name="DeployCrmOrganisation64">
<!-- Creating Crm Organisation-->
<MSBuild.ExtensionPack.Crm.Organization TaskAction="Create" DeploymentUrl=http://CRMServer/XRMDeployment/2011/Deployment.svc Name="organization1" DisplayName="Organization 1" SqlServerInstance="MySqlServer" SsrsUrl="http://reports1/ReportServer" Timeout="20" />

<!-- Update an Organization's Settings -->
<ItemGroup>
        <Settings Include="pricingdecimalprecision">
          <Value>2</Value>
        </Settings>

        <Settings Include="localeid">
          <Value>2057</Value>
        </Settings>  

        <Settings Include="isauditaneabled">
          <Value>false</Value>
        </Settings>
     
<ItemGroup>

<MSBuild.ExtensionPack.Crm.Organization TaskAction="UpdateSetting" OrganizationUrl="http://CRMServer/organization1" Settings="@(Settings)" />

<!-- Import Solutions –>

<MSBuild.ExtensionPack.Crm.Solution TaskAction="Import" OrganizationUrl=”http://CRMServer/organization1 Name="CrmSolution" Path="C:\Solutions" Extension="zip" OverwriteCustomizations="true" EnableSDKProcessingSteps="True" />

<!—Import Data Map-->
<MSBuild.ExtensionPack.Crm.DataMap TaskAction="Import" OrganizationUrl="http://CRMServer/organization1" Name="Organization1" FilePath="C:\DataMapFile1" />

<!—Import Data-->
<MSBuild.ExtensionPack.Crm.Data TaskAction="Import" OrganizationUrl="http://CRMServer/organization1" DataMapName="Entity1DataMap" SourceEntityName="entity1" TargetEntityName="entity1" FilePath="C:\DataFile1.csv" />

</Target>

The first step in the script is creating a new CRM organisation. The task used is “MSBuild.ExtensionPack.Crm.Organization” with a task action of “Create”. It takes as parameters the CRM instance’s deployment URL, the name and display name of the organisation, as well as the SQL Server instance and the SSRS URL. The timeout parameter is optional; I am specifying it to prevent the deployment script from waiting indefinitely.

Once the organisation is created, the next step is to set certain organisation settings. Again, the task “MSBuild.ExtensionPack.Crm.Organization”, this time with the task action “UpdateSetting”, allows this. The task takes an ItemGroup of setting names and values as a parameter.

The next step is to import a managed solution into the newly created organisation. For this, the task used is “MSBuild.ExtensionPack.Crm.Solution” with a task action of “Import”. The task requires the path where the solution file is placed, and the name and extension of the solution file. Also required are parameters specifying whether to overwrite any existing customisations in the target organisation and whether to trigger CRM plug-ins and workflows as the solution is imported.

The final two steps simply import a data map and a data file into the newly created organisation. The parameters are self-explanatory. The MSBuild Extension Pack contains some other useful CRM tasks; for more details, read the project documentation at http://msbuildextensionpack.com/.

This concludes our discussion of an ALM process for solutions involving Microsoft Dynamics CRM. I hope you find this series useful, and please do give your feedback.

Sunday 16 March 2014

ALM with Microsoft Dynamics CRM – Setting up Team Build

This is the third of a multi-part series suggesting an ALM process in  projects where Microsoft Dynamics CRM is used as a data store.

My previous blog post was about setting up a Development Build for developers so that they can build the system (including all the latest Microsoft Dynamics CRM artefacts) end to end. In this post, I will write about setting up a Team Build.

The purpose of the Team Build is to compile and build all system artefacts to produce a deployable package. The package is then read by the deployment scripts to deploy the system to a target environment. For a very simple project, the deliverable may be an executable or an MSI. For a more complicated system, it may include published websites, assemblies, databases, etc. For Microsoft Dynamics CRM, the deliverables will be a managed / unmanaged solution along with artefacts such as data maps, data import files, de-duplication rules, SSRS reports, etc.

CRM Deployment Overview:

Before describing the team build, let’s first take a brief look at what the CRM deployment script would do.

  1. Create a new CRM Organisation.
  2. Set Organisation settings.
  3. Import Solution.
  4. Import Data Maps.
  5. Import Data.
  6. Import Bulk Deletion Operations.
  7. Publish SSRS Reports.
  8. Set Field Level Security.
  9. Publish unpublished workflows.

The above is one of several possibilities and might not meet your exact requirements. For example, your solution might have to be deployed to an existing organisation, in which case step 1 is not needed. I will describe the deployment process in more detail in the next post.

Structuring CRM Package

Having taken a look at how the deployment of CRM would take place, let’s look at the Dynamics CRM deliverables and how to structure them in the deployment package. Some of the deliverables (such as plug-in assemblies) need to be compiled; some need to be taken straight from source control. In any case, it is essential that the deliverables are taken from source control and not from a CRM development instance. The following diagram describes how I would structure the deliverables in the CRM folder.

Sample CRM Folder Structure

All these folders are contents of the CRM folder included in the cabinet file produced by the build. Let’s have a look at each of the folders:

  1. Assemblies: The folder contains Microsoft Dynamics CRM deployment assemblies such as Microsoft.Xrm.Sdk.Deployment.dll.
  2. BulkDeleteOperations: The folder contains the bulk deletion operation files exported from the development instance of CRM.
  3. Data: The folder contains initialisation data for the system. It contains a csv file for each entity that needs initialisation data, as well as a data map file.
  4. DedupeRules: Contains the de-duplication rules for entities.
  5. FieldLevelSecurity: Contains team associations for field level security of custom and out-of-the-box entities.
  6. Reports: Contains details of the reports to be published.
  7. Settings: Contains organisation setting details.
  8. Solutions: Contains the managed or unmanaged solutions that contain all the customisations.
  9. Workflows: Imported solutions do not have their workflows enabled automatically. This folder contains information about the workflows that need to be enabled.

Structuring CRM Source

The Microsoft Dynamics CRM source code is structured just as the CRM SDK creates it. It will be built as part of the compilation of the CRM solution in the team build. The following diagram shows a typical structure of the CRM source code.

image

Team Build:

The Team Build will take the contents of the above-mentioned folders, apart from the Solutions folder, straight from source control. The data csv files are maintained in source control, while other files such as data import files, bulk deletion operations, de-duplication rules, etc. are exported from the CRM development instance and checked in to source control.

The solutions, on the other hand, are created by the Team Build using CRM’s SolutionPackager utility. However, before the solution is packaged, a mapping file should be created to map the plug-in assemblies correctly. The FileMapping.ps1 file performs this action. The following target in your team build will package the CRM solution for you.

<Target Name="PackageCRMSolution">
  <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Message="...Packaging CRM Solution" Status="Succeeded" Condition="'$(BuildUri)' != ''"/>

  <!-- Copying CRM deployment files-->
  <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Message="......Copying CRM Deployment files" Status="Succeeded" Condition="'$(BuildUri)' != ''"/>
  <ItemGroup>
    <CrmDeploymentFiles Include="$(SolutionRoot)\Build\Deployment\CRM\**\*.*" Exclude="$(SolutionRoot)\Build\Deployment\CRM\Solutions\SolutionFiles\*.zip"/>
  </ItemGroup>
  <Microsoft.Build.Tasks.Copy SourceFiles="@(CrmDeploymentFiles)" DestinationFiles="@(CrmDeploymentFiles-&gt;'$(BinariesRoot)\Release\Server\CRM\%(RecursiveDir)%(Filename)%(Extension)')" />

  <!-- File Mapping -->
  <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Message="......Run File Mapping" Status="Succeeded" Condition="'$(BuildUri)' != ''"/>
  <Microsoft.Build.Tasks.Exec command="powershell $(BuildToolsPath)\FileMapping.ps1 -binarySearchLocation &quot;$(BinariesRoot)\Release\Server\CRM&quot; -unpackFolderLocation &quot;$(SolutionRoot)\Source\CRM\Solution1&quot; -outputLocation &quot;$(BinariesRoot)\Release\Server\CRM&quot;" />

  <!-- Solution Packager UnManaged-->
  <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Message="......Package CRM solution - Unmanaged" Status="Succeeded" Condition="'$(BuildUri)' != ''" />
  <Microsoft.Build.Tasks.Exec command="$(BuildToolsPath)\SolutionPackager /a:Pack /z:&quot;$(BinariesRoot)\Release\Server\CRM\Solutions\CrmSolution1_1_0_0_0_unmanaged.zip&quot; /f:&quot;$(SolutionRoot)\Source\CRM\Solution1&quot; /p:Unmanaged /m:&quot;$(BinariesRoot)\Release\Server\CRM\mapping.xml&quot;"/>

  <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Message="......Package CRM solution - Managed" Status="Succeeded" Condition="'$(BuildUri)' != ''"/>
  <Microsoft.Build.Tasks.Exec command="$(BuildToolsPath)\SolutionPackager /a:Pack /z:&quot;$(BinariesRoot)\Release\Server\CRM\Solutions\CrmSolution_1_0_0_0_managed.zip&quot; /f:&quot;$(SolutionRoot)\Source\CRM\Solution1&quot; /p:Managed /m:&quot;$(BinariesRoot)\Release\Server\CRM\mapping.xml&quot;"/>

  <ItemGroup>
    <CRMFilesToCleanUp Include="$(BinariesRoot)\Release\Server\CRM\*.*" Exclude="$(BinariesRoot)\Release\Server\CRM\*.zip" />
  </ItemGroup>
  <Delete Files="@(CRMFilesToCleanUp)" Condition="@(CRMFilesToCleanUp) != ''" />

  <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Message="...Completed Packaging CRM Solution" Status="Succeeded" Condition="'$(BuildUri)' != ''"/>   
</Target>

Once the above target is included in your team build, the build will produce the CRM deployment folder as described above. I like to create a cabinet (.cab) file out of all the files in the drop folder, but that is certainly optional. In my next post, I will write about the deployment script to deploy the CRM deliverable files produced by the team build.


Wednesday 12 March 2014

ALM with Microsoft Dynamics CRM – Setting up a Development Build

This is the second of a multi-part series suggesting an ALM process in  projects where Microsoft Dynamics CRM is used as a data store

In my previous blog post, I wrote about establishing an ALM process for projects involving Microsoft Dynamics CRM. The greatest challenge in projects with Microsoft Dynamics CRM is ensuring that the system is restored to a known baseline state and the deployment process is applied such that it deploys code, customisations and data in a repeatable and reliable way. In my post, I wrote about three constituent pieces of the ALM process
  • Development Build
  • Team Build
  • Deployment

In this post, I will elaborate on the “development build” part of the process.

Development Build

The purpose of the development build is threefold:

1) To ensure that the complete solution can be compiled end-to-end - A typical software solution consists of more than one Visual Studio solution, with inter-dependencies between them, i.e. libraries from one solution are used by other solutions. Before checking in changes, a developer needs to ensure that there are no build breaks in any of the dependent solutions. The dev build will build all the Visual Studio solutions in order, placing the output of each in the location from which the dependent solutions reference it.


2) To set up a complete, isolated environment locally for developers – A software solution usually has quite a few artefacts such as Active Directory users/groups, databases, web services, Windows services, etc. Typically a developer works on one part of it at a time. The development build sets up a scaled-down system, allowing him/her to test their own area of work without having to rely on an integration environment.


3) To run integration tests - Executing integration tests in one form or another is vital in ensuring that developers are not destabilising the system as they check in. This is especially important for bigger teams. Some would argue that this should happen in Continuous Integration builds and in the Continuous Deployment process. In my experience, leaving it ONLY to the continuous deployment process makes finding errors more difficult and results in a large number of BVT (Build Verification Testing) failures.


CRM Development Build

If Microsoft Dynamics CRM is part of your end-to-end solution, you can include it in the development build process in one of the following two ways:

1) Have a local installation of Microsoft Dynamics CRM on your machine. Each run of the development build will compile the Dynamics CRM code base and deploy a new CRM organisation using the deployment scripts. The advantage of this approach is that you are always working from checked-in code and can be certain that what you have on your development machine is what will be deployed to your test environments.


2) Have your own CRM organisation in a shared “development” CRM server. With this approach, your CRM team maintains a database backup of a stable organisation that they have deployed to using the CRM deployment scripts. Your development build restores this database and imports it as your organisation.


Given the effort and resources needed to set up Dynamics CRM, and the fact that the CRM SDK is required to build the CRM codebase, I prefer option 2. The downside is that you are relying on your CRM team to provide a stable organisation. However, the advantages are not needing the CRM development tools locally and quicker development build times.

The following diagram illustrates how the CRM organisation is imported during the build process.


 CRM Dev Build

Like any other ALM process, the development build process should be repeatable. This means that it should contain the following sequence of actions:

  1. Tear Down
  2. Build
  3. Deploy
  4. Start
  5. Test
Or, roughly, it would be something like the following (I have deliberately left out the tear down and deployment of other artefacts to keep it simple):

<Target Name="Build" DependsOnTargets="TearDownCrm;Build;DeployCrm"/>

The teardown script involves running PowerShell cmdlets on the remote CRM server. For this, remote PowerShell should be enabled on the server. Once it is enabled, you can use the following MSBuild task to execute PowerShell cmdlets on the CRM server remotely:

<UsingTask TaskName="PSExecTask" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v12.0.dll" >

  <ParameterGroup>
    <Server ParameterType="System.String" Required="true" />
    <Command ParameterType="System.String" Required="true" />
    <Args ParameterType="System.String" Required="false" />
    <FailOnError ParameterType="System.Boolean" Required="false" Output="false"/>
    <ExePath ParameterType="System.String" Required="true" Output="false"/>
  </ParameterGroup>
  <Task>
    <Using Namespace="System"/>
    <Using Namespace="System.IO"/>
    <Using Namespace=" System.Diagnostics"/>
    <Code Type="Fragment" Language="cs">
    <![CDATA[
        ProcessStartInfo start = new ProcessStartInfo();
        start.Verb = "runas";
        start.FileName = ExePath; // Specify exe name.
        Log.LogMessage(@"\\" + Server + @" " + Args + " " + Command);
        start.Arguments = @"\\" + Server + @" " + Args + " " + Command;
        start.UseShellExecute = false;
        start.RedirectStandardOutput = true;
        start.RedirectStandardError = true;
        try
        {
          using (Process process = Process.Start(start))
          {
            using (StreamReader reader = process.StandardOutput)
            {
              string result;
              result = reader.ReadToEnd();
              Log.LogMessage(result);
            }
            if ((process.ExitCode != 0) && (FailOnError == true))
            {
              Log.LogError("Exit code = {0}", process.ExitCode);
            }
            else
            {
              Log.LogMessage("Exit code = {0}", process.ExitCode);
            }
          }
        }
        catch (Exception ex)
        {
          Log.LogError("PSExec task failed: " + ex.ToString());
        }]]>

    </Code>
  </Task>
</UsingTask>

The Teardown script is shown below

<Target Name="TearDownCrm” Condition="’$(SkipCrmDeployment)’ != ‘true’”>

<PSExecTask Server="$(CRMWEBComputerName” Condition="powershell Add-PSSnapin Microsoft.Crm.Powershell; Disable-CrmOrganisation $(CrmNewOrganisationName); Remote-CrmOrganisation $(CrmNewOrganisationName)” ExePath="$(PsExec)">

<MSBuild.ExtensionPack.SqlServer.SqlExecute TaskAction="Execute"
                                                CommandTimeout="120"
                                                Retry="true"
                                                Sql="ALTER DATABASE $(CrmNewDatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DROP DATABASE $(CrmNewDatabaseName);"
                                                ConnectionString="$(CrmDatabaseServerConnectionString)"
                                                ContinueOnError="true"/>
</Target>

The variables used in the script are pretty much self-explanatory. Note the ContinueOnError on the teardown: this is there because on the first run of the build there won’t be any database or organisation set up yet.


The Deployment script is shown below


<Target Name="DeployCrm" DependsOnTargets="RestoreCrmOrganisationDatabase;
                                                        ImportCrmOrganisation
                                      Condition="'$(SkipCrmDeployment)' != 'true'" />
<Target Name="RestoreCrmOrganisationDatabase">
  <MSBuild.ExtensionPack.SqlServer.SqlExecute TaskAction="Execute"
                                                CommandTimeout="120"
                                                Retry="true"
                                                Sql="IF EXISTS(Select * from sysdatabases WHERE NAME LIKE '$(CrmNewDatabaseName)') ALTER DATABASE $(CrmNewDatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE; RESTORE DATABASE $(CrmNewDatabaseName) FROM DISK = N'$(CrmDatabaseBackupFile)' WITH REPLACE, FILE = 1, MOVE N'MSCRM' TO N'$(CrmDatabaseDataFileLocation)\$(CrmNewDatabaseName).mdf', MOVE N'MSCRM_log' TO N'$(CrmDatabaseDataFileLocation)\$(CrmNewDatabaseName)_log.ldf';"
                                               ConnectionString="$(CrmDatabaseServerConnectionString)"/>
</Target>  

<Target Name="ImportCrmOrganisation">
    <Copy SourceFiles="$(MSBuildProjectDirectory)\Resources\$(CrmUserMappingFile)" DestinationFiles="$(CrmFileStore)\$(CrmNewOrganisationName).xml"/>
    <MSBuild.ExtensionPack.FileSystem.Detokenise TaskAction="Detokenise" TargetFiles="$(CrmFileStore)\$(CrmNewOrganisationName).xml" DisplayFiles="true"/>
<PSExecTask Server="$(CRMWEBComputerName)"
            Command="powershell $(CrmImportOrganisationScriptPath) -sqlServerInstance '$(CrmSqlServerInstance)' -databaseName '$(CrmNewDatabaseName)' -reportServerUrl '$(CrmReportServerUrl)' -orgDisplayName $(CrmNewOrganisationName) -orgName $(CrmNewOrganisationName) -userMappingXmlFile '$(CrmFileStore)\$(CrmNewOrganisationName).xml'"
                        ExePath="$(PSExec)"/>
</Target>

The deployment involves restoring the database, which is done using the “SqlExecute” task in the MSBuild Extension Pack. Once the database is restored, the next action is to execute the “Import-CrmOrganization” cmdlet on the remote server. Once imported, the organisation is created from the database backup and is available to the developer.

In the next post, I will discuss setting up a Team Build for solutions containing Microsoft Dynamics CRM.


Friday 7 March 2014

ALM with Microsoft Dynamics CRM

This is the first of a multi-part series suggesting an ALM process in  projects where Microsoft Dynamics CRM is used as a data store.
 

In the last few years, I have worked on several solutions that use Microsoft Dynamics CRM as the primary back-office system. Dynamics CRM’s rich feature set and its ability to act as an XRM (eXtended Relationship Management) platform make it an ideal alternative to bespoke database systems. However, in each of these solutions there were some externally exposed services/systems that needed to access the data in Dynamics CRM, so there was a services layer exposing the required data to them. In other words, Dynamics CRM acts as a data store for other systems.

With Microsoft Dynamics CRM in the frame, establishing an ALM process has an additional challenge. This comes from the fact that Dynamics CRM is essentially a platform onto which customisations (such as entities, plug-ins, workflows, data, etc.) are deployed. Moreover, the customisations are additive. Because of this, setting up a repeatable process is tricky. It is also vital that the “baseline” of the Dynamics CRM system is properly captured in any build and deployment process.

Development Build

One of the foremost activities at the start of development is to get a development build going. The purpose of the development build is to make sure that all constituent parts of the system are compiled, the unit tests are run and some level of integration testing is done on development workstations. Developers are, of course, required to run the development build before they check in.

The same can be achieved with gated builds in TFS, but from experience, running integration and Coded UI tests in a team build is somewhat “high maintenance”, and developers don’t get the isolation that a development build on their local machines gives them.

 

Using CRM in Development Build

So, with Microsoft Dynamics CRM in the picture, what should the process be? The very first thing for you to decide is how to achieve “isolation of environment” for developers. This is needed because each developer will be running his / her own set of integration tests. There are two options:

 1. Local CRM Instance
  • Each developer has a Microsoft Dynamics CRM server deployed locally.
  • A developer is a deployment admin on his / her own CRM instance.
  • The CRM team checks CRM code and deployable packages into the repository.
  • Each run of the development build sets up a CRM organisation by compiling checked-in code and using deployment and data files from the repository.
 2. Single Development CRM Instance
  • A single CRM server for all developers or a group of developers.
  • Each developer has his / her own CRM organisation.
  • All developers are deployment administrators on the development CRM instance.
  • The CRM team checks CRM code and deployable packages into the repository AND also “publishes” a CRM organisation by taking a backup of a stable version.
  • Each run of the development build sets up the CRM organisation by restoring the “published” CRM database backup and running post-restore steps such as mapping users.

Each of the two options has advantages and disadvantages.

A local CRM instance on each development machine provides more isolation but requires more local resources and more CRM knowledge from the developers. It also means that the development build is slower, as setting up an organisation is slow, and that the CRM SDK needs to be installed on each developer’s machine.

The single development CRM instance means that all the developers are dependent upon the availability of one server; however, most of the CRM details are hidden from them. As long as they are able to restore an organisation database and import an organisation, they are fine.

PLEASE NOTE: The CRM team should always work on a separate instance either way, because publishing customisations in CRM is a resource-intensive operation, and doing it on a CRM server used by all the other developers will impact the velocity of the team.

Team Build

Like any other project, two types of Team Builds should be set up.

Continuous Integration Build – Triggered with each check-in, the purpose of the CI build is to ensure that all checked-in code (including CRM customisation code) compiles cleanly and passes all quality gates.

Product Build – The product build is triggered periodically (overnight in our case) and produces deployable packages from the source repository. In terms of CRM, the deployable packages are the managed and unmanaged solution zip files along with scripts for organisation settings, import data, data maps, etc.

PLEASE NOTE: It is important that you generate your CRM package from the source repository and not from the CRM development instance (for example by taking a backup of the organisation database), otherwise your source repository will be side-tracked and people will make code changes and fixes in the development environment without ever checking in source code, population scripts, etc.

The structure of the CRM deployment package warrants a separate post, which will be my next one.

Deployment

The last piece of the ALM process is deployment. The end goal is to have a deployment process that can deploy the package in a consistent and reliable way. The process needs to be repeatable so that you can run it every time you move from development to test to staging and then to production. This ensures that you don’t get any surprises when you are deploying to production.

As mentioned earlier, any deployment to Microsoft Dynamics CRM is additive. This means that you need to make sure that the target system is properly baselined. For example, if your production environment has already got some customisations from, say, another managed CRM solution, make sure they are present in your functional test, pre-production and any other environments as well.

I prefer to use MSBuild-based scripts for deployments. The MSBuild engine ships with the .Net Framework, and libraries such as the “MSBuild Extension Pack” provide a rich set of functionality. You can equally well use PowerShell; in fact, Dynamics CRM is well supported in PowerShell. However, the latest release of the MSBuild Extension Pack now has support for CRM operations such as Create Organization, Import Solution, etc.

Updated 12/03/2014: In my next blog post, I have elaborated on the development build process with sample code.


Tuesday 4 March 2014

The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\WebApplications\Microsoft.WebApplication.targets" was not found.

This is another “How I got burnt today” post, shared here just in case anyone else hits the same issue. I moved one of my team projects from TFS 2012 to TFS 2013. After moving the solution, I noticed that one of my builds started producing the following error:

The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

This build compiles a solution containing an ASP.Net MVC project, which is what it was complaining about. The solution had already been migrated to Visual Studio 2013, so why was it complaining about files for Visual Studio 2012 (v11.0) not being found? Looking more closely, it occurred to me that it might be the 2012 team build template, and indeed it was. Although we had migrated our project to TFS 2013, we were still using the TFS 2012 build template. The fix was simple – passing the Visual Studio version as an MSBuild argument in the build definition (typically /p:VisualStudioVersion=12.0 for Visual Studio 2013, as shown below) and voilà, it was all resolved.

msbuild arguments

Monday 17 February 2014

Git For TFVC Users

One of the biggest features of Team Foundation Server 2013 is its support for Git. Git is a popular source code repository, especially in the open source community. However, unlike Team Foundation Version Control (TFVC), Git is a distributed version control system. This means that for regular users of TFVC, there are some conceptual differences to consider when working with Git. In this post, I will explain some of these differences and also illustrate the Git equivalents of the most common TFVC operations.


Centralized Vs Distributed Version Control Systems

TFVC is a centralised version control system while Git is a distributed one. That is to say, at any point in time the TFVC server contains the “true” version of any file. Anyone making a change is expected to fetch this version, make the change and then check in against it. For this to happen, the client must be connected to the server.

Git, meanwhile, is a distributed version control system where each client machine keeps a local copy of the repository. Any user working on the client can keep working on the local version of a file without talking to the server. It is only when the client tries to “push” changes that contact with the server is made. Since multiple people can work on the same file in their local repositories, there might be conflicts. Git addresses this by requiring users to resolve all conflicts with the remote repository before they “push” their changes.


Here, I am “disregarding” local workspaces, a feature that was introduced in TFS 2012. Local workspaces have some similarities with Git in that they promote a “Change –> Merge –> Push” way of working rather than “Checkout –> Change –> CheckIn”. However, the fact remains that TFVC is a centralised system.


Changesets Vs Commits

All changes in TFVC are checked in as changesets. A changeset is a single unit of change that can be checked in and comprises the list of files that have changed. A changeset can be rolled back and merged to different branches.

The equivalent in Git is the commit. Commits are essentially the same as changesets except that they exist in local repositories. Once pushed to the remote repository, they are essentially the same as changesets in TFVC.


Until commits are pushed, they reside only in the local repository and are not visible to other users who clone or fetch the remote repository. You can even amend your commits.


Branching

Branching in TFVC is a very different concept from branching in Git. In TFVC, a branch is essentially a “deep copy” of the original branch. The branching operation is time consuming and is always executed on the server. Once a new branch is created, users can work on it completely independently without even needing to get a local copy of its parent. Each branch has its own version history.

In Git, a branch is lightweight. In effect, creating a branch merely creates a new head pointer to the version pointed to by the parent branch. Once the new branch is created, the new head pointer moves as new changes are committed. In essence, all branches exist within the same file path. The only difference between the active branch and the inactive branches is which head pointer is currently selected.


Merging

It can safely be stated that merging is one of Git’s strongest features. TFVC performs merging using a 3-way merge algorithm, in which it attempts to combine the changes from the baseline to the newer version with the changes from the baseline to the version of the file in the current workspace. If it can’t resolve a conflict, it asks the user to resolve it.
Git supports 3-way merge as well as a host of other merge strategies such as recursive, octopus, etc. While it picks a suitable strategy for the situation, the user also has the option to select the merge strategy through a command line parameter.


Security


TFVC is fully integrated with Active Directory and uses it for security. The online version of TFS can use an authentication service such as Windows Live to authenticate users. Moreover, there are options to set permissions on individual files, folders and branches.
Git does not have such an extensive security mechanism. Users' permissions can be restricted, for example making them read-only on a repository, but there is no support for setting permissions on individual files and branches.


Common Operations

Since the working practices are quite different between Git and TFVC, there isn’t a simple one-to-one mapping for quite a few operations, but I will make an attempt with the following comparison.



Get Latest Version (TFVC)

· Fetches the latest version of files from the TFS server to the client.
· The operation can be performed for an entire team project and any folder beneath it.
· There must be a local workspace present with mappings from server folder paths to local directory paths.

There are three Git commands that correspond to Get Latest Version in TFVC.

clone (Git)
· Creates a copy of the remote repository locally.
· To be used when you need to get the contents of the remote repository for the first time.
· Note the difference in terminology: it is a copy and not a fetch. In fact, if the disk on the server gets corrupted, the clone can be copied back to the server.
· The repository has to be cloned in its entirety; you cannot clone parts of the repository.

fetch (Git)
· Used to get the contents of the remote repository.
· To be used when there is already a local repository.
· Doesn’t attempt to merge the changes from the remote repository into the local repository.

pull (Git)
· Similar to fetch, except that it also attempts to merge the changes from the remote repository into the local repository.

Add To Source Control (TFVC)
· Adds one or more files or folders to the TFS server. The files are only marked for addition and will not be added until the changes are checked in.

init and add (Git)
· init initialises a new repository from the existing directory.
· Once the directory is initialised, the files must be added using the add command.
· Changes are not made permanent in the repository until there is a commit, and are not sent to the server until there is a push.

Check Out (TFVC)
· Enables the user to change files locally.
· TFS doesn’t fetch the latest version of the file, but users should work with the latest version or they will get merge conflicts when they attempt to check in.
· The changes stay local on the end user’s machine until they are checked in.

branch and checkout (Git)
· The checkout command in Git switches to the given branch in the local repository.
· Occasionally, the user will need to create a branch before the checkout.
· The branch command in Git creates a new head to track the version. Creating a branch means that the user can keep working on a completely separate version with the option to revert back to the version from which the branch was taken (more on this later).

Check-In (TFVC)
· Enables users to publish the changes made locally to the server.
· The check-in command checks the version of the file that was taken as a baseline before changes were made locally. If there is a modified version available on the server, the user is required to merge.

commit and push (Git)
· The commit command publishes the changes made by the user to the local repository. Since the change is only published to the local repository, there is no equivalent in TFVC.
· The push command publishes the changes made locally to the remote repository.
· Like TFVC, if there are subsequent changes made on the remote repository, Git requires the changes to be merged.

Shelve Changes (TFVC)
· “Parks” the changes made locally on the server so that they can be fetched either by the same or another user.

stash (Git)
· There is no equivalent of shelvesets in Git. The closest is the stash command, which stores the changes in the local repository. However, unlike shelvesets, these changes are not visible on the server.

History (TFVC)
· Displays the history of changes made to a particular branch, folder or file.
· Some features, such as annotation, are provided by the TFS Power Tools.

log, blame and annotate (Git)
· log displays the list of commits for the currently active branch.
· blame displays the revisions and the author of each revision for each line of the given file.
· annotate annotates the given file.

Delete / Destroy (TFVC)
· The delete command deletes the given file or folder.
· The destroy command permanently deletes files and folders.
· The deletion / destruction is only performed on the server after the deleted files are checked in.

rm and push (Git)
· rm removes the file from the local repository.
· The removal is published to the remote repository after a push.
· There isn’t an equivalent of the destroy command in Git. However, the reset HEAD command can be used to remove the history of older versions.




Thursday 16 January 2014

Comparing two Shelvesets

Ever been in a situation where you have created a shelveset for a colleague to review or extend, and he/she has then created another shelveset for you? It’s often quite time consuming to find out what changes have been made on top of your shelved changes. The way to compare the contents is to unshelve one of them and then compare the files in the other with the workspace versions of the files. This is not ideal, especially if you already have some other pending changes.

Since neither Visual Studio nor the Team Foundation Server Power Tools provides this functionality, I thought it would be useful to create a Visual Studio extension that allows two shelvesets to be compared. I had been meaning to do it for some time but only recently found the time. The extension is now created and published in the Visual Studio Gallery. I have made it an open source project and the source code is available at https://shelvesetcomparer.codeplex.com/.

Once installed, the extension appears as a navigation button in your Team Explorer window.

PreviewImage

Clicking the button will open up the “Shelveset Comparer” window, showing your shelvesets in the default view.

[Screenshot]

You can type another person’s name to fetch his/her shelvesets. Once you have selected the two shelvesets, click the “Compare” button and it will list the files in the two shelvesets side by side. It also does a binary comparison of the common files to show whether they have the same content or not.

[Screenshot]

Double click on any file and you will see the contents of the selected files with the changes highlighted.

[Screenshot]

Please note that the file comparison is only for reference. Neither of the two files shown in the comparison window is downloaded into your workspace, nor is either the working version of the file.

I hope this extension is useful for the developer community, and I am looking forward to hearing your feedback and suggestions. Of course, if you want to contribute, please drop me a line. Happy coding!!