
Automating CRM Test Cases

The first level of CRM automated test cases consists of the database and schema unit tests that validate the general structure and data of a deployed CRM solution. 
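As a hedged illustration of such a level-one test (the service initialization is omitted, and the attribute name new_riskscore is a hypothetical example from a deployed solution), a unit test can validate the deployed schema through the SDK metadata API:

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

[TestClass]
public class SchemaTests
{
    private static IOrganizationService _service; // assumed to be initialized elsewhere

    [TestMethod]
    public void Account_ShouldContainCustomRiskScoreAttribute()
    {
        // Download the attribute metadata of the deployed account entity.
        var request = new RetrieveEntityRequest
        {
            LogicalName = "account",
            EntityFilters = EntityFilters.Attributes
        };
        var metadata = ((RetrieveEntityResponse)_service.Execute(request)).EntityMetadata;

        // Validate the deployed schema against the expected structure.
        Assert.IsTrue(metadata.Attributes.Any(a => a.LogicalName == "new_riskscore"));
    }
}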


The second level of CRM tests should include more extensive API tests to check the consistency of CRM query and update operations and test the functionalities of deployed plug-ins and workflows. Negative and positive test scenarios should also be included to test the error and exception branches of the application. 
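A hedged sketch of such a positive/negative test pair (entity and attribute names are illustrative, and the service initialization is again omitted) might look like this:

using System;
using System.ServiceModel;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

[TestClass]
public class ApiTests
{
    private static IOrganizationService _service; // assumed to be initialized elsewhere

    [TestMethod]
    public void CreateAndRetrieveAccount_RoundTripsName()
    {
        // Positive scenario: create a record, then query it back.
        var account = new Entity("account");
        account["name"] = "Contoso Test " + Guid.NewGuid();
        Guid id = _service.Create(account);

        Entity fetched = _service.Retrieve("account", id, new ColumnSet("name"));
        Assert.AreEqual(account["name"], fetched["name"]);
    }

    [TestMethod]
    [ExpectedException(typeof(FaultException<OrganizationServiceFault>))]
    public void CreateAccount_WithUnknownAttribute_Throws()
    {
        // Negative scenario: the error branch must surface a service fault.
        var account = new Entity("account");
        account["nonexistent_attribute"] = 1; // deliberately invalid
        _service.Create(account);
    }
}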


The third level of automated CRM tests consists of the functional tests. Functional testing of CRM solutions is similar to testing any ASP.NET or other web application, leveraging the web testing framework of the Visual Studio environment.

Two possible automation techniques are available using Visual Studio for automated UI tests: Captured web tests and Coded UI tests. 


The first scenario is recommended to quickly capture and create automated test cases for a specific CRM form or functionality. The captured test scenario will contain all web requests and responses executed by the client browser. The Visual Studio test framework provides many tools to pre- and post-process the tests; however, to be able to rerun a captured Web Test, a lot of manual work is still required (for example, extracting CRM view IDs, extracting entity IDs, parameterizing data entries, and randomizing search clauses and data). The captured and parameterized Web Tests can usually support only a specific functional version of a CRM form. After changing the flow of the tested functionality, a recapture of the test is usually required.



Creating coded UI tests usually requires more effort for developers but can be more general, making it possible to support even larger structural changes on the tested forms. Coded UI tests can be created either by converting a captured Web Test, or by reusing the existing samples and common components provided by the CRM 2011 Performance Toolkit. 
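As a hedged sketch of the coded approach (the URL is a placeholder, and the test body is left to hand-coded or converted actions), a coded UI test skeleton might look like this:

using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class CrmFormUiTests
{
    [TestMethod]
    public void NewAccountForm_Opens()
    {
        // Launch the browser against the CRM web client (URL is a placeholder).
        BrowserWindow browser = BrowserWindow.Launch(
            new Uri("https://crm.contoso.com/main.aspx?etn=account&pagetype=entityrecord"));

        // Hand-coded or converted UI actions against the form would follow here,
        // typically built from the CRM 2011 Performance Toolkit common components.
        Assert.IsTrue(browser.Exists);
    }
}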


Using automated tests and Visual Studio for tracking requirements and connecting work-items to check-ins provides the ability for bugs and resolution check-ins to be traced back to the specific requirement and test scenario. In fact, with the Visual Studio test framework, it is possible to track code changes and their impacts through to the application and test suites, including which test cases are required to be re-run. 


Note: To have this experience across all aspects of an enterprise Dynamics CRM delivery (including CRM customizations) requires an offline build process using the unpacked customization source tree. 

Dynamics 365 Community Event in Dubai on 9th and 10th February 2018

Dynamics 365 Saturday is a free technical and strategy event organised by Microsoft Dynamics Community MVPs for CRM and ERP professionals, technical consultants, and developers. Learn and share new skills while promoting best practices, helping organisations overcome the challenges of implementing a successful digital transformation strategy with Microsoft Dynamics 365.

Dynamics 365 Saturday will replace CRM Saturday to provide a single platform serving the whole Dynamics 365 community; the core customer experience values and ethics of CRM Saturday will continue to live on through 365 Saturday with the rest of the Dynamics community.

I have a session at this event on 10th February: "Dynamics 365 V9 New Features & Deprecations", at the Microsoft Dubai office in Media City between 13:00 and 14:00. I will also share my knowledge of Dynamics 365 at the Hackathon on 11th February.

My session focuses on those interested in taking the plunge into "code". It will cover the entire development structure, so attendees can easily see the differences between versions from a development perspective; it will be particularly helpful for those who work on upgrade projects.

For more information and the programme agenda, please visit: http://mwns.co/dj18

We would like to see you there.

CRM Solution Concepts


Solutions and layering are fundamental concepts within CRM that need to be fully appreciated and understood to construct an approach to lifecycle management for Dynamics CRM applications. There are three key concepts that need to be introduced:

§   Solution Packages: Act as containers for functionality required to be deployed as a unit

§   Layers: The consequence of a specific component being affected by change within one or more solutions

§   Managed Properties: The mechanism to control how layers interact with each other

Note: This appendix provides a “quick reference” overview of more detailed information that is presented in the Dynamics CRM 2011 SDK, in the section Package and Distribute Extensions.

Solution Packages

Solution packages are the container mechanism to transport Dynamics CRM configuration and customization between organizations. There are two types: unmanaged and managed.

Unmanaged

§   Contains only customizations from the unmanaged layer

§   Will not include dependent components

§   Imports only into the unmanaged layer:

       Ideal during development

       Everything within could be changed

       Overwrites existing customizations

§   Contains a complete copy of each component

Managed

§   Will get its own layer on import.

§   Analogous to an installer “.msi” file; it is a distribution package.

§   Contents cannot be directly changed.

o    This does not mean DRM.

§   Components may be customized where managed properties allow.

§   Contains only deltas for components that support merging (FormXML, Ribbon, and SiteMap).

Layers

Layers should be considered on a per-component basis. Although typically drawn to convey the effect of a solution on another solution, this is always at a component level.

Unmanaged

§   There is only one. It always exists.

§   All customizations made on this server/organization will reside here.

§   “Solutions” exist here as containers.

       Ownership by solutions is weak; by-reference only.

       No independence between solutions. If one component exists in two solutions, they both see the current state of the component equally.

§   New customizations made will trump all prior customizations. (They override the lower layers.)

§   Customizations cannot be undone, but they can be deleted.

 

Managed

§   There can be many:

o    Always one for the “system” solution.

o    Discrete layer for each managed solution package imported (at a per-component level).

§   Only created by importing a managed solution.

§   Time of import sets order of precedence.

§   You cannot customize components inside the layer.*

§   They can be deleted, replaced, and versioned.

§   Deleting a layer may delete data.

Managed Properties

Managed properties control how layers interact with each other; they control the level of customization achievable on top of components from a managed solution that has been imported. After releasing a managed solution, the managed properties can only be changed to reduce how restrictive they are.

§   Control order of precedence (which customizations to allow on top of an imported solution’s managed layer).

§   The ability to customize cannot be removed or disabled during solution update.

§   The ability to customize can be enabled during solution update.

§   By default, components are fully customizable.

In general, if the consumers of the solution are not known or trusted, or if the changes they might make could break the solution, it is recommended to lock down the solution's managed properties. These properties can be opened back up later to enable specific components to be customized as needed.
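To make this concrete, the following minimal sketch (to be run inside a method; the entity name new_loyaltycard and the service variable are hypothetical) locks down a component through the SDK before the solution is exported as managed, by turning off its IsCustomizable managed property:

using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

// Retrieve the metadata of the component to lock down.
var retrieve = new RetrieveEntityRequest
{
    LogicalName = "new_loyaltycard",   // hypothetical custom entity
    EntityFilters = EntityFilters.Entity
};
var metadata = ((RetrieveEntityResponse)service.Execute(retrieve)).EntityMetadata;

// Consumers of the exported managed solution will not be able
// to customize this entity.
metadata.IsCustomizable = new BooleanManagedProperty(false);

service.Execute(new UpdateEntityRequest { Entity = metadata });
service.Execute(new PublishAllXmlRequest());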

Merge Behavior

Unmanaged

New customizations trump all prior customizations, overriding the lower layers.

Important: Changes applied by importing an unmanaged solution cannot be uninstalled. Do not install an unmanaged solution if you want to roll back the changes.

Managed (No Overwrite)

§   Customizations in unmanaged layer are preserved.

§   Importing newer versions creates new managed layers directly above the previous version’s managed layer.

§   Importing the same version replaces contents within the existing layer.

§   Importing cannot remove pre-existing components.

§   Generate and import a minimal solution to “hotfix” a larger solution.

§   Reports, E-mail templates, and plug-in assemblies skip updates if they are not the “top” layer.

Managed (Overwrite)

The recommended approach is to always preserve customizations (no overwrite). If the updates are mandatory for the solution to function appropriately, then overwrite is needed:

§   Replaces the content of the “unmanaged” layer and makes the managed solution the top layer.

§   Ensures that updates included in the solution are effective.

§   Overwrites all customizations and should be used with caution.

§   Greater use of managed properties makes this approach less necessary.

Dependency Tracking

The solutions framework automatically tracks dependencies across components.

§   The following operations perform dependency checks/operations:

       Individual components: CRUD, Add Existing (to a solution).

       Solution: Import, Export, Delete (uninstall).

§   Dependencies are version agnostic. As long as the unique name/ID of the component and the package type match, a dependency is considered valid.

       Managed components cannot depend on unmanaged components.

Shared Publishers

§   Components in managed layers will be owned by the solution publisher.

§   Publisher owns the component, not the solution.

§   Components with the same name and publisher will be considered the same component.

§   Removing a solution does not remove a component when it is referenced by another solution using the same, shared publisher.

§   Be wary of predictable names and collisions.

       For web-resources, create names that imply virtual directories

 

DirectionsEMEA 2017 Madrid – Development on Dynamics 365 CRM

Directions EMEA is an independent conference for Microsoft Dynamics partners from the ERP and CRM channels focusing on the SMB market. It is organized by partners for partners. The conference is where Microsoft Dynamics Partners go to learn first-hand from Microsoft about the Microsoft Dynamics roadmap and the new features of the latest Dynamics NAV version (Tenerife). Following the concept of integrating CRM and ERP within Dynamics 365, Directions EMEA also offers a deep insight into Microsoft Dynamics 365 BE for Finance and Operations, Microsoft Dynamics 365 BE for Sales, and Microsoft Dynamics 365 BE for Marketing, and ensures a comprehensive understanding of the Cloud Solution Provider program.

Directions EMEA brings together over 2000 developers, implementers, technical experts, sales, marketing and executive/owner representatives from Microsoft Dynamics Partners. For independent software vendors, the event is a unique opportunity to show their solutions to the largest Microsoft Dynamics Partner forum and demonstrate their readiness for Dynamics 365 BE as SaaS providers.

The Directions conference provides the Dynamics community with a forum for knowledge sharing, networking and discovering new opportunities for future growth and collaboration. It is a must-attend event where partners can enhance and build their networks to reach a broader SMB market, learn about the latest product developments and tools, as well as enrich their operational and technical knowledge.

Since 2008 Directions EMEA has grown year by year. In 2016, it attracted almost 1800 attendees from 580 companies as well as 60 sponsors. 2017 is the 10th annual conference and will take place in Madrid on October 4 – 6, 2017.

Our CEO Baris Kanlica has a session at this event on Friday, 6th October, in the Venecia room. The session, "Dynamics 365 new development features and deprecations", is aimed at those new to CRM development and at CRM administrators interested in taking the plunge into "code" customization. It will cover the development structure of the Dynamics platform since version 2011, so attendees can easily see the differences between versions from a development perspective; it will be particularly helpful for those who work on upgrade projects.

See you there :)

https://mawens.co.uk/directionsemea-2017-madrid-development-on-dynamics-365-crm/

Improve Microsoft Dynamics 365/CRM service channel allocation performance

The OrganizationServiceProxy and DiscoveryServiceProxy service proxy classes provide the ability to establish a connection to and communicate with the available Microsoft Dynamics 365/CRM services. However, improper use of these APIs can reduce application performance. If you understand when and how to effectively leverage these service client APIs, available in the SDK, you can optimize communication traffic between your application and the Dynamics 365/CRM server.

 

When you establish a Windows Communication Foundation (WCF) service channel by using one of the service endpoints, for example, the Organization.svc, your application must perform two additional, time-consuming operations: 1) metadata download from the endpoint and 2) user authentication. You can improve performance if your application performs these operations a minimum number of times for each application session. The OrganizationServiceProxy constructor shown below performs both of these operations, in addition to allocating a new service channel, every time a service proxy object is created.

 

Violation Example

new OrganizationServiceProxy(new Uri(this.orgServiceUrl), null, this.Credentials, null)

A first step towards improving performance is to minimize the number of times your application uses this constructor to create an instance of the service proxy. If possible, use this constructor only one time during the application session and cache the returned reference for use in different code paths within your application. Notice that the returned service reference is not thread-safe, so multi-threaded applications will need to allocate one instance per thread. Applications must also call Dispose on the service proxy object before they terminate in order to free the resources allocated to the service channel.

 

The service proxy class performs the two previously mentioned operations, metadata download and user authentication, by using the following public methods available as client API's.

 

Guideline Example

IServiceManagement<IOrganizationService> sm = ServiceConfigurationFactory

        .CreateManagement<IOrganizationService>(new Uri(orgServiceUrl));

 

this.Credentials = sm.Authenticate(this.Credentials);

 

The CreateManagement method performs the metadata download while the Authenticate method authenticates the user. The returned objects from these methods are thread safe and can be statically cached by your application. 

 

As a more efficient alternative, you can then use these objects to construct a service proxy object via one of the other constructor overloads (depending on the authentication scenario):

 

new OrganizationServiceProxy(sm, this.Credentials.ClientCredentials)

 

new OrganizationServiceProxy(sm, this.Credentials.SecurityTokenResponse)

These constructor overloads bypass the two aforementioned operations and use the referenced service management and authentication credentials passed as method arguments.  By caching the service management and authenticated credential objects, your application can more efficiently construct the service proxy objects and allocate multiple service channels as necessary over the lifetime of an application session without repeating the costly (and redundant) operations. 
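Putting the pieces together, a minimal sketch of this pattern might look like the following (the organization URL is a placeholder, and the credential setup depends on your authentication scenario):

using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;

public static class CrmConnection
{
    // Both objects are thread safe; create them once per application
    // session and cache them statically.
    private static readonly IServiceManagement<IOrganizationService> ServiceManagement =
        ServiceConfigurationFactory.CreateManagement<IOrganizationService>(
            new Uri("https://myorg.crm.dynamics.com/XRMServices/2011/Organization.svc"));

    private static readonly AuthenticationCredentials Credentials =
        ServiceManagement.Authenticate(new AuthenticationCredentials
        {
            // Populate ClientCredentials here for your authentication scenario.
        });

    // Proxies are cheap to construct from the cached objects, but they are
    // not thread safe; allocate one per thread and dispose it when done.
    public static OrganizationServiceProxy CreateProxy()
    {
        return new OrganizationServiceProxy(ServiceManagement, Credentials.ClientCredentials);
    }
}

// Usage (inside a method):
using (OrganizationServiceProxy proxy = CrmConnection.CreateProxy())
{
    var who = (WhoAmIResponse)proxy.Execute(new WhoAmIRequest());
}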

 

If you enable early-bound types on OrganizationServiceProxy through one of the EnableProxyTypes methods, you must do the same on all service proxies that are created from the cached IServiceManagement object. If your application uses the metadata, we recommend that it caches the metadata that it retrieves and periodically calls the RetrieveTimestampRequest message to determine whether it must refresh the cache.
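A hedged sketch of that metadata-cache refresh (the helper class and cache fields are illustrative assumptions):

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

public static class MetadataCache
{
    private static string _cachedTimestamp;
    private static EntityMetadata[] _cachedEntities;

    public static EntityMetadata[] GetEntities(IOrganizationService service)
    {
        // Ask the server for the current metadata version stamp.
        var current = ((RetrieveTimestampResponse)service.Execute(
            new RetrieveTimestampRequest())).Timestamp;

        if (_cachedEntities == null || current != _cachedTimestamp)
        {
            // The cache is stale: re-download the entity metadata.
            var response = (RetrieveAllEntitiesResponse)service.Execute(
                new RetrieveAllEntitiesRequest { EntityFilters = EntityFilters.Entity });
            _cachedEntities = response.EntityMetadata;
            _cachedTimestamp = current;
        }
        return _cachedEntities;
    }
}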

 

In addition, monitor your WCF security token (Token) and refresh it before it expires so that you do not lose the token and have to start over with authentication. To check the token, create a custom class that inherits from the OrganizationServiceProxy or DiscoveryServiceProxy class and that implements the business logic to check the token. Or wrap the proxy classes in a new class. Another technique is to explicitly check the token before each call to the web service. Example code that demonstrates these techniques can be found in the ManagedTokenDiscoveryServiceProxy, ManagedTokenOrganizationServiceProxy, and AutoRefreshSecurityToken classes in the Helper Code: ServerConnection Class topic.
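A rough sketch of such a token check, modeled on the AutoRefreshSecurityToken sample (the method belongs in your proxy wrapper class, and the 15-minute safety margin is an assumption):

using System;
using Microsoft.Xrm.Sdk.Client;

// Call this before each web service request; proxy is a cached
// OrganizationServiceProxy instance.
private void RenewTokenIfRequired(OrganizationServiceProxy proxy)
{
    if (proxy.SecurityTokenResponse != null &&
        DateTime.UtcNow.AddMinutes(15) >=
            proxy.SecurityTokenResponse.Response.Lifetime.Expires)
    {
        // Re-authenticate to obtain a fresh security token.
        proxy.Authenticate();
    }
}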

 


Performance Best-Practices: 

http://msdn.microsoft.com/en-us/library/gg509027.aspx#Performance

ServiceConfigurationFactory.CreateManagement<TService>(Uri serviceUri)

http://msdn.microsoft.com/en-us/library/hh560440.aspx

IServiceManagement.Authenticate(AuthenticationCredentials authenticationCredentials)

http://msdn.microsoft.com/en-us/library/hh560432.aspx

Use Try and Finally on Disposable Resources

Why

Using a try/finally block ensures that resources are disposed even if an exception occurs. Not disposing resources properly leads to performance degradation over time. 

When

This is an important guideline when working with disposable resources such as:

  • Database connections 
  • File handles 
  • Message queue handles 
  • Text reader, writer 
  • Binary reader, writer 
  • Crypto stream 
  • Symmetric, Asymmetric and Hash algorithms. 
  • Windows Identity, Windows Impersonation Context  
  • Timer, Wait Handle in case of threading 
  • Impersonation Context 
  • XML Reader 
  • XML Writer

How

The following code fragment demonstrates disposing resources in a finally block.

SqlConnection conn = new SqlConnection(...);
try
{
   conn.Open();
   ...
}
finally
{
   if(null != conn)
     conn.Close();
}

If you are developing in C#, you can use the 'using' keyword, which automatically disposes resources after use.

using(SqlConnection conn = new SqlConnection(...))
{
   conn.Open();
   ....
}


Problem Example

A database connection is opened and used to access data. 

Unfortunately, if there is an exception other than SqlException, or if the exception-handling code itself throws an exception, the database connection won't be closed. This failure to close database connections could cause the application to run out of connections, impacting application performance.

try
{
   conn.Open();
   // do something with the connection
   // some more processing

   // close the connection
   if(conn != null)
     conn.Close();
}
catch(SqlException ex)
{
   // do exception handling

   // close the connection
   if(conn != null)
     conn.Close();
}

Solution Example

A database connection is opened and used to access data. The finally block will be executed whether there is an exception or not, ensuring that the database connection is closed.

SqlConnection conn = new SqlConnection(...);
try
{
   conn.Open();
   // do something with the connection
   // some more processing
}
catch(SqlException ex)
{
   // do exception handling
}
finally
{
   // close the connection
   if(conn != null)
     conn.Close();
}

 

 


Explicitly Call Dispose or Close on Resources You Open

If you use objects that implement the IDisposable interface, make sure you call the Dispose method of the object, or the Close method if one is provided. Database connections and files are examples of shared resources that should be explicitly closed.


Why

Failing to call Close or Dispose prolongs the life of the object in memory long after the client stops using it. This defers the cleanup and will result in inefficient memory usage. 

When

This guideline should be used when working with disposable resources such as:

  • Database connections 
  • File handles 
  • Message queue handles 
  • Text reader, writer 
  • Binary reader, writer 
  • Crypto stream 
  • Symmetric, Asymmetric and Hash algorithms. 
  • Windows Identity, Windows Impersonation Context  
  • Timer, Wait Handle in case of threading 
  • Impersonation Context 
  • XML Reader 
  • XML Writer 

How

When working with disposable resources, call the Dispose method of the object, or the Close method if one is provided. The Close method internally calls the Dispose method. The finally clause of a try/finally block is a good place to ensure that the Close or Dispose method of the object is called.

The following Visual Basic® .NET code fragment demonstrates disposing resources in a finally block.

Try
  conn.Open()
  ...
Finally
  If Not(conn Is Nothing) Then
    conn.Close()
  End If
End Try

In Visual C#®, you can wrap resources that should be disposed in a using block. When the using block completes, Dispose is called on the object declared in the parentheses of the using statement. The following code fragment shows how you can wrap resources by using a using block.

using (SqlConnection conn = new SqlConnection(connString))
{
  conn.Open();
  . . .
} // Dispose is automatically called on the connection object conn here.


Problem Example

A .NET application opens a database connection. Unfortunately, the database connection is never disposed. Since it is not explicitly disposed, the connection stays active until the application terminates, unnecessarily increasing the application's memory usage. Database connections are a limited resource; failure to dispose of them could exhaust all available connections.

SqlConnection conn = new SqlConnection(...);
try
{
   conn.Open();
   // do some processing with the connection
}
catch(SqlException ex)
{
   // do exception handling
}


Solution Example

A .NET application opens a database connection. Calling the Close method on the connection object in the finally clause ensures that the database connection is disposed of after it has been used.

SqlConnection conn = new SqlConnection(...);
try
{
   conn.Open();
   // do some processing with the connection
}
catch(SqlException ex)
{
   // do exception handling
}
finally
{
   // close the connection
   if(conn != null)
     conn.Close();
}

 



CRM Saturday Oslo 2017

Please join us at the next CRM Saturday event in Oslo (Norway) on 26th August. My session is "D365 New Features & Deprecations".

Full details and registration can be found on the CRM Saturday website: http://mwns.co/cso2017

CRM Saturday is a free CRM technical and strategy event organised by Microsoft Dynamics Community MVPs for Dynamics 365 professionals, technical consultants, and developers. Learn and share new skills while promoting the CRM Manifesto, helping organisations overcome the challenges of implementing a successful CRM strategy with Microsoft Dynamics.

This is a whole-day event with a lot of speakers from all around the world. It is an advertising- and recruitment-free event, and we are setting up both a technical and a business/strategy track, so there will be interesting sessions for everyone.


Managing Complex CRM Scenarios by Using the SolutionPackager Tool


In enterprise scenarios, it is a common requirement to be able to deliver common or separate CRM modules to separate business units according to an enterprise-level or separate roll-out timeline.

The typical challenges of this scenario result from parallel development tracks and frequent changes to enterprise projects, which require the greatest degree of flexibility in every aspect:

§ Sharing common code/customization base

§ Managing solution layering and dependencies

§ Managing localization

§ Managing changes in layering (moving items to common or back, customizing common elements locally)

§ Versioning common and local solutions

§ Parallel development on multiple solutions, multiple solution-layer setups, and multiple versions of a solution

§ Central repository of every/any solution

§ Test and deploy all possible or required combinations of solution layers

§ Managing solution branch and merge

§ Regression testing changes and impacts across solutions and branches

§ Common, single versioning of all solution elements

Development and Version Control

You need streamlined and common techniques of development and repository management for complex enterprise deliveries. The standard features of Microsoft Team Foundation Server serve as the basis of larger application delivery management scenarios:


§ ALM

§ Debugging and diagnostics

§ Testing tools

§ Architecture and modeling

§ Database development

§ Integrated development environment

§ Build management and automation

§ Version control

§ Test case management

§ Work item tracking

§ Reports and dashboards

§ Lab management


When working with multiple development teams developing multiple solutions and versions in parallel on an enterprise delivery, the following base process is recommended as a minimum:


§ Developers should have isolated VMs to work on, hosted either locally or centrally to share infrastructure between team members. Every developer work item needs to be committed to the TFS environment (the integration environment).

§ The development lead is in charge of making sure that the developments (customizations, configurations, and custom source files) integrate into the integration environment without any side effects.

§ The TFS source control should be considered as the central repository for all solution items:

§ All CRM customizations (Entities, forms, views, workflows, templates, web resources, etc.)

§ Base data

§ Source code and projects for custom components

o Plug-ins, custom activities

o Custom web applications

o Custom .NET components

o Custom services and other external components

o Web resource files


§ External components and references (SDK, Enterprise Library, etc.)

§ Unit test source code for the above components

§ Automated test sources for the entire application

§ Setup package for the entire application

§ Deployment scripts for the packages

§ Build definition for the entire application

The customer requirements and the work-item break-down should be stored in TFS work-item management. This way the development check-ins can be connected to the work-items, and the built-in development tracking and reporting functions of TFS can be leveraged. You will also be able to use the change-impact analysis functions of TFS for risk analysis and selecting the regression test scenarios for a specific change-set.

In multi-feature and multi-team scenarios you usually need multiple development and test environments, probably using different versions of the application. The TFS build automation and lab management functions provide the necessary toolset for managing daily build and deployment steps. You can find more details about the topic in the Build and Test sections.

Version Control Process

Out of the box, the CRM customizations are bound to a single CRM solution zip file, which requires special techniques to manage as part of a source control system.

The SolutionPackager tool (see Use the SolutionPackager Tool to Compress and Extract a Solution File) is designed to unpack (split) a solution .zip file into a broad set of files within folders by entity name. The breakout into files ensures that modifications to different components do not cause changes in the same files. For example, modifying an entity form and a view results in changes to different files (hence no conflicts to resolve). Pack takes the files from disk and glues them back together to construct a managed or unmanaged .zip file, as shown in the sketch below.
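As a hedged example of the two operations (solution and folder names are illustrative; see the SDK documentation for the complete switch list), the tool can be driven from a batch script:

:: Extract (unpack) a solution zip into a version-controllable folder tree.
:: With /packagetype:Both, a matching MySolution_managed.zip is expected alongside.
SolutionPackager.exe /action:Extract /zipfile:MySolution.zip /folder:src\MySolution /packagetype:Both

:: Pack the source tree back into managed and unmanaged solution zip files.
SolutionPackager.exe /action:Pack /zipfile:MySolution.zip /folder:src\MySolution /packagetype:Both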

The development process for Solution Packages with the SolutionPackager tool is illustrated in the following graphic.



 


Each developer works in his or her own environment on the allocated CRM development task. Before starting a work item, the developer (or the build master) gets the latest version of the customization tree from TFS and uses the SolutionPackager tool to create the solution zip file. The developer (or the build master) then deploys the latest solution package to the development environment. The same solution can also be deployed to the test CRM environment.


At the end of a day, or after finishing a work item, the developer exports the solution zip file from the development environment. Executing the SolutionPackager tool on the zip file generates the separate customization elements of the entire solution. The customization files need to be checked out and checked in individually, and only when changed.

The Visual Studio project may be created manually. The SolutionPackager tool will automatically create the folder structure below it (the unpack script needs to be executed in the solution project folder).

Manual Steps

The current version of the SolutionPackager tool provides the basic functionality to convert the packaged customization zip file into a version-controllable folder structure. The following manual steps are still needed by developers or test leads to maintain the source tree:

1.   Check out and check in the specific customization elements that are affected by a work-item change.

2.   Execute the pack/unpack operation manually, selecting the changed customization elements (Note: this can be integrated into the clean and deploy process of the Developer Toolkit).

3.   Take care of the managed and unmanaged versions of components at check-in time.

4.   Manage the assembly versioning (fully qualified strong names) of plug-ins used by a CRM solution.

5.   Manage the RibbonDiffXML (merging changes).

The following techniques may be used to further support the version control process:

§ Removing assembly strong names from customization files to be able to track only the real changes.

§ Some schema parts are reordered randomly during solution import-export; these elements may be ordered alphabetically for more convenient source control tracking.

Localization Challenges

Enterprise CRM projects usually face the requirement of supporting multiple locales. Although CRM has out-of-the-box support for localization, it is always a challenge to provide full solution localization, because different components require different localization strategies and a multiplication of components may be required.

Localization of some specific CRM components is currently not supported or not applicable. For example:


§ Workflows

§ Dialogs

§ Templates

§ Roles & FLS profiles

§ Plug-in assemblies

CRM Developer Toolkit


The CRM Developer Toolkit may be used for daily developer work: creating streamlined plug-ins, generating code and solutions, and accessing CRM customization pages. The source code should be checked in directly under TFS. The applied customizations can be exported and unpacked from the CRM organization, and the specific changed elements can be checked in manually.

Note: The Developer Toolkit can support the process of packing and unpacking by turning on the ability to export a CRM Solution as part of the local deployment process.

Sitemap, Ribbon Editor

The CRM customization system only supports editing the sitemap and ribbon definitions manually as XML files, after a manual export and followed by an import operation. The SiteMap and Ribbon Editor tools provide a graphical UI for executing these types of customizations.

These tools can be also combined with the unpacked structure:

§ Sitemap.xml is unpacked into a separate xml file and can be directly edited by SiteMap Editor.

§ The Ribbon Editor supports editing directly on a CRM environment, so developers need to edit on the server and then unpack the solution zip file and check in the changes.

Daily Developer Work

The developers should use unmanaged solutions and shared configuration environments during development time. One developer usually works on one solution (either CRM or other Visual Studio solution), but multiple developers can work on the same solution in parallel.

 

The developer environments should typically be updated on a per-day basis using the last successful build. You should consider the best environment allocation and setup strategy for your team, depending on the actual project and team setup (see Deployment and Test).

The developers typically use standard CRM customization UI for executing their daily work. They create CRM customization elements (entities, screens, ribbons, views, dialogs, plug-in registrations, etc.). They may use Visual Studio for creating custom developed plug-ins, web applications, or other components, and may also use other external tools such as resource editors, designers, sitemap, ribbon, and metadata editor for CRM.

The editing of external components and resources can typically be executed offline on the developer machine after executing the TFS check-out operation (the built-in VS editors usually manage check-out automatically).

Just before the check-in, they need to export the solution from the CRM environment and execute a special batch file per solution, which will unpack the customization file.

Developers need to take care with the actual check-in. They need to be aware of the changes they executed and which schema members were affected. Only the changed elements need to be checked in, and there are also special cases when the checked-in elements need to be merged back or compared to the previous version (for example, RibbonDiffXml).

Before the actual check-in operation, the developer must get the latest sources and create a local build, including a solution package, to deploy and test in the development environment; this is the recommended practice for quality check-ins. This process may also be supported by the Gated Check-in or Continuous Integration features of TFS.

The daily development process is illustrated in the following figure.



 

Build Automation

The build process can be automated using the standard TFS build agent. The build agent is a Windows service that executes the processor-intensive and disk-intensive work associated with tasks such as getting files from and checking files into the version control system, provisioning the workspace for the builds, compiling the source code, and running tests.

 


A typical build process consists of the following steps:

1.   Getting the build definition (You may have multiple parallel projects/builds.)

2.   Update build number (solution versioning)

3.   Prepare build context

a.   Create drop location

b.   Get build agent

c.   Get build directory and initialize workspace

4.   Get source files

5.   Compile sources and create setup packages

6.   Run unit tests

7.   Process test results

8.   Drop build and notify

The build process may be extended further by automated deployment and testing (for more details see the Build, Test and Deployment sections):

§ Rollback test environments

§ Deploying setup packages to test environments

§ Executing automated test steps/schedule manual test scenarios






Offline Build Process

The offline build process is designed as an independent build flow that can build a whole CRM solution package without a running instance of Dynamics CRM. The offline build process leverages the static schema of the solution elements and customization files (see Development and Version Control).

 




The build process can be easily extended to use the SolutionPackager tool. The CRM Solution Builder project template and target file samples can be used to integrate the pack operation into the daily build process (see CRM Solution Builder).

During the build process, the Visual Studio standard projects will be compiled, built, and dropped as defined in the build and project definitions.

Building the CRM customization project triggers the pack operation with the up-to-date customization source tree (see the solution tree structure above). All customizations are merged together into single customization.xml and solution.xml files. The previously built solution resources and assemblies can be included directly in the solution package.

Note: An ILMerge operation may be required to make DB-registered assemblies work with referenced assemblies.

Automating Deployment

The deployment automation of a packed CRM solution can be executed identically to standard CRM solution deployment.

On enterprise projects the following scenarios can cover the typical deployment requirements:

§ Automated deployment of development environments, test environments, and integration environments

§ Offline setup packages and scripts to enable deployment to acceptance and production environment as a sandbox

§ Rollback environments to a baseline (both CRM components and the CRM database)

o May already contain existing solutions and possible CRM Update Rollups

§ Deploying CRM solutions

§ Deploying CRM base data and custom security roles

§ Additional configuration of the organization, BU, and user settings using the CRM API

Note: For further details on CRM automated deployment process see the Test and Deployment sections.

Managed/Unmanaged Solutions

The version control differences between managed and unmanaged solutions are already described in the Managing Version Requirements section. The SolutionPackager tool will process and unpack both types of solutions. The unpack process creates the differencing elements with a "managed" postfix for easier version control management.


The general recommendations for using managed or unmanaged solutions:

§ Use unmanaged solutions for development.

§ Use managed solutions for all downstream environments.

§ Use as few solutions as possible for easier management.

§ Customize existing managed solutions as little as possible.

§ Execute integration tests using managed solutions frequently to test and address possible collisions.

§ Test managed solution combinations for every possible or supported scenario.

Do not use specialized update operation requests in Microsoft Dynamics CRM

In releases prior to Microsoft Dynamics CRM 2015 Update 1, the use of specialized messages was required to update certain entity attributes. With the current release of CRM, the SDK has been simplified and now supports updating specialized fields using the Update operation. The CRM Web API does not support the specialized messages, and once the CRM 2011 services (currently labeled as deprecated) are removed, there will be no support for performing these operations.


The following message requests are deprecated as of Microsoft Dynamics CRM 2015 Update 1:

  • AssignRequest 
  • SetStateRequest 
  • SetParentSystemUserRequest 
  • SetParentTeamRequest 
  • SetParentBusinessUnitRequest 
  • SetBusinessEquipmentRequest 
  • SetBusinessSystemUserRequest

In order to update the corresponding specialized attributes, use the UpdateRequest message. Messages can be published from a plug-in, a workflow activity, JavaScript through the services, or an integrating application.

These specialized messages will continue to work with the 2011 endpoint. However, the recommendation is to use the UpdateRequest or Update method when possible to set these attributes. The Update message simplifies the SDK API and makes it easier to code standard data integration tools used with Dynamics CRM. In addition, it is easier to code and register a plug-in to execute for a single Update message instead of multiple specialized messages. The AttributeMetadata.IsValidForUpdate property for the above listed attributes has been changed to true in this release to enable this capability.

You can continue to use these specialized messages of the 2011 endpoint in your code. However, the Web API that eventually replaces the 2011 endpoint supports only the Update message for these types of operations. If you want to get a head start on changing your code to align with the Web API, you can now do so. See Web API Preview for more information.
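For example, a minimal sketch of replacing AssignRequest and SetStateRequest with plain updates (to be run inside a method; the service variable and record IDs are assumed, and statecode 1 / statuscode 2 mark an account as inactive):

using Microsoft.Xrm.Sdk;

// Instead of AssignRequest: set the ownerid attribute directly.
var assign = new Entity("account") { Id = accountId };
assign["ownerid"] = new EntityReference("systemuser", newOwnerId);
service.Update(assign);

// Instead of SetStateRequest: set statecode/statuscode directly.
var deactivate = new Entity("account") { Id = accountId };
deactivate["statecode"] = new OptionSetValue(1);  // Inactive
deactivate["statuscode"] = new OptionSetValue(2); // Inactive
service.Update(deactivate);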


Impact of this change on plug-ins


When update requests are processed that include both owner fields plus other standard fields for business owned entities, plug-ins registered for the Update message in pipeline stage 20 and/or stage 40 execute once for all non-owner fields, and then once for the owner fields. Examples of owner fields would be businessunit and manager (for a SystemUser entity). Examples of business owned entities include SystemUser, BusinessUnit, Equipment, and Team.

When update requests are processed that include both state/status fields plus other standard fields, plug-ins registered for the Update message in pipeline stage 20 and/or stage 40 execute once for all non-state/status fields, and then once for the state/status fields.

In order for plug-in code to receive the full data changes of the update, you must register the plug-in in stage 10 and then store relevant information in SharedVariables in the plug-in context for later plug-ins (in the pipeline) to consume.
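A minimal sketch of such a stage-10 plug-in (the shared variable name FullUpdateTarget is an assumption):

using System;
using Microsoft.Xrm.Sdk;

public class CaptureFullUpdatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // Registered on Update in stage 10 (pre-validation), the Target
        // still carries the complete attribute set of the original request,
        // before the platform splits owner/state fields into separate updates.
        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity)
        {
            // Stash the full target for later pipeline stages to consume.
            context.SharedVariables["FullUpdateTarget"] =
                (Entity)context.InputParameters["Target"];
        }
    }
}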


Impact of this change on workflows

When update requests are processed that include both owner fields plus other standard fields, workflows registered for the Update message execute once for all non-owner fields, and then once for the owner fields. Workflows registered for the Assign message by users continue to be triggered by updates to owner fields.

When update requests are processed that include both state/status fields plus other standard fields, workflows registered for the Update message execute once for all non-state/status fields, and then once for the state/status fields. Workflows registered for the Change Status step continue to be triggered by updates to state/status fields.