What is Delta Live Tables?

Delta Live Tables (DLT) is a declarative ETL framework for the Databricks Data Intelligence Platform that helps data teams build reliable, maintainable, and testable data processing pipelines and simplify streaming and batch ETL cost-effectively. You define the transformations to perform on your data, and DLT manages task orchestration, cluster management, monitoring, data quality, and error handling.


How can we incorporate Delta Live Tables into our data pipelines?

Creating Delta Live Tables involves working with the medallion or multi-hop architecture that I explained in my previous post, Databricks Autoloader and Medallion Architecture… Pt 2. Below is an example incorporating the bronze, silver, and gold layers of this architecture within your Databricks notebooks. SQL is used in this example, although the same can be accomplished with PySpark/Python.

  1. Create Bronze Layer Tables: Here we are creating a live table called bronze_table and defining the schema/data source. You can also add comments for reference as you create each live table.
-- Bronze Layer Table

CREATE OR REFRESH STREAMING LIVE TABLE bronze_table
COMMENT "This is a sample table"
AS SELECT * FROM cloud_files("${file_path}/table_name_1", "delta",
map("schema", "col_name_1 STRING, col_name_2 LONG"))

2. Create Silver Layer Tables: The next-level table adds a constraint (expectation) and a join to pull specific information into our silver_table. Please note, we can use the constraint violation actions DROP ROW and FAIL UPDATE to discard offending records or fail the pipeline, respectively. Databricks allows us to define our constraints with expectations for data quality checks.

-- Silver Layer Table

CREATE OR REFRESH STREAMING LIVE TABLE silver_table (
CONSTRAINT valid_col_name EXPECT (col_name_3 IS NOT NULL) ON VIOLATION DROP ROW
)
COMMENT "Only valid column names with valid_col_name"
AS
SELECT
a.col_name_3,
a.col_name_4,
a.col_name_5,
b.col_name_6 AS name,
b.col_name_7 AS number,
to_timestamp(a.timestamp, 'yyyy-MM-dd HH:mm:ss') AS new_timestamp
FROM
STREAM(LIVE.bronze_table) a
LEFT JOIN
LIVE.name b
ON
a.col_name_4 = b.col_name_4;

3. Create Gold Layer Tables: The last-level table selects a specific subset of data from our silver layer containing the particular columns we need for the given requirements. Please note, the LIVE keyword in LIVE.silver_table is what tells Delta Live Tables to resolve the table as part of the pipeline; without it, your pipeline will not execute correctly.

-- Gold Layer Table

CREATE OR REFRESH LIVE TABLE gold_table
COMMENT "Final results from Bronze/Silver Layers"
AS
SELECT col_name_3, name, number, col_name_5, new_timestamp
FROM LIVE.silver_table
WHERE col_name_3 = 'x'
GROUP BY col_name_3, name, number, col_name_5, new_timestamp
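
The same multi-hop pattern can also be written with the DLT Python API. Below is a minimal sketch of the bronze and silver steps, assuming it runs inside a Databricks DLT pipeline notebook (where spark and the dlt module are provided); the path, source format, column names, and expectation are placeholders mirroring the SQL above.

import dlt

@dlt.table(comment="This is a sample table")
def bronze_table():
    # Auto Loader ingestion; the path and cloudFiles.format are placeholders.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/path/to/table_name_1")
    )

@dlt.table(comment="Only records with a valid col_name_3")
@dlt.expect_or_drop("valid_col_name", "col_name_3 IS NOT NULL")
def silver_table():
    # Read the bronze table as a stream and keep the columns of interest.
    return dlt.read_stream("bronze_table").select("col_name_3", "col_name_4", "col_name_5")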

This is a brief example of how you can incorporate your medallion or multi-hop architecture from autoloader’s cloud files into Delta Live Tables. This can open up a realm of possibilities for your real-time data. By simplifying architecture, ensuring data quality, and delivering high performance, Delta Live Tables can allow you to make informed decisions swiftly and accurately.

Please note, this was an example of Delta Live Table creation in your Databricks notebooks and does not cover running your pipeline in the Databricks UI. More information and documentation regarding that can be found in the references below. An example snapshot of a pipeline in the Databricks UI is available at the source below:

Source: https://www.databricks.com/product/delta-live-tables

(Disclaimer: Please refer to the official Databricks documentation and resources for the most up-to-date information on Delta Live Tables and related features.)

Delta Live Tables SQL Reference: https://docs.databricks.com/en/delta-live-tables/sql-ref.html

Delta Live Tables Python Reference: https://docs.databricks.com/en/delta-live-tables/python-ref.html

SQL Tutorial: https://docs.databricks.com/en/delta-live-tables/tutorial-sql.html

Python Tutorial: https://docs.databricks.com/en/delta-live-tables/tutorial-python.html

Data Architect Interview Questions with Sample Answers

1. Can you explain the concept of Data Modelling and its importance in the role of a Data Architect?

Data Modelling is a key concept in data architecture, and its understanding showcases the candidate’s ability to comprehend and organize complex data structures. It requires an in-depth understanding, critical thinking, and analytical skills to answer well.

Data Modelling is a method used to define and analyze data requirements needed to support the business processes of an organization. Its main purpose is to represent data objects, the associations between different data objects, and the rules governing these associations. As a Data Architect, it is crucial because it helps in understanding the intricate data relations, ensures data accuracy and quality, and is instrumental in designing databases that meet the organizational needs.

2. How do you approach the challenge of ensuring data security?

The ability to ensure data security is a critical aspect for a Data Architect. This question assesses a candidate’s knowledge of data security measures and strategies used to protect an organization’s data.

I approach data security by implementing a multi-layered approach. This includes the use of encryption, secure network architectures, robust access control, regular audits, and security training for all users. Choosing the right security measures depends largely on understanding the specific data and infrastructure of the organization, as well as the risk and compliance requirements.

3. Can you detail your experience with Database Management Systems (DBMS)?

Interviewees should highlight their practical experience with various DBMS platforms. Their response reveals their technical proficiency and adaptability to different DBMS environments.

Over the years, I have worked with a variety of DBMS including SQL Server, Oracle, and MySQL. I’ve performed tasks from designing and creating databases to optimizing and securing these systems. My exposure to these diverse DBMS platforms has given me a well-rounded understanding of their functionalities, advantages, and drawbacks.

4. What is data normalization, and why is it important?

Understanding of data normalization principles is essential for a Data Architect. The candidate’s answer will demonstrate their knowledge of database design and their ability to optimize databases.

Data normalization is a process in database design that organizes data to minimize redundancy and improve data integrity. It divides larger tables into smaller ones and defines relationships between them. This is important because it reduces data storage, enhances performance by eliminating redundant data, and ensures that data dependencies make sense.

5. Could you explain the concept of Data Partitioning?

Data partitioning is a vital concept in maintaining large databases and improving their performance. A clear, concise answer will reflect the candidate’s understanding of efficient database management.

Data partitioning is a technique of breaking up a large database into smaller, more manageable parts called partitions. It allows for improved query performance as it reduces the I/O operations. It also makes it easier to manage large databases as operations can be performed on individual partitions rather than the entire database.

6. What role does Data Warehousing play in an organization?

This question tests the candidate’s understanding of data warehousing and its strategic importance in an organization’s decision-making process.

A data warehouse is a system used for reporting and data analysis. It serves as a central repository of data collected from various sources. It plays a vital role in an organization by providing an integrated and consolidated view of the business data, which aids in decision-making and forecasting.

7. What is your experience with cloud-based data solutions?

The candidate’s response will reveal their familiarity with modern data management techniques and their ability to adapt to new technologies.

In my previous role, I worked extensively with cloud-based solutions such as AWS and Azure. I designed and implemented secure and scalable cloud databases, migrated on-premise data to the cloud, and ensured efficient data integration. This experience taught me the advantages of cloud solutions such as scalability, cost-effectiveness, and accessibility.

8. Can you explain the concept of ETL and its importance in data handling?

Understanding of ETL processes is crucial for Data Architects as it forms the backbone of data warehousing. It tests the candidate’s knowledge of data processing and data pipeline design.

ETL stands for Extract, Transform, and Load. It is a process that involves extracting data from source systems, transforming it into a format that can be analyzed, and then loading it into a data warehouse. ETL is important as it enables businesses to consolidate data from different sources into a single, consistent structure that aids in making informed business decisions.

9. How do you handle data redundancy and what techniques do you use?

This question is designed to gauge a candidate’s ability to maintain database efficiency and data integrity.

Data redundancy can be managed by implementing data normalization processes and enforcing integrity constraints in the database. This ensures that the data is organized into separate tables based on relationships and reduces duplication. Regular audits and data cleansing activities are also important to identify and remove redundant data.

10. What is a Data Lake and how does it differ from a Data Warehouse?

Understanding the difference between a data lake and a data warehouse is key for a Data Architect. The candidate’s response will demonstrate their knowledge of data storage systems.

A Data Lake is a storage repository that holds a vast amount of raw data in its native format until it is needed. On the other hand, a Data Warehouse is a structured repository of processed and classified data. While a Data Warehouse is optimized for data analysis and reporting, a Data Lake is more suited for storing large volumes of raw, detailed data.

11. Can you explain Big Data and its relevance in modern business?

The candidate’s understanding of Big Data technologies indicates their ability to work with large data sets and their awareness of current trends in data management.

Big Data refers to extremely large data sets that can be analyzed computationally to reveal patterns, trends, and associations. It is relevant in modern business since it helps organizations to improve operations, make faster and more accurate decisions, and create differentiated, personalized customer experiences.

12. How do you ensure high availability and disaster recovery in databases?

This question evaluates the candidate’s knowledge of reliable database design and their ability to plan for unexpected events.

I ensure high availability and disaster recovery by implementing strategies such as data replication, clustering, and use of standby databases. Regular backups and testing of recovery plans are also crucial to mitigate data loss and downtime during a disaster.

13. Explain your experience with data virtualization.

The candidate’s response will indicate their proficiency with modern data management techniques and ability to create efficient data delivery architectures.

As a Data Architect, I’ve used data virtualization to provide an integrated view of data spread across various sources, without the need for data movement or replication. It enables faster access to data and reduces the cost and complexity of data management.

14. How do you handle change management in database environments?

This question assesses the candidate’s ability to manage changes in data architecture, such as updates and alterations, while maintaining system integrity and consistency.

A structured approach to change management is essential in database environments. This includes documenting all proposed changes, testing them in a controlled environment before deployment, and having a rollback plan in case of issues. Communication and collaboration with all stakeholders is also important for successful change management.

15. Can you explain what a Schema is in database design?

Understanding of Schema in database design demonstrates the candidate’s foundational knowledge of databases. This basic concept is critical for more complex tasks in data architecture.

In database design, a Schema is a blueprint of how data is organized and accessed. It defines the tables, fields, relationships, indexes, and other elements. It is crucial for understanding the data architecture and how different components are interconnected.

PySpark Optimization Techniques for Data Engineers

Optimizing PySpark performance is essential for efficiently processing large-scale data. Here are some key optimization techniques to enhance the performance of your PySpark applications:

Use Broadcast Variables

When joining smaller DataFrames with larger ones, consider using broadcast variables. This technique helps in distributing smaller DataFrames to all worker nodes, reducing data shuffling during the join operation.

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("example").getOrCreate()

small_df = spark.createDataFrame([...])
large_df = spark.createDataFrame([...])

result_df = large_df.join(broadcast(small_df), "common_column")

Partitioning

Ensure that your DataFrames are properly partitioned to optimize data distribution across worker nodes. Choose appropriate partitioning columns to minimize data shuffling during transformations.

df = df.repartition("column_name")

Persist Intermediate Results

If you have multiple operations on the same DataFrame, consider persisting the intermediate results in memory or disk. This prevents recomputation and improves performance.

from pyspark import StorageLevel

df.persist(StorageLevel.MEMORY_AND_DISK)

Adjust Memory Configurations

Tune the memory configurations for your PySpark application based on the available resources. This includes configuring executor memory, driver memory, and other related parameters in the SparkConf.

from pyspark import SparkConf

conf = SparkConf().set("spark.executor.memory", "4g").set("spark.driver.memory", "2g")

Use DataFrames API Instead of RDDs

The DataFrame API in PySpark is optimized and performs better than the RDD API. Whenever possible, prefer using DataFrames for transformations and actions.
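
As a small illustration (the sample data here is made up), the same filter can be written with both APIs; the DataFrame version is declarative, so the Catalyst optimizer can plan it:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("df-vs-rdd").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "a")], ["id", "category"])

# RDD API: the lambda is opaque to the optimizer.
rdd_count = df.rdd.filter(lambda row: row["category"] == "a").count()

# DataFrame API: the expression can be analyzed and optimized by Catalyst.
df_count = df.filter(df["category"] == "a").count()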

Avoid Using UDFs (User-Defined Functions) When Not Necessary

User-Defined Functions in PySpark can be less performant than built-in functions. If there’s an equivalent built-in function, use it instead of a UDF.
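
As a hedged illustration (the column name and data are made up), the built-in upper function gives the same result as a Python UDF without the Python serialization overhead:

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf, upper
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-example").getOrCreate()
df = spark.createDataFrame([("alice",), ("bob",)], ["name"])

# Python UDF: rows are serialized to a Python worker, which is slower.
to_upper_udf = udf(lambda s: s.upper() if s else None, StringType())
df.withColumn("name_upper", to_upper_udf("name")).show()

# Built-in function: runs inside the JVM and benefits from Catalyst.
df.withColumn("name_upper", upper("name")).show()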

Use Spark SQL Caching

Leverage Spark SQL’s caching mechanism to cache tables or DataFrames in memory, especially for frequently accessed data.

spark.sql("CACHE TABLE your_table")

Use Catalyst Optimizer and Tungsten Execution Engine

PySpark utilizes the Catalyst optimizer and Tungsten execution engine to optimize query plans. Keep your PySpark version updated to benefit from the latest optimizations.

Increase Parallelism

Adjust the level of parallelism by configuring the number of partitions in transformations like repartition or coalesce. This can enhance the parallel execution of tasks.
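
For example (the partition counts below are illustrative, not tuned recommendations):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parallelism-example").getOrCreate()
df = spark.range(1_000_000)

# Increase partitions before a wide, shuffle-heavy transformation.
df = df.repartition(200)

# Reduce partitions without a full shuffle before writing small output.
df = df.coalesce(10)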

Minimize Data Shuffling

Data shuffling is an expensive operation. Minimize unnecessary shuffling by carefully choosing join keys and optimizing your data layout.
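
As one illustration (the DataFrames below are made-up examples), projecting and filtering before a join reduces how much data has to be shuffled across the network:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("shuffle-example").getOrCreate()
orders_df = spark.createDataFrame([(1, 10.0), (2, 0.0), (1, 5.0)], ["customer_id", "amount"])
customers_df = spark.createDataFrame([(1, "east"), (2, "west")], ["customer_id", "region"])

# Keep only the columns and rows the join actually needs before shuffling.
orders_slim = orders_df.select("customer_id", "amount").filter("amount > 0")
result = orders_slim.join(customers_df.select("customer_id", "region"), "customer_id")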

Optimize Serialization Formats

Choose the appropriate serialization format based on your data and processing needs. Consider using more efficient serialization formats like Parquet.
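
For instance (the output path is a placeholder), writing Parquet keeps data in a compact, columnar form that Spark can scan efficiently:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-example").getOrCreate()
df = spark.range(1000)

# Write columnar, compressed Parquet instead of row-based text formats.
df.write.mode("overwrite").parquet("/tmp/example_parquet")
parquet_df = spark.read.parquet("/tmp/example_parquet")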

Leverage Cluster Resources Efficiently

Take advantage of the cluster resources by understanding the available hardware and configuring Spark accordingly. Distribute the load evenly across nodes.
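
A sketch of session-level resource settings is shown below; the values are placeholders and should be sized to your actual cluster:

from pyspark.sql import SparkSession

# Illustrative resource settings; actual values depend on your cluster.
spark = (
    SparkSession.builder.appName("tuned-app")
    .config("spark.executor.instances", "4")
    .config("spark.executor.cores", "4")
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)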

Applying these optimization techniques can significantly enhance the performance of your PySpark applications, especially when dealing with large datasets and complex transformations. Keep in mind that the effectiveness of these techniques may vary based on your specific use case and data characteristics. Experimentation and profiling are essential to identify the most impactful optimizations for your scenario.

Top 10 use cases for AI and ML in Banking Domain with Cloud

Today, we will go through an overview of the top 10 use cases for AI and ML in the banking domain with cloud.

So let's begin:

Introduction

Artificial Intelligence (AI): Artificial Intelligence refers to the development of computer systems or machines that can perform tasks that typically require human intelligence. AI aims to create intelligent machines capable of perceiving, reasoning, learning, and problem-solving. It encompasses a wide range of techniques, including machine learning, natural language processing, computer vision, robotics, and expert systems.

Machine Learning (ML): Machine Learning is a subset of AI that focuses on the development of algorithms and models that enable computer systems to learn and make predictions or decisions based on data, without being explicitly programmed. ML algorithms analyze and learn from large datasets, identifying patterns, trends, and relationships. Through this process, they can make accurate predictions or take actions in new, unseen situations.

In essence, AI is a broader concept that encompasses the development of intelligent systems, while ML is a specific approach within AI that focuses on algorithms and models that can learn from data to make predictions or decisions. ML is often used as a tool or technique to implement AI systems.

https://microsoft.github.io/AI-For-Beginners/

Artificial Intelligence (AI): AI involves creating intelligent systems that can perform tasks requiring human-like intelligence.

Here’s a basic example:

Example: Virtual Personal Assistants

Virtual personal assistants like Apple’s Siri, Amazon’s Alexa, or Google Assistant are examples of AI. They use natural language processing and machine learning techniques to understand spoken commands, answer questions, perform tasks, and provide personalized recommendations. These assistants can schedule appointments, play music, provide weather updates, and even control smart home devices.

Machine Learning (ML): Machine Learning focuses on algorithms and models that can learn from data and make predictions or decisions without being explicitly programmed. Here’s an example:

Example: Email Spam Filtering

Email spam filtering is a classic example of ML. ML models are trained on a large dataset of emails, where each email is labeled as spam or not spam. The model analyzes the features (words, phrases, sender information, etc.) of emails and learns to differentiate between spam and legitimate emails. Once trained, the model can accurately classify incoming emails as spam or not, helping users filter out unwanted messages.

In this example, the ML model learns to identify patterns and features that distinguish spam emails from legitimate ones. It generalizes this learning to new, unseen emails, making predictions based on the patterns it learned during training.
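
As a minimal, illustrative sketch of this workflow (the tiny labeled dataset is made up), a bag-of-words model with a Naive Bayes classifier in scikit-learn captures the train-then-predict idea:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled data: 1 = spam, 0 = not spam.
emails = [
    "win a free prize now",
    "lowest price guaranteed click here",
    "meeting rescheduled to friday",
    "please review the attached report",
]
labels = [1, 1, 0, 0]

# Learn word-frequency features and a Naive Bayes classifier in one pipeline.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Classify a new, unseen email.
print(model.predict(["free prize click now"]))  # expected to predict [1] (spam)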

It’s important to note that ML is a subset of AI. AI encompasses a broader range of techniques and applications, while ML focuses specifically on algorithms that can learn from data. ML is a key tool used to implement AI systems, allowing them to learn and adapt based on available information.

The skill set required for working in AI and ML involves a combination of technical knowledge, programming skills, and a solid understanding of mathematical and statistical concepts.

Here are some key skills commonly associated with AI and ML:

Mathematics and Statistics: A strong foundation in mathematics, including linear algebra, calculus, and probability theory, is crucial for understanding the underlying principles of AI and ML algorithms. Statistical knowledge helps in analyzing and interpreting data.

Programming Skills: Proficiency in programming languages such as Python, R, or Java is essential for implementing AI and ML algorithms. You should be comfortable writing code, manipulating data structures, and working with libraries and frameworks specific to AI and ML, such as TensorFlow, PyTorch, or scikit-learn.

Machine Learning Algorithms and Techniques: Familiarity with various ML algorithms (such as regression, classification, clustering, and neural networks) and techniques (such as feature engineering, dimensionality reduction, and model evaluation) is vital for selecting and applying the most appropriate approaches to solve problems.

Data Preprocessing and Feature Engineering: Knowledge of techniques for cleaning, transforming, and preprocessing data is important to ensure data quality and suitability for ML algorithms. Feature engineering involves selecting, extracting, and creating meaningful features from raw data that can improve model performance.

Deep Learning: Deep learning, a subset of ML, involves neural networks with multiple layers. Understanding deep learning concepts, architectures (such as convolutional neural networks or recurrent neural networks), and frameworks like TensorFlow or PyTorch is beneficial for tasks like image recognition, natural language processing, and speech recognition.

Data Visualization: Proficiency in data visualization tools and libraries like Matplotlib, Seaborn, or Tableau is valuable for effectively communicating insights and results from AI and ML models to stakeholders.

Problem-Solving and Critical Thinking: AI and ML professionals should possess strong problem-solving and critical thinking skills. This involves the ability to formulate problems, analyze complex situations, and develop creative solutions using AI and ML techniques.

Domain Knowledge: Having domain knowledge in specific areas, such as finance, healthcare, or e-commerce, can be advantageous. It helps in understanding the context of the problem, selecting relevant features, and developing domain-specific AI and ML solutions.

Continuous Learning and Adaptability: The field of AI and ML is continuously evolving. Staying updated with the latest research, techniques, and tools is essential. Being adaptable and open to learning new concepts and technologies is crucial to thrive in this fast-paced field.

Remember, the specific skill set required may vary depending on the particular role or application within AI and ML. It’s important to continuously learn and expand your knowledge as AI and ML technologies evolve.

Artificial intelligence (AI) and machine learning (ML) have numerous applications in the corporate banking sector. Here are the top 10 use cases for AI and ML in corporate banking:

  1. Fraud Detection and Prevention: AI and ML algorithms can analyze large volumes of transaction data in real-time to identify patterns and anomalies that may indicate fraudulent activities. By continuously monitoring transactions, these models can flag suspicious behavior and help prevent fraudulent transactions.
  2. Risk Assessment and Credit Scoring: AI and ML models can analyze historical financial data, credit reports, and other relevant information to assess the creditworthiness of borrowers. By considering various factors, such as payment history, income, and outstanding debts, these models can provide more accurate risk scores, enabling banks to make informed lending decisions.
  3. Customer Service and Chatbots: AI-powered chatbots can handle basic customer queries as well as specific product- or service-related inquiries, provide 24/7 assistance, and perform basic banking tasks such as balance inquiries or fund transfers. By leveraging natural language processing (NLP) techniques, these chatbots can understand customer queries and provide personalized responses and recommendations, improving the customer service experience and reducing response times.
  4. Anti-Money Laundering (AML) Compliance: Banks are required to comply with AML regulations and monitor transactions for suspicious activities. AI and ML can automate this process by analyzing vast amounts of transactional data, customer information, and external data sources to identify potential money laundering activities. These models can flag suspicious transactions, enabling banks to investigate and report them as per regulatory requirements.
  5. Predictive Analytics for Marketing: AI and ML algorithms can analyze customer data, transaction history, and external market trends to provide insights for targeted marketing campaigns and cross-selling opportunities. By understanding customer preferences and patterns, banks can offer personalized product recommendations, improve customer engagement, and provide value-added services.
  6. Investment and Portfolio Management: AI algorithms can analyze vast amounts of financial market data, news, and historical performance to generate investment insights. These models can assist investment managers in making informed decisions about portfolio allocation, risk management, and asset selection, ultimately optimizing investment strategies.
  7. Regulatory Compliance and Reporting: Banks must comply with various financial regulations and reporting requirements. AI systems can automate compliance processes by extracting and analyzing relevant data from multiple sources. These systems can ensure accuracy, reduce manual errors, and streamline regulatory reporting, saving time and resources.
  8. Risk Management and Fraudulent Activity Monitoring: AI and ML models can continuously monitor transactions, market conditions, and external factors to identify potential risks and fraudulent activities. By analyzing patterns, anomalies, and behavioral changes, these models can provide early warnings, enabling banks to take proactive measures to mitigate risks and prevent fraudulent activities.
  9. Process Automation and Optimization: AI-powered robotic process automation (RPA) can automate repetitive and rule-based tasks such as data entry, document processing, and customer onboarding. By eliminating manual efforts, RPA improves operational efficiency, reduces costs, and frees up employees to focus on more complex tasks that require human expertise.
  10. Cybersecurity and Threat Detection: Banks face significant cybersecurity risks, and AI and ML techniques can enhance their defense mechanisms. AI models can analyze network traffic, identify anomalies, and detect potential security breaches. These models can strengthen security systems, protect sensitive customer data, and help prevent cyber-attacks.

These use cases demonstrate how AI and ML technologies are transforming the banking sector, enhancing operational efficiency, improving customer experience, mitigating risks, and enabling banks to make data-driven decisions.

Implementing these use cases using the Google Cloud Platform (GCP) involves utilizing various services and tools offered by Google. Here’s a high-level overview of how you can implement some of the use cases using GCP:

  1. Fraud Detection and Prevention: You can use Google Cloud’s BigQuery to store and analyze transactional data. Apply machine learning techniques using Google Cloud’s AI Platform to build fraud detection models. Use Cloud Pub/Sub for real-time data streaming and Cloud Dataflow for processing and analysis.
  2. Risk Assessment and Credit Scoring: Leverage Google Cloud’s BigQuery for storing and analyzing historical financial data. Utilize AI Platform for training and deploying machine learning models to assess creditworthiness. Use AutoML for building custom credit scoring models.
  3. Customer Service and Chatbots: Use Dialogflow, Google Cloud’s NLP platform, to build chatbots and virtual assistants that can interact with customers. Integrate Dialogflow with other GCP services like Cloud Functions, Cloud Pub/Sub, or App Engine to handle customer inquiries and perform banking tasks.
  4. AML Compliance: Utilize BigQuery to store and analyze transactional and customer data. Apply AI and ML techniques using AI Platform or Cloud Dataflow to identify suspicious activities and flag potential money laundering transactions.
  5. Predictive Analytics for Sales and Marketing: Utilize BigQuery to store and analyze customer data and market trends. Apply AI and ML techniques using AI Platform or BigQuery ML to generate predictive models for targeted marketing campaigns and personalized recommendations.
  6. Investment and Portfolio Management: Use BigQuery to store financial market data and historical performance. Utilize AI Platform or AutoML to build investment models and analyze market trends for portfolio optimization.
  7. Regulatory Compliance and Reporting: Utilize BigQuery to store and manage relevant data for compliance reporting. Use Dataflow or Cloud Functions to automate data extraction and processing. Apply AI and ML techniques to ensure accuracy and automate compliance checks.
  8. Risk Management and Fraudulent Activity Monitoring: Utilize BigQuery or Cloud Spanner for storing transactional data. Apply AI and ML techniques using AI Platform or Cloud Dataflow to continuously monitor transactions and identify potential risks or fraudulent activities.
  9. Process Automation and Optimization: Use Cloud Functions, Cloud Pub/Sub, or Cloud Dataflow to automate repetitive tasks and streamline processes. Leverage AI and ML services like AI Platform or AutoML for intelligent process automation.
  10. Cybersecurity and Threat Detection: Utilize Google Cloud’s security services like Cloud Security Command Center, Cloud Armor, or Cloud DLP for detecting and preventing security threats. Apply AI and ML techniques to analyze network traffic and identify anomalies using Cloud AI Platform or Cloud Dataflow.

It’s recommended to explore the specific GCP services, documentation, and tutorials provided by Google to gain a deeper understanding of how to implement each use case effectively.

Implementing these use cases using the AWS (Amazon Web Services) cloud platform involves utilizing various services and tools provided by AWS. Here’s a high-level overview of how you can implement some of the use cases using AWS:

  1. Fraud Detection and Prevention: Use services like Amazon S3 or Amazon DynamoDB to store and process transactional data. Apply machine learning techniques using Amazon SageMaker to build fraud detection models. Use services like Amazon Kinesis or AWS Lambda for real-time data processing and analysis.
  2. Risk Assessment and Credit Scoring: Leverage AWS data storage services like Amazon S3 or Amazon RDS to store historical financial data. Utilize Amazon SageMaker for training and deploying machine learning models to assess creditworthiness. Use AWS Glue for data preparation and transformation tasks.
  3. Customer Service and Chatbots: Use Amazon Lex, a conversational AI service, to build chatbots and virtual assistants that can interact with customers. Integrate Lex with other AWS services like AWS Lambda or Amazon Connect to handle customer inquiries and perform banking tasks.
  4. AML Compliance: Utilize AWS data storage services like Amazon S3 or Amazon DynamoDB to store and analyze transactional and customer data. Apply AI and ML techniques using Amazon SageMaker to identify suspicious activities and flag potential money laundering transactions.
  5. Predictive Analytics for Sales and Marketing: Leverage AWS data storage services like Amazon Redshift or Amazon RDS to store customer data and market trends. Apply AI and ML techniques using Amazon SageMaker to generate predictive models for targeted marketing campaigns and personalized recommendations.
  6. Investment and Portfolio Management: Utilize AWS data storage services like Amazon S3 or Amazon RDS to store financial market data and historical performance. Utilize Amazon SageMaker for building investment models and analyzing market trends for portfolio optimization.
  7. Regulatory Compliance and Reporting: Use AWS data storage services like Amazon S3 or Amazon Redshift to store relevant data for compliance reporting. Utilize AWS Glue for data extraction and transformation tasks. Apply AI and ML techniques using Amazon SageMaker for accuracy checks and automating compliance processes.
  8. Risk Management and Fraudulent Activity Monitoring: Utilize AWS data storage services like Amazon S3 or Amazon DynamoDB for storing transactional data. Apply AI and ML techniques using Amazon SageMaker to continuously monitor transactions and identify potential risks or fraudulent activities.
  9. Process Automation and Optimization: Use AWS Lambda, Amazon Step Functions, or AWS Glue to automate repetitive tasks and streamline processes. Leverage AI and ML services like Amazon SageMaker to automate intelligent decision-making and process optimization.
  10. Cybersecurity and Threat Detection: Utilize AWS security services like Amazon GuardDuty, AWS WAF, or AWS Macie for detecting and preventing security threats. Apply AI and ML techniques to analyze network traffic and identify anomalies using services like Amazon SageMaker or Amazon Rekognition.

These are general guidelines, and the specific implementation details may vary depending on your exact requirements and AWS services you choose. It’s recommended to explore the AWS documentation, AWS console, and AWS Marketplace for specific services, tools, and tutorials related to each use case.

Implementing these use cases using the Azure cloud platform involves utilizing various services and tools provided by Microsoft Azure. Here’s a high-level overview of how you can implement some of the use cases using Azure:

  1. Fraud Detection and Prevention: Use Azure services like Azure Storage or Azure Cosmos DB to store and process transactional data. Apply machine learning techniques using Azure Machine Learning to build fraud detection models. Utilize services like Azure Event Hubs or Azure Functions for real-time data processing and analysis.
  2. Risk Assessment and Credit Scoring: Leverage Azure data storage services like Azure SQL Database or Azure Data Lake Storage to store historical financial data. Utilize Azure Machine Learning for training and deploying machine learning models to assess creditworthiness. Use Azure Databricks for data preparation and transformation tasks.
  3. Customer Service and Chatbots: Use Azure Bot Service to build chatbots and virtual assistants that can interact with customers. Integrate the chatbots with other Azure services like Azure Functions or Azure Logic Apps to handle customer inquiries and perform banking tasks.
  4. AML Compliance: Utilize Azure data storage services like Azure SQL Database or Azure Cosmos DB to store and analyze transactional and customer data. Apply AI and ML techniques using Azure Machine Learning to identify suspicious activities and flag potential money laundering transactions.
  5. Predictive Analytics for Sales and Marketing: Leverage Azure data storage services like Azure SQL Database or Azure Data Lake Storage to store customer data and market trends. Apply AI and ML techniques using Azure Machine Learning to generate predictive models for targeted marketing campaigns and personalized recommendations.
  6. Investment and Portfolio Management: Utilize Azure data storage services like Azure SQL Database or Azure Cosmos DB to store financial market data and historical performance. Utilize Azure Machine Learning for building investment models and analyzing market trends for portfolio optimization.
  7. Regulatory Compliance and Reporting: Use Azure data storage services like Azure SQL Database or Azure Blob Storage to store relevant data for compliance reporting. Utilize Azure Data Factory for data extraction and transformation tasks. Apply AI and ML techniques using Azure Machine Learning for accuracy checks and automating compliance processes.
  8. Risk Management and Fraudulent Activity Monitoring: Utilize Azure data storage services like Azure SQL Database or Azure Cosmos DB for storing transactional data. Apply AI and ML techniques using Azure Machine Learning to continuously monitor transactions and identify potential risks or fraudulent activities.
  9. Process Automation and Optimization: Use Azure Logic Apps, Azure Functions, or Azure Automation to automate repetitive tasks and streamline processes. Leverage AI and ML services like Azure Machine Learning to automate intelligent decision-making and process optimization.
  10. Cybersecurity and Threat Detection: Utilize Azure security services like Azure Sentinel, Azure Security Center, or Azure Firewall for detecting and preventing security threats. Apply AI and ML techniques to analyze network traffic and identify anomalies using services like Azure Machine Learning or Azure Cognitive Services.

These are general guidelines, and the specific implementation details may vary depending on your exact requirements and Azure services you choose. It’s recommended to explore the Azure documentation, Azure portal, and Azure Marketplace for specific services, tools, and tutorials related to each use case.

Conclusion:

In conclusion, the field of Artificial Intelligence (AI) and Machine Learning (ML) offers immense potential and is transforming various industries, including finance, healthcare, retail, and more. To succeed in AI and ML, individuals need a combination of technical skills, mathematical understanding, programming proficiency, and problem-solving abilities.

A strong foundation in mathematics and statistics is essential to grasp the underlying principles of AI and ML algorithms. Programming skills, particularly in languages like Python or R, are vital for implementing algorithms, manipulating data, and working with specialized libraries and frameworks.

Knowledge of ML algorithms and techniques, such as regression, classification, clustering, and neural networks, empowers practitioners to choose the right approach for solving problems effectively. Furthermore, understanding data preprocessing, feature engineering, and visualization techniques is crucial for preparing data and extracting meaningful insights.

Deep learning, a subset of ML, requires familiarity with neural network architectures and frameworks like TensorFlow or PyTorch. Domain knowledge in specific industries enhances the ability to develop domain-specific AI and ML solutions.

Critical thinking, problem-solving skills, and adaptability are highly valuable in this evolving field, which demands continuous learning to stay updated with the latest advancements and technologies.

By mastering these skills and continuously expanding their knowledge, professionals in AI and ML can unlock new possibilities, develop innovative solutions, and contribute to the growth and transformation of industries across the globe. Embracing AI and ML opens the doors to creating intelligent systems that can revolutionize the way we live, work, and interact with technology.

Generics in C#

Generics were introduced in C# 2.0. Generics allow you to define a class with placeholders for the type of its fields, methods, parameters, and so on. The compiler replaces these placeholders with a specific type at compile time.

A generic class can be defined using angle brackets <>. For example, the following is a simple generic class with a generic member variable, generic method and property.

Example: Generic class

class MyGenericClass<T>
{
    private T genericMemberVariable;

    public MyGenericClass(T value)
    {
        genericMemberVariable = value;
    }

    public T genericMethod(T genericParameter)
    {
        Console.WriteLine("Parameter type: {0}, value: {1}", typeof(T).ToString(),genericParameter);
        Console.WriteLine("Return type: {0}, value: {1}", typeof(T).ToString(), genericMemberVariable);
            
        return genericMemberVariable;
    }

    public T genericProperty { get; set; }
}

As you can see in the above code, MyGenericClass is defined with <T>. The <> indicates that MyGenericClass is generic and that the underlying type will be defined later; for now, consider it as T. You can use any character or word instead of T.

Now, the compiler assigns the type based on the type passed by the caller when instantiating a class. For example, the following code uses the int data type:

Instantiate generic class:

MyGenericClass<int> intGenericClass = new MyGenericClass<int>(10);

int val = intGenericClass.genericMethod(200);
Output:

Parameter type: System.Int32, value: 200
Return type: System.Int32, value: 10

The compiler replaces T with int throughout the class, so MyGenericClass<int> is compiled as shown below.

Example: Generic class

class MyGenericClass
{
    private int genericMemberVariable;

    public MyGenericClass(int value)
    {
        genericMemberVariable = value;
    }

    public int genericMethod(int genericParameter)
    {
        Console.WriteLine("Parameter type: {0}, value: {1}", typeof(int).ToString(), genericParameter);
        Console.WriteLine("Return type: {0}, value: {1}", typeof(int).ToString(), genericMemberVariable);

        return genericMemberVariable;
    }

    public int genericProperty { get; set; }

}

You can use any type while instantiating MyGenericClass. For example, the following example uses a string type.

Example: Generic class

MyGenericClass<string> strGenericClass = new MyGenericClass<string>("Hello Generic World");

strGenericClass.genericProperty = "This is a generic property example.";
string result = strGenericClass.genericMethod("Generic Parameter");
Output:

Parameter type: System.String, value: Generic Parameter
Return type: System.String, value: Hello Generic World

Generic base class:

When deriving from a generic base class, you must provide a type argument instead of the base-class’s generic type parameter as shown below.

Example: Generic base class

class MyDerivedClass : MyGenericClass<string>
{ 
    //implementation
}
    

If you want the derived class to be generic, you don't need to specify a concrete type for the generic base class; instead, pass the derived class's type parameter through, as shown below.

Example: Generic derived class

class MyDerivedClass<U> : MyGenericClass<U>
{ 
    //implementation
}
    

If the generic base class has constraints, the derived class must use the same constraints.

Example: Generic class with constraints

class MyGenericClass<T> where T: class 
{
        // Implementation 
}

class MyDerivedClass<U> : MyGenericClass<U> where U: class
{ 
        //implementation
}

Generic Delegates:

As you have already learned in the previous section, a delegate defines the signature of the method it can invoke. A generic delegate can be defined in the same way as a regular delegate, but with a generic type parameter.

For example, consider the following generic delegate that takes two generic parameters.

Example: Generic Delegate

class Program
{
    public delegate T add<T>(T param1, T param2);

    static void Main(string[] args)
    {
        add<int> sum = AddNumber;

        Console.WriteLine(sum(10, 20));

        add<string> conct = Concate;

        Console.WriteLine(conct("Hello","World!!"));
    }

    public static int AddNumber(int val1, int val2)
    {
        return val1 + val2;
    }

    public static string Concate(string str1, string str2)
    {
        return str1 + str2;
    }
}
Output:

30
Hello World!!

In the above example, the add delegate is generic. In the Main() method, the variable sum is declared as an add<int> delegate, so it can point to the AddNumber() method, which takes int parameters. Another add delegate variable uses the string type, so it can point to the Concate method. In this way, you can use generic delegates with methods that take different types of parameters.

Note: A generic delegate can point to methods with different parameter types. However, the number of parameters should be the same.

Generics can be applied to the following:

  • Interface
  • Abstract class
  • Class
  • Method
  • Static method
  • Property
  • Event
  • Delegates
  • Operator

Advantages of Generics:

  1. Generics increase the reusability of code.
  2. Generics are type-safe. You get compile-time errors if you try to use a different type of data than the one specified in the definition.
  3. Generics have a performance advantage because they remove the possibility of boxing and unboxing.


Points to Remember:

  1. Generics are denoted with angle brackets <>.
  2. The compiler applies the specified type to the generic at compile time.
  3. Generics can be applied to interfaces, abstract classes, classes, methods, static methods, properties, events, delegates, and operators.
  4. Generics perform faster because they avoid boxing and unboxing.

Difference between Retrieve and Retrieve Multiple

Retrieve fetches a single record given the record ID. The method requires three parameters: the entity logical name, the record ID, and the columns (attributes) you want to retrieve. It throws an exception if the record ID is not found.

RetrieveMultiple runs a query against CRM; normally a QueryExpression is used that defines the entity logical name, the conditions of the query, and the columns (attributes) you want to retrieve. It always returns an EntityCollection object, and its Entities property contains the list of records that satisfy the query conditions, so you can have 0, 1, or n records returned.

Retrieve:

For better performance, use this method instead of using the Execute method with the Retrieve message.

  1. It is used to retrieve a single entity.
  2. This method is strongly typed.
  3. Retrieves an entity instance using the specified ID.

Parameters:

entityName

Specifies a String containing the name of the entity to retrieve. For more information, see Entity Names.

id

Specifies a Guid containing the ID of the entity to retrieve.

columnSet

Specifies the set of columns to retrieve. Pass null to retrieve only the primary key. To retrieve all columns, pass a new instance of the AllColumns class. See ColumnSetBase.

Return Value

Returns the BusinessEntity requested. The BusinessEntity contains only the columns specified by the columnSet parameter. The entity is of the type specified by the entityName parameter.

Syntax:

public BusinessEntity Retrieve(
    string entityName,
    Guid id,
    ColumnSetBase columnSet
);

Example:

// Instantiate an account object.
Entity account = new Entity("account");

// Set the required attributes. For account, only the name is required.
// See the Entity Metadata topic in the SDK documentation to determine
// which attributes must be set for each entity.
account["name"] = "Fourth Coffee";

// Create an account record named Fourth Coffee.
_accountId = _service.Create(account);
Console.Write("{0} {1} created, ", account.LogicalName, account.Attributes["name"]);

// Create a column set to define which attributes should be retrieved.
ColumnSet attributes = new ColumnSet(new string[] { "name", "ownerid" });

// Retrieve the account and its name and ownerid attributes.
account = _service.Retrieve(account.LogicalName, _accountId, attributes);
Console.Write("retrieved, ");

// Update the postal code attribute.
account["address1_postalcode"] = "98052";

// The address 2 postal code was set accidentally, so set it to null.
account["address2_postalcode"] = null;

// Shows use of Money.
account["revenue"] = new Money(5000000);

// Shows use of boolean.
account["creditonhold"] = false;

// Update the account.
_service.Update(account);
Console.WriteLine("and updated.");

RetrieveMultiple:

Retrieves a collection of entity instances based on the specified query criteria.

  1. The ColumnSet specified within the QueryExpression can only include the objects of the type of the calling entity.
  2. This method will return the result as a BusinessEntityCollection.
  3. It returns multiple records of a single entity. Use this method to retrieve one or more entity instances based on criteria specified in the QueryExpression.

Syntax:

public BusinessEntityCollection RetrieveMultiple(
    QueryBase query
);

Parameters

query

Specifies either a QueryExpression or a QueryByAttribute object derived from the QueryBase class. This is the query to be executed for an entity.

The QueryExpression or QueryByAttribute object contains the type information for the entity.

Return Value: A BusinessEntityCollection that is a collection of entities of the type specified in the query parameter.

 Remarks

Use this method to retrieve one or more entity instances based on criteria specified in the QueryExpression. For better performance, use this method instead of using the Execute method with the RetrieveMultiple message.

To perform this action, the caller must have access rights on the entity instance specified in the request class.

For a list of required privileges, see Retrieve Privileges.

The ColumnSet specified within the QueryExpression can only include the objects of the type of the calling entity.

For more information, see Using ColumnSet.

Example:

// Create the ColumnSet that indicates the properties to be retrieved.
ColumnSet cols = new ColumnSet();

// Set the properties of the ColumnSet.
cols.Attributes = new string[] { "fullname", "contactid" };

// Create the ConditionExpression.
ConditionExpression condition = new ConditionExpression();

// Set the condition for the retrieval to be when the contact's address' city is Sammamish.
condition.AttributeName = "address1_city";
condition.Operator = ConditionOperator.Like;
condition.Values = new string[] { "Sammamish" };

// Create the FilterExpression.
FilterExpression filter = new FilterExpression();

// Set the properties of the filter.
filter.FilterOperator = LogicalOperator.And;
filter.Conditions = new ConditionExpression[] { condition };

// Create the QueryExpression object.
QueryExpression query = new QueryExpression();

// Set the properties of the QueryExpression object.
query.EntityName = EntityName.contact.ToString();
query.ColumnSet = cols;
query.Criteria = filter;

// Retrieve the contacts.
BusinessEntityCollection contacts = service.RetrieveMultiple(query);

Execute Method:

The Execute method executes business logic.

  • It returns a Response object and accepts a parameter as the input of the Request type.
  • You can use this method as a wildcard for all the other methods.
  • This means that you can create an Account by using this method because the class called CreateRequest derives from Request and can be used as the input parameter;
  • you receive a CreateResponse as the result.
  • The same happens for UpdateRequest, UpdateResponse, RetrieveRequest, and RetrieveResponse.
  • The Execute method executes a message that represents either a specialized method or specific business logic.
  • Unlike Retrieve, it is not strongly typed: the returned Response must be cast to the specific response class that corresponds to the request.
  • It is not limited to retrieving a single entity; it can carry out any message the platform supports.

Syntax:

public Response Execute(Request request);

Parameters:

Request

Specifies a specific Request instance.

Return Value

Returns an instance of a Response. You must cast the return value of the Execute method to the specific instance of the response that corresponds to the Request parameter.

Remarks

To perform this action, the caller must have the necessary privileges to the entity type specified in the request class. The caller must also have access rights on the entity instances specified in the request class.

Example:

// Set up the CRM service.
CrmAuthenticationToken token = new CrmAuthenticationToken();

// You can use enums.cs from the SDK\Helpers folder to get the enumeration for Active Directory authentication.
token.AuthenticationType = 0;
token.OrganizationName = "AdventureWorksCycle";

CrmService service = new CrmService();
service.Url = "http://<servername>:<port>/mscrmservices/2007/crmservice.asmx";
service.CrmAuthenticationTokenValue = token;
service.Credentials = System.Net.CredentialCache.DefaultCredentials;

// Create the request object.
AddItemCampaignRequest add = new AddItemCampaignRequest();

// Set the properties of the request object.
add.CampaignId = campaignId;
add.EntityId = productId;
add.EntityName = EntityName.product;

// Execute the request.
AddItemCampaignResponse added = (AddItemCampaignResponse)service.Execute(add);

Fetch Method

Retrieves entity instances in XML format based on the specified query expressed in the FetchXML query language.

1. Use this method to execute a query expressed in the FetchXML query language.
2. Returns one or more entity instances.
3. This method will return the resultant XML as a string.
4. Not strongly typed.

Syntax:

public string Fetch(
    string fetchXml
);

Parameters:

fetchXml

Specifies a String that contains the fetch query string to be executed.

Return Value

Returns an XML String type that contains the results of the query.

Remarks

Use this method to execute a query expressed in the FetchXML query language.

To perform this action, the caller must have the Read privilege to the entity types being retrieved and access rights on the entity instances retrieved.

Example:

// Set up the CRM service.
CrmAuthenticationToken token = new CrmAuthenticationToken();

// You can use enums.cs from the SDK\Helpers folder to get the enumeration for Active Directory authentication.
token.AuthenticationType = 0;
token.OrganizationName = "AdventureWorksCycle";

CrmService service = new CrmService();
service.Url = "http://<servername>:<port>/mscrmservices/2007/crmservice.asmx";
service.CrmAuthenticationTokenValue = token;
service.Credentials = System.Net.CredentialCache.DefaultCredentials;

// Retrieve all attributes for all accounts.
// Be aware that using all-attributes may adversely affect
// performance and cause unwanted cascading in subsequent
// updates. A best practice is to retrieve the least amount of
// data required.
string fetch1 = @"<fetch mapping=""logical"">
                    <entity name=""account"">
                       <all-attributes/>
                    </entity>
                 </fetch>";

// Fetch the results.
String result1 = service.Fetch(fetch1);

// Retrieve the name and account ID for all accounts where
// the account owner's last name is not Cannon.
string fetch2 = @"<fetch mapping=""logical"">
                    <entity name=""account"">
                       <attribute name=""accountid""/>
                       <attribute name=""name""/>
                       <link-entity name=""systemuser"" to=""owninguser"">
                          <filter type=""and"">
                             <condition attribute=""lastname"" operator=""ne"" value=""Cannon""/>
                          </filter>
                       </link-entity>
                    </entity>
                 </fetch>";

// Fetch the results.
String result2 = service.Fetch(fetch2);

Pre-Validate, Pre-Operation, Post-Operation:

In Microsoft Dynamics CRM 2011, plugins can be triggered to fire on a variety of messages (update, create, win etc).

However you can also specify when exactly the plugin will fire in relation to the given message.

Your choice is the following:

Pre-validation of the given message (pre-validation of a record deletion for example).

Pre-operation (pre-operation of a record update for example).

Post-operation (post-operation of a record create).
Which trigger point you choose can be very important; I will attempt to explain when you should use each type:

Pre-Validation
Pre-Validation plugins run before the actual operation is validated within CRM and could allow the user to do such things as custom checks to determine if a record can be deleted at this point in time.
Basically, the Pre-validation stage allows your plugin to run outside the SQL transaction; it runs before the operation is validated.
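As a rough illustration, here is a minimal sketch of a Pre-validation plugin registered on the Delete message (the class name and the business rule are hypothetical; this is not a definitive implementation):

// Hypothetical pre-validation plugin: cancel the delete if a custom condition is not met.
public class PreValidateAccountDelete : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        IPluginExecutionContext context =
            (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // For the Delete message the Target is an EntityReference, not a full Entity.
        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is EntityReference)
        {
            EntityReference target = (EntityReference)context.InputParameters["Target"];

            // Replace this with your own check (e.g. look up related records first).
            bool deletionAllowed = false;

            if (!deletionAllowed)
            {
                // Throwing here cancels the operation before it reaches the database transaction.
                throw new InvalidPluginExecutionException(
                    "This record cannot be deleted at this point in time.");
            }
        }
    }
}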

Pre-Operation
Plugins that fire in this stage tend to involve operations which could result in updating the target record itself before it is actually saved in the system.

The Pre-operation stage runs after validation and before the values are saved to the database.

Post-Operation
Note: avoid using this mode if your plugin is going to update the current entity. This mode is best utilized when the plugin simply needs to reference the newly created record (but not actually update it) in order to perform further operations on other records in the system. Problems can arise if you try to update the target entity in a post-operation plugin: for example, if you have a plugin that runs post-update of an entity and the same plugin updates the target entity, an infinite loop will occur.

The Post-operation plugin runs after the values have been inserted/changed in the database.


What is CrmSvcUtil.exe?

Developer Extensions for Microsoft Dynamics CRM 2015 provides an extension to the CrmSvcUtil.exe command-line tool, called the Microsoft.Xrm.Client.CodeGeneration extension. 

Advanced Developer Extensions for Microsoft Dynamics CRM provides a command-line code generation tool called CrmSvcUtil.exe that is used to generate a data context class as well as data transfer objects (DTOs) for all Microsoft Dynamics CRM entities.

We use CrmSvcUtil to generate early-bound classes in CRM 2011. We also have an option to generate an Organisation service context. The only problem is that it is a command-line tool: we have to go to the command prompt, type the command with the required parameters and then move the generated file into our project, which is a bit annoying.
Here are the steps to use the CrmSvcUtil tool from within Visual Studio.

  1. In Visual Studio, create a new "Class Library" project as shown in the following screen shot.
  2. Delete the class file "Class1.cs" created by Visual Studio.
  3. Add the following files to the project from the SDK\Bin folder of the CRM SDK:
  • CrmSvcUtil.exe
  • Microsoft.Crm.Sdk.Proxy.dll
  • Microsoft.Xrm.Sdk.dll
  4. Add an application configuration file to the project and name it "CrmSvcUtil.exe.config". This file will contain all the parameters we can pass to CrmSvcUtil.exe. The solution explorer will look like the following screen.
  5. Here is a list of all the parameters we can use with CrmSvcUtil.
  6. Add the following keys to the CrmSvcUtil.exe.config file.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="url" value="https://cdc.crm5.dynamics.com/XRMServices/2011/Organization.svc"/>
    <add key="o" value="CrmProxy.cs"/>
    <add key="u" value="username@live.com"/>
    <add key="p" value="password"/>
    <add key="servicecontextname" value="XrmContext"/>
  </appSettings>
</configuration>

  7. Now the interesting part: right-click on the project node and select Properties.
  8. It will pop up the following dialog box. Click on the "Debug" tab.
  9. Select "Start external program" and choose the CrmSvcUtil.exe we added in step 3.
  10. Now choose the "Working directory" where you want the output file to go.
  11. Debug the code and it will come up with the following screen.
  12. You can check the "Output Window" of Visual Studio for the result. If everything goes smoothly, it will create the "CrmProxy.cs" file in the folder selected as the "Working directory".
  13. Include the "CrmProxy.cs" file into the project.
  14. Check CrmProxy.cs; it will have all the CRM entity classes and the "XrmContext".

Tips

You can add, remove and edit keys in CrmSvcUtil.exe.config to pass parameters to the code generator tool. Try accessing CRM through the browser before debugging if you are working with CRM Online. You can add this project to any of your CRM solutions. Change the "Working directory" of the project to generate the CrmProxy file in a desired folder.

Life cycle of plug-in

Pre-Event – Pre-validation (stage 10): Stage in the pipeline for plug-ins that are to execute before the main system operation. Plug-ins registered in this stage may execute outside the database transaction.
Security Note: The pre-validation stage occurs prior to security checks being performed to verify that the calling or logged-on user has the correct permissions to perform the intended operation.

Pre-Event – Pre-operation (stage 20): Stage in the pipeline for plug-ins that are to execute before the main system operation. Plug-ins registered in this stage are executed within the database transaction.

Platform Core Operation – MainOperation (stage 30): In-transaction main operation of the system, such as create, update, delete, and so on. No custom plug-ins can be registered in this stage. For internal use only.

Post-Event – Post-operation (stage 40): Stage in the pipeline for plug-ins which are to execute after the main operation. Plug-ins registered in this stage are executed within the database transaction.

 Difference between Secure / Unsecure Configuration of Plugin Registration tool

Unsecure Configuration of Plugin Registration tool in CRM 2011:

• Unsecure configuration information can be read by any user in CRM. Remember, it is public information (e.g. parameter strings to be used in the plugin could be supplied here).
• Imagine that you include a plugin and plugin steps and activate them in a solution, and the solution is later exported as a Managed Solution to another environment. In this scenario, the supplied Unsecure configuration values would be available in the new environment.

Secure Configuration of Plugin Registration tool in CRM 2011:

• The Secure configuration information can be read only by CRM Administrators (e.g. data restricted from normal users could be supplied here).
• In the same export scenario, the supplied Secure configuration information would NOT be available in the new environment. The simple reason behind this is to provide more security to the contents of the Secure configuration.

 What is Metadata?

 Microsoft Dynamics CRM 2015 and Microsoft Dynamics CRM Online uses a metadata driven architecture to provide the flexibility to create custom entities and additional system entity attributes.

All the information necessary for Microsoft Dynamics CRM server to operate is stored in the Microsoft Dynamics CRM metadata. This includes information about entities, attributes, relationships, and option sets.

The main metadata objects are:

Entity: An entity is a container for data, similar to a table in a traditional database. Each entity contains a set of attributes. For Microsoft Dynamics CRM, there are a set of entities that exist when you first install. Some of these are customizable. In addition, you can create custom entities to contain business data.
Attribute: An attribute is a container for a piece of data in an entity. Microsoft Dynamics CRM supports a wide variety of attribute types.
Relationship: A relationship defines an association between two entities: one-to-many, many-to-one, many-to-many, and self-referential.
Option Set: An option set defines a set of options provided for a picklist. Several picklist attributes may use a global option set so that the options they provide are always the same and can be maintained in one location.
Option: An option is one of the values available in an option set. Each option in an option set has a unique integer value and an associated set of localized labels.
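If you need to read this metadata from code, the SDK exposes it through the Organization service. Here is a minimal sketch (assuming an already-connected IOrganizationService named service; the RetrieveEntityRequest/Response types are in Microsoft.Xrm.Sdk.Messages and the metadata types in Microsoft.Xrm.Sdk.Metadata):

// Retrieve the metadata for a single entity, including its attributes.
RetrieveEntityRequest request = new RetrieveEntityRequest
{
    LogicalName = "account",
    EntityFilters = EntityFilters.Attributes
};

RetrieveEntityResponse response = (RetrieveEntityResponse)service.Execute(request);

// List each attribute's logical name and type.
foreach (AttributeMetadata attribute in response.EntityMetadata.Attributes)
{
    Console.WriteLine("{0} ({1})", attribute.LogicalName, attribute.AttributeType);
}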

 Note: Want to know the quickest way to view CRM metadata? This short and sweet section will show you how you can use the Organization Data service's $metadata option to generate a quick view of the CRM entities.

Simply navigate to System > Customization > Developer Resources to see the Organization Data service endpoint.

Click on the link and a new window will be displayed.

Append the /$metadata option at the end of the URL and hit Enter.
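For example, for a hypothetical organization the full address would look something like https://yourorg.crm.dynamics.com/XRMServices/2011/OrganizationData.svc/$metadata; the browser then returns an XML document describing the entity sets and their properties.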

Registering and Deploying Plug-ins

 Registering and deploying plug-ins can be done using the plug-in registration tool.

  1. Connect to your organization. If you have access to multiple organizations on the server, choose the one to connect to.
  2. Register a new assembly.
  3. Browse to the assembly file, select the Isolation Mode and specify where the assembly is stored.
  4. Next you'll need to select the registered assembly. It can contain multiple plug-ins. Select the plugin you are adding steps to, and register one or more steps.
  5. Fill in the following information for the steps:
  • Message
  • Entity
  • Filtering Attributes, if applicable. In the above example, the plugin will only trigger for statecode or salesstagecode updates. Selecting the attributes prevents the plugin from triggering accidentally or needlessly when an unrelated field is updated.
  • Event Pipeline
  • Execution Mode
  6. Fill in the Unsecure Configuration/Secure Configuration sections. These sections can be used to pass configuration information to the plug-in, such as user credentials or URLs (see the sketch after this list). The difference between secure and unsecure configuration is as follows:
  • Secure Configuration does not move with solutions. It has to be re-configured for each environment.
  • Secure Configuration can be viewed only by CRM administrators.
  7. If applicable, select a step and register an image. Choose whether it's a Pre Image or Post Image. You'll also need to select the attributes that you would like to be contained within the image.
  8. After the plug-in and the steps have been registered, they can be included in CRM solutions and deployed with unmanaged or managed solutions.
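As a rough sketch of how those configuration strings reach your code: when a step is registered with Unsecure/Secure Configuration values, the plugin class can expose a two-string constructor and the platform passes the values in. The class name below is hypothetical:

public class ConfigAwarePlugin : IPlugin
{
    private readonly string _unsecureConfig;
    private readonly string _secureConfig;

    // The Plugin Registration tool passes the Unsecure configuration as the first
    // argument and the Secure configuration as the second.
    public ConfigAwarePlugin(string unsecureConfig, string secureConfig)
    {
        _unsecureConfig = unsecureConfig;
        _secureConfig = secureConfig;
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        // Use the configuration values here, for example an endpoint URL from the
        // unsecure configuration and credentials from the secure configuration.
    }
}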

 DEPLOYING THE PLUGIN

Now you have finished writing the code, it’s time to deploy the plugin in CRM.  Your user will need to be a CRM Administrator.  If you are deploying plugins which are not in sandbox Isolation mode then you will also need to be a Deployment Administrator.

As I am deploying to CRM 2013 Online, the user only needs to be a CRM Administrator, because the plugin has to run in sandbox isolation mode.

Right Click on CrmPackage

Press the Deploy button

You will either get an error or it will have worked. Often when it has worked it will inform you that the RegisterFile.crmregister file has changed; this is the CRM 2013 Developer Toolkit updating the file with GUIDs for the plugins.

Check that the plugin has been registered in the CRM organisation/database by looking at the CRM Explorer and checking the Plug-in Assemblies section; you should see your new plugin highlighted in green.

You can also check by opening your Solution and looking in the Plug-in Assemblies section.

Step by step plugin tutorial using Developer’s Toolkit

Install the Developer Toolkit. The Developer Toolkit for Microsoft Dynamics CRM 2011 was released as part of the UR5 SDK release and is available for download here.

  1. Create a new solution in CRM 2011. I named my solution "CRM Plugin Solution". This is optional but I would recommend you do that.
  2. Open Visual Studio 2010. Select File –> New –> Project. It will display the new project templates dialog as shown in the screenshot below.
  3. Select the "Dynamics CRM 2011 Package" project. This is also optional. You can go ahead and select "Dynamics CRM 2011 Plugin Library", but then you cannot deploy the plugin straight from Visual Studio; you have to use the Plugin Registration tool to register the plugin. Enter the name of the project/solution.
  4. Visual Studio will display the following dialog. Enter your CRM 2011 server details, select the solution name we created in step 1 and click OK.
  5. Now right-click on the solution and add a "Dynamics CRM 2011 Plugin Library" project to the solution. The plugin project will already have a plugin.cs file.
  6. Now sign the plugin assembly. Right-click on the Plugin project and select Properties. Select the Signing tab from the left navigation of the project property page. On the Signing tab, select the "Sign the assembly" check box and set the strong name key file of your choice. At a minimum, you must specify a new key file name. Do not protect your key file by using a password.
  7. If you cannot see the "CRM Explorer" window on the upper left side of Visual Studio, click on the View menu and select "CRM Explorer".
  8. Now expand the "Entities" node. Right-click the entity you want to create the plugin for and select "Create Plugin".
  9. It will display the following screen. It is equivalent to the "Create Step" screen in the Plugin Registration tool. The dialog will pick up the name of the entity and other information. Choose the message and the pipeline stage. You can also change the Class attribute. Press OK.
  10. It will create a .cs file with the name mentioned in the "Class" attribute in the screenshot above. Double-click on the class file (PostAccountCreate) and scroll down to the following lines of code. You write your business logic here.
  11. The generated method stub looks like this:

protected void ExecutePostAccountCreate(LocalPluginContext localContext)
{
    if (localContext == null)
    {
        throw new ArgumentNullException("localContext");
    }

    // TODO: Implement your custom Plug-in business logic.
}

  12. The first thing you will do is get the plugin context, the CRM service instance and the tracing service instance using the localContext passed to the function. All these objects are defined in the built-in plugin.cs class.

IPluginExecutionContext context = localContext.PluginExecutionContext;
IOrganizationService service = localContext.OrganizationService;
//ITracingService tracingService = localContext.TracingService;

  13. Here is the code. It will check if the "account number" is null or empty and create a task for a user to enter the account number.

protected void ExecutePostAccountCreate(LocalPluginContext localContext)
{
    if (localContext == null)
    {
        throw new ArgumentNullException("localContext");
    }

    // TODO: Implement your custom Plug-in business logic.

    // Obtain the execution context and services from the local context.
    IPluginExecutionContext context = localContext.PluginExecutionContext;
    IOrganizationService service = localContext.OrganizationService;
    //ITracingService tracingService = localContext.TracingService;

    // The InputParameters collection contains all the data passed in the message request.
    if (context.InputParameters.Contains("Target") &&
        context.InputParameters["Target"] is Entity)
    {
        // Obtain the target entity from the input parameters.
        Entity entity = (Entity)context.InputParameters["Target"];

        //EntityReference pp = entity.GetAttributeValue<EntityReference>("primarycontactid");
        //tracingService.Trace(pp.LogicalName);

        try
        {
            // Check if the account number exists.
            if (entity.Attributes.Contains("accountnumber") == false)
            {
                // Create a task.
                Entity task = new Entity("task");
                task["subject"] = "Account number is missing";
                task["regardingobjectid"] = new EntityReference("account", new Guid(context.OutputParameters["id"].ToString()));

                // Adding an attribute using the Add function.
                // task["description"] = "Account number is missing for the following account. Please enter the account number";
                task.Attributes.Add("description", "Account number is missing for the following account. Please enter the account number");

                // Create the task in Microsoft Dynamics CRM.
                service.Create(task);
            }
        }
        catch (FaultException ex)
        {
            throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
        }
    }
}

I also left a few commented lines in the code to show how to use the tracing service to write to the trace log. You will only be able to see the trace if there is an error in the plugin.

  14. Now right-click on the CRM Package project we created in step 2 and select Deploy. It will register the plugin assembly as well as the step for the plugin. Click on CRM Explorer to check the deployed plugin as shown in the following screenshot.
  15. Create a new account and test the plugin.

Context.Depth

 Used by the platform for infinite loop prevention. In most cases, this property can be ignored.

Every time a running plug-in or Workflow issues a message request to the Web services that triggers another plug-in or Workflow to execute, the Depth property of the execution context is increased. If the depth property increments to its maximum value within the configured time limit, the platform considers this behavior an infinite loop and further plug-in or Workflow execution is aborted.

The maximum depth (8) and time limit (one hour) are configurable by the Microsoft Dynamics CRM administrator using the PowerShell command Set-CrmSetting. The setting is WorkflowSettings.MaxDepth.

   (or)

 This property is read only.

A plug-in can invoke a Web service call which causes another plug-in to execute, and so on. The system limits depth to a value of eight (8). After a depth of 8, an exception is thrown by the system to avoid an infinite loop.

Used for infinite loop prevention. In most cases, this property can be ignored.

context.Depth returns an integer value.

Example:

  1. User clicks save on a form for an entity which has a plugin registered.  This plugin will have a Depth = 1.  The plugin logic creates an instance of another entity.
  2. The second entity has a plugin registered on the Create event; when fired from the above plugin during the update it will get a Depth = 2. If it was triggered directly from the form it would have a Depth of 1.
  3. Depth is specific to the context but not the message instance triggered.
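A minimal sketch (assuming a standard IPlugin registration; the class name is hypothetical) of using Depth to guard against re-entrancy might look like this:

public class GuardedPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        IPluginExecutionContext context =
            (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Depth is 1 when the plugin is triggered directly by a user action.
        // A value greater than 1 means this execution was triggered by another
        // plug-in or workflow, so we exit early to avoid looping.
        if (context.Depth > 1)
        {
            return;
        }

        // ... main plugin logic goes here ...
    }
}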

What is Queue entity in MSCRM?

Queues are containers that hold activities and incidents (cases) in Microsoft Dynamics CRM 4. They make it easier for these entities to be moved around the system, handled and assigned to different individuals within the system.

A queue is a collection of unassigned cases and activities for an organization. You can move activities and incidents (cases) between queues using the Route message. Incidents can also be assigned to a queue by a workflow rule. There are three types of queues:

  • A public queue is created by the business unit to help organize where activities and cases should be routed. The business unit can add queues, delete queues, and update queues.
  • A private queue contains all items assigned to a user that they have not started working on yet. This queue is fixed and the user cannot delete it. Each user has a private queue.
  • A work in progress queue contains all items assigned to a user that they are currently working on. Each user has a work-in-progress queue. This queue is fixed and the user cannot change it.

 A queue is a holding place for work that needs to be done.

  1. A queue can be filled with cases to respond to or activities to complete.
  2. You might also be able to send leads, accounts, contacts or other records to queues.
  3. Queues improve the routing and sharing of work by making cases and activities available in a centralized place that everyone can access.
  4. Queue items are either assigned to you or to a team you are a member of.
  5. After you pick an item, only you can work on it until you release it back to the queue or assign it to someone else.
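As a rough sketch of routing a record to a queue from code, the CRM 2011+ SDK uses the AddToQueueRequest message (rather than the CRM 4 Route message). The service connection, caseId and supportQueueId below are assumed to exist already:

// Assumes an already-connected IOrganizationService named service,
// plus hypothetical caseId and supportQueueId GUID values.
AddToQueueRequest request = new AddToQueueRequest
{
    Target = new EntityReference("incident", caseId),
    DestinationQueueId = supportQueueId
};

service.Execute(request);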

 

Early binding and Late binding

Early Binding:
To use early binding, you must generate the entity classes by using the code generation tool called CrmSvcUtil.exe, a console application that you can find in the SDK\bin folder:
CrmSvcUtil.exe /url:http://crmwoifd/demo/XRMServices/2011…
After running it we get GeneratedCode.cs, through which we can write early-bound code.
Early binding provides IntelliSense support and better code development with .NET technologies. For example, when we are writing code for the account entity, typing "account" will bring up all the field names and so on.

The compiler binds objects to methods at compile time, and all type checking is done at compile time. This is called early binding or static binding. Function overloading is an example of early binding.

Late Binding:
The compiler binds objects to methods at runtime, and all type checking is done at runtime rather than compile time. This is called late binding or dynamic binding. Function overriding is an example of late binding.

When using late binding, accessing and using the Organization Service is harder, because you need to explicitly type both the entity and attribute names. This makes it easier to make mistakes while writing the code and get an error.

Performance:
Late binding has better performance than early binding, because the late-binding process starts fast and remains constant throughout, whereas early binding starts slower and keeps increasing its speed but never crosses the speed of late binding, as I read on Microsoft MSDN.

Note: additional CrmSvcUtil parameters: /out:GeneratedCode.cs /username:bill /password:p@ssword!

With early binding we are able to download proxy classes, which are something like service classes: when we consume the web services we can download the WSDL and proxy files, which automatically create the classes, and with the help of those classes we write our code; that is early binding.
Early binding provides IntelliSense support and better code development with .NET technologies such as the .NET Framework Language-Integrated Query (LINQ). Suppose we are writing code for the account entity: typing "account" will bring up all the field names and so on.

The key difference between early and late binding involves type conversion. While early binding provides compile-time checking of all types so that no implicit casts occur, late binding checks types only when the object is created or an action is performed on the type.
The Entity class requires types to be explicitly specified to prevent implicit casts.

With late binding we define the structure as the code executes, so all the data types are validated at runtime; in the case of early binding, all the type checking is done when we build the application.
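To make the difference concrete, here is a minimal sketch (assuming an already-connected IOrganizationService named service and an Account class generated by CrmSvcUtil.exe; the attribute values are hypothetical):

// Late binding: entity and attribute names are plain strings,
// so typos are only caught at runtime.
Entity lateBoundAccount = new Entity("account");
lateBoundAccount["name"] = "Adventure Works";
Guid lateId = service.Create(lateBoundAccount);

// Early binding: the generated Account class gives IntelliSense
// and compile-time type checking.
Account earlyBoundAccount = new Account
{
    Name = "Adventure Works"
};
Guid earlyId = service.Create(earlyBoundAccount);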

 

 

 

Sample Console Application Using the Service Endpoint (WhoAmIResponse)

Custom fields used in this example (schema name - display name):

new_name - Zipcode
new_owner1 - Owner
new_firstname - FirstName
new_lastname - Lastname
new_name - Name
new_emailaddress - Email Address
new_salary - Salary

new_iscontactcreatedbycrm
new_city
new_dateofbirth

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Crm.Sdk;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Client;
using System.Configuration;
using System.ServiceModel;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Client.Services;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk.Client;
namespace Datatypes
{
    class Program
    {
        private Guid _accountId;
        static IOrganizationService _orgService;

        static void Main(string[] args)
        {
            //String ConnectionString = ConfigurationManager.ConnectionStrings["connection"].ToString();
            //String[] appConfigEntity = ConfigurationManager.ConnectionStrings["ConnectionString"].Split(',');
            //CrmServiceClient conn = new CrmServiceClient(ConfigurationManager.ConnectionStrings["Connection"].ConnectionString);
            ////var connection = CrmConnection.Parse("Url=crm url; Domain=org22ddaf0b; Username=yourusername; Password=yourpassword;");
            //_service = (IOrganizationService)conn.OrganizationWebProxyClient != null ? (IOrganizationService)conn.OrganizationWebProxyClient : (IOrganizationService)conn.OrganizationServiceProxy;

            //Guid userid = ((WhoAmIResponse)_service.Execute(new WhoAmIRequest())).UserId;
            ////SystemUser systemUser = (SystemUser)_service.Retrieve("systemuser", userid, new ColumnSet(new string[] { "firstname", "lastname" }));
            ////Console.WriteLine("Logged on user is {0} {1}.", systemUser.FirstName, systemUser.LastName);
            //Console.ReadLine();

            //String ConnectionString = ConfigurationManager.ConnectionStrings["Crm"].ToString();
            //CrmConnection connection = CrmConnection.Parse(ConnectionString);

            // Connect to the Organization service with explicit credentials.
            ClientCredentials credentials = new ClientCredentials();
            credentials.UserName.UserName = "chaitanya@mkkgroup.onmicrosoft.com";
            credentials.UserName.Password = "uday@KIRAN1188";
            Uri serviceUri = new Uri("https://mkkgroup.api.crm8.dynamics.com/XRMServices/2011/Organization.svc");
            OrganizationServiceProxy proxy = new OrganizationServiceProxy(serviceUri, null, credentials, null);
            proxy.EnableProxyTypes();
            _orgService = (IOrganizationService)proxy;
            //OrganizationService _orgService = new OrganizationService(connection);

            //Guid userid = ((WhoAmIResponse)_orgService.Execute(new WhoAmIRequest())).UserId;

            // Build a late-bound contact and fill its attributes from console input.
            Entity contact = new Entity();
            contact.LogicalName = "contact";
            Console.WriteLine("To Fill the Account details to create an account Press Enter");
            Console.ReadLine();
            Console.WriteLine("Press enter to give City");
            if (Console.ReadLine() == "Hyderabad")
            {
                OptionSetValue obj = new OptionSetValue(Convert.ToInt32(100000000));
                contact.Attributes["new_city"] = obj;
            }
            Console.WriteLine("Type yes if it is from crm or else no ");
            if (Console.ReadLine() == "yes")
            {
                contact.Attributes["new_iscontactcreatedbycrm"] = true;
            }
            Console.WriteLine("Enter First Name");
            contact.Attributes["firstname"] = Console.ReadLine().ToString();
            Console.WriteLine("Enter Date of Birth");
            Console.ReadLine();
            Console.WriteLine("Enter Day");
            String Day = Console.ReadLine().ToString();
            Console.WriteLine("Enter Month");
            String Month = Console.ReadLine().ToString();
            Console.WriteLine("Enter Year");
            String Year = Console.ReadLine().ToString();
            DateTime dob = new DateTime(Convert.ToInt32(Year), Convert.ToInt32(Month), Convert.ToInt32(Day));
            contact.Attributes["new_dateofbirth"] = dob;

            // Create the contact record in CRM.
            _orgService.Create(contact);
        }
    }
}

MSCRM Tricky Questions

1) There are three entities. When I update a field of Opportunity, it should update a field of Account, and when that is updated, a field of Contact should be updated. How do you achieve this requirement?

Ans – Using an OOB workflow.

2) If there are three fields in an entity, how do you make the entity searchable through all these fields in Quick Find?

Ans – Under the Quick Find view –> Add Find Columns (for searching); under the Quick Find view –> Add View Columns (for displaying).

3) There are a Sales Manager and a Salesman, and there are three fields: Itemcost, Selling Price and Revenue. Revenue is not visible to the Salesman. When the other two fields are updated, Revenue should be updated. How do you achieve this?

Hint – Impersonation

Ans – We need to run in the Admin context, here the Sales Manager. In the plugin, if we use impersonation (i.e. run under the Admin's context), we can trigger the plugin to update the Revenue field based on Itemcost, but the Revenue field will still not be visible.
So I think the right answer is: if the Revenue field is simply not visible on the form, it can still be found via Advanced Find.

If the data in the Revenue field must not be visible, we need to apply field level security to that field, i.e. restrict field level access for that user, the Salesman.

4) There is an entity named Account and it has related contact records. When the parent account record is deleted, all its child records should be deleted, but when the parent record is assigned to another owner the child records should not be reassigned. How do you achieve this?

Ans – In configurable cascading there are six options: Assign, Share, Unshare, Delete, Reparent and Merge.

We need to select Configurable Cascading and set Assign to Cascade None and Delete to Cascade All.

5) There are two entities, Account and Contact. If a field of Account is updated, it should update a field of Contact. I used a plugin for this and it is not working properly; what are the resolution steps? I also used an OOB workflow and it is not working properly; what are the resolution steps?

Ans – If it is On-premise we can investigate from the back end, i.e. from SQL Server.
The better answer is trace messages: if we have used the tracing service 10 or 12 times in the code (for an Online MSCRM instance), we can trace where the code has a problem, i.e. where it is giving an error. We can see the error in the tracing service Tracelog record.

We can find the Tracelog records under Settings –> Plugin Trace Log entity.
To enable the trace log: Settings –> Administration –> System Settings –> Customization tab –> Enable logging to plugin tracing. By default it is off; we need to choose All or Exception.
Note: the contents of this trace log are only output when an exception is thrown in a plugin/workflow activity.

With CRM 2015 Update 1 you can turn on trace logging for 'All' in the Settings and this will write to the 'Plugin Trace' entity.
For On-premise, the tracing service is automatically enabled and we can use Fiddler, in which the call will be traced; when we click that line in the upper left pane it shows the error in the lower right pane of Fiddler. Or we can look in Event Viewer (open it with eventvwr via Windows+R) for tracing information.

There we can find the tracing entries under Application.

Ans

If the plugin is synchronous it will give the error instantly; if it is asynchronous we check System Jobs –> log. After seeing the error I'll go to dev, fix it and deploy to Production.
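As a minimal sketch of the tracing approach mentioned above (assuming the standard serviceProvider parameter of IPlugin.Execute and a previously obtained target entity; the message text is just an example):

// Obtain the tracing service from the service provider inside Execute().
ITracingService tracingService =
    (ITracingService)serviceProvider.GetService(typeof(ITracingService));

tracingService.Trace("Entered the update plugin");
tracingService.Trace("Target entity has {0} attributes", entity.Attributes.Count);

// If an exception is thrown later, these messages show up in the Plugin Trace Log.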

6) If some ribbon buttons are not working properly, what are the resolution steps?

Ans – The error can be caught in Fiddler and resolved from there, or we need to check the SDK, because there can be an error in the SDK file. If it is not caught in Fiddler and the ribbon button is not an OOB button, we can debug using the F12 tools and place breakpoints to find the error. If it is an OOB button, we first need to open that entity's RibbonXml under SDK –> Resources (the SDK has the RibbonXml for all entities), go to that ribbon button and check which function it uses. Copy that function name, search for it in the debugger after pressing F12, and place breakpoints at the places where the function is called.

Note: for an OOB button the function is called multiple times, but for a custom button the function is called only once.

7) Can we hide ribbon buttons?

Ans – Yes, using Enable/Display rules.

If any ribbon button is unintentionally not visible, we need to check its Enable/Display rules.

8) Can we hide an entity?

Ans – Yes, using security roles. Create a user and then assign any role, for example Salesperson.

After that, go to the Salesperson security role and give None for all privileges on an entity, for example Invoice; the Invoice entity will then not be visible to that user.
Note: whatever customization we do in MSCRM (whichever user does it), it happens at the Organization level, not the user level.
Note: Impersonation can be applied to plugins and synchronous workflows.

9) If email is not being received, what are the resolution steps? How do you resolve it?

Ans – If email is not being received and we are using Server-Side Synchronization, go to Settings –> Email Configuration –> Mailboxes.

Double-click on that user's or queue's mailbox and then click on "Test and Enable Mailbox"; only once we get an error can we say what the problem is and fix it.
Similarly, if we use the Email Router, we test the mailbox there as well (i.e. test incoming mail) and fix whatever error is reported.

10) Have you ever worked on the integration part of CRM?

Ans – No. (Integration of CRM here means data migration of CRM.)
11) If a customer wants a new dashboard/chart which is not available in MSCRM, how do you create that dashboard/chart?

Ans – For a dashboard we need to edit the DashboardXml file; alternatively we can create a web portal using ASP.NET, but that is not recommended. To create a new dashboard which is not in CRM, create a web resource, insert an IFrame and give it the URL of an .rdl file (i.e. a SQL Server report, FetchXML report, etc.). In this scenario we can create the dashboard.

12) How do you connect CRM with a .NET application?

13) A customer wants to introduce some visual cues to their Opportunity records. You have a requirement to create a web resource that simply changes its background colour based on the rating of the opportunity:

When the rating is 'Cold' it must display Blue.

For 'Warm' it should display Orange.

And for 'Hot' it needs to display Red.

Please detail how you would approach this requirement, what steps you would take to create the components, how you would display it on the form, and an outline of how you would provide colour coding based on a record's value.

Using the Client Side SDK, how would you approach the following requirement

  • On the create of a new Case
  • A user selects a customer before saving the form.
  • Once this customer is selected
  • Autocomplete a lookup on the form “Last Raised Case”
  • This value should be set as the last case raised for the Customer.

Please describe / pseudocode as much detail as possible

Using the Client Side SDK, how would you validate the value held in a phone number attribute, when a user types in a new value, to ensure it is always formatted in a specific way?

(e.g.  +44 1234 567 890)

Which types of entity images are available for a plugin step registered on the post-operation stage of the execution pipeline?

You have created a plugin which is registered in the Pre-operation stage of the execution pipeline.

 

The plugin is triggered on Create of a new contact.

The Target of the create request is the contact entity.

 

You have been asked to make a change to the plugin to include the following logic.

  • When the contact is created
  • Set a field named "new_needsassesment" to the value of 'true'

 

Please describe or pseudocode below the basic steps you would take to meet this need.

Using the CRM SDK, specifically the RetrieveMultiple request of the Organisation Service, you have a requirement to retrieve all contacts in the system where the parent Company is "Adventure Works". Each of the contacts needs to be updated, setting the field DoNotSendEmail to 'true'.

 

The response of this RetrieveMultiple contains over 500 contacts.

 

Describe (or pseudocode) the most efficient way to update all these contacts.

Include the SDK messages you would use to perform this task.

You have been asked to write a plugin which sends an email to the customer assigned to a case record once the case is resolved. The plugin needs to send the email using an existing template that is set up in the system.

When interacting with Dynamics CRM, what SDK message would you use to close a Quote?

(The standard Quote Entity, provided as part of the sales module)