COVID-19: Temporary changes to Subject Access Request guidelines
Do not meet others, even friends or family.
You can spread the virus even if you don’t have symptoms.

In light of the enhanced social distancing precautions we must take as a result of coronavirus (COVID-19), the Veterinary Medicines Directorate (VMD) has relaxed enforcement of the Veterinary Medicines Regulations (VMR). This allows veterinary surgeons to supply veterinary medicines from temporary premises when access to the Veterinary Practice Premises (VPP) registered with the RCVS is limited or no longer possible.
Wholesale dealers will be allowed to supply veterinary medicines to veterinary surgeons at such temporary premises.
This decision has immediate effect and will last until 31 August.
The VMD will not issue an improvement notice if there are breaches of the relevant provisions of Schedule 3 of the VMR during this period. For further information, read the attached statement.
This is a temporary VMD enforcement policy in relation to specific obligations under the VMR only. The VMR continue to apply. The VMD may continue to take action to enforce those obligations in cases where the procedures described in the attached statement are not followed.
The Royal College of Veterinary Surgeons (RCVS) has issued guidance for veterinary professionals on how to continue to undertake veterinary work.
Further information on Controlled Drugs Guidance is available on the RCVS website.
Phasing out COVID-19 facilitations
Date of temporary facilitations changed to 31 July
Temporary facilitation will last until 30 June when we will review it.
VPP temporary change of premises – review date now 31 May
Link added to RCVS guidance for veterinary professionals on how to continue to undertake veterinary work.
First published.
DVLA:
As this project was part of a pilot, the team shared their objectives with Salesforce. The team wanted to:
The DVLA is an executive agency, sponsored by the Department for Transport. The DVLA contact centre is the largest single site contact centre in government with over 1,200 staff.
The organisation holds 80 million driver and vehicle records. In a typical month they answer over:
The agency has won the Customer Service Excellence accreditation and the Contact Centre Association (CCA) awards for Best Use of Technology in 2017 and Team of the Year (Public Sector) in 2019. The agency recently retained CCA accreditation for the 12th consecutive year.
Since 2016, the Swansea-based contact centre has used Salesforce Live Agent as their single platform for customer interactions on telephone, email, webchat and social media. This was part of a transformation project to create a unified platform for all contact services, increase IT capability to meet DVLA’s IT strategy and support customer demand. The Service Cloud platform has helped teams manage different contact channels more easily as they do not have to switch between systems. The platform has also helped the DVLA to administer and configure their technology faster.
The contact centre used webchat on GOV.UK for their main contact services and wanted to explore automating frequent customer enquiries through a chatbot. Webchat was first launched on one service, which supported customers with transactional failures, but the DVLA has now rolled it out across all services.
Integrating their webchat with their knowledge base and customer relationship management (CRM) tool has helped resolve customer enquiries faster as advisers can look up chat history and information from the platform. Other ways of working that helped DVLA manage their webchat service include:
In 2018, Salesforce asked if the contact centre would take part in Salesforce’s Einstein Chatbot pilot. The team built the chatbot into their existing Salesforce Live Agent (webchat) platform.
The team held planning sessions both internally and with the supplier. The pilot began in April 2018 and lasted a month. The teams initially identified 2 main use cases. They deployed a menu-based chatbot for 3 months that allowed users to select a predefined set of options through buttons. They then rolled out their NLP chatbot, which read and collected information from users.
The plan was to create a chatbot users would interact with to handle their enquiries or collect information. If the user needed an adviser the chatbot would transfer them to the webchat.
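The flow described above — a bot that handles enquiries itself and hands off to a webchat adviser when needed — can be sketched as follows. The menu options, wording and function names here are illustrative assumptions, not DVLA’s actual configuration:

```python
# Minimal menu-based chatbot sketch (hypothetical, not DVLA's implementation).
# Users pick from predefined options; picking the adviser option, or an
# unrecognised choice, escalates the user towards webchat.

MENU = {
    "1": ("Renew vehicle tax", "You can renew at the vehicle tax service."),
    "2": ("Check MOT status", "Use the MOT history service to check."),
    "3": ("Speak to an adviser", None),  # None means escalate to webchat
}

def handle_choice(choice: str) -> str:
    """Return the bot's reply for a menu selection."""
    option = MENU.get(choice)
    if option is None:
        return "Sorry, please pick one of the listed options."
    label, answer = option
    if answer is None:
        return "Transferring you to a webchat adviser..."
    return answer
```

A closed menu like this is easy to test exhaustively, which is one reason menu-based bots are a common first step before introducing free-text NLP.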
Due to the DVLA IT strategy, the agency was open to the operational area of the contact centre leading on a technical initiative. Two team members, who were previously webchat advisers, led the training and administration of the pilot chatbot. The operational team was best placed to configure the technology as they understood the platform and the common user enquiries. The software needed minimal development so the team could learn how to use the chatbot with webinars and reference manuals. They also had access to the Salesforce production team for any queries.
Before the team started they had to overcome some obstacles. For example, there was some concern from contact centre staff about chatbots providing poor customer service, which would have reputational impact.
To mitigate issues or security concerns, the team:
This was a low-risk, simple delivery. The team had 30,000 free chatbot conversations per month supplied as part of the pilot. The team used the pilot to show that the tool met user needs before requesting any resources from DVLA.
Setting up the chatbot took one month, and it went live in June 2018. Over the next 3 months the team gradually rolled out menu-based bots to the service. Menu-based bots were useful in funnelling customers to the right channel. The team was able to add NLP bots to the service by August 2019. The NLP capability helped the chatbot recognise and understand more varied user requests.
Throughout the project the team:
The team found the chatbot tool easy to learn and use. There were no issues with integrating the chatbot as it was from the same suite of tools from the supplier. Another benefit for the team was having direct access to the supplier’s product team and being able to influence changes to the tool to meet the service requirements.
The chatbot also had good testing environments that were quick to build. The team could use these environments to test and prepare each version of the service before making it public.
Initial research showed there were 150 different ways that users made requests. The team had to enter each request into the tool individually so the tool could respond with the right information. Testing in these environments showed that the team created responses that were between 95% and 98% accurate.
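The kind of accuracy check described above might be sketched like this. The phrasings, intent names and toy keyword matcher are all assumptions for illustration, not the team’s real NLP model or test set:

```python
# Illustrative accuracy check: each known user phrasing is paired with the
# intent the team expects, and the classifier is scored against that set.

KNOWN_PHRASINGS = {
    "tax my car": "renew_vehicle_tax",
    "renew vehicle tax": "renew_vehicle_tax",
    "is my car taxed": "check_vehicle_tax",
    "check tax status": "check_vehicle_tax",
}

def classify(phrase: str) -> str:
    # Toy keyword rule standing in for the real NLP matcher.
    if "renew" in phrase or phrase.startswith("tax"):
        return "renew_vehicle_tax"
    return "check_vehicle_tax"

def accuracy(cases: dict[str, str]) -> float:
    """Fraction of phrasings routed to the expected intent."""
    correct = sum(1 for p, intent in cases.items() if classify(p) == intent)
    return correct / len(cases)

print(f"{accuracy(KNOWN_PHRASINGS):.0%}")
```

In practice the test set would hold the 150 real phrasings, and the 95–98% figure would come from scoring the live tool against them in the test environment.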
To make sure they got feedback from users, the team used the Salesforce native reports and dashboards through their management information (MI) suite to measure and monitor the impact on existing channels. The team received approximately 20,000 individual customer feedback forms each month.
The team discovered issues, mainly around NLP matching. Depending on how a question is phrased, the bot could route a user to 2 different sets of information. For example, if a user only types ‘vehicle tax’ for their question the chatbot has several options it could return as answers, including checking if a vehicle is taxed and renewing your vehicle tax. The team worked with the supplier’s product team to solve these issues.
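One common mitigation for this kind of ambiguity is to offer the user disambiguation options instead of guessing. A minimal sketch, with hypothetical intent names and utterance sets:

```python
# When a short query like "vehicle tax" matches several intents, present
# the candidates rather than routing to one arbitrarily. Intent names and
# utterances are hypothetical examples, not DVLA's real configuration.

INTENT_UTTERANCES = {
    "check_vehicle_tax": {"is my car taxed", "check vehicle tax", "vehicle tax"},
    "renew_vehicle_tax": {"renew my tax", "pay vehicle tax", "vehicle tax"},
}

def match_intents(query: str) -> list[str]:
    """Return every intent whose utterance set contains the query."""
    q = query.lower().strip()
    return [intent for intent, utterances in INTENT_UTTERANCES.items()
            if q in utterances]

def respond(query: str) -> str:
    matches = match_intents(query)
    if len(matches) == 1:
        return f"Routing to: {matches[0]}"
    if len(matches) > 1:
        return "Did you mean: " + " or ".join(matches) + "?"
    return "Sorry, I didn't understand. Transferring to webchat."
```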
In one iteration, the team made one of the menu-based bots too complex by offering users too many options. The team learned to have an immediate rollback process in place for the times when the tool was not working for the users.
The team found that a chatbot was not the best option for very complex queries. For the majority of customer queries, however, it worked extremely well and reduced demand on the telephone service and the webchat by automating answers to the simpler, regular queries.
The team was happy with the results as they did not expect the project to deliver so much so quickly.
Due to a combination of the chatbot and webchat, the contact centre has seen a significant drop in demand for contact through email and telephone.
Customers are benefitting from the chatbot project because it:
Wider organisational benefits of the project included:
The main lessons the team learned included:
DVLA has also hosted successful ‘Hack to The Future’ events to find new ways to use AI and chatbot technology in the public service.
The team plans to:
After the success of the pilot, there is a greater openness to trying pilots. It has also led to the DVLA assigning more resources and seeing chatbots as a strategic tool.
MoJ:
The goal was to run a proof of concept project on the Child Arrangements Information Tool (CAIT), to understand how users would interact with content delivered through the chatbot. The team wanted to know whether separated parents are more likely to engage with content delivered by a chatbot rather than flat content. They also wanted to know which method was easiest to use and provided the most flexibility. This project would be part of a larger project around the CAIT.
MoJ is responsible for the courts, prisons, probation services and attendance centres. The User Centred Policy Design team ran the chatbot project, alongside other proofs of concept with video and audio formats. They work with policy and operational teams to:
The team started this project because of research which found that:
The team also looked at service data and how other departments provided evidence of users accessing similar data.
The team found there was a lot of content available online but it was not engaging enough for some users. The team therefore set up a proof of concept to test their assumptions and to better understand what users need.
“Our hypothesis was that if parents ‘in crisis’ receive information in a different format, they would be more likely to engage with the content and find relevant information to solve their issues.” – Service Designer
The team wrote 5 statements to test about user interaction with their content:
The team then assigned a metric to each statement, for example looking at retention and completion rates to measure whether users had significant interaction with the chatbot. Each metric had a target, which the team selected based on a benchmark and research. They used this target to measure the success of the results.
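A metric-with-target check of this kind might look like the following sketch. The session counts and the 25% target are assumed for illustration, not the team’s actual benchmarks:

```python
# Compare a measured completion rate against a pre-agreed target.
# All figures here are hypothetical.

def completion_rate(started: int, completed: int) -> float:
    """Fraction of started conversations that reached the end of the script."""
    return completed / started if started else 0.0

started, completed = 200, 58   # assumed session counts
target = 0.25                  # assumed benchmark-based target

rate = completion_rate(started, completed)
print(f"Completion rate {rate:.0%}: target {'met' if rate >= target else 'not met'}")
```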
The service team ran the project in 3 phases – planning, implementation and evaluation. They worked closely with the policy team to make sure the chatbot met policy and legal requirements.
As this was a quick, proof of concept project, the team chose to use Landbot rather than developing their own tool. The team did not need to invest in training as they could learn about the tool by using it.
The chatbot they created used a closed script which included a number of user journeys depending on what path users selected. The content designer wrote the child arrangement guidance based on current website information. They worked with their provider to make the chatbot look similar to the GOV.UK Design System.
During the main sprint, the team embedded the chatbot on the homepage of the service. It was available to the general public for 12 days and during that time 1,121 users visited the website (125 unique visitors per day) and 26% of these users initiated a conversation with the bot, with 15% of this group continuing their conversation after their initial interactions.
The team also ran 5 further face-to-face usability sessions in a lab environment to get qualitative feedback on the experiment. The team considered doing A/B testing, but the service did not have enough users visiting the site at the time the experiments were carried out.
During the first iteration the team collected feedback from users with a short questionnaire and 2 simple follow-up questions on the chatbot to rate their experience. The questions were “was this conversation useful?” and “do you know more about your options now?”.
The first version was a closed bot and during user testing sessions, users quickly understood that they were in a closed loop of information. In the second iteration, the team added more content and allowed users greater input.
During the first iteration, some users did not get as far as the feedback questions if they were given a link to another website as part of their journey. To resolve this in the second iteration, the team embedded the feedback in the conversation flow.
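One way to realise that fix is to make the feedback questions ordinary steps in the script, placed before any external link. A minimal sketch with hypothetical step names (the two feedback questions are the ones quoted above):

```python
# Feedback embedded in the conversation flow itself: users answer the
# feedback questions before reaching any step that links them away.

SCRIPT = [
    ("answer", "Here is the guidance on making child arrangements."),
    ("feedback", "Was this conversation useful? (yes/no)"),
    ("feedback", "Do you know more about your options now? (yes/no)"),
    ("link", "You can read more on an external website."),
]

def conversation_steps():
    """Yield (kind, text) prompts in the order users will see them."""
    for kind, text in SCRIPT:
        yield kind, text
```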
The users needed tailored content, and an automated chatbot cannot provide that. The scripted chatbot felt restrictive and not intelligent enough to provide users with the content they needed. A potential solution to this was to provide a webchat or a more intelligent chatbot with machine learning.
With the second iteration of their chatbot, the team found flaws in the analysis provided by Landbot and ended up manually analysing the data.
Due to a resource issue, there was no research testing for the second iteration of the chatbot and the team could not directly compare the first and second iteration.
The participating users gave positive feedback, and were pleasantly surprised to see the government try something new. The project allowed the team to sense check their theories in an agile way to better understand and help users. They found that users are willing to engage with content delivered through chatbots, although the team needs to do more testing to make sure the users find the support they are looking for.
At the end of this project the team found:
One of the major benefits of the project is that the policy team is now more open to alternative solutions and recognises that testing solutions before using them is essential.
The team would like to try a minimum viable product of a webchat, possibly using Landbot which has a webchat feature. The team would also like a subject matter expert to provide content and answer questions.
The team is also considering using a structured menu so that users can navigate through the content and onto the webchat. With a structured menu, the team can test a closed bot, a semi-open bot and webchat.