Showing posts with label XenDesktop. Show all posts

Monday, November 2, 2020

Successfully logging onto Citrix StoreFront displays the message: "There are no apps or desktops available to you at this time."

Problem

Users report that they no longer see their published apps and desktops after successfully logging onto Citrix StoreFront; they only see the message:

There are no apps or desktops available to you at this time.


Reviewing the Citrix Delivery Services event log on the Citrix StoreFront server reveals the following errors:

None of the Citrix XML Services configured for farm Controller are in the list of active services, so none were contacted.

Log Name: Citrix Delivery Services
Source: Citrix Store Service
Event ID: 4012
Level: Error


Failed to launch the resource 'Controller.GP' as it was not found.

Log Name: Citrix Delivery Services
Source: Citrix Store Service
Event ID: 28
Level: Warning


Solution

The most important entry written to the event logs for this issue is easy to miss because the entry that provides the cause is labeled as Information rather than Error. Scrolling back through earlier entries reveals the following, indicating that the SSL certificate on the Delivery Controller has expired:

The Citrix XML Service at address svr-ctxdc-02.ccs.int:443 has failed the background health check and has been temporarily removed from the list of active services. Failure details: An SSL connection could not be established: The server sent an expired security certificate. The certificate *.ccs.int, *.ccs.int is valid from 10/29/2018 9:37:20 AM until 10/28/2020 9:37:20 AM.. This message was reported from the Citrix XML Service at address https://svr-ctxdc-02.ccs.int/scripts/wpnbr.dll[UnknownRequest].


Note that you would not see this entry if you are reviewing the logs in the Administrative Events view, which does not display Information-level entries.


To correct the issue, issue a new SSL certificate to replace the expired certificate on the Delivery Controller (or Controllers, if there is more than one), then update the bindings in IIS Manager:


Successfully updating the SSL certificate will re-establish communication between the StoreFront server and the Delivery Controller(s).
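As a side note, the validity window quoted in the event text can be sanity-checked offline. The following is a minimal Python sketch (dates copied from the event above) confirming the certificate had already expired by the time the issue appeared:

```python
from datetime import datetime

# Date format used in the StoreFront event text, e.g. "10/28/2020 9:37:20 AM"
FMT = "%m/%d/%Y %I:%M:%S %p"

def cert_expired(not_before: str, not_after: str, now: datetime) -> bool:
    """Return True if 'now' falls outside the certificate's validity window."""
    start = datetime.strptime(not_before, FMT)
    end = datetime.strptime(not_after, FMT)
    return not (start <= now <= end)

# Dates from the Citrix Delivery Services event; the issue surfaced November 2, 2020
print(cert_expired("10/29/2018 9:37:20 AM", "10/28/2020 9:37:20 AM",
                   datetime(2020, 11, 2)))  # → True: expired 10/28/2020
```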

Saturday, October 3, 2020

Successfully authenticating with Citrix ADC / Netscaler Gateway displays the error: "Http/1.1 Internal Server Error 43531"

I recently ran into an issue with a Citrix ADC / NetScaler running NS13.0 36.27.nc where, after a reboot, the following error is displayed upon successfully authenticating:

Http/1.1 Internal Server Error 43531

The URL displayed ends with /cgi/dlge:

https://workspace.contoso.com/cgi/dlge


No configuration changes had been made for months. I combed through the configuration but could not determine why this error was being thrown, so a ticket was opened with Citrix. The engineer went through the configuration and decided to change the Web Interface Address FQDN in the Citrix Gateway Session Profile to use the IP address of the StoreFront server instead of its DNS name, which immediately corrected the issue:


We originally thought there was something wrong with DNS, but a dig query for the storefront.contoso.com FQDN returned the correct IP address for the Load Balancing Virtual Server that load balanced the two StoreFront servers:

root@CTXNETSCALER# dig storefront.contoso.com

; <<>> DiG 9.10.6 <<>> storefront.contoso.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31561
;; flags: qr aa rd ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1280

;; QUESTION SECTION:
;storefront.contoso.com. IN A

;; ANSWER SECTION:
storefront.contoso.com. 3600 IN A 10.0.1.17

;; Query time: 0 msec
;; SERVER: 127.0.0.2#53(127.0.0.2)
;; WHEN: Tue Sep 29 19:25:36 UTC 2020
;; MSG SIZE rcvd: 69

root@CTXNETSCALER#
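The same resolution check can be scripted from any machine with the same resolver configuration. A minimal Python sketch (the storefront.contoso.com name is the placeholder from this post; substitute your own StoreFront VIP FQDN):

```python
import socket

def resolve(fqdn: str) -> str:
    """Resolve an FQDN to its IPv4 address using the system resolver."""
    return socket.gethostbyname(fqdn)

# Substitute the StoreFront load-balancing VIP name from the session profile,
# e.g. resolve("storefront.contoso.com") should return the VIP (10.0.1.17 above).
print(resolve("localhost"))  # → 127.0.0.1
```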

I haven’t gotten to the root cause of this issue, but I noticed that there were no recent posts about this error and thought I’d write one up in case someone else encounters it. We were told that an upgrade from the current version 13 Build 36.28 to version 13 Build 64.35 would resolve the error, so I will update this post once I’ve confirmed whether it does.

Sunday, June 7, 2020

Event ID 3001 Error constantly logged on Citrix Cloud Connectors after FortiOS upgrade to 6.2.3 causing virtual desktop connectivity issues

Problem

You’ve recently upgraded the FortiOS on a FortiGate 600D to version 6.2.3 and have begun to experience connectivity issues in a Citrix Virtual Apps and Desktops 1909 environment, where users are unable to connect to desktops and receive the following error:

Cannot start desktop “Desktop Name”.


Desktops in Citrix Studio also show the VDA agents suddenly becoming unregistered and later registering again but, regardless of their state, brokered sessions fail the majority of the time.

Errors on Citrix Cloud Connector Servers

Logging onto the Citrix Cloud Connectors reveals that the following event is logged every 5 to 7 minutes:

Log Name: Application
Source: Citrix Remote Broker Provider
Event ID: 3001
Level: Error
User: NETWORK SERVICE

HA Mode Checking Start - component Broker Proxy has reported a failure with reason = Received: HAModeException - No WebSocket channels are available. (Target url: contoso.xendesktop.net/Citrix/XaXdProxy/)


Log Name: Application
Source: Citrix Remote Broker Provider
Event ID: 3001
Level: Error
User: NETWORK SERVICE

HA Mode Checking Start - component XmlServicesPlugin has reported a failure with reason = The underlying connection was closed: An unexpected error occurred on a receive. (Target Url: https://contoso.xendesktop.net/scripts/wpnbr.dll)

image

Running the Cloud Connector Connectivity Check utility from https://support.citrix.com/article/CTX260337 shows inconsistent results, where various URLs fail at different times.

Running a Wireshark capture on the Citrix Cloud Connectors reveals a large number of connection resets between Citrix Cloud and the Cloud Connectors.

A short trace of 229 packets using the filter ip.addr eq 20.41.61.15 and tcp.analysis.flags reveals that 66 packets (almost 29%) are TCP retransmissions, along with 12 TCP resets coming from the connector.

**Note that 20.41.61.15 is the IP address of the URL that the Citrix Cloud Connector is having issues connecting to.
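The percentage quoted is a simple ratio; a throwaway Python check of the capture numbers (229 packets in the filtered trace, 66 flagged as retransmissions) reproduces the figure of almost 29%:

```python
def retrans_rate(flagged: int, total: int) -> float:
    """Percentage of captured packets flagged as retransmissions."""
    return 100.0 * flagged / total

# Numbers from the filtered Wireshark trace above
print(round(retrans_rate(66, 229), 1))  # → 28.8
```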


Errors on Citrix StoreFront Servers

Logging onto the Citrix StoreFront servers reveals the following events logged repeatedly:


Log Name: Citrix Delivery Services
Source: Citrix Store Service
Event ID: 4011
Level: Information
User: N/A

The Citrix XML Service at address citrixcloud1.contoso.com:80 has passed the background health check and has been restored to the list of active services.


Log Name: Citrix Delivery Services
Source: Citrix Store Service
Event ID: 0
Level: Error
User: N/A

The Citrix servers sent HTTP headers indicating that an error occurred: 500 Internal Server Error. This message was reported from the XML Service at address http://citrixcloud2.contoso.com/scripts/wpnbr.dll [NFuseProtocol.TRequestAddress]. The specified Citrix XML Service could not be contacted and has been temporarily removed from the list of active services.


**The above error will cycle through all of the Citrix Cloud Connectors.
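When StoreFront keeps cycling connectors in and out of the active list like this, it can help to first confirm plain TCP reachability to each connector's XML Service port from the StoreFront server. Below is a minimal Python sketch; the citrixcloud hostnames are the placeholders from the events above, and port 80 matches the address in the 4011 event:

```python
import socket

def xml_service_reachable(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the XML Service port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, refusals, and timeouts
        return False

# Placeholder connector names from the StoreFront events; run from the StoreFront server
for connector in ("citrixcloud1.contoso.com", "citrixcloud2.contoso.com"):
    print(connector, xml_service_reachable(connector))
```

Note that this only proves TCP connectivity; the 500 errors above came from a higher layer, but intermittent failures here would match the resets seen in the Wireshark capture.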

Log Name: Citrix Delivery Services
Source: Citrix Store Service
Event ID: 4003
Level: Error
User: N/A

All the Citrix XML Services configured for farm Cloud failed to respond to this XML Service transaction.


Log Name: Citrix Delivery Services
Source: Citrix Store Service
Event ID: 28
Level: Warning
User: N/A

Failed to launch the resource 'Cloud.Workspace $S32-61' using the Citrix XML Service at address 'http://citrixcloud1.contoso.com/scripts/wpnbr.dll'. All the Citrix XML Services configured for farm Cloud failed to respond to this XML Service transaction.

com.citrix.wing.SourceUnavailableException, PublicAPI, Version=3.12.0.0, Culture=neutral, PublicKeyToken=null

All the Citrix XML Services configured for farm Cloud failed to respond to this XML Service transaction.

at com.citrix.wing.core.mpssourceimpl.MPSFarmFacade.GetAddress(Context ctxt, String appName, String deviceId, String clientName, Boolean alternate, MPSAddressingType requestedAddressType, String friendlyName, String hostId, String hostIdType, String sessionId, NameValuePair[] cookies, ClientType clientType, String retryKey, LaunchOverride launchOverride, Nullable`1 isPrelaunch, Nullable`1 disableAutoLogoff, Nullable`1 tenantId, String anonymousUserId)

at com.citrix.wing.core.mpssourceimpl.MPSLaunchImpl.GetAddress(Context env, String appName, String deviceId, String clientName, Boolean alternate, MPSAddressingType requestedAddressType, String friendlyName, String hostId, String hostIdType, String sessionId, NameValuePair[] cookies, ClientType clientType, String retryKey, LaunchOverride launchOverride, Nullable`1 isPrelaunch, Nullable`1 disableAutoLogoff, Nullable`1 tenantId, String anonymousUserId)

at com.citrix.wing.core.mpssourceimpl.MPSLaunchImpl.LaunchRemoted(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at com.citrix.wing.core.mpssourceimpl.MPSLaunchImpl.Launch(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at com.citrix.wing.core.applyaccessprefs.AAPLaunch.Launch(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at com.citrix.wing.core.clientproxyprovider.CPPLaunch.Launch(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at com.citrix.wing.core.connectionroutingprovider.CRPLaunch.LaunchInternal(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams, Boolean useAlternateAddress)

at com.citrix.wing.core.connectionroutingprovider.CRPLaunch.Launch(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at com.citrix.wing.core.bandwidthcontrolprovider.BCPLaunch.Launch(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at Citrix.DeliveryServices.ResourcesCommon.Wing.WingAdaptors.OverrideIcaFileLaunch.Launch(Dictionary`2 launchParams, Context env, AppLaunchParams appLaunchParams)

at Citrix.DeliveryServices.ResourcesCommon.Wing.WingAdaptors.LaunchUtilities.IcaLaunch(IRequestWrapper request, Resource resource, LaunchSettings launchSettings, String retryKey)

com.citrix.wing.core.xmlclient.types.WireException, Private, Version=3.12.0.0, Culture=neutral, PublicKeyToken=null

HttpErrorPacket(500,Internal Server Error)

at com.citrix.wing.core.xmlclient.transactions.TransactionTransport.handleHttpErrorPacket(Int32 httpErrorStatus, String httpReasonPhrase)

at com.citrix.wing.core.xmlclient.transactions.CtxTransactionTransport.receiveTransportHeaders()

at com.citrix.wing.core.xmlclient.transactions.CtxTransactionTransport.receiveResponsePacketImpl(XmlMarshall marshaller)

at com.citrix.wing.core.xmlclient.transactions.ParsedTransaction.sendAndReceiveXmlMessage(XmlMessage request, AccessToken accessToken)

at com.citrix.wing.core.xmlclient.transactions.nfuse.NFuseProtocolTransaction.SendAndReceiveSingleNFuseMessage[TRequest,TResponse](TRequest request, AccessToken accessToken)

at com.citrix.wing.core.xmlclient.transactions.nfuse.AddressTransaction.TransactImpl()

at com.citrix.wing.core.xmlclient.transactions.ParsedTransaction.Transact()

at com.citrix.wing.core.mpssourceimpl.MPSFarmFacade.GetAddress(Context ctxt, String appName, String deviceId, String clientName, Boolean alternate, MPSAddressingType requestedAddressType, String friendlyName, String hostId, String hostIdType, String sessionId, NameValuePair[] cookies, ClientType clientType, String retryKey, LaunchOverride launchOverride, Nullable`1 isPrelaunch, Nullable`1 disableAutoLogoff, Nullable`1 tenantId, String anonymousUserId)


Errors on VDAs (Virtual Delivery Agents)

Logging directly onto the VDAs will reveal many warnings and errors related to the Citrix Cloud Connector connectivity:

Log Name: Application
Source: Citrix Desktop Service
Event ID: 1014
Level: Warning

The Citrix Desktop Service lost contact with the Citrix Desktop Delivery Controller Service on server ‘citrixcloud1.contoso.com’. The service will now attempt to register again.


Citrix Cloud Connectors

Reviewing the Cloud Connector connectivity in the Citrix Cloud portal shows the Cloud Connectors with warnings at times and green at other times.


Running a health check (Run Health Check) takes longer than expected and, although it completes, the status of the connector may or may not display the last-checked date.


Citrix Cloud Backend Logs

Opening a ticket with Citrix Support and having the engineer review the backend Citrix Cloud connections revealed an abnormal number of disconnects. The following is the report we received:

13k events related to Connected/Disconnected/ConnectingFailed in the past 24 hours

038041d1-acac-4903-b88a-817b312f2a1c = citrixcloud2.contoso.com 2270 events disconnected

0812a411-754e-4b03-a6cf-382764a63a6 = citrixcloud3.contoso.com 1782 events disconnected

5ebc62a9-a015-492a-81dd-ceb649fda8f3 = citrixcloud1.contoso.com 2508 events disconnected


Solution

This issue took quite a bit of time to resolve because the FortiOS upgrade to 6.2.3 was completed two weeks before the virtual desktop connectivity issues began, so it was the last place I expected the problem to be. After eliminating every possibility that something was wrong with the Citrix environment, I asked the network engineer to open a ticket with Fortinet to see if they could perform more in-depth tracing of the packets sent and received between the firewall and Citrix Cloud. To our surprise, the Fortinet engineer who finally called us back immediately indicated that we might be hitting a bug in FortiOS 6.2.3 that could cause such an issue. The following is the message we received from the support engineer:

I informed you that, as you have SSO in your config, you could very well be hitting the known issue for internal servers due to the session being deleted. We will need to run the flow trace at the time of disconnect so that we can confirm the behavior.

We got on a call with the engineer and were able to determine that it was indeed a bug in this version of FortiOS. The recommended remediation was to upgrade either to a special build of 6.2.3 that addressed this issue or to 6.2.4. We ended up upgrading to 6.2.3 build 8283 (GA), which resolved our issue.


For those who are interested, the following is the case summary the engineer provided:

1) We discussed that the Citrix applications were hanging for a prolonged period.

2) The FGT is currently running firmware version 6.2.3 and the Citrix server 20.41.61.15 is accessed on port 443.

3) We checked the session list for one of the machines reporting the issue (192.168.5.71):

session info: proto=6 proto_state=01 duration=269 expire=268 timeout=300 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=5
origin-shaper=
reply-shaper=high-priority prio=2 guarantee 0Bps max 134217728Bps traffic 5525Bps drops 0B
per_ip_shaper=
class_id=0 shaping_policy_id=6 ha_id=0 policy_dir=0 tunnel=/ vlan_cos=0/255
user=MROGERS auth_server=BCAUTH state=log may_dirty npu rs f00 acct-ext
statistic(bytes/packets/allow_err): org=30896/83/1 reply=35645/59/1 tuples=2
tx speed(Bps/kbps): 114/0 rx speed(Bps/kbps): 132/1
orgin->sink: org pre->post, reply pre->post dev=11->25/25->11 gwy=198.182.170.1/192.168.5.71
hook=post dir=org act=snat 192.168.5.71:58467->20.41.61.15:443(198.182.170.253:58467)
hook=pre dir=reply act=dnat 20.41.61.15:443->198.182.170.253:58467(192.168.5.71:58467)
pos/(before,after) 0/(0,0), 0/(0,0)
src_mac=00:50:56:b0:c1:53
misc=0 policy_id=213 auth_info=0 chk_client_info=0 vd=0
serial=0a35b6a8 tos=ff/ff app_list=0 app=0 url_cat=0
rpdb_link_id = ff000001 ngfwid=n/a
dd_type=0 dd_mode=0
npu_state=0x000c00
npu info: flag=0x81/0x81, offload=8/8, ips_offload=0/0, epid=154/140, ipid=140/154, vlan=0x0000/0x0000
vlifid=140/154, vtag_in=0x0000/0x0000 in_npu=1/1, out_npu=1/1, fwd_en=0/0, qid=7/0

4) In the diagnose firewall auth list we could see the source 192.168.1.57:

BC-CC-600D-FW01 # diagnose firewall auth list
192.168.5.71, MROGERS
type: fsso, id: 0, duration: 559, idled: 0
server: BCAUTH
packets: in 2254 out 2034, bytes: in 933560 out 856864
group_id: 4 33554905 33554989 33555163 33555200 33555204 33555203 33555198 33554433
group_name: ALL_BC_AD_USERS CN=OPERATIONS,OU=DISTRIBUTION GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=SECURITY DEPT,OU=SECURITY,OU=OFFICEADMIN,DC=CONTOSO,DC=COM CN=WIRELESSACCESS,OU=SECURITY GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=TESTALLEMPLOYEES,OU=DISTRIBUTION GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=ALL EMPLOYEES,OU=DISTRIBUTION GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=ALLEMPLOYEES,OU=DISTRIBUTION GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=ALLSUPPORTSTAFF,OU=DISTRIBUTION GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=Domain Users,CN=Users,DC=CONTOSO,DC=COM

5) Further in the debug flow we could see the msg="no session matched":

2020-06-06 18:49:56 id=20085 trace_id=55 func=print_pkt_detail line=5501 msg="vd-root:0 received a packet(proto=6, 192.168.5.71:57775->20.41.61.15:443) from port12. flag [.], seq 1161501577, ack 1787291322, win 255"
2020-06-06 18:49:56 id=20085 trace_id=55 func=vf_ip_route_input_common line=2581 msg="Match policy routing id=2133000193: to 20.41.61.15 via ifindex-25"
2020-06-06 18:49:56 id=20085 trace_id=55 func=vf_ip_route_input_common line=2596 msg="find a route: flag=04000000 gw-198.182.170.1 via port18"
2020-06-06 18:49:56 id=20085 trace_id=55 func=fw_forward_dirty_handler line=385 msg="no session matched"

6) As discussed, we have a known issue of RDP and other applications freezing due to the "no session matched" error (Bug ID 0605950).

7) It seems that when the authenticated session is changed, it clears the non-auth session for the same IP.

8) The issue is resolved in the newer firmware version 6.2.4.

9) You did not want to upgrade to 6.2.4, so we have a special build 8283 that resolves this issue. Please upgrade the firmware to the attached build and let us know.

Thursday, May 21, 2020

Upgrading Citrix Licensing from 11.14.0.1 build 23101 to 11.16.3.0 build 30000 fails with: "Upgrade failed. The server.xml and other configuration files were not found. Contact Citrix."

Problem

You attempt to upgrade Citrix Licensing from 11.14.0.1 build 23101 to 11.16.3.0 build 30000 but the process fails with:

Failed to configure Citrix Licensing.

Upgrade failed. The server.xml and other configuration files were not found. Contact Citrix.


Checking the executable and MSI to ensure they are not blocked does not correct the issue:


Solution

I’m not sure what causes this issue, but it has happened to me several times during upgrades of the licensing server role. To complete the upgrade, simply uninstall the Citrix Licensing application in Programs and Features:


Then install the licensing server role again:


The license files aren’t removed during an uninstall, but ensure that you have access to the Citrix portal to re-download the license files in case they are not present.

Thursday, May 14, 2020

Troubleshooting slow Windows VDI logon performance with Citrix Director and Windows Event Logs

One of the most common engagements I take on after Citrix or VMware virtual desktop deployments is troubleshooting slow Windows VDI login performance, and I’ve found that if a profile management solution such as Citrix UPM is deployed correctly, the root cause is typically the Active Directory Group Policy Objects that are intentionally or unintentionally applied to the user accounts or desktops. Having gone through this exercise again today, and realizing I’ve never written a blog post about it, this post demonstrates how I typically approach the troubleshooting process in a Citrix XenDesktop VDI environment.

Step #1 – Replicate the issue

The first step is to replicate the issue: use an account that is experiencing the slow logon times, log onto a virtual desktop, and make a note of when you initiated the logon and when it completed, as you will need these timestamps later.

Step #2 – Review the duration for each component during the logon process

Leave the desktop session connected once the login process has completed, then launch Citrix Director (or the Monitor console in Citrix Cloud’s Virtual Apps and Desktops Service) and click on the Trends option:


With the Trends console displayed, click on the Logon Performance tab and scroll down to the Logon Duration by User Session section:


In the Search associated users field, search for the login name of the user account that was used to test:


A breakdown of the logon duration is displayed in multiple columns. The headings that typically consume the most time are:

  1. GPOs
  2. Profile Load

GPOs are the Active Directory Group Policy Objects that are applied to the account during logon, while Profile Load is, in this environment, the configured Citrix UPM.

From the metrics provided above, we can see that the Profile Load takes approximately 10.966 seconds while the GPOs take 35.99 seconds.
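As a quick sanity check on those numbers, the split between the two measured components can be computed directly (values copied from the Director output above), which shows GPO processing dominating the measured portion of the logon:

```python
def share_of(components):
    """Percentage share of each measured logon component."""
    total = sum(components.values())
    return {name: round(100 * secs / total, 1) for name, secs in components.items()}

# Durations reported by Citrix Director for the test logon above
print(share_of({"GPOs": 35.99, "Profile Load": 10.966}))
# → {'GPOs': 76.6, 'Profile Load': 23.4}
```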

Step #3 – Review Citrix UPM logs

To gain a better understanding of the processes contributing to the Profile Load duration, return to the virtual desktop that was used for testing and browse to the following directory:

C:\Windows\System32\LogFiles\UserProfileManager

**Note that this directory is configurable but this environment uses the default.

The log we are interested in is named after the desktop and ends with _pm.log:


Open the log and navigate to the line with Starting logon processing to review how long each process has taken:

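If you would rather compute the elapsed time between log entries than eyeball the timestamps, a small Python sketch can do it. The timestamp layout below is an assumption for illustration only, so adjust the `strptime` pattern to match what your UPM log actually uses:

```python
from datetime import datetime

# Assumed timestamp layout for illustration only (e.g. "2020-05-14 08:15:01.000");
# check the start of the lines in your own _pm.log and adjust the pattern.
TS_FORMAT = "%Y-%m-%d %H:%M:%S.%f"
TS_LENGTH = 23  # characters occupied by the timestamp above

def elapsed_seconds(start_line: str, end_line: str) -> float:
    """Seconds elapsed between the timestamps at the start of two log lines."""
    start = datetime.strptime(start_line[:TS_LENGTH], TS_FORMAT)
    end = datetime.strptime(end_line[:TS_LENGTH], TS_FORMAT)
    return (end - start).total_seconds()

# Hypothetical entries bracketing the logon processing section
begin = "2020-05-14 08:15:01.000 INFORMATION Starting logon processing..."
done = "2020-05-14 08:15:11.966 INFORMATION Finished logon processing."
print(elapsed_seconds(begin, done))  # → 10.966
```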

Step #4 – Review Event Logs for GPOs

If the GPOs applied to the user and virtual desktop are consuming an extended amount of time during the logon process, the next step is to open the event logs on the virtual desktop and navigate to the following event log:

Microsoft/Windows/GroupPolicy/Operational

Scroll down and locate the event ID 5324 entry with a timestamp close to the time you initiated the login:


Scroll upwards to later events and you should see an event ID 4001 that states the following details: Starting user logon Policy processing for <your username>:


Continue to scroll up and you’ll see another event 4017 entry that specifies which domain controller was used for the LDAP bind:


Continuing to the next event ID 5017 will show how long the LDAP bind took:


The next event ID 5326 will indicate how long the domain controller discovery took:


An event ID 5327 will be written to provide an estimated bandwidth:


From here on, the subsequent event entries allow you to determine how long each component consumes during the user's login process.

Registry Extension Processing duration:


Citrix Group Policy Extension Processing duration:


Folder Redirection Extension Processing duration:


Group Policy Drive Maps Extension Processing duration:


Group Policy Printers Extension Processing duration:


The last event ID 8001 will display how long the GPO process took for the user:


Note that this duration is identical to the GPOs duration logged by Citrix Director.

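The per-extension durations above can also be tallied programmatically once the event messages are exported. The sketch below parses completion messages of the shape the GroupPolicy Operational log uses; the exact wording in the sample strings is an assumption patterned on the events above, so verify the regular expression against your own log text:

```python
import re

# Assumed completion-message shape, e.g.
# "Completed Folder Redirection Extension Processing in 2201 milliseconds."
PATTERN = re.compile(r"Completed (.+?) Processing in (\d+) milliseconds")

def extension_durations(messages):
    """Map each processing phase found in the messages to its duration in ms."""
    durations = {}
    for msg in messages:
        match = PATTERN.search(msg)
        if match:
            durations[match.group(1)] = int(match.group(2))
    return durations

# Hypothetical exported message strings
sample = [
    "Completed Registry Extension Processing in 812 milliseconds.",
    "Completed Folder Redirection Extension Processing in 2201 milliseconds.",
    "Completed Group Policy Drive Maps Extension Processing in 640 milliseconds.",
]
print(extension_durations(sample))
```

Sorting the resulting dictionary by value immediately surfaces the extension responsible for the bulk of the GPO time.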

Hope this helps anyone who may need to go through this process of identifying the potential causes for long login times.