Friday, July 10, 2015

Using remote PowerShell to log into Office 365 and review archive mailboxes

I’m not much of an Office 365 expert but recently had the opportunity to work with a client to migrate their on prem Exchange 2010 archives over to O365.  Deploying ADFS and DirSync was fairly painless, but there was some confusion when I called Microsoft Office 365 support: the support engineer did not understand why the migrated archive mailbox did not show up in the EAC (Exchange Admin Center).  To make a long story short, it was not until I worked with the third engineer that I was finally told it’s not supposed to show up when the user’s mailbox is hosted on prem while the archive is hosted on Office 365.  The purpose of this post is to list the steps for reviewing the migrated archive mailboxes in case I need them again in the future.

The first step is to connect to Office 365 by launching PowerShell and executing the following:

$LiveCred = Get-Credential

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection

Set-ExecutionPolicy Unrestricted -Force

Import-PSSession $Session
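The steps above can be wrapped into a single reusable snippet; this is a sketch that assumes the same ps.outlook.com endpoint and adds session cleanup when you’re done (RemoteSigned is used here as a less permissive alternative to Unrestricted):

```powershell
# Prompt for your administrative O365 credentials
$LiveCred = Get-Credential

# Allow the imported session's scripts to run
Set-ExecutionPolicy RemoteSigned -Force

# Create and import the remote Exchange Online session
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri https://ps.outlook.com/powershell/ `
    -Credential $LiveCred -Authentication Basic -AllowRedirection

Import-PSSession $Session

# ... run your Get-MailUser / Get-Mailbox cmdlets here ...

# Tear down the session when finished so you don't run into the
# limit on concurrent remote sessions
Remove-PSSession $Session
```

Removing the session at the end matters because Exchange Online caps the number of concurrent remote PowerShell sessions per administrator.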

clip_image002

Log in with your administrative O365 credentials:

image

You should see the following output once successfully authenticated:

clip_image002[4]

Proceed and execute the following cmdlet to list a specific user’s archive mailbox:

Get-MailUser -Identity <userName@domain.com> |fl *archive*

image

Do not attempt to use the cmdlet:

Get-Mailbox -archive

… to list a user’s archive, because it will not work when the user has an on prem mailbox with an O365 hosted archive:

image

To verify that the archive mailbox located on O365 belongs to the on prem mailbox, compare the ArchiveGuid listed for the archive on O365 with the ArchiveGuid on the on prem mailbox by executing the following cmdlet:

Get-Mailbox -Identity <userName@domain.com> |fl *archive*
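The comparison can also be scripted. This is a sketch that assumes you have an on prem Exchange Management Shell plus the O365 session from earlier imported with a prefix, so the cloud cmdlets don’t collide with the on prem ones:

```powershell
# Import the O365 session with a prefix so Get-MailUser becomes
# Get-CloudMailUser and doesn't shadow the on prem cmdlets
Import-PSSession $Session -Prefix Cloud

$user = "userName@domain.com"

# ArchiveGuid as recorded on the on prem mailbox
$onPremGuid = (Get-Mailbox -Identity $user).ArchiveGuid

# ArchiveGuid of the hosted archive on O365
$cloudGuid = (Get-CloudMailUser -Identity $user).ArchiveGuid

if ($onPremGuid -eq $cloudGuid) {
    Write-Host "Archive GUIDs match - the O365 archive belongs to this mailbox."
} else {
    Write-Host "Archive GUID mismatch - investigate further."
}
```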

image

If you ever want to check whether a user’s archive is located on prem or on Office 365, you can launch Outlook’s Test E-mail AutoConfiguration option, run the test, navigate to the XML tab and look for the <Type>Archive</Type> section, which specifies an Office 365 SmtpAddress rather than the internal on prem Exchange server:

imageimage

Wednesday, July 8, 2015

Searching through OWA (Outlook Web App) on Exchange 2013 returns only 1 month of results

Problem

You’ve received complaints that searching the inbox with OWA or Outlook in Online mode returns only 1 month of results.  Using the following Get-MailboxDatabaseCopyStatus cmdlet:

Get-MailboxDatabaseCopyStatus –server <mailboxServerName> | FL *index*,*ma*ser*,databasename

image

… shows that the ContentIndexState is listed as Healthy for the mailbox databases.

You proceed to stop the following services:

  • Microsoft Exchange Search Host Controller
  • Microsoft Exchange Search

Then you rename or delete the content index folder named with the GUID of the database and restart the services, forcing Exchange to rebuild the content indexes.  However, you notice that searching still returns the same incomplete results.
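The rebuild steps above can be scripted; this is a sketch, assuming the catalog folder sits next to the EDB file and is named with the database GUID plus a version suffix (the path and GUID below are placeholders for your environment):

```powershell
# Stop the search services that hold the catalog open
Stop-Service "HostControllerService"   # Microsoft Exchange Search Host Controller
Stop-Service "MSExchangeFastSearch"    # Microsoft Exchange Search

# Rename the content index catalog folder so Exchange builds a new one;
# replace the path below with your database's catalog folder
$catalog = "D:\ExchangeDatabases\DB01\<databaseGuid>12.1.Single"
Rename-Item $catalog "$catalog.old"

# Restarting the services forces Exchange to rebuild the index
Start-Service "MSExchangeFastSearch"
Start-Service "HostControllerService"
```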

Solution

This issue took me a bit of time to troubleshoot because nearly every search result on the symptom points to the rebuild procedure above, yet the environment I was working on did not have the ContentIndexState listed as FailedAndSuspended.  I also searched on the ContentIndexRetryQueueSize variable because its value was high, and all of the results pointed me to installing CU7 when I already had CU8 installed.

What eventually led me to the underlying issue was the following warning that was repeatedly written to the Application log:

Log Name: Application

Source: MSExchangeFastSearch

Event ID: 1009

Level: Warning

image

The indexing of mailbox database Admin encountered an unexpected exception. Error details: Microsoft.Exchange.Search.Core.Abstraction.OperationFailedException: The component operation has failed. ---> Microsoft.Exchange.Search.Core.Abstraction.ComponentFailedPermanentException: Failed to read notifications, MDB: 8f76b2d9-77dd-44e6-a8ef-73d2a2539ae1. ---> Microsoft.Mapi.MapiExceptionMdbOffline: MapiExceptionMdbOffline: Unable to read events. (hr=0x80004005, ec=1142)

Diagnostic context:

Lid: 49384

Lid: 51176 StoreEc: 0x476

Lid: 40680 StoreEc: 0x476

Lid: 43980

Lid: 16354 StoreEc: 0x476

Lid: 38985 StoreEc: 0x476

Lid: 20098

Lid: 20585 StoreEc: 0x476

at Microsoft.Mapi.MapiExceptionHelper.InternalThrowIfErrorOrWarning(String message, Int32 hresult, Boolean allowWarnings, Int32 ec, DiagnosticContext diagCtx, Exception innerException)

at Microsoft.Mapi.MapiExceptionHelper.ThrowIfError(String message, Int32 hresult, IExInterface iUnknown, Exception innerException)

at Microsoft.Mapi.MapiEventManager.ReadEvents(Int64 startCounter, Int32 eventCountWanted, Int32 eventCountToCheck, Restriction filter, ReadEventsFlags flags, Boolean includeSid, Int64& endCounter)

at Microsoft.Exchange.Search.Mdb.NotificationsEventSource.<>c__DisplayClass3.<ReadEvents>b__1()

at Microsoft.Exchange.Search.Mdb.MapiUtil.<>c__DisplayClass1`1.<TranslateMapiExceptionsWithReturnValue>b__0()

at Microsoft.Exchange.Search.Mdb.MapiUtil.TranslateMapiExceptions(IDiagnosticsSession tracer, LocalizedString errorString, Action mapiCall)

--- End of inner exception stack trace ---

at Microsoft.Exchange.Search.Mdb.MapiUtil.TranslateMapiExceptions(IDiagnosticsSession tracer, LocalizedString errorString, Action mapiCall)

at Microsoft.Exchange.Search.Mdb.MapiUtil.TranslateMapiExceptionsWithReturnValue[TReturnValue](IDiagnosticsSession tracer, LocalizedString errorString, Func`1 mapiCall)

at Microsoft.Exchange.Search.Mdb.NotificationsEventSource.ReadEvents(Int64 startCounter, Int32 eventCountWanted, ReadEventsFlags flags, Int64& endCounter)

at Microsoft.Exchange.Search.Mdb.NotificationsEventSource.ReadFirstEventCounter()

at Microsoft.Exchange.Search.Engine.NotificationsEventSourceInfo..ctor(IWatermarkStorage watermarkStorage, INotificationsEventSource eventSource, IDiagnosticsSession diagnosticsSession, MdbInfo mdbInfo)

at Microsoft.Exchange.Search.Engine.SearchFeedingController.DetermineFeederStateAndStartFeeders()

at Microsoft.Exchange.Search.Engine.SearchFeedingController.InternalExecutionStart()

at Microsoft.Exchange.Search.Core.Common.Executable.InternalExecutionStart(Object state)

--- End of inner exception stack trace ---

at Microsoft.Exchange.Search.Core.Common.Executable.EndExecute(IAsyncResult asyncResult)

at Microsoft.Exchange.Search.Engine.SearchRootController.ExecuteComplete(IAsyncResult asyncResult)

From here, I compared the content index catalog sizes between the 3 mailbox databases in the environment and quickly noticed that the problematic mailbox database had a catalog that was 1.5GB while the others were 9.5GB or more.
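The catalog size comparison can be done quickly from PowerShell; a sketch, assuming the catalogs live under the database root and that the folder names end in .Single (adjust the path for your layout):

```powershell
# Sum the size of each content index catalog folder under the database root;
# the root path below is a placeholder for your database location
Get-ChildItem "D:\ExchangeDatabases" -Directory -Recurse |
    Where-Object { $_.Name -like "*.Single" } |
    ForEach-Object {
        $sizeGB = (Get-ChildItem $_.FullName -Recurse |
            Measure-Object Length -Sum).Sum / 1GB
        "{0}  {1:N1} GB" -f $_.FullName, $sizeGB
    }
```

An undersized catalog relative to its peers is a quick hint that indexing stalled partway through.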

My hunch was that the Veeam backups, which take place at 11:00 p.m. every evening, were conflicting with the index building engine, so I stopped the backups for the evening and forced Exchange to rebuild the index catalog overnight.  After leaving the environment for a day, I went back to test searching through both an Outlook client in Online mode and OWA, and was able to retrieve results older than 1 month.

Thursday, June 25, 2015

Inbound mail submission disabled with event ID: 15004 warning logged on Exchange Server 2013

I recently had to troubleshoot an issue for a client with only about 30 people in the organization, a mix of Mac and PC notebooks/desktops accessing a single Exchange Server 2013 server with all roles installed via a mix of MAPI, IMAP4, and EWS protocols. The volume of email within the organization isn’t particularly large, but users do tend to send attachments as large as 40MB, and Exchange is configured to journal to a 3rd party provider, which effectively doubles every message sent with an attachment.

What one of the users noticed was that he would receive the following message on his Mac mail intermittently at various times of the week:

Cannot send message using the server

The sender address some@emailaddress.com was rejected by the server webmail.url.com

The server response was: Insufficient system resources

Select a different outgoing mail server from the list below or click Try Later to leave the message in your Outbox until it can be sent.

image

Reviewing the event logs shows that when the user receives the error message above, Exchange also logs the following:

Log Name: Application

Source: MSExchangeTransport

Event ID: 15004 warning:

The resource pressure increased from Medium to High.

The following resources are under pressure:

Version buckets = 278 [High] [Normal=80 Medium=120 High=200]

The following components are disabled due to back pressure:

Inbound mail submission from Hub Transport servers

Inbound mail submission from the Internet

Mail submission from Pickup directory

Mail submission from Replay directory

Mail submission from Mailbox server

Mail delivery to remote domains

Content aggregation

Mail resubmission from the Message Resubmission component.

Mail resubmission from the Shadow Redundancy Component

The following resources are in normal state:

Queue database and disk space ("C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\mail.que") = 77% [Normal] [Normal=95% Medium=97% High=99%]

Queue database logging disk space ("C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\") = 80% [Normal] [Normal=94% Medium=96% High=98%]

Private bytes = 6% [Normal] [Normal=71% Medium=73% High=75%]

Physical memory load = 67% [limit is 94% to start dehydrating messages.]

Submission Queue = 0 [Normal] [Normal=2000 Medium=4000 High=10000]

Temporary Storage disk space ("C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Temp") = 80% [Normal] [Normal=95% Medium=97% High=99%]

image

Aside from seeing the pressure go from Medium to High, I’ve also seen pressure go from Normal to High:

The resource pressure increased from Normal to High.

The following resources are under pressure:

Version buckets = 155 [High] [Normal=80 Medium=120 High=200]

The following components are disabled due to back pressure:

Inbound mail submission from Hub Transport servers

Inbound mail submission from the Internet

Mail submission from Pickup directory

Mail submission from Replay directory

Mail submission from Mailbox server

Mail delivery to remote domains

Content aggregation

Mail resubmission from the Message Resubmission component.

Mail resubmission from the Shadow Redundancy Component

The following resources are in normal state:

Queue database and disk space ("C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\mail.que") = 77% [Normal] [Normal=95% Medium=97% High=99%]

Queue database logging disk space ("C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\") = 80% [Normal] [Normal=94% Medium=96% High=98%]

Private bytes = 5% [Normal] [Normal=71% Medium=73% High=75%]

Physical memory load = 67% [limit is 94% to start dehydrating messages.]

Submission Queue = 0 [Normal] [Normal=2000 Medium=4000 High=10000]

Temporary Storage disk space ("C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Temp") = 80% [Normal] [Normal=95% Medium=97% High=99%]

image

A bit of researching on the internet pointed me to various reasons why this could happen, but the one that appeared to be the cause in this environment was that users were sending attachments that were too large, filling up the version buckets faster than Exchange could process them. One of the cmdlets a forum user suggested running was the following:

Get-MessageTrackingLog -ResultSize Unlimited -Start "04/03/2014 00:00:00" | Where-Object {$_.TotalBytes -gt 20240000} | Select-Object Sender, Subject, Recipients, TotalBytes
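A hedged variation on the same tracking log query that groups the large messages by sender makes heavy senders easier to spot:

```powershell
# Find messages over ~20 MB in the tracking logs and count them per sender
Get-MessageTrackingLog -ResultSize Unlimited -Start "04/03/2014 00:00:00" |
    Where-Object { $_.TotalBytes -gt 20MB } |
    Group-Object Sender |
    Sort-Object Count -Descending |
    Select-Object Count, Name
```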

Executing the cmdlet above displayed the following:

image

Further digging into the logs revealed quite a few emails with large attachments sent around the time the warning was logged, and after consulting with our Partner Forum support engineer, I decided to increase the thresholds that deem the pressure Normal, Medium or High.  The keys of interest are found on the Exchange server in the following folder:

C:\Program Files\Microsoft\Exchange Server\V15\Bin

… in the following file:

EdgeTransport.exe.config

image

From within this file, look for the following keys:

<add key="VersionBucketsHighThreshold" value="200" />

<add key="VersionBucketsMediumThreshold" value="120" />

<add key="VersionBucketsNormalThreshold" value="80" />

image

Proceed and change the values to a higher number (I simply doubled the number):

<add key="VersionBucketsHighThreshold" value="400" />

<add key="VersionBucketsMediumThreshold" value="240" />

<add key="VersionBucketsNormalThreshold" value="160" />

image

Another suggested key to change was:

<add key="DatabaseMaxCacheSize" value="134217728" />

… but the value on Exchange 2013 already defaults to 512MB, so there was no need to modify it.

With these new values in place, I restarted the Microsoft Exchange Transport service to make them take effect.
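Editing EdgeTransport.exe.config by hand is error-prone, so here is a sketch that backs the file up and updates the three keys programmatically, using the same paths and doubled values as above:

```powershell
$path = "C:\Program Files\Microsoft\Exchange Server\V15\Bin\EdgeTransport.exe.config"

# Always keep a backup before touching the transport config
Copy-Item $path "$path.bak"

[xml]$config = Get-Content $path

# Double the default version bucket thresholds
foreach ($pair in @{ VersionBucketsHighThreshold   = "400"
                     VersionBucketsMediumThreshold = "240"
                     VersionBucketsNormalThreshold = "160" }.GetEnumerator()) {
    $node = $config.configuration.appSettings.add |
        Where-Object { $_.key -eq $pair.Key }
    if ($node) { $node.value = $pair.Value }
}

$config.Save($path)

# The transport service must be restarted for the new values to take effect
Restart-Service MSExchangeTransport
```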

A month went by without any complaints, and the logs revealed that while the version buckets rose to the Medium threshold, they did not reach the new High threshold of 400.

image

Another month passed and I got another call from the same user indicating the problem had reappeared.  A quick look at the logs showed that the version buckets did reach the new High threshold of 400. With no ideas left, I opened a support call with Microsoft, and the engineer gave me essentially the same explanation: most likely a lot of large attachments were being sent, so Exchange was unable to flush the messages held in memory.  I explained that there are only 30 people and the attachments aren’t that big, so the engineer exported the tracking logs to review them herself.  After about 30 minutes she confirmed that while there were attachments, none of them exceeded 40MB.  At this point, she said the configuration changes we could try were the following:

  1. Modify the Normal, Medium, High version buckets keys
  2. Modify the DatabaseMaxCacheSize
  3. Modify the QueueDatabaseLoggingFileSize
  4. Modify the DatabaseCheckPointDepthMax
  5. Set limits on send message sizes
  6. Increase the memory on the server

After reviewing the version bucket thresholds I had set when I doubled the default values, she said we didn’t need to increase them further.  The question I immediately asked was whether I could if I wanted to, and she said yes, but we would run the risk of crashing the server if the thresholds were set so high that the server ran out of memory.

The DatabaseMaxCacheSize key was unchanged at 512MB so she asked me to change it to:

<add key="DatabaseMaxCacheSize" value="1073741824" />

image

The <add key="QueueDatabaseLoggingFileSize" value="5MB" />:

image

… was changed to <add key="QueueDatabaseLoggingFileSize" value="31457280" />:

image

The <add key="DatabaseCheckPointDepthMax" value="384MB" />:

image

… was changed to <add key="DatabaseCheckPointDepthMax" value="512MB" />:

image

Next, she used the following cmdlets to review the message size limits currently set:

Get-ExchangeServer | fl name, admindisplayversion, serverrole, site, edition

Get-Transportconfig | fl *size*

Get-sendconnector | fl Name, *size*

Get-receiveconnector | fl Name, *size*

Get-Mailbox -ResultSize Unlimited | fl Name, MaxSendSize, MaxReceiveSize >C:\mbx.txt

Get-MailboxDatabase | FL

Get-Mailboxdatabase -Server <serverName> | FL Identity,IssueWarningQuota,ProhibitSendQuota,ProhibitSendReceiveQuota

Get-Mailbox -server <ServerName> -ResultSize unlimited | Where {$_.UseDatabaseQuotaDefaults -eq $false} |ft DisplayName,IssueWarningQuota,ProhibitSendQuota,@{label="TotalItemSize(MB)";expression={(get-mailboxstatistics $_).TotalItemSize.Value.ToMB()}}

She noticed that I had already set limits on the mailbox database, but wanted me to set all of the other connectors and some other configurations to a 100MB limit by executing the following cmdlets:

Get-TransportConfig | Set-TransportConfig -MaxReceiveSize 100MB

Get-TransportConfig | Set-TransportConfig -MaxSendSize 100MB

Get-SendConnector | Set-SendConnector -MaxMessageSize 100MB

Get-ReceiveConnector | Set-ReceiveConnector -MaxMessageSize 100MB

Get-TransportConfig | Set-TransportConfig -ExternalDSNMaxMessageAttachSize 100MB -InternalDSNMaxMessageAttachSize 100MB

Get-Mailbox -ResultSize Unlimited | Set-Mailbox -MaxSendSize 100MB -MaxReceiveSize 100MB

The last item we could modify was the memory of the server, but as 16GB was already assigned, she said we could leave it as is for now and monitor the event logs.  It has been 3 weeks, and while the version buckets have reached Medium, they have peaked at 282, well under the 400 High threshold that was set.
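Ongoing monitoring of the back pressure warnings can be scripted as well; a sketch that pulls recent 15004 events from the Application log:

```powershell
# List MSExchangeTransport 15004 (resource pressure) warnings from the last 7 days
Get-WinEvent -FilterHashtable @{
    LogName      = "Application"
    ProviderName = "MSExchangeTransport"
    Id           = 15004
    StartTime    = (Get-Date).AddDays(-7)
} -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, Message |
    Format-List
```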

Troubleshooting this issue was quite frustrating because there wasn’t really a KB article with clear instructions for all of these checks, so I hope this post helps anyone out there who experiences this issue in their environment.

Friday, June 19, 2015

Adding an account from an external domain with a forest trust configured throws the error: “The security identifier could not be resolved…”

Problem

You’ve successfully deployed a new Windows Server 2012 R2 Remote Desktop Services farm in your environment and have begun assigning permissions to users located in another forest with which you have a forest trust:

image

While you are able to browse the domain in the separate forest and select a user or group, you quickly notice you receive the following error message when you attempt to apply the settings:

The security identifier could not be resolved. Ensure that a two-way trust exists for the domain of the selected users.

Exception: The network path was not found.

image

Solution

I came across the same problem with a Windows Server 2008 R2 Remote Desktop Services deployment, and it looks like it still persists in the newer Windows Server 2012 R2 version. To get around this issue, create a Domain local group in the domain where the RDS server is installed:

image

… then proceed and add the user or group from the trusted forest’s domain into the Domain local group:

image

… and because a Domain local group can’t be nested into any other type of group such as Global or Universal in the domain, we have to assign it directly to the RDS Collection and RemoteApp:

image

Not exactly the most elegant solution but good enough for a workaround.

Wednesday, June 17, 2015

Removing the: “A website is trying to run a RemoteApp program. Make sure that you trust the publisher before you connect to run the program.” message prompt when launching RD Web Access RemoteApp

Problem

You’ve configured the RemoteApp resources on your Remote Desktop Services deployment and attempt to launch an application but receive the following warning message:

A website is trying to run a RemoteApp program. Make sure that you trust the publisher before you connect to run the program.

This RemoteApp program could harm your local or remote computer. Make sure that you trust the publisher before you connect to run this program.

Don’t ask me again for remote connections from this publisher

image

imageimage

As shown in the screenshots above, you have the option of checking the checkbox that reads:

Don’t ask me again for remote connections from this publisher

… to suppress this prompt for yourself, but you do not want everyone in the organization to have to check it individually.

Solution

One of the ways to remove this warning prompt is to implement a GPO, applied to the user or computer account, that trusts the SHA1 thumbprint of the certificate presented.  Begin by opening the properties of the certificate used for your Remote Desktop Services portal and navigating to the Details tab:

image

Scroll down to the bottom where the Thumbprint is listed:

image

Select the Thumbprint field:

image

Select the thumbprint and copy the text:

image

Now, before we proceed to paste this into the GPO setting we’ll be using, it is important to first paste the thumbprint you have just copied into a command prompt, as such:

image

Notice how there is a question mark ? in front of the thumbprint? This is a hidden character picked up by the copy that the command prompt renders as a question mark; note that pasting the same text into Notepad does not reveal it:

image

Proceed and copy the thumbprint from the command prompt without the question mark.
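An alternative that sidesteps the hidden character entirely is to read the thumbprint with PowerShell, which returns plain hex with no formatting artifacts. A sketch, assuming the certificate is in the local machine store and matching on its subject (the subject below is a placeholder):

```powershell
# Find the RDS certificate by subject and emit its thumbprint as clean hex;
# "portal.yourdomain.com" is a placeholder for your certificate's subject
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*portal.yourdomain.com*" } |
    Select-Object -ExpandProperty Thumbprint
```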

Next, create a new GPO or open an existing GPO that you would like to use and navigate to:

Policies\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Connection Client

Note that this policy can be applied to either a computer object or a user account so use whichever fits better for your environment.

image

Proceed and open the Specify SHA1 thumbprints of certificates representing trusted .rdp publishers setting:

image

Paste the copied thumbprint into the Comma-separated list of SHA1 trusted certificate thumbprints field:

image

Apply the configuration:

image

The user should no longer see the warning prompt once the policy is applied to a computer object or user account.

Wednesday, June 10, 2015

Recovering Cisco UCS Fabric Interconnect from the loader prompt

Problem

I recently had an issue with a Cisco UCS 6120 fabric interconnect we received from RMA that would no longer boot properly and simply presented the loader prompt no matter how many times it was restarted:

image

Hitting the question mark ? would display the following available commands:

  • dir
  • reboot
  • serial
  • show
  • boot
  • help
  • resetcmos
  • set

image

Executing the dir command would display the following files:

image

A bit of researching on Google turns up blogs and forum posts recommending to simply execute the boot command along with the kickstart file, as such:

boot ucs-6100-k9-kickstart.4.1.3.N2.1.1l.bin

imageimage

The boot process eventually brings you to the switch(boot)# prompt:

image

From here, some blog posts indicate that you can use the erase configuration command to erase the configuration on the fabric interconnect and start fresh, but the command does not work as suggested:

erase configuration

% invalid command detected at ‘^’ marker.

image

It’s no surprise, because executing the question mark ? command brings up the following available commands in this context:

  • clear
  • config
  • copy
  • delete
  • dir
  • exit
  • find
  • format
  • init
  • load
  • mkdir
  • move
  • no
  • pwd
  • rmdir
  • show
  • sleep
  • tail
  • terminal

image

It is possible to assign an IP address under this switch(boot) prompt as such:

config t

interface mgmt 0

ip address <ipAddress> <subnetMask>

no shut

exit

ip default <defaultGateway>

exit

image

While the interface responds to ping once an IP is assigned, you won’t be able to browse to it via http or https:

image

Solution

The way to properly boot the fabric interconnect from the loader prompt is to first restart it:

image

Then boot the fabric interconnect with both the kickstart and system bin files, as such:

boot ucs-6100-k9-kickstart.4.1.3.N2.1.11.bin ucs-7100-k9-system.4.1.3.N2.1.1l.bin

imageimage

imageimage

imageimage

image

Once the boot process has completed, the IP address assigned earlier should now respond to pings:

image

… and you should be able to browse to the web page:

image

From here, you can use the console prompt to execute connect local-mgmt:

image

… and then execute erase configuration to remove the config:

image