Blog Entries posted by proximagr

  1. proximagr
    Bulletproof manage your Azure VMs
Continuing the Azure Security Center posts, today we will see a new feature of the Security Center, called Just in Time VM Access.
As a best security practice, all the management ports of a Virtual Machine should be closed using Network Security Groups. Only the ports required for any published services should be opened, if any.
However, there are many occasions where we are asked to open a management port for administration, or a service port for some tests, for a short time. This action has two major problems. First, it requires a lot of administration time, because the administrator must go to the Azure Portal and add a rule to the VM's NSG. The second problem is that the port is often forgotten open, and this is a major vulnerability, since the majority of Brute Force attacks are performed against the management ports, 22 and 3389.
Here comes the Azure Security Center, with the Just in Time VM Access feature. With this feature we can use the RBAC of the Azure Portal and allow specific users to request a predefined port to be opened for a short time frame.
JIT Configuration
Let's see how we configure JIT. First we need to go to the Azure Security Center. Scroll down to ADVANCED CLOUD DEFENSE and click "Just in time VM Access". Since it is in Preview, you need to press "Try Just in time VM access".

After we enable JIT, the window displays three tabs: Configured, Recommended and No recommendation. The Configured tab displays the Virtual Machines on which we have already enabled JIT. The Recommended tab lists VMs that have NSGs and are recommended to be enabled for JIT. The No recommendation tab lists Classic VMs or VMs that don't have an attached NSG.

    To enable JIT for a VM, go to the Recommended tab, select one or more VMs and press “Enable JIT on x VMs”

At the "JIT VM access configuration" blade the Security Center proposes rules with the default management ports. We can add other ports that we need and also remove any that are unnecessary.
For each rule we can configure the Port, the Protocol, the Source IP and the Maximum request time.
If we leave the "Allowed source IPs" setting to "Per request" then we allow the requester to decide. One very interesting option here is that when a user requests access, he has the option to automatically allow only the Public IP that he is using at that time.
With the last option, the "Max request time", we narrow down the maximum time that a port will be allowed to stay open.

After we configure all the parameters we click Save and the VM moves to the Configured tab. At any time we can change the configuration by selecting the VM, pressing the three dots at the end of the line (…) and clicking Edit.
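The same JIT policy can also be defined with PowerShell instead of the portal. Below is a minimal sketch using the Az.Security module; this is my assumption of how to script it rather than something shown in the post, and the subscription, resource group and VM names are placeholders.

# Hypothetical names; replace with your own subscription, resource group and VM
$vmId = "/subscriptions/<subscription id>/resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyVM"

# One entry per port, matching the portal settings: port, protocol, allowed source IPs, max request time
$JitPolicy = @{
    id    = $vmId
    ports = @(
        @{ number = 22;   protocol = "*"; allowedSourceAddressPrefix = @("*"); maxRequestAccessDuration = "PT3H" },
        @{ number = 3389; protocol = "*"; allowedSourceAddressPrefix = @("*"); maxRequestAccessDuration = "PT3H" }
    )
}

# Create (or update) the JIT policy for the VM
Set-AzJitNetworkAccessPolicy -Kind "Basic" -Location "westeurope" -Name "default" `
    -ResourceGroupName "MyRG" -VirtualMachine @($JitPolicy)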

The Properties button opens the VM's blade, the Activity log shows all the users that requested access, and Remove, of course, disables JIT.
Behind the scenes
What really happens to the VM? If you browse to the NSG that is attached to the VM you will see that all the port rules configured in JIT are added as NSG rules with lower priority than all the other rules. All the other rules automatically have their priority changed to higher values.

Let's see how we request access and what happens in the background. To request access, go to Security Center / JIT, select the VM and press "Request Access".

At the "Request access" blade, switch on the desired port, select "My IP" or "IP Range" and the time range, all according to the JIT configuration of the VM. Finally press "Open Ports".

In the above example I selected "My IP", so if you go to the VM's NSG you will see that the 3389 port rule changed to "Allow" with my current Public IP as the Source. It also moved to first priority.

After the expiration of the time range the port will change to "Deny" and move back to its prior priority.
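The access request itself can also be scripted. Again a minimal sketch assuming the Az.Security module and placeholder names; the requested port, source IP and end time must stay within the limits of the JIT configuration above.

# Hypothetical names; replace subscription, resource group, location and VM
$vmId = "/subscriptions/<subscription id>/resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyVM"

# Request port 3389 from a single public IP until the given UTC time
$JitRequest = @{
    id    = $vmId
    ports = @(
        @{ number = 3389; allowedSourceAddressPrefix = @("1.2.3.4"); endTimeUtc = "2018-10-01T17:00:00Z" }
    )
}

Start-AzJitNetworkAccessPolicy `
    -ResourceId "/subscriptions/<subscription id>/resourceGroups/MyRG/providers/Microsoft.Security/locations/westeurope/jitNetworkAccessPolicies/default" `
    -VirtualMachine @($JitRequest)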
  2. proximagr
Azure blob storage is billed based on how much data you actually use. So you can have a 1023 GB disk, but if you use only 20 GB you will be billed for 20 GB. But, if you write more data, let's say 50 GB, and then erase it, the free space will not automatically be released.
sandrinodimattia (https://github.com/sandrinodimattia) released an app that allows you to check the actual size of a VHD on Azure. It works on both ASM and ARM.
You can download the executable at: https://github.com/sandrinodimattia/WindowsAzure-VhdSize/releases
The command is:
wazvhdsize.exe "storageaccountname" "storageaccountaccesskey==" containername
Source: https://github.com/sandrinodimattia/WindowsAzure-VhdSize
    <p><a class="a2a_button_email" href="http://www.addtoany.com/add_to/email?linkurl=http%3A%2F%2Fwww.e-apostolidis.gr%2Fmicrosoft%2Fazure%2Fcalculate-azure-vhd-actualbilling-size%2F&linkname=Calculate%20Azure%20VHD%20actual%2Fbilling%20size"title="Email" rel="nofollow" target="_blank"><img src="http://www.e-apostolidis.gr/wp-content/plugins/add-to-any/icons/email.png" width="16" height="16" alt="Email"/></a><a class="a2a_button_print" href="http://www.addtoany.com/add_to/print?linkurl=http%3A%2F%2Fwww.e-apostolidis.gr%2Fmicrosoft%2Fazure%2Fcalculate-azure-vhd-actualbilling-size%2F&linkname=Calculate%20Azure%20VHD%20actual%2Fbilling%20size" title="Print" rel="nofollow" target="_blank"><img src="http://www.e-apostolidis.gr/wp-content/plugins/add-to-any/icons/print.png" width="16" height="16" alt="Print"/></a><a class="a2a_dd a2a_target addtoany_share_save" href="https://www.addtoany.com/share#url=http%3A%2F%2Fwww.e-apostolidis.gr%2Fmicrosoft%2Fazure%2Fcalculate-azure-vhd-actualbilling-size%2F&title=Calculate%20Azure%20VHD%20actual%2Fbilling%20size" id="wpa2a_4"><img src="http://www.e-apostolidis.gr/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a></p><p>The post <a rel="nofollow" href="http://www.e-apostolidis.gr/microsoft/azure/calculate-azure-vhd-actualbilling-size/">Calculate Azure VHD actual/billing size</a> appeared first on <a rel="nofollow" href="http://www.e-apostolidis.gr">Proxima's IT Corner</a>.</p>


    <a href="http://www.e-apostolidis.gr/microsoft/azure/calculate-azure-vhd-actualbilling-size/"class='bbc_url' rel='nofollow external'>Source</a>
  3. proximagr
I was looking for a way to get a list with many details about the VMs of an Azure Classic deployment. Some of the details are VM Name, HostName, Service Name, IP address, Instance Size, Availability Set, Operating System, Disk Name (OS), SourceImageName (OS), MediaLink (OS), HostCaching (OS), Subnet, DataDisk Name, DataDisk HostCaching, DataDisk MediaLink, DataDisk Size.
I started with PowerShell ISE and some TechNet searching, and after a lot of testing I created this script:
Add-AzureAccount
    Select-AzureSubscription -SubscriptionId xxxxxxx-xxxxxxxx-xxxxxx-xxxxxx
    $VMlist = ForEach ($VM in (Get-AzureVM))
    { Get-AzureOSDisk -VM $VM | Select @{Label="VM";Expression={$VM.Name}},`
    @{Label="HostName";Expression={$VM.HostName}},`
    @{Label="Service";Expression={$VM.ServiceName}},`
    @{Label="IP";Expression={$VM.IpAddress}},`
    @{Label="InstanceSize";Expression={$VM.InstanceSize}},`
    @{Label="AvailabilitySet";Expression={$VM.AvailabilitySetName}},`
    OS,DiskName,SourceImageName,MediaLink,HostCaching, `
    @{Label="Subnet";Expression={(Get-AzureSubnet -VM $VM)}},`
    @{Label="DataDiskName";Expression={(Get-AzureDataDisk -VM $VM).DiskName}},`
    @{Label="DDHostCaching";Expression={(Get-AzureDataDisk -VM $VM).HostCaching}},`
    @{Label="DDMediaLink";Expression={(Get-AzureDataDisk -VM $VM).MediaLink}},`
    @{Label="DDSize";Expression={(Get-AzureDataDisk -VM $VM).LogicalDiskSizeInGB}}
    }
$VMlist | Sort VM,SourceImageName | Export-CSV C:\vms_alldata.csv -NoTypeInformation
Just open the vms_alldata.csv with Excel, convert text to columns, insert a table and voila:
    <p><a href="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/allvms.jpg"><imgclass="alignnone wp-image-990 size-full" src="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/allvms.jpg" alt="allvms" width="1017" height="58" srcset="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/allvms.jpg 1017w, http://www.e-apostolidis.gr/wp-content/uploads/2016/05/allvms-300x17.jpg 300w, http://www.e-apostolidis.gr/wp-content/uploads/2016/05/allvms-768x44.jpg 768w, http://www.e-apostolidis.gr/wp-content/uploads/2016/05/allvms-660x38.jpg 660w" sizes="(max-width: 1017px) 100vw, 1017px" /></a></p>
    <p><a class="a2a_button_email" href="http://www.addtoany.com/add_to/email?linkurl=http%3A%2F%2Fwww.e-apostolidis.gr%2Fmicrosoft%2Fazure%2Fclassic-azure-vm-details%2F&linkname=Classic%20Azure%20VM%20Details"title="Email" rel="nofollow" target="_blank"><img src="http://www.e-apostolidis.gr/wp-content/plugins/add-to-any/icons/email.png" width="16" height="16" alt="Email"/></a><a class="a2a_button_print" href="http://www.addtoany.com/add_to/print?linkurl=http%3A%2F%2Fwww.e-apostolidis.gr%2Fmicrosoft%2Fazure%2Fclassic-azure-vm-details%2F&linkname=Classic%20Azure%20VM%20Details" title="Print" rel="nofollow" target="_blank"><img src="http://www.e-apostolidis.gr/wp-content/plugins/add-to-any/icons/print.png" width="16" height="16" alt="Print"/></a><a class="a2a_dd a2a_target addtoany_share_save" href="https://www.addtoany.com/share#url=http%3A%2F%2Fwww.e-apostolidis.gr%2Fmicrosoft%2Fazure%2Fclassic-azure-vm-details%2F&title=Classic%20Azure%20VM%20Details" id="wpa2a_2"><img src="http://www.e-apostolidis.gr/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a></p><p>The post <a rel="nofollow" href="http://www.e-apostolidis.gr/microsoft/azure/classic-azure-vm-details/">Classic Azure VM Details</a> appeared first on <a rel="nofollow" href="http://www.e-apostolidis.gr">Proxima's IT Corner</a>.</p>


    <a href="http://www.e-apostolidis.gr/microsoft/azure/classic-azure-vm-details/"class='bbc_url' rel='nofollow external'>Source</a>
  4. proximagr
    Compliance Report using Azure Policy
Azure Policy is a powerful tool for Azure Governance. With Azure Policy we can define rules for all the Azure Subscriptions that we manage. We can use these rules for simple limitation actions, like permitting only specific VM Series and Sizes to be created, and also for more complex rule sets that help you standardize the whole Azure deployment. In my previous posts, we learned How to limit the Azure VM Sizes and How to enforce tags for resources creation.
In the current post we will learn how to use Azure Policy to get a compliance report for our deployment. We will learn this by using an example. We will create two Virtual Networks and add a Network Security Group only to the first one. Then we will use the Policy to audit whether the Subnets have an NSG assigned or not.
    First we need two Virtual Networks. You can create the Virtual Networks using the Azure Portal or using ARM template, like mine from my Github account: https://github.com/proximagr/ARMTemplates/blob/master/2vnets.json
    After applying the template you will have two VNETs like that:

Then we will add a Network Security Group (NSG) only to the MyVNET01 Virtual Network, again using the Azure Portal, PowerShell or my ARM Template for NSG.
Assign the NSG to the MyVNET01 Virtual Network.

    Add the Policy
    Go to Azure Policy -> Definitions and click the “+ Policy definition” to create a new policy definition.

At the New Policy definition page, select the subscription (location) where the policy will be saved, then add a name. In this case we will use the sample policy template from Microsoft docs, so I will add the same name.
Copy the policy JSON text from https://docs.microsoft.com/en-us/azure/governance/policy/samples/nsg-on-subnet, paste it into the POLICY RULE field below and Save.

    At the “effect” part of the Json, change the “deny” to “audit”.

    If you search for “NSG” you will see our new policy definition, ready to be assigned.

    Click on the definition’s name to open it and press Assign.

    I will just target the “ComplianceReport” Resource Group

    At the parameters, I added the Resource ID of the NSG, “MyNSG01”

    Evaluate the results
To check the compliance, go to the Policy – Compliance page. You have to wait about 15 minutes for the compliance policy to evaluate the resources.
If you search for "nsg" you will see that the "Audit NSG on Subnet" policy is 50% compliant. Click on the policy's name to view more details.

    The assignment details page will open where we can see what resources are not compliant.

    Click on the three dots (…) next to the non-compliant subnet and select “view compliance details” to check why this resource is not compliant.

    The compliance details reports that the value is null and what the required (target) value must be.

If you want to trigger an on-demand compliance check, you need to make a POST request. You can follow my post Validate Azure Resource Move with Postman to create the access token and then use it to make a POST request to the Resource Group using this URI:
    https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{YourRG}/providers/Microsoft.PolicyInsights/policyStates/latest/triggerEvaluation?api-version=2018-07-01-preview
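A quick way to make that POST from PowerShell, assuming you already have a bearer token as described in the Postman post; the subscription ID, resource group and token below are placeholders:

# Placeholders; fill in your own values
$subscriptionId = "<subscription id>"
$resourceGroup  = "ComplianceReport"
$token          = "<bearer access token>"

$uri = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup" +
       "/providers/Microsoft.PolicyInsights/policyStates/latest/triggerEvaluation?api-version=2018-07-01-preview"

# The call is asynchronous; the evaluation runs in the background
Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" }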
    Source:
    https://docs.microsoft.com/en-us/azure/governance/policy/concepts/effects
    https://docs.microsoft.com/en-us/azure/governance/policy/samples/nsg-on-subnet
    https://docs.microsoft.com/en-us/azure/governance/policy/how-to/get-compliance-data#evaluation-triggers

    The post Compliance Report using Azure Policy appeared first on Apostolidis IT Corner.
     
     
     
  5. proximagr
Azure Web Application Firewall (WAF) is a function of the Azure Application Gateway that detects and prevents exploits and attacks against a web application. Using a WAF we add an additional security layer in front of our application. To have a sneak peek at the most common web application attacks, take a look at the OWASP Top 10 Most Critical Web Application Security Risks.
In my previous posts we have seen how to Protect your Web App using Azure Application Gateway Web Application Firewall and Use Log Analytics to Query the WAF Logs and email those logs to the Admins. In this post I want to share some tips on how to configure the Azure Web Application Firewall.
The Azure Web Application Firewall, like all WAFs, needs a period in Detection mode, the "training period", in order to gather logs about what would be blocked, so that we can configure it accordingly before turning the WAF to Prevention mode. The Azure Web Application Firewall uses the OWASP ModSecurity Core Rule Set (CRS). You can select version 2.2.9 or version 3.0 of the OWASP ModSecurity Core Rule Set. These rules include protection against attacks such as SQL injection, cross-site scripting attacks, and session hijacks.
    The configuration of the Azure Web Application Firewall has two parts. One part is the OWASP rules custom configuration, where we can check / uncheck the OWASP rules that the WAF will use to analyse the requests:
    and the second part is the Exclusions and the Request Size Limits:
Let's see how we can find out what to exclude and what to customize. Once you set up the Azure Application Gateway and publish your web application, turn on the Firewall in Detection mode. Enable the Diagnostic Logs, send the logs to Log Analytics and start using the web application. I have covered all those steps in my previous posts, Protect your Web App using Azure Application Gateway Web Application Firewall and Use Log Analytics to Query the WAF Logs and email those logs to the Admins. To make it more fun you can actually attack your application using sample attacks, like the SQL Injection samples from this link: https://www.owasp.org/index.php/Testing_for_SQL_Injection_(OTG-INPVAL-005) and Cross-site Scripting (XSS) from this link: https://www.owasp.org/index.php/Cross-site_Scripting_(XSS). Both links are from OWASP for testing.
    After a while run the query to check the Azure Web Application Firewall logs:
     



AzureDiagnostics
| where Resource == "PROWAF" and OperationName == "ApplicationGatewayFirewall"
| where TimeGenerated > ago(24h)
| summarize count() by TimeGenerated, clientIp_s, ruleId_s, Message, details_message_s, requestUri_s, details_file_s, hostname_s
You will get the below results:
In the Message part of the log you will see the kind of attack that the WAF has detected.
In the ruleId_s column you can find the OWASP rule ID. With this information you can search for the Rule ID at the Advanced rule configuration and uncheck the specific rule. Of course, every rule you uncheck opens a security hole, so I recommend first checking whether you can alter your application to comply with the rule, and only dropping the rule if this is not possible.
In the details_message_s column you can also find the matched pattern and configure the Exclusions.
Finally you can configure the request size limits according to your application.
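This tuning can also be scripted. Below is a minimal sketch with the AzureRm cmdlets for disabling a single CRS rule while staying in Detection mode; the resource group name, rule group name and rule ID are example values you would take from the log query above, not values from my deployment.

# Example values; use the ruleId_s and rule group reported by the Log Analytics query
$appgw = Get-AzureRmApplicationGateway -Name "PROWAF" -ResourceGroupName "MyRG"

# Disable one rule (e.g. 942130) from the CRS 3.0 SQL injection rule group
$disabledGroup = New-AzureRmApplicationGatewayFirewallDisabledRuleGroupConfig `
    -RuleGroupName "REQUEST-942-APPLICATION-ATTACK-SQLI" -Rule 942130

# Keep the WAF in Detection mode while tuning, with the selected rule disabled
Set-AzureRmApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $appgw `
    -Enabled $true -FirewallMode "Detection" -RuleSetType "OWASP" -RuleSetVersion "3.0" `
    -DisabledRuleGroup $disabledGroup

# Commit the change to the Application Gateway
Set-AzureRmApplicationGateway -ApplicationGateway $appgw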
Once you finalize your Azure Application Firewall configuration and you no longer see "Blocked" messages, change it to "Prevention" mode to start protecting your web application.
    Reference:
    WAF Overview: https://docs.microsoft.com/en-us/azure/application-gateway/waf-overview
    WAF Configuration: https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-waf-configuration
    OWASP ModSecurity Core Rule Set (CRS): https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project
  6. proximagr
    First we need to install the Azure PowerShell module from http://go.microsoft.com/fwlink/p/?linkid=320376&clcid=0x409
     
    Then open PowerShell and follow the below commands:
     
#Get your subscription file - The browser will open, you will need to log in to the Azure Subscription and finally it will download the <subscriptionname>.publishsettings file
    Get-AzurePublishSettingsFile
     
    #Connect to your Subscription
    Import-AzurePublishSettingsFile -PublishSettingsFile "full path to downloaded file"
    Source: http://www.e-apostolidis.gr/microsoft/connect-powershell-to-azure/
  7. proximagr
    To connect PowerShell to Exchange Online, open the PowerShell and run:

$UserCredential = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session

Source: http://www.e-apostolidis.gr/microsoft/connect-to-exchange-online/
  8. proximagr
    Connect two or more Azure Virtual Networks using one VPN Gateway
Peering is a feature that allows us to connect two or more virtual networks so that they act as one bigger network. In this post we will see how we can connect two Azure Virtual Networks using peering and access the whole network using one VPN Gateway. We can connect Virtual Networks whether they are in the same Subscription or not.
    I have created a diagram to help understand the topology.

We have a Virtual Network with a Site-2-Site VPN to On-Premises; it can also have a Point-2-Site connection configured. This is VNET A. We have another Virtual Network in the same Subscription that we want to connect to it. This is VNET B. We can also have a third Virtual Network in a different subscription. This is VNET C.

In short, we need these peerings with the specific settings:
At VNETA: peering VNETA to VNETB with "Allow Gateway transit"
At VNETA: peering VNETA to VNETC
At VNETB: peering VNETB to VNETA with "Use Remote Gateway"
At VNETB: peering VNETB to VNETC
At VNETC: peering VNETC to VNETA with "Use Remote Gateway"
At VNETC: peering VNETC to VNETB

In order to be able to connect all those networks and also access them using the VPN Connection, there are four requirements:
The account that will be used to create the peering must have the "Network Contributor" role.
The Address Spaces must be different and must not overlap.
All Virtual Networks, except the one that has the VPN Connection, must NOT have a VPN Gateway deployed.
Of course, at the local VPN device (router) we need to add the address spaces of all the Virtual Networks that we need to access.
Let's lab it:
HQ 192.168.0.0/16 –> The on-premises network
VNET A 10.1.0.0/16 –> The Virtual Network that has the VPN Gateway (in my lab it is named "devvn")
VNET B 10.229.128.0/24 –> The virtual network in a different subscription from the Gateway (in my lab it is named "prtg-rsg-vnet")
VNET C 172.16.1.0/24 –> The virtual network in the same subscription as the Gateway network (in my lab it is named "provsevnet")

    The on-premises network is connected with Site-to-site (IPsec) VPN to the VNETA

Now we need to connect VNETA and VNETB using VNet Peering. In order to have a peering connection we need to create a connection from VNETA to VNETB and one from VNETB to VNETA.
    Open the VNETA Virtual Network, go to the Peerings setting and press +ADD
    Select the VNETB and check the “Allow Gateway transit” to allow the peer virtual network to use your virtual network gateway


    Then go to the VNETB, go to the Peerings setting and click +ADD.
    Select the VNETA Virtual Network and check the “Use Remote Gateway” to use the peer’s virtual network gateway. This way the VNETB will use the VNETA’s Gateway.
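If you prefer PowerShell over the portal, the same pair of peerings can be created with the AzureRm module; a minimal sketch assuming both VNETs are in the same subscription and using placeholder names:

# Placeholder names; adjust to your VNETs and resource groups
$vnetA = Get-AzureRmVirtualNetwork -Name "VNETA" -ResourceGroupName "RG-A"
$vnetB = Get-AzureRmVirtualNetwork -Name "VNETB" -ResourceGroupName "RG-B"

# VNETA -> VNETB: allow the peer to use VNETA's VPN gateway
Add-AzureRmVirtualNetworkPeering -Name "VNETAtoVNETB" -VirtualNetwork $vnetA `
    -RemoteVirtualNetworkId $vnetB.Id -AllowGatewayTransit

# VNETB -> VNETA: use VNETA's gateway as the remote gateway
Add-AzureRmVirtualNetworkPeering -Name "VNETBtoVNETA" -VirtualNetwork $vnetB `
    -RemoteVirtualNetworkId $vnetA.Id -UseRemoteGateways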


Now we can contact the VNETB network from our on-premises network.
A multi-ping screenshot:
From 10.229.128.5 (VNETB) to 192.168.0.4 (on-premises) & the opposite. From 10.1.2.4 (VNETA) to 10.229.128.5 (VNETB) & to 192.168.0.4 (on-premises).

    The next step is to create a cross-subscription peering VNETA with VNETC
    Open the VNETA and create a peering by selecting the VNETC from the other Subscription and check the “allow gateway transit”

Then go to the VNETC and create a peering with the VNETA and check the "use remote gateway"

    With the two above connections we have connectivity between the on-premises network and the VNETC.
The final step is to enable the connectivity between VNETB & VNETC. To accomplish this just create one peering from VNETB to VNETC and one from VNETC to VNETB.
    Ping inception:

    In order to have client VPN connectivity to the whole network, create a Point-2-Site VPN at the VNETA. You can follow this guide: Azure Start Point | Point-to-Site VPN
    If you like my content you can follow my blog: e-apostolidis.gr
  9. proximagr
    Copy AZURE VHD to other storage account
     
    #Source storage account
    $context1 = new-azurestoragecontext -storageaccountname "name_source_account" -storageaccountkey "key_source_account"
     
    #Destination storage account
    $context2 = new-azurestoragecontext -storageaccountname "name_destination_account" -storageaccountkey "key_destination_account"
     
    #Initiate copy this might take a while
    Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "name_as_found_in_step_one.vhd" -SrcContext $context1 -DestContainer "vhds" -DestBlob "my_destination_name.vhd" -DestContext $context2
     
    Track Azure VHD copy process
     

    $context = new-azurestoragecontext -storageaccountname "name_destination_account" -storageaccountkey "key_destination_account"
     
    Get-AzureStorageBlobCopyState -Blob "file_name.vhd" -Container "vhds" -Context $context
     
    source: http://www.e-apostolidis.gr/microsoft/copy-azure-vhd-to-other-storage-account/
  10. proximagr
There are many reasons to have your Disks stored in separate Storage Accounts, per Cloud Service. One is that a Storage Account in Azure provides 20000 IOPS and every disk in the Standard Tier 500 IOPS, and Azure support suggests not having more than 40 disks per Storage Account. Also you may want to have your disks linked (go to Azure, Cloud Services, select a Cloud Service and you can see the "Linked Resources" tab; there you can link storage accounts to the Cloud Service) to the same Cloud Service as their VMs. The problem is that if you have an Azure VM and you try to "attach an empty disk", you will realize that the disk will be created at the default Storage Account of the Subscription and there is no option to change this.
     
    Here is a PowerShell command that creates a VHD at a specified Storage Account, creates a Disk and attaches it to a VM:
     
    Get-AzureVM "servicename -Name "vmname" | Add-AzureDataDisk -CreateNew -DiskSizeInGB XXX -DiskLabel "diskname" -MediaLocation "https://storageaccountname.blob.core.windows.net/vhds/vhdname.vhd"-LUN X | Update-AzureVM
     
    Some more info on this command:
     
    First of all you need to connect to your Azure Subscription, you can follow this Post on how to do it.
    Then create a Storage Account using the GUI or PowerShell, here is the Microsoft’s link http://azure.microsoft.com/en-us/documentation/articles/storage-create-storage-account/
    Then you need to list the disks that are already connected to your VM in order to view the LUN number that you will use. The OS disk is not listed on this command. The first data disk consumes the LUN 0, the second the LUN 1 and so on. The command is:
     
    Get-AzureVM -ServiceName "servicename" -Name "vmname" | Get-AzureDataDisk
     
    source: http://www.e-apostolidis.gr/microsoft/create-a-disk-in-specific-storage-account-and-attach-it-to-a-vm-in-azure/
  11. proximagr
    Create an Ultra High Available on-prem <-> Azure VPN Connection
In this post we will see how to make a highly available connection between our on-premises network and Azure. This way we will have an Active-Active Dual-Redundancy VPN Connection.
The idea behind this is that we have a router/firewall cluster, connected to two ISPs, and we want to also have a VPN connection with Azure using both ISPs actively. I call this end-to-end highly available connectivity between our on-premises infrastructure and Azure. Strictly speaking, the active-active dual-redundant setup needs two different on-premises VPN devices, but we can accomplish almost the same functionality with one device and two different interfaces with two different ISPs.

The requirement for this topology, besides the router/firewall cluster and the two ISPs, is that the Azure VPN Gateway must be a Standard or HighPerformance SKU. The Basic SKU does not support Active-Active mode.
As you can see in the above diagram, the Active-Active VPN Gateway creates two active VPN nodes. Each node connects to each on-premises network interface in a mesh topology. All network traffic is distributed through all the connections. In order to accomplish this connectivity we also need to enable BGP on both the on-premises device and the Azure VPN Gateway, with different ASNs. Let's lab it:
Create a Virtual Network Gateway, VPN, Route Based and SKU VpnGw1 or larger.
Enable active-active mode; this will create two nodes, and give the names of the two Public IPs.
Check the Configure BGP ASN option and change the default ASN, I used 65510.
Wait a lot… more than the typical 45 minutes, a lot more…
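The gateway can also be created with PowerShell; a minimal sketch with the AzureRm module, assuming an existing VNET that already has a GatewaySubnet (all names are placeholders):

$rg  = "MyRG"
$loc = "West Europe"

# Existing VNET with a GatewaySubnet
$vnet   = Get-AzureRmVirtualNetwork -Name "MyVNET" -ResourceGroupName $rg
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

# Two public IPs, one per active gateway instance
$pip1 = New-AzureRmPublicIpAddress -Name "VpnGwPip1" -ResourceGroupName $rg -Location $loc -AllocationMethod Dynamic
$pip2 = New-AzureRmPublicIpAddress -Name "VpnGwPip2" -ResourceGroupName $rg -Location $loc -AllocationMethod Dynamic

$ipconf1 = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipconf1" -SubnetId $subnet.Id -PublicIpAddressId $pip1.Id
$ipconf2 = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipconf2" -SubnetId $subnet.Id -PublicIpAddressId $pip2.Id

# Route-based VpnGw1 gateway in active-active mode, BGP ASN 65510
New-AzureRmVirtualNetworkGateway -Name "MyVpnGateway" -ResourceGroupName $rg -Location $loc `
    -IpConfigurations $ipconf1, $ipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 `
    -EnableActiveActiveFeature -Asn 65510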

When the gateway is created you will see that the public IP address is called "First public IP address". If you click the "see more" link you will see the second IP too.

You can see both IPs from the Properties page too.

Second, we need to create two Local Network Gateways to represent the two interfaces of our on-premises device. Both must be created with the same ASN. This ASN must be different from the Gateway's ASN, and it must be configured in the VPN connection configuration of the local device.
    Now, create the connection

    And remember to enable BGP at the Connection’s Configuration

As soon as the local device is configured, both connections become connected.

From PowerShell we can see both local IPs of the two nodes of the Azure VPN Gateway.
    Test and Troubleshooting
Currently the only way to see the connections between the Azure Gateway nodes and the local device's interfaces is the below PowerShell command:
Get-AzureRmVirtualNetworkGatewayBgpPeerStatus -VirtualNetworkGatewayName "gatewayname" -ResourceGroupName "resourcegroupname"

Every time you run this command you get an answer from one of the two nodes at random. In the above screenshot, first is one node and second is the other.
The first node's peer, 192.168.xx.9, shows that it is connected to the 10.xx.xx.2 local network peer and connecting to the second peer, 10.xx.xx.1.
The second node's peer, 192.168.xx.8, shows that it is connected to the 10.xx.xx.1 local network peer and connecting to the second peer, 10.xx.xx.2.

The test I performed was to unplug one interface from the local device. The Azure gateway's first node then showed both peers as Connecting, and the second node stayed the same, connecting to .2 and connected to .1. During this test I lost a single ping.
After that I plugged the cable back in, waited less than a minute and unplugged the second cable. Now the first node still showed disconnected, but it connected to the .2 local IP and kept connecting to .1. With this test I lost only one ping. I also realized that it is random which node's private IP will connect with which local device private IP. Both of the Azure Gateway's IPs, 192.168.x.8 & .9, can connect with either of the local device's IPs, 10.x.x.1 & .2, and this is the magic of the Active-Active Dual Redundancy VPN connection.
  13. proximagr
    Create Azure File Shares at your ARM template using PowerShell
Using Azure Resource Manager template deployment, you can create a Storage account but you cannot create File Shares. Azure File Shares can be created using the Azure Portal, Azure PowerShell or the Azure CLI.
The idea is to run a PowerShell script that will create the File Shares. This script will be invoked inside the ARM Template. In order to use a PowerShell script from a template, the script must be called from a URL. A good way to provide this is using a Git repository. One major thing to consider is that the Storage Account key must be provided to the PowerShell script securely, since the PowerShell script is at a public URL.
The PowerShell script will run inside a Virtual Machine and we will use a CustomScriptExtension extension to run it. To use this, add a resources section to the Virtual Machine resource of the JSON file.
The Custom Script Extension is located at the Virtual Machine resource. Let's assume that the last part of the Virtual Machine resource is the "diagnosticsProfile", so after the closure of the "diagnosticsProfile" we can add the "resources" section. Inside the "resources", add the "extensions" resource that will add the "CustomScriptExtension", like below.
The Template Part
    This will be the addition at the Virtual Machine resource:
     
     
     
    "diagnosticsProfile": {
    "bootDiagnostics": {
    "enabled": true,
    "storageUri": "[concat(reference(concat('Microsoft.Storage/storageAccounts/', variables('diagnosticStorageAccountName')), '2016-01-01').primaryEndpoints.blob)]"
    }
    }
    },
    "resources": [
    {
    "name": "AzureFileShares",
    "type": "extensions",
    "location": "[variables('location')]",
    "apiVersion": "2016-03-30",
    "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('VMName'))]",
    "[variables('AzureFilesStorageId')]"
    ],
    "tags": {
    "displayName": "AzureFileShares"
    },
    "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "autoUpgradeMinorVersion": true,
    "settings": {
    "fileUris": [
    "https://raw.githubusercontent.com/######/#####/master/azurefiles.ps1"
    ]
    },
    "protectedSettings": {
    "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ','azurefiles.ps1 -SAName ',parameters('AzureFilesStorageName'),' -SAKey ', listKeys(resourceId(variables('AzureFilesStorageAccountResourceGroup'),'Microsoft.Storage/storageAccounts', parameters('AzureFilesStorageName')), '2015-06-15').key1)]"
    }
    }
    }
    ]
    },
     
The extension must depend on the Virtual Machine that will run the script and on the Storage Account that will be used for the file shares.
At the custom script properties add the public RAW URL of the PowerShell script.
Next, let's see the Storage Account key and execution part. In the commandToExecute section, we provide a variable that will pass the Storage Account key & name into the script for execution. The variable will get the Storage Account key from the Storage Account using the permissions of the account running the Template Deployment.
Of course, to make the template more flexible I have added a variable for the Resource Group and a parameter for the AzureFilesStorageName, so the template will ask for the Storage Account name in the parameters.
The PowerShell
The PowerShell script is tested on a Windows Server 2016 VM. You can find it below:
     
Param (
    [Parameter()]
    [string]$SAKey,
    [string]$SAName
)
# Install the prerequisites and the Azure module
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
Install-Module Azure -Confirm:$False
Import-Module Azure
# Create the file share on the storage account passed in from the template
$storageContext = New-AzureStorageContext -StorageAccountName $SAName -StorageAccountKey $SAKey
$storageContext | New-AzureStorageShare -Name #####
     
  14. proximagr
Open the Office 365 Exchange Administration Console and go to Recipients > Migration > More > Migration endpoints and click on the plus sign to add a new endpoint.
    <p><a href="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme1.png"><imgclass="alignnone size-full wp-image-1002" src="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme1.png" alt="cme1" width="867" height="275" srcset="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme1.png 867w, http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme1-300x95.png 300w, http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme1-768x244.png 768w, http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme1-660x209.png 660w" sizes="(max-width: 867px) 100vw, 867px" /></a></p>
Select the type of migration endpoint (Outlook Anywhere) and enter the details requested:
An email address that will be migrated – this is used to test mailbox access during configuration
An account with privileges – usually a Domain Administrator, but it can be another user, in which case you must assign permissions as specified here
The privileged account you specify will be used to autodiscover the connection settings and test access to the mailbox specified above.
Click next and verify that the correct details have been populated in the next dialogue box:
    <p><a href="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme2.png"><imgclass="alignnone size-full wp-image-1003" src="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme2.png" alt="cme2" width="335" height="325" srcset="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme2.png 335w, http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme2-300x291.png 300w" sizes="(max-width: 335px) 100vw, 335px" /></a></p>
Now that the endpoint has been tested you just need to define values for the number of concurrent migrations and supply a descriptive name for the endpoint.
    <p><a href="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme3.png"><imgclass="alignnone size-full wp-image-1004" src="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme3.png" alt="cme3" width="352" height="292" srcset="http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme3.png 352w, http://www.e-apostolidis.gr/wp-content/uploads/2016/05/cme3-300x249.png 300w" sizes="(max-width: 352px) 100vw, 352px" /></a></p>
    <p> </p>
    <p><a class="a2a_button_email" href="http://www.addtoany.com/add_to/email?linkurl=http%3A%2F%2Fwww.e-apostolidis.gr%2Fmicrosoft%2Foffice-365%2Fcreate-migration-endpoint-cutover-staging-migration%2F&linkname=Create%20migration%20endpoint%20%7C%20%28Cutover%20%26%20Staging%20Migration%29"title="Email" rel="nofollow" target="_blank"><img src="http://www.e-apostolidis.gr/wp-content/plugins/add-to-any/icons/email.png" width="16" height="16" alt="Email"/></a><a class="a2a_button_print" href="http://www.addtoany.com/add_to/print?linkurl=http%3A%2F%2Fwww.e-apostolidis.gr%2Fmicrosoft%2Foffice-365%2Fcreate-migration-endpoint-cutover-staging-migration%2F&linkname=Create%20migration%20endpoint%20%7C%20%28Cutover%20%26%20Staging%20Migration%29" title="Print" rel="nofollow" target="_blank"><img src="http://www.e-apostolidis.gr/wp-content/plugins/add-to-any/icons/print.png" width="16" height="16" alt="Print"/></a><a class="a2a_dd a2a_target addtoany_share_save" href="https://www.addtoany.com/share#url=http%3A%2F%2Fwww.e-apostolidis.gr%2Fmicrosoft%2Foffice-365%2Fcreate-migration-endpoint-cutover-staging-migration%2F&title=Create%20migration%20endpoint%20%7C%20%28Cutover%20%26%20Staging%20Migration%29" id="wpa2a_2"><img src="http://www.e-apostolidis.gr/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a></p><p>The post <a rel="nofollow" href="http://www.e-apostolidis.gr/microsoft/office-365/create-migration-endpoint-cutover-staging-migration/">Create migration endpoint | (Cutover & Staging Migration)</a> appeared first on <a rel="nofollow" href="http://www.e-apostolidis.gr">Proxima's IT Corner</a>.</p>


    <a href="http://www.e-apostolidis.gr/microsoft/office-365/create-migration-endpoint-cutover-staging-migration/"class='bbc_url' rel='nofollow external'>Source</a>
  15. proximagr
    Custom pfSense on Azure Rm | a complete guide
    A complete guide on how to create a pfSense VM on a local Hyper-V server, prepare it for Microsoft Azure, upload the disk to Azure and create a multi-NIC VM.
    Download the latest image from https://www.pfsense.org/download/

Open Hyper-V Manager and create a Generation 1 VM. I added 4096 MB RAM, 2 cores, a VHD disk, an extra NIC (for the second interface) and selected the downloaded ISO. (Create a fixed VHD, as Azure supports only fixed VHDs for custom VMs.)

    Start the VM and at the first screen press enter.

    At all screens I accepted the default settings. Finally at the reboot prompt remove the installation ISO.
    There is no need to setup VLANs, select the second interface for WAN and the first for LAN.


Once the pfSense is ready, press 2 and change the LAN (hn0) interface IP to one in your network. Then select option 14 to enable SSH.

Now we can log in with PuTTY, with username admin and password pfsense, and press 8 for Shell access.

The first thing is to update the packages, running:
pkg upgrade
Python
Then install Python, as it is a requirement for the Azure Linux Agent.
Search for Python packages running:
pkg search python

Install the latest Python package, the setup tools and bash:
pkg install -y python27-2.7.14
pkg search setuptools
pkg install py27-setuptools-36.2.2
ln -s /usr/local/bin/python /usr/local/bin/python2.7
pkg install -y bash
Azure Linux Agent
    ref: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/classic/freebsd-create-upload-vhd
pkg install git
git clone https://github.com/Azure/WALinuxAgent.git
cd WALinuxAgent
git tag
git checkout WALinuxAgent-2.1.1
git checkout WALinuxAgent-2.0.16
python setup.py install
ln -sf /usr/local/sbin/waagent /usr/sbin/waagent
    check the agent is running:
    waagent -Version

One final step before uploading the VHD to Azure is to set the LAN interface to DHCP.
This can be done from the web interface: go to https://lanaddress, log in using admin / pfsense, then go to Interfaces / LAN and select DHCP as the IPv4 configuration.

Now, shut down the pfSense and upload the VHD to Azure Storage.
I use Storage Explorer (https://azure.microsoft.com/en-us/features/storage-explorer/), a free and powerful tool to manage Azure Storage. Log in to your Azure Account and press Upload. Select "Page blob" as the Blob type.

After the upload is completed we can create a multi-NIC VM. This cannot be accomplished from the GUI; we will create it using PowerShell.
$ResourceGroupName = "******"
$pfresourcegroup = "*******"
$StorageAccountName = "******"
$vnetname = "*****"
$NSGname = "******"
$location = "West Europe"
$vnet = Get-AzureRmVirtualNetwork -Name $vnetname -ResourceGroupName $ResourceGroupName
$backendSubnet = Get-AzureRMVirtualNetworkSubnetConfig -Name default -VirtualNetwork $vnet
$vmName = "pfsense"
$vmSize = "Standard_F1"
$pubip = New-AzureRmPublicIpAddress -Name "PFPubIP" -ResourceGroupName $pfresourcegroup -Location $location -AllocationMethod Dynamic
$nic1 = New-AzureRmNetworkInterface -Name "EXPFN1NIC1" -ResourceGroupName $pfresourcegroup -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pubip.Id
$nic2 = New-AzureRmNetworkInterface -Name "EXPFN1NIC2" -ResourceGroupName $pfresourcegroup -Location $location -SubnetId $vnet.Subnets[0].Id
$VM = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize
$VM | Set-AzureRmVMOSDisk -VhdUri https://********.blob.core.windows.net/vhds/pfsensefix.vhd -Name pfsenseos -CreateOption attach -Linux -Caching ReadWrite
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic1.Id
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic2.Id
$vm.NetworkProfile.NetworkInterfaces.Item(0).Primary = $true
New-AzureRMVM -ResourceGroupName $pfresourcegroup -Location $location -VM $vm -Verbose
    Once the VM is created, go to the VM’s blade and scroll down to “Boot diagnostics”. There you can see a screenshot of the VM’s monitor.

    Then go to the Networking section and SSH to the Public IP.

We can also log in to the Web Interface of the pfSense.


In my case I have added both NICs to the same Subnet, but in a production environment add the LAN interface to the backend subnet and the WAN interface to the DMZ (public) subnet.
Of course more NICs can be added to the VM, one for each Subnet in our environment.
Route external traffic through the pfSense
    We cannot change the gateway at an Azure VM, but we can use routing tables to route the traffic through the pfSense.
    From the Azure Portal, select New and search for Route table.

    We need to configure two things. One is to associate the Route table to a Subnet and the second is to create a Route.

Open the "Route table" and click "Routes". Press "Add route"; in order to route all outbound traffic through the pfSense, add "0.0.0.0/0" as the Address prefix, "Virtual appliance" as the Next hop type, and the IP address of the pfSense's LAN interface as the Next hop address.

    Then go to the “Subnets” and associate the required subnets.
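The route table can also be created and associated with PowerShell; a minimal sketch with the AzureRm module (resource names, the subnet address prefix and the pfSense LAN IP are placeholders):

# Create the route table and a default route pointing at the pfSense LAN interface IP
$rt = New-AzureRmRouteTable -Name "pfSenseRoutes" -ResourceGroupName "MyRG" -Location "West Europe"
Add-AzureRmRouteConfig -Name "DefaultViaPfSense" -AddressPrefix "0.0.0.0/0" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.0.1.4" -RouteTable $rt | Out-Null
Set-AzureRmRouteTable -RouteTable $rt

# Associate the route table with the backend subnet
$vnet = Get-AzureRmVirtualNetwork -Name "MyVNET" -ResourceGroupName "MyRG"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "backend" `
    -AddressPrefix "10.0.2.0/24" -RouteTable $rt
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet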

     
    The post Custom pfSense on Azure Rm | a complete guide appeared first on Apostolidis IT Corner.


    Source
  16. proximagr
This post is about Exchange/Office 365 Hybrid Deployments, when for some reason we need to completely delete a user account and mailbox from Office 365 in order to re-sync it.
     
First you need to exclude the user from DirSync:
Open the "Synchronization Service Manager" (it can be found at "C:\Program Files\WindowsAzureActiveDirectorySync\SYNCBUS\Synchronization Service\UIShell\miisclient.exe")
Navigate to "Metaverse Search" and click on "Add Clause"
Be sure that you choose Displayname as the Attribute, and then configure your search
Double click an entry, and open the Connectors tab
Activate the line with the "Active Directory Connector" Management Agent and click on "Disconnect…"
In the disconnect object confirmation, choose "Disconnector (Default)" to remove the connector. Explicit Disconnector will lock the object so that it does not become a connector again.

You can then rerun your search, and the specific account will not be shown anymore. After a sync, the object will also be removed from the Azure Directory.
     
Then you need to remove the user object from the Office 365 portal using PowerShell:
Open the "Windows Azure Active Directory Module" for PowerShell
$msolcred = get-credential
connect-msolservice -credential $msolcred
Get-MsolUser -ReturnDeletedUsers | FT UserP*,ObjectId
Remove-MsolUser -ObjectId abc1234-12abc-123a-ab12-a12b3c4d5f6gah -RemoveFromRecycleBin -Force
Get-MsolUser -ReturnDeletedUsers | Remove-MsolUser -RemoveFromRecycleBin -Force

Then at the next scheduled DirSync sync the user will be recreated. You can also force DirSync to create the user faster.
     

Source: http://www.e-apostolidis.gr/microsoft/delete-user-from-office-365-with-dirsync/
  17. proximagr
Good evening to the community. One more showdown with the Exchange beast. Here is the setup: an Exchange Server 2010 installation, I have configured Hybrid with Office 365 and DirSync, all good. Users, contacts, distribution groups and mail contacts have gone up, I have also migrated the Mail-Enabled Public Folders, all good. However, in the Distribution Group members, while all the members have synchronized, the Mail-Enabled Public Folders have not come across. Finally, after a lot of searching and various complicated PowerShell scripts, I decided to try something simple and nice, and in fact through the GUI:
I created a Mail Contact, changed its TargetAddress to be the same as the Mail-Enabled Public Folder's, added it as a member of the Distribution Group, forced a DirSync and voilà, we have mail delivery to the Public Folder!!!!
     
Now Step-By-Step:
     
Let's say that the Mail-Enabled PF is mypublicfolder and the Email Address is [email protected]
     
1. Open the Exchange Management Console
2. Go to "Recipient Configuration" / "Mail Contact" / "New Mail Contact"

3. Select "New Contact" & click Next
4. Give it a name that resembles the Public Folder; I added a -O365 at the end:
    Name: mypublicfolder-O365
    Alias: mypublicfolder-O365
    External e-mail address: [email protected]

5. Click Next, New & finally Finish to create the Mail Contact
6. Open Active Directory Users & Computers
7. From the "View" menu, check "Advanced Features"
8. Go to the container where the Mail Contact is located, select it and double-click to open its properties
9. Go to the "Attribute Editor" tab and find "TargetAddress"

10. Click Edit and change the address to the one of the Mail-Enabled Public Folder; in this specific case we set: SMTP:[email protected]

11. Click OK, Apply and OK to close the properties.
12. Go back to the Exchange Management Console and go to "Recipient Configuration" / "Distribution Group"

13. Find the Distribution Group and add the Mail Contact as a member
14. Wait for DirSync to do a full sync, or force one, and that's it.
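The same steps can also be scripted instead of clicking through the GUI; a minimal sketch using the Exchange 2010 and Active Directory cmdlets, with the example names from above (the distribution group name is a placeholder):

# Create the on-premises mail contact (Exchange Management Shell)
New-MailContact -Name "mypublicfolder-O365" -Alias "mypublicfolder-O365" `
    -ExternalEmailAddress "[email protected]"

# Overwrite the targetAddress attribute with the Mail-Enabled Public Folder's address
Import-Module ActiveDirectory
Get-ADObject -LDAPFilter "(mailNickname=mypublicfolder-O365)" |
    Set-ADObject -Replace @{ targetAddress = "SMTP:[email protected]" }

# Add the contact as a member of the distribution group, then force a DirSync
Add-DistributionGroupMember -Identity "MyDistributionGroup" -Member "mypublicfolder-O365"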
  18. proximagr
This post contains my notes from various Exchange 2007 & 2010 migrations to an Office 365 Hybrid Deployment. For Exchange 2013 it is almost the same, but quite a bit easier!
As I said, these are my notes together with various additions from several blogs; something like a checklist, not a tutorial or a guide.
     
1. What is needed:
    2 x ADFS NLB (for identity federation)
    2 x ADFS Proxy Servers NLB (for identity federation)
1 x domain member server for DirSync
    1 x SQL 2008 R2 server that will store the DirSync database
    1 x Exchange 2010 Service Pack 2 + based hybrid deployment server (for rich coexistence with Exchange Online)
    Access to public DNS of Domain (company.com)
    3rd Party Certificates (if you have on old exchange 2007 a wildcard export and import to 2010)
    Domain User for ADFS service account
    Configure UPN for company.com domain
     
2. The steps in general:
    1. Add Domain (company.com) to Office 365
    2. Add TXT record to DNS for verification
3. Specify domain services (Exchange, Lync, Sharepoint)
     
    4. ADFS (&/or Farm)
    Add IIS Role, Configure NLB sts.company.local (add hosts, add A record, enable MAC spoofing), add Certificate (SelfSigned or 3rd Party) & bind default site to 443
    Setup ADFS Federation server
    AD FS 2.0 Federation Server Configuration Wizard
    Domain User for ADFS service account
     
    5. ADFS Proxy (&/or Farm)
    Add IIS Role, Configure NLB sts.company.com (add hosts, add A record, enable MAC spoofing), add Certificate (SelfSigned or 3rd Party) & bind default site to 443
    Add host A to Public DNS (sts.company.com)
    Add host record to proxy servers for sts.company.local local IP (ADFS NLB Address)
    Setup ADFS Federation server proxy
    AD FS 2.0 Federation Server Configuration Wizard
     
    6. Convert Domain to a Federated Domain
    On Office 365 portal then downloads then step 3 “Set up and configure your office desktop apps”
    de-select everything (only to install MOSM for powershell)
    On office 365 portal then users then manage (SSO), install MOSM for powershell
Open MOSM and run "$Cred = Get-Credential" to add creds, then "Connect-MsolService -Credential $Cred", then "Convert-MsolDomainToFederated -DomainName "office365lab.dk"" and "Get-MsolDomain | fl"
    Configure UPN for company.com domain
    Go to login.microsoftonline.com and check SSO login
     
    7. DirSync
    o365 portal then users then set up under directory synchronization (after activate needs some hours)
o365 portal then users then set up under active directory synchronization, under step 4 download the DirSync tool
Verify DirSync:
o365 portal then users then set up under active directory synchronization, check "active directory synchronization is activated" or powershell: "Get-MsolCompanyInformation | fl DirectorySynchronizationEnabled"
    Sync:
run "Directory Sync Configuration", add creds, check "Enable Exchange hybrid deployment". If you want to select OUs, groups, users, etc. then don't check "synchronize directories now"
    Edit sync: “C:\Program Files\Microsoft Online Directory Sync\SYNCBUS\Synchronization Service\UIShell” and run “miisclient” guide (http://blogs.msdn.com/b/denotation/archive/2012/11/21/installing-and-configure-dirsync-with-ou-level-filtering-for-office365.aspx)
    Force Sync:
With PowerShell go to the "C:\Program Files\Microsoft Online Directory Sync" folder and from there run the "DirSyncConfigShell.psc1" script, and in the new window run "Start-OnlineCoexistenceSync"
     
    8. Hybrid Deployment
    Configure NLB on Exchange 2010 HUB/CAS
    ADD 3rd party certificate (if you have on old exchange 2007 a wildcard export and import to 2010)
    assign services SMTP & IIS
    Configure URLS
OWA
Set-OwaVirtualDirectory -Identity "EX03\OWA (Default Web Site)" -InternalUrl https://hybrid.office365lab.dk/OWA -ExternalUrl https://hybrid.office365lab.dk/OWA
Set-OwaVirtualDirectory -Identity "EX04\OWA (Default Web Site)" -InternalUrl https://hybrid.office365lab.dk/OWA -ExternalUrl https://hybrid.office365lab.dk/OWA
ECP
Set-EcpVirtualDirectory -Identity "EX03\ECP (Default Web Site)" -InternalUrl https://hybrid.office365lab.dk/ECP -ExternalUrl https://hybrid.office365lab.dk/ECP
Set-EcpVirtualDirectory -Identity "EX04\ECP (Default Web Site)" -InternalUrl https://hybrid.office365lab.dk/ECP -ExternalUrl https://hybrid.office365lab.dk/ECP
ActiveSync
Set-ActiveSyncVirtualDirectory -Identity "EX03\Microsoft-Server-ActiveSync (Default Web Site)" -InternalUrl https://hybrid.office365lab.dk/Microsoft-Server-ActiveSync -ExternalUrl https://hybrid.office365lab.dk/Microsoft-Server-ActiveSync
Set-ActiveSyncVirtualDirectory -Identity "EX04\Microsoft-Server-ActiveSync (Default Web Site)" -InternalUrl https://hybrid.office365lab.dk/Microsoft-Server-ActiveSync -ExternalUrl https://hybrid.office365lab.dk/Microsoft-Server-ActiveSync
OAB
Set-OabVirtualDirectory -Identity "EX03\OAB (Default Web Site)" -InternalUrl https://hybrid.office365lab.dk/OAB -ExternalUrl https://hybrid.office365lab.dk/OAB
Set-OabVirtualDirectory -Identity "EX04\OAB (Default Web Site)" -InternalUrl https://hybrid.office365lab.dk/OAB -ExternalUrl https://hybrid.office365lab.dk/OAB
EWS
Set-WebServicesVirtualDirectory -Identity "EX03\EWS (Default Web Site)" -InternalUrl https://hybrid.office365lab.dk/EWS/Exchange.asmx -ExternalUrl https://hybrid.office365lab.dk/EWS/Exchange.asmx
Set-WebServicesVirtualDirectory -Identity "EX04\EWS (Default Web Site)" -InternalUrl https://hybrid.office365lab.dk/EWS/Exchange.asmx -ExternalUrl https://hybrid.office365lab.dk/EWS/Exchange.asmx
Autodiscover
Set-ClientAccessServer -Identity EX03 -AutoDiscoverServiceInternalUri https://hybrid.office365lab.dk/Autodiscover/Autodiscover.xml
Set-ClientAccessServer -Identity EX04 -AutoDiscoverServiceInternalUri https://hybrid.office365lab.dk/Autodiscover/Autodiscover.xml
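The same URL settings can be applied in a single pass; a minimal sketch, assuming the two CAS servers EX03 and EX04 and the hybrid.office365lab.dk namespace used above:
$fqdn = "hybrid.office365lab.dk"
foreach ($srv in "EX03","EX04") {
    Set-OwaVirtualDirectory -Identity "$srv\OWA (Default Web Site)" -InternalUrl "https://$fqdn/OWA" -ExternalUrl "https://$fqdn/OWA"
    Set-EcpVirtualDirectory -Identity "$srv\ECP (Default Web Site)" -InternalUrl "https://$fqdn/ECP" -ExternalUrl "https://$fqdn/ECP"
    Set-ActiveSyncVirtualDirectory -Identity "$srv\Microsoft-Server-ActiveSync (Default Web Site)" -InternalUrl "https://$fqdn/Microsoft-Server-ActiveSync" -ExternalUrl "https://$fqdn/Microsoft-Server-ActiveSync"
    Set-OabVirtualDirectory -Identity "$srv\OAB (Default Web Site)" -InternalUrl "https://$fqdn/OAB" -ExternalUrl "https://$fqdn/OAB"
    Set-WebServicesVirtualDirectory -Identity "$srv\EWS (Default Web Site)" -InternalUrl "https://$fqdn/EWS/Exchange.asmx" -ExternalUrl "https://$fqdn/EWS/Exchange.asmx"
    Set-ClientAccessServer -Identity $srv -AutoDiscoverServiceInternalUri "https://$fqdn/Autodiscover/Autodiscover.xml"
}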
     
    9. Configure DNS to Exchange 2010
Configure an SPF record (http://www.microsoft.com/mscorp/safety/content/technologies/senderid/wizard/)
Add to public DNS: v=spf1 ip4:192.168.6.220 ip4:192.168.6.221 include:outlook.com -all
o365 portal then Domains then SMTP domain properties; under DNS management create the SPF TXT record (name @, value v=spf1 ip4:192.168.6.220 ip4:192.168.6.221 include:outlook.com -all)
     
    10. Add o365 Tenant to EMC
From the EMC add the Exchange forest
Connect to Exchange Online with PowerShell: "$TenantCreds = Get-Credential" then "$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $TenantCreds -Authentication Basic -AllowRedirection" then "Import-PSSession $Session"; to test, run "Get-Mailbox | Get-MailboxStatistics | ft -a" or "Get-AcceptedDomain"
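The same Exchange Online connection laid out as a script for readability; nothing new beyond the commands above, plus session cleanup:
$TenantCreds = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $TenantCreds -Authentication Basic -AllowRedirection
Import-PSSession $Session
Get-AcceptedDomain                              # quick test
Get-Mailbox | Get-MailboxStatistics | ft -a     # quick test
Remove-PSSession $Session                       # clean up when done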
     
    11. Configuring Exchange 2010 Hybrid
EMC – on premises – "Organization Configuration" – "Hybrid Configuration" – "New Hybrid Configuration"
    Add TXT record to public DNS
    Add transport certificate (3rd party)
     
12. Now we can use the EMS cmdlet Get-HybridConfiguration to verify that everything is OK.
Checklist (a consolidated EMS verification sketch follows after the checklist):
    EMC on-premises
A federation trust with the Microsoft Federation Gateway (MFG) has been established for the specified domain | On-premises Org Configuration – federation trust
An organization relationship has been established with the Exchange Online organization in Office 365 | On-premises Org Configuration – organization relationships
"tenant_name.mail.onmicrosoft.com" has been added as an accepted domain | on-premises – Org Conf – Hub – accepted domains
"tenant_name.mail.onmicrosoft.com" and "office365lab.dk" have been added as remote domains | on-premises – Org Conf – Hub – remote domains
The default e-mail address policy has been updated so that it stamps a secondary proxy address (alias@tenant_name.mail.onmicrosoft.com) on mailbox user objects | on-premises – Org Conf – Hub – e-mail address policies
The HCW also creates a receive connector on each of the hybrid servers | on-premises – Server Conf – HUB – receive connectors
The HCW will create a send connector that routes all e-mail messages destined for "tenant_name.mail.onmicrosoft.com" to Exchange Online in Office 365 | on-premises – Org Conf – Hub – send connectors
    EMS: Get-OrganizationRelationship | fl
    EMC online
    Org conf – HUB – remote domains
    Org conf – Organization Relationships
FOPE (ForeFront), accessed from ECP – Mail Control
Check the two connectors (inbound & outbound)
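As referenced above, a minimal EMS verification sketch covering the checklist (run in the on-premises Exchange Management Shell; the tenant-specific names will differ):
Get-HybridConfiguration
Get-FederationTrust | fl Name,TokenIssuerUri
Get-OrganizationRelationship | fl Name,DomainNames,TargetApplicationUri
Get-AcceptedDomain | ft Name,DomainName,DomainType
Get-RemoteDomain | ft Name,DomainName
Get-ReceiveConnector | ft Name,Bindings
Get-SendConnector | ft Name,AddressSpaces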
     
Move mailbox = New Remote Move Request | the moved mailbox will appear under Mail Contact
New mailbox online: Mail Contact – New Remote Mailbox
     
    13. After move
    Generally, Windows Phone 8 and iOS clients will be able to automatically update the ActiveSync profile, while Android based clients must have their ActiveSync profile recreated.
Outlook will need to be closed (an administrator-made-a-change message appears), re-opened, and the credentials re-entered
     
    14. Decommission
Move all mailboxes to Exchange Online, point all on-premise line-of-business applications, network devices and so on to Exchange Online, and configure mail flow to go directly in and out of Exchange Online. In this scenario you decommission all on-premise Exchange servers, but still use DirSync and ADFS for federation. With DirSync, the on-premise Active Directory is the source of authority, which means you should provision users in the on-premise Active Directory and then have them synchronized to Office 365/Exchange Online. In this case it is usually a good idea to keep a single Exchange 2010 server on-premise, so you can use the Exchange 2010 EMC or cmdlets for provisioning. Alternatively, you remove all Exchange 2010 servers and have an identity solution such as FIM provision the on-premise Active Directory objects with the required mail attributes, so that Exchange Online treats them as mail-enabled users. Bear in mind that with DirSync enabled, most user/mailbox attributes in Exchange Online are read-only, meaning you must write to them via the on-premise Active Directory user/group object.
     
    source: http://www.e-apostolidis.gr/everything/exchange-20072010-hybrid-deployment-migrating-to-office-365/
  19. proximagr
Good evening to the community. I want to share with you the problems I ran into today in a Hybrid Configuration with Exchange 2010 SP3 UR6. Nothing tragic, and nothing we haven't dealt with before, but I believe the more we share, the more we learn.
     
I'll skip the initial steps: domain verification, DirSync, certificate request, Outlook Anywhere enabled, all the virtual directories looking fine, telnet on 443 fine, OWA fine, everything generally good, and I reach the Hybrid Wizard. Creation and first run to build the private certificate, all good. Now on to the update step to enter credentials, IP, FQDN and so on. I started, full of joy, to finish the Hybrid Wizard. Not so fast.
     
So here we go: of course it blew up, and the first reason was "Execution of the Get-FederationInformation cmdlet had thrown an exception", in other words "good luck figuring that one out".
     
Plenty of articles, all very nice, mostly boiling down to one simple thing: run all the tests in the Remote Connectivity Analyzer. Many thanks to Microsoft for this multi-tool.
     
I remembered their admin saying "we use a VPN to read our mail in Outlook from home", so I started with the Outlook Connectivity test. I should have suspected it...
     
    The HTTP authentication test failed.
    Additional Details
    An HTTP 500 response was returned from Unknown
     
The URL https://mail.MyDomain.com/rpc/rpcproxy.dll was returning that 500. After a lot of back and forth I ended up reinstalling RPC over HTTP with the following steps:
1. Disabled Outlook Anywhere
2. Uninstalled the RPC proxy (on 2012 & R2: Uninstall-WindowsFeature rpc-over-http-proxy)
3. Reboot (of course)
4. Installed the RPC proxy (Install-WindowsFeature rpc-over-http-proxy)
5. Enabled Outlook Anywhere
6. Restarted the Microsoft Exchange Active Directory Topology service
     
Of course that didn't solve the problem... Luckily I found this article https://support.microsoft.com/en-us/kb/2015129 and manually added the "runtimeVersion v2.0" entry to ApplicationHost.config, because aspnet_regiis.exe doesn't work on 2012 and I couldn't find anything better. As if by magic, it worked on the first try!
     
How nice, how lovely, tra-la-la; I run the Hybrid Wizard and... exactly the same error!
     
Back to the connectivity analyzer; this time I ran the Autodiscover test. Fine, all good. I ran the EWS test too, all good. Eventually I decided to reset Autodiscover, as several people with the Get-FederationInformation problem had suggested. In short, the steps are these:
     
•Reset the Autodiscover Virtual Directory
•Reset WSSecurityAuthentication to $true (see the sketch below)
•IIS reset, and then Get-FederationInformation worked!
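For the WSSecurityAuthentication bullet, a minimal sketch assuming the default virtual directories on the server:
# Minimal sketch: re-enable WS-Security authentication on Autodiscover and EWS, then reset IIS
Get-AutodiscoverVirtualDirectory | Set-AutodiscoverVirtualDirectory -WSSecurityAuthentication $true
Get-WebServicesVirtualDirectory | Set-WebServicesVirtualDirectory -WSSecurityAuthentication $true
iisreset /noforce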
     
Fine, I say, let's reset the Autodiscover virtual directory from the GUI http://technet.microsoft.com/en-us/library/ff629372.aspx. HAHAHAHA, Exchange was laughing at me. The moment you click "reset virtual directories" in the GUI, the Exchange Management Console crashes (Exchange 2010 SP3 UR6). Just like that. So the job was done with PowerShell and all went well; I ran the following, since everything was at the defaults:
     
    Get-AutodiscoverVirtualDirectory | Remove-AutodiscoverVirtualDirectory
    New-AutodiscoverVirtualDirectory -Websitename "Default Web Site" -BasicAuthentication:$true -WindowsAuthentication:$true
     
After the IIS reset I rushed to run the Hybrid Wizard again! Full of joy once more, and of course it blew up! But this time with a different error; we got past Get-FederationInformation!
Our new error: "Subtask ValidateConfiguration execution failed: Configure Mail Flow". OK, I say, we have seen this before: when you have a wildcard certificate the wizard creates the connectors with a default server address, mail.domain.com; in my case it created them as mail.xxxxx.gr instead of the mailx.xxxxx.gr I wanted.
     
I go to fix them, and the check on the Office 365 outbound connector (mail flow / connectors / Hybrid Mail Flow Outbound Connector) fails at the verify step: 450 4.4.101 Proxy session setup failed on Frontend with '451 4.4.0 Primary target IP address responded with: "451 5.7.3 STARTTLS is required to send mail.
     
Hmm. I talk to their administrator to check whether the firewall does ESMTP inspection, and he tells me: "aaah, you know, mail flow goes through the Symantec gateway, both inbound and outbound..." So we nicely bypassed it, both on Exchange and on the firewall, and I fixed the connectors. All good.
     
Finally I moved a test mailbox to Office 365 and it went just fine! Great joy: it sends mail, it receives mail, all very smart. Mail flow up and down, left and right, all good. We moved a few more mailboxes and life goes on...
  20. proximagr
First we need to create a certificate request.
Open the Microsoft Exchange Management Console and navigate to Microsoft Exchange -> Server Configuration.
On the right panel press "New Exchange Certificate".
The "New Exchange Certificate" wizard will start. Enter a friendly name, just a name to remember what this certificate is about.
There is no need to check the wildcard option.
At the next page select the services that you want; in most cases select everything under "Client Access Server".
Next, add all the alternative names that you want to include in the certificate.
Fill in the Organization form and select the save path.
Finally press "New" to create the certificate request.
After this, at the Exchange Certificates window of the Exchange Management Console you will see a new item that says "Pending request".
Open the exported file with Notepad and save it with "ASCII" encoding (the original is Unicode).
Now we need to go to our domain's Active Directory Certification Authority and open an elevated command prompt.
Run the command:
certreq.exe -submit -attrib CertificateTemplate:WebServer
It will ask you to select the request file; select the ASCII-encoded file,
and then select the Certification Authority.
Finally it will produce a .cer file.
Go back to the Exchange Certificates window of the Exchange Management Console, select the "Pending certificate request" and press "Complete pending request". Select the .cer file, select the services needed (IIS, SMTP, POP, IMAP) and the wizard will create the certificate and enable it for the services.


    <a href="http://www.e-apostolidis.gr/microsoft/exchange/exchange-2010-add-local-domain-ca-certificate/"class='bbc_url' rel='nofollow external'>Source</a>
  21. proximagr
You can easily provide Full Access permissions using the GUI: just edit the mailbox you want, go to Mailbox Delegation and provide Full Access. Exchange 2013 and Exchange Online work the same way. But if you have to provide Full Access in bulk, you need PowerShell.
     
    The command for a single user is:
    Add-MailboxPermission -Identity "employee" -User "manager" -AccessRights FullAccess
with that command user "manager" will be granted Full Access permissions to the mailbox of user "employee"
     
Now let's see how the user "manager" can get Full Access to many users, let's say "all of the Sales department". There are two steps: first we query the "Sales Department" users, and then we pipeline the result to grant access to user "manager".
    example 1: Using Active Directory OU container

    get-mailbox -OrganizationalUnit domain.local/users/salesdpt | Add-MailboxPermission -User "manager" -AccessRights FullAccess
example 2: Using a txt list. As usual, create a txt file with a per-line list under the header "employee", like this:
employee
username1
username2
username3
    Save it as c:\access.txt and then run this command:
Import-CSV c:\access.txt | Foreach { Add-MailboxPermission -Identity $_.employee -User "manager" -AccessRights FullAccess }
To view the permissions, replace "Add-MailboxPermission" with "Get-MailboxPermission"
 
To remove the permissions, replace "Add-MailboxPermission" with "Remove-MailboxPermission"
     
Just a final addition: when you provide Full Access permission to a user (in my example, the "manager"), Outlook auto-maps the mailboxes the manager gains access to, so the next time he opens Outlook all of those mailboxes will be visible. You can prevent the auto-mapping by adding -AutoMapping:$false at the end of the command, like this:
    Add-MailboxPermission -Identity "employee" -User "manager" -AccessRights FullAccess -AutoMapping:$false
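If the target users are already in a group rather than an OU or a txt file, the same pipeline works; a minimal sketch, assuming a hypothetical distribution group named "Sales Department":
# Minimal sketch, hypothetical group name; grants "manager" Full Access to every member, without auto-mapping
Get-DistributionGroupMember -Identity "Sales Department" | ForEach-Object { Add-MailboxPermission -Identity $_.Identity -User "manager" -AccessRights FullAccess -AutoMapping:$false }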
    Be careful: with great power comes great responsibility!
     
    source: http://www.e-apostolidis.gr/microsoft/exchange-2013-online-grand-full-access-to-mailboxes/
  22. proximagr
The exchangeserverpro.com site has the excellent articles below:
to create the certificate request:
to complete the pending request: and to enable it:
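For reference, a minimal EMS sketch of the same three steps, using hypothetical names and paths (the linked articles cover the details):
# 1. Create the certificate request (hypothetical subject and SANs)
$req = New-ExchangeCertificate -GenerateRequest -SubjectName "cn=mail.mydomain.com" -DomainName mail.mydomain.com,autodiscover.mydomain.com -PrivateKeyExportable $true
Set-Content -Path C:\certs\mail_mydomain_com.req -Value $req
# 2. Complete the pending request with the certificate issued by the public CA
Import-ExchangeCertificate -FileData ([System.IO.File]::ReadAllBytes("C:\certs\mail_mydomain_com.cer"))
# 3. Enable the certificate for the required services
Get-ExchangeCertificate -DomainName mail.mydomain.com | Enable-ExchangeCertificate -Services IIS,SMTP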

    <a href="http://www.e-apostolidis.gr/microsoft/exchange/exchange-2013-add-public-certificate-enable/"class='bbc_url' rel='nofollow external'>Source</a>
  23. proximagr
    Exchange 2013/16 Set Virtual Directories Notes

    By Pantelis Apostolidis | December 13, 2016
     
You can find all this info on many, many blogs all over the internet; I just want to keep a note here with everything gathered in one place for convenience.
     
Outlook Anywhere
    Get-OutlookAnywhere | Select Server,ExternalHostname,Internalhostname
     
    Get-OutlookAnywhere | Set-OutlookAnywhere -ExternalHostname mail.mydomain.com -InternalHostname mail.mydomain.com -ExternalClientsRequireSsl $true -InternalClientsRequireSsl $true -DefaultAuthenticationMethod NTLM
     
    MAPI
    Get-MapiVirtualDirectory | Select Server,ExternalURL,InternalURL | fl
     
Get-MapiVirtualDirectory | Set-MapiVirtualDirectory -ExternalUrl https://mail.mydomain.com/mapi -InternalUrl https://mail.mydomain.com/mapi
     
    OWA
    Get-OwaVirtualDirectory | Select Server,ExternalURL,InternalURL | fl
     
Get-OwaVirtualDirectory | Set-OwaVirtualDirectory -ExternalUrl https://mail.mydomain.com/owa -InternalUrl https://mail.mydomain.com/owa
     
    ECP
Get-EcpVirtualDirectory | Set-EcpVirtualDirectory -ExternalUrl https://mail.mydomain.com/ecp -InternalUrl https://mail.mydomain.com/ecp
     

    ActiveSync
    Get-ActiveSyncVirtualDirectory | select server,externalurl,internalurl | fl
     
Get-ActiveSyncVirtualDirectory | Set-ActiveSyncVirtualDirectory -ExternalUrl https://mail.mydomain.com/Microsoft-Server-ActiveSync -InternalUrl https://mail.mydomain.com/Microsoft-Server-ActiveSync
     
    EWS
    Get-WebServicesVirtualDirectory | Select Server,ExternalURL,InternalURL | fl
     
Get-WebServicesVirtualDirectory | Set-WebServicesVirtualDirectory -ExternalUrl https://mail.mydomain.com/EWS/Exchange.asmx -InternalUrl https://mail.mydomain.com/EWS/Exchange.asmx
     
    OAB
    Get-OabVirtualDirectory | Select Server,ExternalURL,InternalURL | fl
     
Get-OabVirtualDirectory | Set-OabVirtualDirectory -ExternalUrl https://mail.mydomain.com/OAB -InternalUrl https://mail.mydomain.com/OAB
     
    AUTODISCOVER SCP
    Get-ClientAccessServer | Select Name,AutoDiscoverServiceInternalURI
     
    Get-ClientAccessServer | Set-ClientAccessServer -AutoDiscoverServiceInternalUri https://mail.mydomain.com/Autodiscover/Autodiscover.xml
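To review everything at a glance after making changes, a minimal sketch that dumps every client access URL (assuming it is run in the Exchange Management Shell on the server being checked):
# Minimal sketch: dump all client access URLs for a quick review
"Get-OwaVirtualDirectory","Get-EcpVirtualDirectory","Get-MapiVirtualDirectory","Get-ActiveSyncVirtualDirectory","Get-WebServicesVirtualDirectory","Get-OabVirtualDirectory" | ForEach-Object { & $_ | Select-Object Server, Name, InternalUrl, ExternalUrl }
Get-OutlookAnywhere | Select-Object Server, InternalHostname, ExternalHostname
Get-ClientAccessServer | Select-Object Name, AutoDiscoverServiceInternalUri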
  24. proximagr
This is a fast way to manage the calendar permissions of a mailbox. The same commands work for both Exchange on-premises and Exchange Online (Office 365). For Exchange Online, first connect PowerShell to Office 365, as described in previous posts.
     

    # To check current permissions
    Get-MailboxFolderPermission -Identity "[email protected]":\calendar
    # To add calendar permissions, permission can be Editor,Reviewer,Author etc
    Add-MailboxFolderPermission -Identity "[email protected]":\calendar -User "manager@mydomain" -AccessRights Editor
# To change the calendar permission of an existing entry (this will change the access to Author)
    Set-MailboxFolderPermission -Identity "[email protected]":\calendar -User "manager@mydomain" -AccessRights Author
    # To remove calendar permissions
    Remove-MailboxFolderPermission -Identity "[email protected]":\calendar -User "manager@mydomain"
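The same cmdlets scale to all mailboxes with a simple loop; a minimal sketch, assuming (hypothetically) you want to give the Default user Reviewer rights on every calendar:
# Minimal sketch, hypothetical bulk change: Default user gets Reviewer on every mailbox calendar
Get-Mailbox -ResultSize Unlimited | ForEach-Object {
    Set-MailboxFolderPermission -Identity "$($_.PrimarySmtpAddress):\Calendar" -User Default -AccessRights Reviewer
}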
    source: http://www.e-apostolidis.gr/microsoft/exchange-calendar-permissions-using-powershell/
  25. proximagr
Excited to be speaking at Microsoft Ignite The Tour in Milan on Jan 27-28. Join me to learn how to use Azure Platform as a Service (PaaS) to design your apps with elasticity, resiliency and high availability, and how to accelerate your web applications with the Azure Front Door Service.
The industry-leading IT conference is coming to Milan. Don't miss the very latest in cloud technologies and developer tools, with guest speakers, industry experts, and more.
    I will deliver two sessions:
A 45-minute breakout session, where I will talk about how to use Azure Platform as a Service (PaaS) to design your apps with elasticity, resiliency and high availability, quickly, easily and securely. Session code: BRK30169
    Session link: https://milan.myignitetour.techcommunity.microsoft.com/sessions/91113?source=sessions
And a 15-minute theater session, where I will talk about how to accelerate your web applications with the Azure Front Door Service: use the Azure WAN and 130+ edge sites with WAF and Layer 7 load balancing at a global scale. Session code: THR30089
    Session link: https://milan.myignitetour.techcommunity.microsoft.com/sessions/91114?source=sessions
Feel free to find me at the Microsoft Showcase, where I will answer your questions and discuss cloud technologies and the future of our industry!
    Grab your ticket at https://www.microsoft.com/it-it/ignite-the-tour/milan
See you in Milan!


    The post Excited to be speaking at Microsoft Ignite The Tour in Milan! appeared first on Apostolidis IT Corner.
     
     
     